Musk may ban Apple devices from his companies over ChatGPT integration

Musk’s comments indicate that he believes OpenAI is deeply woven into Apple’s operating system and is therefore capable of capturing all personal and private data.

Elon Musk appears at an event in London on November 2, 2023 (AP)

Tesla, SpaceX and xAI CEO Elon Musk threatened to ban iPhones across all his companies in response to Apple’s newly announced OpenAI integration at WWDC 2024 on Monday.

In posts on X, Musk warned Apple CEO Tim Cook that Apple devices would be banned from his companies if Cook “failed to stop this terrifying spyware.”

Both Apple and OpenAI have said that users are asked before any question is sent to ChatGPT and before any documents or photos are uploaded. However, Musk’s comments suggest that he believes OpenAI is deeply woven into Apple’s operating system and is therefore capable of capturing all personal and private data.

Apple announced that users will be able to put questions to Siri in iOS 18; if Siri determines that ChatGPT might be helpful, it will ask the user for permission to share the query and then deliver the answer. This lets users receive a response from ChatGPT without launching the ChatGPT iOS app. The same applies to photos, PDF files and other materials.

Musk, in turn, would prefer that OpenAI’s capabilities be confined to a standalone app rather than connected to Siri.

Musk responded to Sam Pullara, CTO of the venture firm Sutter Hill Ventures, who pointed out that the user approves each request individually and that OpenAI does not have access to the device, saying: “So leave it as an app. This is nonsense.”

Pullara stated that the integration works almost exactly the way the ChatGPT app works today, and that the on-device AI models are either Apple’s own or models running in Apple’s private cloud.

Apple also revealed another integration that gives users system-wide access to ChatGPT through the “compose” option in the Writing Tools feature. For example, Apple suggested asking ChatGPT to compose a bedtime story for your child directly in a document. Users can also ask ChatGPT to create images in different styles to accompany a message. With these capabilities, users will effectively have free access to ChatGPT without needing to register an account.

According to TechCrunch, Musk raised concerns, citing the fact that Apple consumers may not be familiar with the complexities of privacy issues.

According to Apple’s statement, user requests and information are not tracked; however, ChatGPT subscribers can connect their accounts and use premium features directly within Apple’s AI tools.

Craig Federighi, Apple’s senior vice president of software engineering, assured that the user “controls when ChatGPT is used and will be asked before sharing any information.”

OpenAI stated in a blog post that “requests are not stored by OpenAI and user IP addresses are hidden.” Users can also link their ChatGPT accounts, in which case the data they choose to share is governed by ChatGPT’s own policies. Account linking is opt-in and unlocks paid subscription features within Apple’s integration.

ChatGPT under fire: Austria complains of ‘errors that cannot be fixed’

In April, a Vienna-based privacy group announced its intention to file a complaint against ChatGPT in Austria, alleging that the AI tool, known for generating “hallucinatory” responses, produces incorrect answers that its creator, OpenAI, is unable to correct.

NOYB (“None of Your Business”) said there was no confidence in the program’s ability to provide accurate information, emphasizing that “ChatGPT keeps hallucinating – and not even OpenAI can stop it.”

The group criticized OpenAI for openly admitting that it cannot correct inaccuracies generated by its generative AI tool, and for failing to disclose the sources of the data it uses or what information about individuals ChatGPT stores.

According to NOYB, errors of this type are considered unacceptable in the context of personal data because EU legislation requires personal data to be accurate.

“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals,” said Maartje de Graaf, data protection lawyer at NOYB, as quoted by AFP.

“Technology must meet legal requirements, not the other way around,” de Graaf added.