Apple Intelligence can introduce security threats to your device

Apple’s long-awaited announcement of its generative artificial intelligence (GenAI) capabilities came with an in-depth discussion of the company’s security considerations for the platform. However, the tech industry’s history of harvesting user data from almost every product and service has raised plenty of concerns about the data security and privacy implications of Apple’s move. Fortunately, there are some proactive ways to deal with potential threats.

Apple’s approach to GenAI integration, dubbed Apple Intelligence, includes contextual search, editing emails for tone, and easy graphics creation, with Apple saying that most of these advanced features are processed locally on the mobile device to protect user and business data. The company detailed a five-part approach to strengthening the privacy and security of the platform, with most processing taking place on the user’s device using Apple Silicon. More complex queries, however, will be sent to the company’s private cloud, and some requests will be able to draw on OpenAI and its large language model (LLM).
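To make that split concrete, the sketch below shows one way a client might decide which requests stay local and which get escalated to a remote service. This is purely illustrative: Apple has not published developer-facing routing logic for Apple Intelligence, and the thresholds, types, and function names here are assumptions, not Apple's implementation.

```swift
import Foundation

// Purely illustrative sketch of an on-device-first routing policy.
// Apple has not published developer-facing routing logic for Apple
// Intelligence; the thresholds and types below are assumptions.

enum InferenceTarget {
    case onDevice      // handled by a local model; data never leaves the device
    case privateCloud  // escalated to a server-side model for complex requests
}

struct AIRequest {
    let prompt: String
    let attachedContextBytes: Int
}

/// Hypothetical policy: keep short, low-context requests local and only
/// escalate large or complex ones to a remote, privacy-hardened service.
func route(_ request: AIRequest,
           localContextLimit: Int = 8_192,
           localPromptLimit: Int = 2_000) -> InferenceTarget {
    if request.prompt.count <= localPromptLimit,
       request.attachedContextBytes <= localContextLimit {
        return .onDevice
    }
    return .privateCloud
}

// Example: a short tone-rewrite request stays on the device,
// while a large document-summarization request does not.
let rewrite = AIRequest(prompt: "Make this email sound friendlier.", attachedContextBytes: 1_024)
let summary = AIRequest(prompt: "Summarize the attached report.", attachedContextBytes: 2_000_000)
print(route(rewrite)) // onDevice
print(route(summary)) // privateCloud
```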

While companies will have to wait to see how Apple’s commitment to security plays out, the company has clearly put a lot of thought into how GenAI services are handled on devices and how the information is protected, says Joseph Thacker, principal AI engineer and security researcher at AppOmni, a SaaS security company.

“Apple’s focus on privacy and security by design is definitely a good sign,” he says. “Features like preventing privileged access at runtime and preventing user targeting show that they are thinking about potential abuse cases.”

Apple spent much of its announcement reinforcing the message that it takes security seriously, and it published an online document describing the company’s five requirements for Private Cloud Compute, such as no privileged runtime access and hardening the system against attempts to target specific users.

Still, large language models (LLMs) like ChatGPT and other forms of GenAI are new enough that the threats remain poorly understood, and some will slip past Apple’s efforts, says Steve Wilson, chief product officer at Exabeam, a security and compliance firm, and lead of the Open Web Application Security Project’s (OWASP) Top 10 list of security threats to LLM applications.

“What really worries me is that the LLM is a completely different beast, and traditional security engineers just don’t have experience with AI techniques yet,” he says. “Very few people do.”

Apple puts security first

Apple seems well aware of the security threats facing its customers, especially businesses. Apple Intelligence’s on-device implementation, called the Personal Intelligence System, will combine data across apps in a way that previously may have existed only in the company’s health data services. Potentially, every message and email sent from a device could be analyzed by AI, with context added from on-device semantic indexes.
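For readers unfamiliar with the term, a semantic index is essentially a store of numeric embeddings of local content that can be searched by meaning rather than by keyword. The minimal sketch below uses Apple’s public NaturalLanguage framework to show the general concept; it is not Apple Intelligence’s actual implementation, and the snippet corpus and query are invented for illustration.

```swift
import NaturalLanguage

// Minimal conceptual sketch of an on-device semantic index over local
// snippets. This is NOT how Apple's Personal Intelligence System is
// implemented; it only illustrates meaning-based lookup on the device.

let snippets = [
    "Dinner with the team is moved to Thursday at 7pm.",
    "Your flight to Austin departs at 9:45am on Friday.",
    "Quarterly security review notes attached for CISO sign-off."
]

// Simple cosine similarity between two embedding vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let magB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return (magA > 0 && magB > 0) ? dot / (magA * magB) : 0
}

// Everything is embedded and ranked locally; nothing leaves the device.
if let embedding = NLEmbedding.sentenceEmbedding(for: .english),
   let queryVector = embedding.vector(for: "When am I flying out?") {

    let ranked = snippets
        .compactMap { text -> (String, Double)? in
            guard let vector = embedding.vector(for: text) else { return nil }
            return (text, cosineSimilarity(queryVector, vector))
        }
        .sorted { $0.1 > $1.1 }

    if let best = ranked.first {
        print("Best match: \(best.0) (score: \(best.1))")
    }
}
```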

However, the company maintains that in most cases the data never leaves the device, and that the information is anonymized.

“It’s aware of your personal information, but it doesn’t collect it,” said Craig Federighi, Apple’s senior vice president of software engineering, in a four-minute video on Apple Intelligence and privacy at the company’s June 10 launch. He added: “You have control over your data, where it’s stored, and who can access it.”

Once it leaves the device, the data will be processed within the company’s Private Cloud Compute service, allowing Apple to use more capable server-based generative AI models while protecting privacy. The company says the data is never stored and is never made accessible to Apple. Additionally, Apple will make each production build of its Private Cloud Compute platform available to security researchers for vulnerability research in conjunction with its bug bounty program.

Such steps go well beyond the promises most vendors make and should help allay the concerns of enterprise security teams, says AppOmni’s Thacker.

“This type of transparency and collaboration with the security research community is important for finding and fixing vulnerabilities before they can be exploited in the wild,” he says. “It allows Apple to leverage a variety of researcher skills and perspectives to really put the system through its paces from a security standpoint. While this doesn’t guarantee security, it will help a lot.”

There’s an app for (leaking) that

However, the interactions between the applications and data on mobile devices and the behavior of LLMs may be too complex to fully understand at this stage, says Exabeam’s Wilson. The LLM attack surface continues to surprise even the big companies behind mainstream AI models. For example, after launching its latest Gemini model, Google had to deal with unintentional data poisoning that resulted from training the model on untrusted data.

“Those search components have fallen victim to these kinds of indirect prompt injection and data poisoning incidents, where they tell people to eat glue and rocks,” Wilson says. “So you might say, ‘Oh, this is an incredibly sophisticated organization, they’ll get it right,’ but Google has proven over and over again that that’s not going to happen.”

Apple’s announcement comes at a time when companies are rapidly experimenting with ways to integrate GenAI into the workplace to improve productivity and automate traditionally hard-to-automate processes. These features have been slower to arrive on mobile devices, but Samsung has released Galaxy AI, Google has announced the Gemini mobile app, and Microsoft has announced Copilot for Windows.

While Copilot for Windows is integrated with many applications, Apple Intelligence seems to go beyond Microsoft’s approach.

Think differently (about threats)

Overall, companies first need to gain visibility into how their employees are using LLMs and other GenAI. They don’t have to go as far as billionaire tech entrepreneur Elon Musk, a former OpenAI investor who has expressed concerns that Apple (or OpenAI) will misuse user data or fail to secure business information and who has pledged to ban iPhones at his companies. But chief information security officers (CISOs) should certainly be talking with their mobile device management (MDM) vendors, says Exabeam’s Wilson.

Currently, there do not appear to be any controls governing the data sent to and from Apple Intelligence, and such controls may not be made available to MDM platforms in the future either, he says.

“Apple has not provided many options for managing devices in the past because they were intended for personal use,” Wilson says. “So for the last 10-plus years, it has been up to third-party companies to build platforms that let you put controls on the phone, but it’s unclear whether they’ll have enough influence over [Apple Intelligence] to help control that.”

Until more controls become available, enterprises need to establish policies and find ways to integrate their existing security controls, authentication systems, and data loss prevention tools with the AI, AppOmni’s Thacker says.

“Companies should also have clear policies about what types of data and conversations can be shared with AI assistants,” he says. “While Apple’s efforts are helpful, enterprises still have work to do to integrate these tools in a fully secure way.”
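As a rough illustration of the kind of policy enforcement Thacker describes, the hedged sketch below shows one way an organization might screen text before it is handed to any AI assistant, flagging obviously sensitive patterns. The patterns and blocking behavior are illustrative assumptions, not a complete data loss prevention solution and not tied to any specific vendor’s tooling.

```swift
import Foundation

// Illustrative sketch of a pre-send screen for AI assistant prompts.
// The patterns below are assumptions for demonstration only; a real
// DLP integration would use the organization's own classifiers and policies.

struct PromptScreen {
    // Hypothetical patterns: API-key-like strings and payment-card-like numbers.
    private let sensitivePatterns = [
        "(?i)api[_-]?key\\s*[:=]\\s*\\S+",
        "\\b(?:\\d[ -]?){13,16}\\b"
    ]

    /// Returns true if the prompt appears safe to forward to an AI assistant.
    func allows(_ prompt: String) -> Bool {
        for pattern in sensitivePatterns {
            if prompt.range(of: pattern, options: .regularExpression) != nil {
                return false
            }
        }
        return true
    }
}

let screen = PromptScreen()
print(screen.allows("Rewrite this note in a friendlier tone."))  // true
print(screen.allows("Debug this: api_key = sk-12345-secret"))    // false
```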