Apple Intelligence - Redefining AI?

Apple introduces "Apple Intelligence", leveraging generative AI for smarter features. A partnership with OpenAI brings ChatGPT to Siri and to Apple's operating systems. A privacy-focused design aims to address concerns amid mounting privacy debates.

"Intelligence that understands you". This was the phrase used by Craig Federighi, Apple's senior vice president of Software Engineering, to announce Apple's new AI model on Monday. The system, aptly named "Apple Intelligence", draws on a user's personal context and generative AI technology to deliver smart features that will simplify and accelerate everyday tasks.

Apple has also announced a partnership with OpenAI that will see ChatGPT technology integrated into Siri as well as into Apple's iOS, iPadOS and macOS operating systems.

The announcements come amid mounting pressure on Apple to enter the generative AI scene, having received criticism for falling behind its Big Tech competitors. Until now, Apple had been reluctant to integrate AI models into its operating systems and had developed a foundation model with only 3bn parameters, compared with the estimated trillion plus of GPT-4 and Google Gemini Pro. Addressing the absence of a generative AI presence from Apple until now, Federighi said the company is simply "starting with the best" by partnering with OpenAI.

What does the technology do?

The features offered by Apple Intelligence range from writing tools that can rewrite, proofread, and summarise text, to generating personalised emojis, smart photo editing and powerful image creation.

The integration of ChatGPT also brings about what Apple is describing as a "New Era" for Siri, equipping it with greater language, image and document understanding capabilities. Apple has provided plenty of examples of how these tools could be used in practice, from a user giving Siri a list of ingredients and asking for a meal plan, to a user asking Siri to generate a story with pictures based on the user's prompts. Combining OpenAI's technology with Apple Intelligence allows generative AI to be used seamlessly without switching away from Apple's native applications such as Notes, Photos and Siri, reinvigorating the appeal of the "Apple ecosystem".

What about privacy?

Understandably, given the wide-ranging capabilities of these new features, many users are likely to be concerned about how Apple Intelligence will use their personal data. Is its ability to analyse a user's "personal context" just clever marketing language disguising extensive use of that personal data?

Apple has been quick to confirm that the AI features have been designed with privacy principles in mind, with Federighi describing the technology as being "built with privacy at the core". In particular, Apple has pointed to on-device processing, which allows simpler queries to be handled on the device itself rather than being sent to the cloud. For complex requests that Apple's on-device model cannot handle, OpenAI's ChatGPT technology, in combination with Apple's Private Cloud Compute, will allow personal data to be processed securely in the cloud without being stored or made available to third parties. Or, as Federighi puts it, the technology is "aware of your personal data without collecting your personal data".
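To make the routing described above concrete, the sketch below shows, in purely illustrative Swift, one way such a decision might be structured: simpler requests stay on the device, while more demanding ones are escalated to server-side processing. Every type, threshold and consent flag here is a hypothetical assumption introduced for this sketch; none of it reflects Apple's actual APIs or implementation.

```swift
// Illustrative sketch only: the routing idea described above, not Apple's code.
// All names (ProcessingTarget, AssistantRequest, route) are hypothetical.

enum ProcessingTarget {
    case onDevice                 // handled entirely by the local model
    case privateCloudCompute      // escalated to Apple's cloud; data not retained
    case chatGPT                  // forwarded to OpenAI only with user consent
}

struct AssistantRequest {
    let text: String
    let needsWorldKnowledge: Bool   // e.g. open-ended questions beyond personal context
    let userConsentsToChatGPT: Bool
}

func route(_ request: AssistantRequest) -> ProcessingTarget {
    // Assumption: short, personal-context requests fit the on-device model.
    if !request.needsWorldKnowledge && request.text.count < 200 {
        return .onDevice
    }
    // Assumption: broader queries go to a third-party model only with consent;
    // otherwise they stay within Apple's own cloud infrastructure.
    return request.userConsentsToChatGPT ? .chatGPT : .privateCloudCompute
}

// Example usage
let query = AssistantRequest(text: "Summarise my notes from today",
                             needsWorldKnowledge: false,
                             userConsentsToChatGPT: false)
print(route(query))   // prints "onDevice"
```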

Apple has also announced other new security features, such as locked and hidden apps, presenter preview and improvements to contacts permissions, which are aimed at improving privacy for users.

The new "Image Playground" feature which allows users to create images on their device also appears to have privacy considerations at its centre. The tool's outputs are limited to three styles (animation, illustration or sketch), each of which is clearly identifiable as computer generated. In this way, Apple appears to have sought to minimise the risk of deep fakes arising through use of this generative AI tool, a concern previously linked to AI-based image and photo generator tools.

Too good to be true?

If the press releases are to be believed, Apple's newest offering sounds like a perfect example of privacy by design. But is it too good to be true?

Apple's critics would certainly argue so, with Elon Musk being one of the first to speak out against the integration of OpenAI in Apple's operating systems. As more details about the new technology emerge over the coming weeks, there will no doubt be others who voice their concerns over the potential privacy risks inherent in the technology.

Despite Apple's many assurances regarding privacy, and given that the extensive tasks offered by Apple Intelligence rely on vast amounts of personal data, it remains to be seen how Apple's approach will address some of the long-standing privacy concerns with generative AI models in practice.

"Powerful intelligence goes hand in hand with powerful privacy". - Craig Federighi, Apple's Senior Vice President of Software Engineering


