In his latest “Power On” newsletter, Mark Gurman highlighted that Apple’s on-device LLM will be the cornerstone of new generative AI features expected to debut soon. Unlike most current AI systems, which rely on cloud processing, Apple’s model is designed to run entirely on the device.
This approach could limit the model’s capabilities relative to cloud-based rivals. However, Gurman notes that Apple might bridge the gap by licensing technology from other AI leaders, such as Google.
Recent reports of discussions about bringing Google’s Gemini AI engine to Apple devices hint at a possible collaboration that could blend on-device and cloud processing in iOS 18. The key benefits of processing data directly on the device are faster response times and stronger privacy than cloud-based systems can offer.
Apple appears to be shaping its AI marketing around practical features that integrate seamlessly into everyday life, rather than raw technological power. The full scope of Apple’s AI ambitions is expected to be unveiled at the Worldwide Developers Conference (WWDC) in June, where the company typically showcases its major software updates.