Mark Gurman's latest newsletter reveals that Apple may introduce the AI features in iOS 18 as a "preview" or beta, much like the initial launch of Siri. This approach reflects the ongoing development and refinement of Apple's AI capabilities.
Gurman details Apple's plans for iOS 18, which include customizable icon colors and AI-generated personal emojis. However, he also criticizes Apple for seemingly being behind competitors in the AI race, noting:
"But even now, there are signs that the company's AI initiative is a work in progress. Apple is considering marketing the capabilities as a preview (at least in developer beta versions before a formal launch in September), indicating that the technology isn't yet fully baked."
This suggests that Apple might be repeating its history with Siri, which also debuted as a beta in 2011 and has arguably struggled to keep pace with competitors like Alexa.
The narrative that Apple lags in AI predates the rise of generative AI. Critics often point to Siri's perceived shortcomings compared to other intelligent assistants. Now, with companies like OpenAI, Microsoft, and Google actively deploying generative AI products, the media has suggested that Apple has been slow to catch up.
However, it could be argued that labeling the AI features in iOS 18 as a beta isn't necessarily a bad move. It acknowledges a reality often overlooked: generative AI technology is still developing and far from perfect.
Early AI systems like ChatGPT demonstrated significant flaws, ranging from making nonsensical statements to providing dangerously incorrect advice. Despite improvements, issues like AI hallucinations persist, where systems confidently deliver incorrect or bizarre responses.
Recent examples highlight these flaws: AI systems have suggested adding glue to pizza, eating rocks, and making spaghetti with gasoline. These instances underscore the need for caution and continued refinement.
Thus, Apple's decision to introduce AI features in iOS 18 as a beta surely reflects a prudent approach, recognizing that, like all generative AI, these tools are still evolving. This perspective aligns with a comment from ChatGPT itself:
"Outputs can sometimes be factually incorrect, nonsensical, or inconsistent […] AI models can inherit biases present in their training data, leading to biased or unfair outputs […] In summary, generative AI can be reliable within certain parameters and use cases, but it is important to be aware of its limitations and actively work on improving and validating its outputs."