Sleepwalking Towards AI

Apple launched a new iPhone yesterday. We know this because we watched all 1 hour and 38 minutes of their keynote. We remember watching the original iPhone keynote in 2007; it was electrifying. Yesterday’s event was not. Apple had a lot of really interesting things to say, but the iPhone’s features barely registered. For us, the standout feature of the event was the way in which Apple positioned Apple Intelligence (AI). Spoiler alert – that was not terribly exciting either.

In all fairness, it is hard to differentiate hardware today. Phones all have access to the same components, unless a vendor makes those components itself, as Apple does with its Apple Silicon chips. So Apple has shifted its competitive positioning to software, and this year AI was a big part of that messaging.

By our count, Apple used AI in five different ways.

The first, and most prevalent, is the use of AI to enhance noise canceling and audio quality. This made appearances in the Apple Watch and AirPods portions of the keynote as well as the iPhone.

The second most common use was to enhance image processing – both video and still. As with all iPhone launches in recent years, much of this is targeted at highly specialized audiences.

Third were enhancements to Siri. A key problem with Siri has been that it cannot maintain a conversation. Ask it to identify a song, then say “play that song,” and it will not know which song you are referring to. Apple built an explicit demo around that use case to show how much Siri has improved, but then followed it with a comment that Siri is now capable of hundreds of new tasks, which implies Apple is still keeping a tight rein on Siri rather than unleashing it to be freely conversant.

Fourth, Apple unveiled a host of new generative writing tools, including style review, grammar check and the ability to create custom emojis from text descriptions.

Finally, Apple added text search features to a host of applications – most notably Apple Photos. For us, this seemed the most compelling, as it is very complex to pull off; a rough sketch of what that kind of search entails follows below.
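
To give a sense of why text-to-photo search is hard, here is a minimal sketch using an open CLIP-style model that embeds text and images into a shared vector space, so a text query can be matched against photos directly. To be clear, Apple has not disclosed how Photos search actually works; the library, model name, and file paths here are purely our own illustrative assumptions.

```python
# A minimal sketch of text-to-image search with a CLIP-style model.
# Purely illustrative: Apple has not published how Photos search works,
# and the model, library, and file names below are our own assumptions.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# One model that maps both text and images into a shared embedding space.
model = SentenceTransformer("clip-ViT-B-32")

# Embed the photo library once, ahead of time (hypothetical files).
paths = ["beach.jpg", "birthday.jpg", "dog_park.jpg"]
image_embeddings = model.encode([Image.open(p) for p in paths])

# At query time, embed the text and rank photos by cosine similarity.
query_embedding = model.encode("a dog catching a frisbee")
scores = util.cos_sim(query_embedding, image_embeddings)[0].tolist()

for path, score in sorted(zip(paths, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {path}")
```

Even this toy version shows the shape of the problem: every photo has to be embedded ahead of time, and queries have to be matched against that index quickly – and a phone has to do all of that on-device across a library of thousands of photos.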

But notice a common thread running through all of those – they are all features in existing apps. In fact, almost all of them are just enhancements of things the iPhone has been able to do for years. Improved image processing. Improved Siri. Improved search. Improved noise cancellation. All of this is what we would label “AI as a feature” – using neural-network-based machine learning to improve things we were already doing. Some of these are big improvements – photo search, for instance – and those custom emojis are truly novel. But critically, note that Apple is not charging for any of these features. It is hard to see how they could.

In all fairness, it is hard to differentiate in AI today. Apple has done as good a job as anyone in finding useful ways to deploy generative AI and all the rest. It’s just that the state of the art today does not yet have much more to offer.
