A look at Apple’s new Transformer-powered predictive text model

At WWDC earlier this year, Apple announced that upcoming versions of iOS and macOS would ship with a new feature powered by “a Transformer language model” that will give users “predictive text recommendations inline as they type.”

Upon hearing this announcement, I was pretty curious about how this feature works. Apple hasn’t deployed many language models of their own, despite most of their competitors going all-in on large language models over the last couple of years. I see this as a result of Apple generally priding itself on polish and perfection, while language models are fairly unpolished and imperfect.

As a result, this may be one of the first Transformer-based models that Apple will ship in one of its operating systems, or at least one of the first that they’ve acknowledged publicly. This left me with some questions about the feature.

Jack Cook did some digging into this new feature and the language model it uses, and came up with some very interesting findings. He also details his process, and of course, the code he wrote to do all of this is available on GitHub.