AI as the bridge to our wearable future
I have been focusing a lot on how Large Language Models (LLMs) can improve our daily work, and on why we haven't yet seen the dramatic productivity gains many predicted. However, it's easy to miss AI's broader transformative potential if you view it solely through the LLM productivity lens.
Ben Thompson of Stratechery published a lengthy but fascinating analysis in early December suggesting that generative AI might be the foundational technology that enables the next major computing paradigm: wearable devices.
His argument flows through the history of computing, showing how each major transition was enabled by a "bridge" technology. Interactive applications helped us move from mainframes to PCs. The Internet bridged PCs to smartphones. And now, Thompson argues, generative AI will bridge smartphones to wearables.
His key insight is that future wearable interfaces need to be fundamentally different from what we have today. Instead of permanently visible apps and buttons (think Apple Vision Pro's floating windows), we need interfaces that appear exactly when we need them, showing exactly what we need - and nothing more. This is where generative AI comes in, with its ability to understand context and generate appropriate interfaces on the fly. Thompson points to his experience with Meta's Orion AR glasses as a glimpse of this future: a simple notification appearing when needed, a gesture to accept a call, and then back to reality - no permanent UI cluttering your view. It makes current AR/VR interfaces seem almost primitive by comparison.
I've been following Thompson's work for years, and while he's known for constructing these grand, sweeping narratives about tech evolution, he's earned his reputation as one of tech's most insightful strategic thinkers.
My personal experience with Apple’s Fantastic Three (iPhone, Watch, AirPods) reinforces aspects of Thompson's thesis. The most meaningful features are often those that fade into the background: automatically lowering music volume rather than stopping it, reading messages aloud, taking calls when I nod my head, or opening our apartment door when I approach it. These are currently pre-programmed behaviors, but imagine the possibilities when wearables can intelligently adapt their support based on real-time context.
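To make that contrast concrete, here is a minimal sketch of what such context-driven behavior could look like. Everything in it is hypothetical: `ContextSnapshot`, `UiElement`, and `infer_intent` are illustrative stand-ins, and a trivial rule takes the place of the generative model that Thompson envisions.

```python
# Hypothetical sketch of an ephemeral, context-driven wearable UI.
# All names are illustrative; infer_intent stands in for a generative
# model that would interpret the full context snapshot.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextSnapshot:
    location: str                        # e.g. "approaching_front_door"
    activity: str                        # e.g. "walking", "in_meeting"
    pending_event: Optional[str] = None  # e.g. "incoming_call"

@dataclass
class UiElement:
    text: str            # the one piece of information to show
    gesture_action: str  # what a nod or tap would trigger
    timeout_s: int       # the element disappears on its own

def infer_intent(ctx: ContextSnapshot) -> Optional[UiElement]:
    """Placeholder for a generative model: given the context,
    return at most one minimal UI element, or nothing at all."""
    if ctx.pending_event == "incoming_call" and ctx.activity != "in_meeting":
        return UiElement("Incoming call", gesture_action="accept_call", timeout_s=8)
    if ctx.location == "approaching_front_door":
        return UiElement("Unlock door?", gesture_action="unlock_door", timeout_s=5)
    return None  # the default state: no UI at all

if __name__ == "__main__":
    ctx = ContextSnapshot(location="approaching_front_door", activity="walking")
    element = infer_intent(ctx)
    if element:  # render only when the model surfaced something
        print(f"[show {element.timeout_s}s] {element.text} -> {element.gesture_action}")
```

The telling detail is the default: the function returns nothing, because in this paradigm an empty view is the normal state - the opposite of today's app grids and floating windows.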
To be clear, wearables alone won't boost quality of life or revolutionize productivity any more than smartphones or the internet did in isolation. The key use cases are probably those where wearables enhance our capabilities - highlighting details we may have missed, making connections faster, nudging us gently. And none of that calls for a clunky app floating in my field of view.
As always, I'm interested in your thoughts: feel free to message me or, if you prefer, share your feedback anonymously via this form.
All opinions are my own. Please be mindful of your company's rules for AI tools and use good judgment when dealing with sensitive data.