The Screen Is the Bottleneck
Every major AI company is fighting over the same real estate. A chat window. A text box. A screen you hold in your hand or sit in front of.
We think they're optimizing the wrong thing.
The phone was never the destination
Your phone is a rectangle of glass that demands your full attention. Every AI assistant built for it inherits that problem. You have to stop what you're doing, unlock it, open an app, type or tap, read a response, then decide what to do with it. The assistant might be smart. The interaction is still slow.
The screen isn't helping the AI. It's holding it back.
An AI that actually fits into your life can't live behind a lock screen. It has to be ambient. Present when you need it, invisible when you don't. That's not a phone. That's a wearable.
We started at the wrist on purpose
We chose the Apple Watch as Demi's primary surface because it's the most constrained computing device people actually wear every day. Two inches. No keyboard. Conversations measured in seconds.
That constraint forced us to build an AI that earns its place through speed and action, not conversation length. When someone raises their wrist and says "move my 3 o'clock to tomorrow," the only acceptable response is doing it. Not asking clarifying questions. Not showing a modal. Doing it.
Every design decision we've made flows from that principle. The AI has to understand you quickly, act correctly, and confirm briefly. Those are exactly the constraints that glasses, earbuds, rings, and every future wearable will impose.
Glasses are the next surface, not the first
Apple is building smart glasses. That's not a rumor anymore. It's a trajectory. Vision Pro was the research platform. The glasses are the product.
When they arrive, the AI that works on them won't be the one with the best chat interface. It'll be the one that already knows how to operate without a screen in your hand.
Demi has been doing that since launch. Voice in, action out, confirmation at a glance. That interaction model doesn't change when the screen moves from your wrist to your face. It gets better. A calendar update that floats in your peripheral vision for two seconds, then disappears. A food order confirmed with a nod. A morning briefing that reads itself to you while you make coffee.
We're not pivoting to glasses. We're graduating to them.
What wearable AI actually requires
Building for wearables isn't a form-factor exercise. It requires a fundamentally different kind of AI product.
Screen-first AI can be slow because the user is already sitting there. Wearable AI can't. Screen-first AI can show you ten options and let you pick. Wearable AI has to pick the right one. Screen-first AI can punt to "here are some links." Wearable AI has to do the thing.
That's why most AI companies can't just shrink their product onto a watch or glasses. The entire interaction model assumes a screen you're staring at. Remove the screen and the product falls apart.
We built without that assumption from day one.
The next five years
Wearable computing is going to be the primary way people interact with AI. Not because the hardware is cooler, but because it matches how people actually live. You're walking, cooking, driving, talking to someone. You don't want to pull out a phone. You want to say something and have it handled.
The companies building for that future today will define the category. Everyone else will be adapting products that were designed for a world that's already passing.
We're building for the world that's coming. And we've been doing it from the start.