There's a version of AI adoption happening in almost every enterprise right now, and it's the wrong one. Teams are taking products designed for humans operating alone—clicking through menus, filling out forms, waiting for pages to load—and bolting AI features onto them. A button here. A chat window there. A "summarize" option in the overflow menu.
This is retrofitting. And it isn't product design. It's renovating a building on a foundation that was never meant to bear the weight.
I've spent years building AI products inside some of the world's largest technology and financial services companies. At Microsoft, I led design on Health Futures—building AI tools intended to reimagine how clinicians interact with patient data. In financial services, I've led AI UX for products that touch millions of customers across voice, machine learning, and fulfillment systems. These weren't experiences where I sprinkled AI on top. They were built from a blank canvas with AI as the load-bearing wall.
What I've learned is uncomfortable: most product designers—and most product organizations—aren't ready for what AI-first actually demands.
What "AI-First" Actually Means
AI-first doesn't mean "we have an AI feature." It means the entire UX mental model is restructured around what AI enables.
In a traditional product, the human does the thinking and the software executes. The interface is designed to efficiently capture human intent and translate it into system actions. Every element—every field, every button, every navigation pattern—assumes a human is doing the cognitive work.
In an AI-first product, that relationship inverts. The AI does a large portion of the reasoning, synthesis, and decision-making. The human's job shifts from executor to director. They're reviewing, approving, course-correcting, and setting intent—not filling in forms.
This inversion breaks almost every design convention the last 30 years of software were built on. Think about information architecture. Traditional IA assumes users navigate to find what they need. AI-first IA must account for the system surfacing what users need before they know to look for it. The mental model of "where do I go to find X" collapses into "what does the system understand about what I need right now."
Think about error states. Traditional error design assumes the user made a mistake. AI error design must account for model uncertainty, confidence intervals, and cases where the right answer is "I don't know." The language and visual design for those states are completely different.
Think about trust. In traditional software, trust is about reliability. In AI software, trust is about accuracy and intent alignment: is this system actually understanding me, and do I believe its outputs? That's a harder design problem, and we don't have canonical patterns for it yet.
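To make the error-state point concrete, here is a minimal sketch of what confidence-aware design logic can look like. The thresholds and treatment names are illustrative assumptions, not recommendations from any shipped product; the point is that the UI treatment is chosen from the model's own uncertainty rather than a binary success/failure state.

```python
def ui_treatment(confidence: float) -> str:
    """Map a model confidence score (0.0-1.0) to a display strategy.

    Thresholds are hypothetical; real products tune them against
    measured model calibration, not round numbers.
    """
    if confidence >= 0.9:
        return "present"          # show the answer plainly
    if confidence >= 0.6:
        return "present_hedged"   # show it, with visible uncertainty cues
    if confidence >= 0.3:
        return "ask_to_confirm"   # surface as a suggestion needing review
    return "abstain"              # the honest answer is "I don't know"

print(ui_treatment(0.95))  # present
print(ui_treatment(0.45))  # ask_to_confirm
print(ui_treatment(0.10))  # abstain
```

Even this toy version shows why the design language has to change: "abstain" is a first-class state that traditional error patterns have no vocabulary for.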
What I Learned at Microsoft
At Microsoft Health Futures, we were building AI products for clinical environments where the stakes were real. What struck me most wasn't the technical complexity—it was how fundamentally broken the existing clinical UX paradigm was for AI. Clinicians were already overwhelmed with information. The EMR systems they used were designed to capture data, not to help them think. Every AI feature we tried to add exposed a deeper problem: the product had no opinion about what mattered.
AI-first design forced us to develop product opinions. The system had to understand context, surface what was relevant, and make confident recommendations—or the AI was just noise. We couldn't hide behind "the user decides." The user was already drowning in decisions. We had to design systems that earned the right to reduce cognitive load by being reliably right.
Every product decision became a question about who should do the thinking: the human or the system?
Why Retrofitting Fails
The fundamental problem with retrofitting is that you're solving for the wrong constraint. When you add AI to an existing product, you're optimizing for familiarity—minimizing disruption to existing user behavior, designing so users don't have to change how they think.
But AI-first products require users to change how they think. That's the entire value proposition. You can't preserve legacy mental models and also deliver the cognitive leverage AI provides. They're in direct tension. Every enterprise AI product I've seen fail has failed on this dimension. The AI was good. The design kept users from ever trusting it enough to let it help them.
What AI-First Design Actually Demands
Based on four years of building in this space, here's what I believe AI-first product design actually requires:
Design for trust before designing for utility. Users won't use AI capabilities they don't trust. Trust is built through transparency about uncertainty, consistency in behavior, and explicit honesty about what the system can and can't do.
Rethink the role of the interface. In many AI-first contexts, the best UI is minimal—the AI does the work and the human reviews. The temptation to add UI is real, but often wrong. Less interface frequently means better AI UX.
Understand model behavior. Not deeply—you don't need to be an ML engineer. But you need to understand what makes models fail, what confidence looks like in outputs, and how retrieval and context work. You can't design well for systems you don't understand.
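The retrieval point deserves a concrete illustration. Below is a toy sketch, assuming a simple word-overlap scorer standing in for the embedding similarity a real system would use. The design consequence it demonstrates is real: the model only "sees" what retrieval selects, so a relevance failure surfaces to the user as a model failure.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Score documents by word overlap with the query; keep the top k.

    A stand-in for real retrieval (embeddings, rerankers); the shape of
    the problem is the same: only the top-k results reach the model.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "patient history of hypertension and diabetes",
    "quarterly revenue grew eight percent",
    "blood pressure medication adjusted last visit",
]
print(retrieve("hypertension blood pressure", docs))
```

A designer who understands this pipeline knows that "the AI missed it" and "retrieval never handed it over" are different failures, and they call for different UI responses.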
Have humility about existing patterns. The conventions we've refined over decades—navigation hierarchies, form flows, modal dialogs—were built for a different era of software. AI-first design is genuinely new territory.
That's uncomfortable for an industry that has spent years accumulating best practices. But it's also the most interesting design challenge I've encountered in 17 years of doing this work. The companies that get this right—who design their products to be AI-native rather than AI-adjacent—will outcompete those who don't. Not because their AI is better, but because their design lets the AI actually help.