In my experience watching technology evolve, I’ve noticed a familiar pattern: there’s an initial burst of diverse designs which eventually converge into a single dominant design that becomes the industry standard. I found that this pattern was described by the Utterback-Abernathy model of innovation, which shows how early experimentation gives way to standardization as markets mature. We saw this with hardware — from the QWERTY keyboard and VHS tapes to the modern touchscreen smartphone form factor — and now the same is happening with software interfaces.
In particular, user interface (UI) designs have been converging rapidly in the era of AI. The clearest recent example is generative AI chat applications, which quickly standardized on a familiar chat layout. Now, a new UI paradigm is emerging for AI agents – autonomous AIs that can carry out tasks for us. In this blog post, I’ll explore these trends and explain what they mean for businesses building AI agents.
AI agents are the next step beyond simple chatbots. While a chatbot engages in conversation or Q&A, an AI agent is designed to autonomously execute entire workflows from start to finish, often using tools like web browsers or other apps to get things done.
In other words, you don’t just chat with an agent – you delegate tasks to it. For example, an agent might be tasked with screening job resumes: it could unzip files, read documents, extract key info, and rank candidates, all without constant supervision. You give it a goal, and it handles the execution.
Recent products such as OpenAI’s Operator and Manus provide a glimpse of what’s possible. Anthropic and others are also developing agent capabilities (e.g. Claude’s computer use capability), and many startups are launching similar “autonomous AI” solutions. It’s a hot area of innovation with lots of experimentation.
In all, AI agents mark a shift from AI as a mere assistant (answering questions) to AI as an autonomous executor that can carry out real actions. With this shift comes new challenges in design: How should the user interface present such powerful autonomy in a safe, understandable way? To answer that, it helps to recall how dominant designs emerge in technology.
In the 1970s, researchers James Utterback and William Abernathy studied how industries innovate and identified a common lifecycle. Early on, during the “fluid” phase, many firms experiment with different designs and approaches, trying to find what customers actually want.
Product innovation is at its highest in this stage – think of the early days of automobiles when cars had wildly different shapes and controls, or the early days of mobile phones with all kinds of sliders, flip phones, and quirky designs. Over time, as companies learn what works and what users prefer, a dominant design emerges in the “transitional” phase.
This is a turning point: once a dominant design is established, almost all new products adhere to that basic configuration, and innovation shifts toward making production more efficient in the “specific” phase. In other words, the industry stops reinventing the wheel and starts refining the now-standard wheel. Utterback and Abernathy observed that once a dominant design emerges, innovative activity shifts from exploring alternatives to improving processes. Both producers and customers know what to expect, uncertainty drops, and companies concentrate on incremental improvements and efficient production.
The Utterback-Abernathy Model illustrates how industries move from product innovation (blue) to process innovation (red) after a dominant design is established. Once a dominant design takes hold, the focus shifts away from creating novel configurations and toward optimizing production and processes.
A dominant design is essentially a standard blueprint for a product that everyone in the industry coalesces around. It’s characterized by a core set of features or form factors that don’t vary much between different manufacturers’ versions. When a dominant design emerges, it doesn’t necessarily mean it’s the absolute best in every way – it means it’s “good enough,” widely adopted, and it creates a stable platform for the market. Classic examples include the QWERTY keyboard, the VHS videotape, the IBM PC architecture, and the touchscreen smartphone.
Dominant designs matter for a few reasons. First, they give customers a consistent user experience (for example, anyone can pick up any smartphone brand and broadly know how to use it). Second, they let the industry focus on improvements within a stable framework – manufacturers can optimize components, and developers can build an ecosystem (apps, accessories) that works across many devices. Finally, dominant designs often create winners and losers.
Companies whose product design becomes dominant (like IBM’s PC, or Apple/Android in smartphones) gain huge advantages, while others fall by the wayside. And dominant design isn’t just a hardware phenomenon – it applies to software and user interfaces as well.
Not long ago, if you interacted with an AI chatbot, it likely appeared as a little icon or chat bubble in the bottom-right corner of a website. I remember those ubiquitous web chat widgets – a small window, limited in size, sitting unobtrusively in a corner. That UI made sense for basic customer support bots or simple Q&A assistants embedded in webpages. It was the legacy of early messenger apps and web chat boxes.
However, the rise of generative AI chatbots (typified by OpenAI’s ChatGPT in 2022–2023) brought a new use case: extended, ongoing conversations where the AI could produce long answers, remember context, and manage multiple discussion threads. The old bottom-corner chat box was no longer sufficient for this richer interaction.
Pre-generative AI chat interface: limited to brief interactions, these corner widgets lacked the capability for richer, contextual conversations. They were prevalent on websites for customer support before the era of advanced AI.
Enter the full-screen chat interface. ChatGPT’s web UI set the new standard: a large, center-stage chat area where the AI’s responses can be as long as needed, paired with a sidebar on the left listing past conversation threads. Users can easily start a new topic or revisit an old one from that sidebar. In a very short time, this design became ubiquitous for AI chat products. Many other AI chat apps and platforms quickly replicated the “sidebar on the left, chat on the right” layout. Google’s Bard, Microsoft’s Bing Chat (in standalone mode), Anthropic’s Claude, and numerous startup offerings all use some variant of a full-page chat UI rather than a tiny widget. When essentially every new AI chatbot adopts a similar interface, it’s clear a dominant design has emerged.
Dominant UI design for generative AI chat applications: a large conversation area allows detailed interactions, while the left sidebar keeps conversations organized and accessible. This full-page chat layout, popularized by ChatGPT, has become the standard for modern AI chatbot interfaces.
Why did this UI become a dominant design for chatbots so quickly? Simply put, it works well for users. It’s a comfortable metaphor – essentially like a messaging app, but optimized for long-form AI outputs and managing multiple conversations.
The full-screen approach removes the constraints of the tiny widget (which, as one designer noted, offered “limited screen size” and crimped the possibility of rich interaction). By dedicating the entire interface to the conversation, these AI chat apps let users focus, scroll through the history, and easily follow context. The left-hand thread list addresses a practical need: AI chats can diverge into different topics or be reused later, so giving users a way to organize or revisit them improves the experience.
In short, ChatGPT’s interface became the blueprint that others followed, turning it into the dominant design for AI chat UIs. One UX observer wryly noted in early 2025: “each time I see AI tools, I often see a copy of OpenAI’s UX framework — sidebar on the left and chat in the center right.” When an interface paradigm is that widely copied, it’s safe to call it a dominant design.
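To make the “sidebar on the left, chat in the center” pattern concrete, here is a minimal sketch of the thread-management model that such a sidebar implies. All names here are hypothetical and for illustration only; they don’t correspond to any real product’s API.

```typescript
// Hypothetical thread store behind a ChatGPT-style layout: the left
// sidebar lists conversation threads, the main panel shows the active one.

type Turn = { role: "user" | "assistant"; text: string };

type Thread = { id: number; title: string; turns: Turn[] };

class ThreadStore {
  private threads: Thread[] = [];
  private nextId = 1;
  activeId: number | null = null;

  // Starting a new topic creates a thread and makes it active.
  newThread(title: string): Thread {
    const t: Thread = { id: this.nextId++, title, turns: [] };
    this.threads.push(t);
    this.activeId = t.id;
    return t;
  }

  // Sidebar contents: thread titles, most recent first.
  sidebar(): string[] {
    return [...this.threads].reverse().map((t) => t.title);
  }

  // Revisiting an old conversation from the sidebar re-activates it.
  open(id: number): Thread | undefined {
    const t = this.threads.find((th) => th.id === id);
    if (t) this.activeId = t.id;
    return t;
  }
}

const store = new ThreadStore();
store.newThread("Trip planning");
store.newThread("Code review help");
console.log(store.sidebar()); // most recent thread listed first
```

The point of the sketch is the separation of concerns the dominant design encodes: the sidebar is just a view over a thread list, so organizing and revisiting conversations requires no change to the chat panel itself.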
We’re now at the dawn of AI agent applications, and we can already spot a similar convergence in UI design. AI agents are more complex than chatbots – they don’t just talk, they act. This means the user interface has to do two things at once: allow the user to communicate with the agent (give instructions, ask questions, get progress updates) and let the user observe or monitor what the agent is doing. A consensus is rapidly forming around a split-screen UI to meet this need.
In the emerging dominant design for AI agent UIs, the left ~50% of the screen is a scrolling message history and input area (much like a chat interface), and the right ~50% is a dynamic viewer panel that displays the agent’s activities or workspace. The left side is where you and the agent converse: you might give a high-level command (“Book me the highest-rated tour in Rome on TripAdvisor”) and the agent may ask clarifying questions or confirm details via text. The right side then visualizes what the agent is doing as it carries out the task – for example, showing the TripAdvisor website being navigated, forms being filled out, or code being written, depending on the task. This design allows a blend of natural language interaction with direct visual feedback of the AI’s autonomous actions.
Emerging dominant UI design for AI agents in action: the user delegates a task to the AI agent (left panel), and the agent actively executes it on TripAdvisor (right panel). This split-screen approach provides real-time visual feedback and transparency, keeping the user in the loop.
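The split-screen layout described above can be sketched as a simple data model: the left panel renders a message history, the right panel renders a stream of agent activities, and both are fed by the same agent session. This is a hypothetical sketch with illustrative names, not any real product’s API.

```typescript
// Hypothetical data model for a split-screen agent UI: the left panel
// renders messages, the right panel renders the latest agent activity.

type ChatMessage = {
  role: "user" | "agent";
  text: string;
};

type AgentActivity =
  | { kind: "browser"; url: string; action: string }    // e.g. navigating a site
  | { kind: "code"; language: string; snippet: string } // e.g. writing code
  | { kind: "idle" };

class AgentSession {
  messages: ChatMessage[] = [];     // drives the left (conversation) panel
  activities: AgentActivity[] = []; // drives the right (viewer) panel

  say(role: "user" | "agent", text: string): void {
    this.messages.push({ role, text });
  }

  act(activity: AgentActivity): void {
    this.activities.push(activity);
  }

  // What the right-hand viewer shows at any moment: the latest activity.
  currentView(): AgentActivity {
    return this.activities[this.activities.length - 1] ?? { kind: "idle" };
  }
}

// Delegating a task, as in the TripAdvisor example:
const session = new AgentSession();
session.say("user", "Book me the highest-rated tour in Rome on TripAdvisor");
session.act({
  kind: "browser",
  url: "https://www.tripadvisor.com",
  action: "searching tours in Rome",
});
```

The design choice worth noting is that conversation and activity are separate streams over one session: the chat panel never needs to render browser state, and the viewer never needs to render text turns, which is exactly what the 50/50 split reflects.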
If you’ve tried OpenAI’s Operator, you’ve seen this two-panel approach in action. In Operator’s interface, your chat with the agent remains on the left, and a live web browser window that the agent controls is on the right. As the agent clicks and scrolls, you literally watch it work. This provides transparency – you can intervene if something looks off, or provide additional input if needed.
Other agent systems are converging on similar layouts. Manus, for instance, also shows a console or browser view alongside the task description in its demos, letting the user see the agent’s steps. Even open-source projects like “Open Operator” (a community clone of Operator) emphasize combining AI-driven automation with human oversight, often via an interface that lets users see what the agent is doing in real time. The consistency of this pattern across different products suggests it’s quickly solidifying as the dominant UI design for agentic AI applications.
It’s worth noting why this design is likely to stick. Users need confidence when handing control to an AI agent. A well-designed split-screen agent UI inspires trust by keeping the user in the loop. The message history on the left provides a familiar conversational feel (so you can interact with the agent naturally, as if chatting), while the activity viewer on the right provides accountability (so you’re not left guessing what the AI is doing with your request).
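The “keeping the user in the loop” idea can be sketched as a simple approval gate: routine steps run immediately, while risky ones pause until the user confirms them in the UI. This is a hypothetical illustration of the pattern, with invented names, not how any particular product implements it.

```typescript
// Hypothetical human-in-the-loop gate: risky steps pause for approval.

type Step = { description: string; risky: boolean };

type Decision = "executed" | "awaiting-approval";

class SupervisedAgent {
  log: string[] = [];          // steps actually carried out
  pending: Step | null = null; // step surfaced in the UI, awaiting the user

  attempt(step: Step): Decision {
    if (step.risky) {
      // Show the step in the activity viewer and wait for the human.
      this.pending = step;
      return "awaiting-approval";
    }
    this.execute(step);
    return "executed";
  }

  // Called when the user confirms the pending step in the UI.
  approve(): void {
    if (this.pending) {
      this.execute(this.pending);
      this.pending = null;
    }
  }

  private execute(step: Step): void {
    this.log.push(step.description);
  }
}

const agent = new SupervisedAgent();
agent.attempt({ description: "open tour listing", risky: false });    // runs immediately
agent.attempt({ description: "enter payment details", risky: true }); // pauses for the user
agent.approve(); // user confirms; the step proceeds
```

Because the viewer panel already shows what the agent is doing, surfacing the pending step there costs the design nothing extra, which is part of why this layout builds trust so naturally.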
In usability testing, I suspect this kind of interface will strike the best balance between autonomy and user control – much like the full-page chat did for pure text-based chatbots. As more companies roll out agentic AI features, expect the two-pane “chat + live view” layout to become the norm. We may see some variations (e.g. resizable panels, or multi-step workflow visualizers), but the core concept of side-by-side communication and action is poised to be the dominant design for AI agent UIs.
For CEOs, CTOs, CIOs, and product leaders looking to build AI agent capabilities — whether for internal automation or customer-facing products — UI design is central to user adoption and trust. Based on these trends, my core recommendation is to align with the emerging paradigm early: pair a familiar conversational channel with a transparent, real-time view of the agent’s actions, rather than inventing an unfamiliar interface.
The emergence of a dominant UI design for AI agents is a clear sign that this technology is maturing. It’s reminiscent of how the graphical user interface (GUI), the web browser, or the smartphone interface became standardized foundations upon which further innovation flourished. When users start to expect a certain interface, businesses ignore that expectation at their peril. In 2025 and beyond, an AI application that still presents as a tiny chat bubble may feel as outdated as a flip phone in the age of touchscreens.
I’m convinced we’ll look back on these early days of AI agents as the period when the “agent UI” paradigm was established – much like the era when Windows and Mac popularized the desktop GUI. Companies that recognize and embrace this dominant design early will have an edge. They’ll deliver AI experiences that feel familiar and trustworthy, while competitors who try novel or obtuse interfaces might find users reluctant to adopt them.
For any organization building or deploying AI agents, now is the time to align with the emerging standard. Your users – and your future self – will thank you for not making them learn a whole new way to interact with AI.