
The New Dominant UI Design for AI Agents

By Andy Walters
April 21, 2025

In my experience watching technology evolve, I’ve noticed a familiar pattern: an initial burst of diverse designs eventually converges into a single dominant design that becomes the industry standard. This pattern is described by the Utterback-Abernathy model of innovation, which shows how early experimentation gives way to standardization as markets mature. We saw this with hardware — from the QWERTY keyboard and VHS tapes to the modern touchscreen smartphone form factor — and now the same thing is happening with software interfaces.

In particular, user interface (UI) designs have been converging rapidly in the era of AI. The best example recently was generative AI chat applications, which quickly standardized around a familiar chat layout. Now, a new UI paradigm is emerging for AI agents – autonomous AIs that can carry out tasks for us. In this blog post, I’ll explore these trends and explain what they mean for businesses building AI agents.

A Primer on AI Agents

AI agents are the next step beyond simple chatbots. While a chatbot engages in conversation or Q&A, an AI agent is designed to autonomously execute entire workflows from start to finish, often using tools like web browsers or other apps to get things done. 

In other words, you don’t just chat with an agent – you delegate tasks to it. For example, an agent might be tasked with screening job resumes: it could unzip files, read documents, extract key info, and rank candidates, all without constant supervision. You give it a goal, and it handles the execution.

Recent products provide a glimpse of what’s possible:

  • OpenAI’s Operator: An AI agent that can “go to the web to perform tasks for you” by controlling a built-in web browser. Tell Operator to book a flight or order groceries, and it will navigate websites, fill forms, click buttons, and only ask for help if needed. It’s one of the first agents capable of doing work independently online – a sort of “internet butler,” as some have dubbed it. (Operator is currently a research preview, but it signals where AI is heading.)
  • Manus (China): An autonomous AI agent gaining attention for handling general tasks with ease. Manus works asynchronously in the cloud to complete tasks end-to-end. You can assign a project (say, writing a report or analyzing data), step away, and Manus will notify you when the job is done. It integrates with tools like web browsers, code editors, and databases, enabling it to fetch information and automate complex workflows on its own.

Anthropic and others are also developing agent capabilities (e.g. with their Computer Use model), and many startups are launching similar “autonomous AI” solutions. It’s a hot area of innovation with lots of experimentation. 

In all, AI agents mark a shift from AI as a mere assistant (answering questions) to AI as an autonomous executor that can carry out real actions. With this shift comes new challenges in design: How should the user interface present such powerful autonomy in a safe, understandable way? To answer that, it helps to recall how dominant designs emerge in technology.

The Utterback-Abernathy Model: From Chaos to Standardization

In the 1970s, researchers James Utterback and William Abernathy studied how industries innovate and identified a common lifecycle. Early on, during the “fluid” phase, many firms experiment with different designs and approaches, trying to find what customers actually want.

Product innovation is at its highest in this stage – think of the early days of automobiles when cars had wildly different shapes and controls, or the early days of mobile phones with all kinds of sliders, flip phones, and quirky designs. Over time, as companies learn what works and what users prefer, a dominant design emerges in the “transitional” phase. 

This is a turning point: once a dominant design is established, almost all new products adhere to that basic configuration, and innovation shifts toward making production more efficient in the “specific” phase. In other words, the industry stops reinventing the wheel and starts refining the now-standard wheel. Utterback and Abernathy observed that once a dominant design emerges, innovative activity shifts from exploring alternatives to improving processes. Both producers and customers know what to expect, uncertainty drops, and companies concentrate on incremental improvements and efficient production.

The Utterback-Abernathy Model illustrates how industries move from product innovation (blue) to process innovation (red) after a dominant design is established. Dominant designs shift the focus away from creating novel configurations toward optimizing production and processes.

What is a Dominant Design? (And Why It Matters)

A dominant design is essentially a standard blueprint for a product that everyone in the industry coalesces around. It’s characterized by a core set of features or form factors that don’t vary much between different manufacturers’ versions. When a dominant design emerges, it doesn’t necessarily mean it’s the absolute best in every way – it means it’s “good enough,” widely adopted, and it creates a stable platform for the market. Classic examples include:

  • The QWERTY Keyboard: Despite being invented in the 19th century for typewriters, QWERTY became the dominant design for keyboards and persists on virtually every laptop and smartphone today. Competing layouts (Dvorak, anyone?) never dislodged it. Once users and manufacturers converged on QWERTY, network effects and familiarity cemented it as the standard.
  • VHS Videocassette: In the late 1970s and 1980s, VHS beat out Betamax to become the dominant design for home video. Its victory wasn’t due to vastly superior quality, but rather a combination of longer recording time and aggressive licensing, which led to widespread adoption. Once VHS became dominant, video rental stores and camcorder makers all aligned to that format.
  • IBM PC Architecture: The IBM PC introduced in 1981 set a de facto standard for personal computer hardware. Competing designs (Apple’s aside) fell behind as IBM PC “clones” proliferated. Key components like the x86 processor architecture and MS-DOS (later Windows) became industry staples. This dominant design made it easier to create and sell software and peripherals at scale, accelerating the PC boom.
  • Touchscreen Smartphones: By the late 2000s, the myriad phone designs of the early 2000s (flip phones, BlackBerries with physical keyboards, slide-out texting phones, etc.) converged into the slab-of-glass touchscreen smartphone we know today. Since the debut of the iPhone in 2007, one large touchscreen has dominated the mobile form factor. Today’s phones mostly only vary in screen size and camera bumps, showing how complete that convergence is. (It’s only now, over a decade later, that foldable screens are introducing a new twist, but the basic touchscreen slab remains the dominant design for phones.)

Dominant designs matter for a few reasons. First, they give customers a consistent user experience (for example, anyone can pick up any smartphone brand and broadly know how to use it). Second, they let the industry focus on improvements within a stable framework – manufacturers can optimize components, and developers can build an ecosystem (apps, accessories) that works across many devices. Finally, dominant designs often create winners and losers. 

Companies whose product design becomes dominant (like IBM’s PC, or Apple/Android in smartphones) gain huge advantages, while others fall by the wayside. And dominant design isn’t just a hardware phenomenon – it applies to software and user interfaces as well.

Dominant UI Design for Generative AI Chat Applications

Not long ago, if you interacted with an AI chatbot, it likely appeared as a little icon or chat bubble in the bottom-right corner of a website. I remember those ubiquitous web chat widgets – a small window, limited in size, sitting unobtrusively in a corner. That UI made sense for basic customer support bots or simple Q&A assistants embedded in webpages. It was the legacy of early messenger apps and web chat boxes. 

However, the rise of generative AI chatbots (typified by OpenAI’s ChatGPT in 2022–2023) brought a new use case: extended, ongoing conversations where the AI could produce long answers, remember context, and manage multiple discussion threads. The old bottom-corner chat box was no longer sufficient for this richer interaction.

Pre-generative AI chat interface: limited to brief interactions, these corner widgets lacked the capability for richer, contextual conversations. They were prevalent on websites for customer support before the era of advanced AI.

Enter the full-screen chat interface. ChatGPT’s web UI set the new standard: a large, center-stage chat area where the AI’s responses can be as long as needed, paired with a sidebar on the left listing past conversation threads. Users can easily start a new topic or revisit an old one from that sidebar. In a very short time, this design became ubiquitous for AI chat products. Many other AI chat apps and platforms quickly replicated the “sidebar on the left, chat on the right” layout. Google’s Gemini (formerly Bard), Microsoft’s Bing Chat (in standalone mode), Anthropic’s Claude, and numerous startup offerings all use some variant of a full-page chat UI rather than a tiny widget. When essentially every new AI chatbot adopts a similar interface, it’s clear a dominant design has emerged.

Dominant UI design for generative AI chat applications: a large conversation area allows detailed interactions, while the left sidebar keeps conversations organized and accessible. This full-page chat layout, popularized by ChatGPT, has become the standard for modern AI chatbot interfaces.

Why did this UI become a dominant design for chatbots so quickly? Simply put, it works well for users. It’s a comfortable metaphor – essentially like a messaging app, but optimized for long-form AI outputs and managing multiple conversations. 

The full-screen approach removes the constraints of the tiny widget (which, as one designer noted, offered “limited screen size” and crimped the possibility of rich interaction). By dedicating the entire interface to the conversation, these AI chat apps let users focus, scroll through the history, and easily follow context. The left-hand thread list addresses a practical need: AI chats can diverge into different topics or be reused later, so giving users a way to organize or revisit them improves the experience.

In short, ChatGPT’s interface became the blueprint that others followed, turning it into the dominant design for AI chat UIs. One UX observer wryly noted in early 2025: “each time I see AI tools, I often see a copy of OpenAI’s UX framework — sidebar on the left and chat in the center right.” When an interface paradigm is that widely copied, it’s safe to call it a dominant design.

The Emerging Dominant UI Design for AI Agent Applications

We’re now at the dawn of AI agent applications, and we can already spot a similar convergence in UI design. AI agents are more complex than chatbots – they don’t just talk, they act. This means the user interface has to do two things at once: allow the user to communicate with the agent (give instructions, ask questions, get progress updates) and let the user observe or monitor what the agent is doing. A consensus is rapidly forming around a split-screen UI to meet this need.

In the emerging dominant design for AI agent UIs, the left ~50% of the screen is a scrolling message history and input area (much like a chat interface), and the right ~50% is a dynamic viewer panel that displays the agent’s activities or workspace. The left side is where you and the agent converse: you might give a high-level command (“Book me the highest-rated tour in Rome on TripAdvisor”) and the agent may ask clarifying questions or confirm details via text. The right side then visualizes what the agent is doing as it carries out the task – for example, showing the TripAdvisor website being navigated, forms being filled out, or code being written, depending on the task. This design allows a blend of natural language interaction with direct visual feedback of the AI’s autonomous actions.

Emerging dominant UI design for AI agents in action: the user delegates a task to the AI agent (left panel), and the agent actively executes it on TripAdvisor (right panel). This split-screen approach provides real-time visual feedback and transparency, keeping the user in the loop.
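One way to picture how such a split-screen UI might be wired under the hood is as a single ordered event stream that feeds both panels. The sketch below is purely illustrative — the type names (`AgentEvent`, `splitPanels`) and the event shapes are my own assumptions, not any product’s actual API:

```typescript
// Hypothetical event model for a split-screen agent UI: one ordered
// stream of events drives both panels. Chat events render in the left
// panel; tool/action events render in the right-hand viewer.
type AgentEvent =
  | { kind: "chat"; role: "user" | "agent"; text: string }
  | { kind: "action"; tool: string; detail: string };

// Partition the stream into the two panels' contents.
function splitPanels(events: AgentEvent[]) {
  const chat = events.filter((e) => e.kind === "chat");
  const viewer = events.filter((e) => e.kind === "action");
  return { chat, viewer };
}

// Example: a booking task produces interleaved chat and actions.
const events: AgentEvent[] = [
  { kind: "chat", role: "user", text: "Book the highest-rated tour in Rome" },
  { kind: "action", tool: "browser", detail: "open tripadvisor.com" },
  { kind: "action", tool: "browser", detail: "sort tours by rating" },
  { kind: "chat", role: "agent", text: "Found one. Confirm booking?" },
];

const { chat, viewer } = splitPanels(events);
console.log(chat.length, viewer.length); // 2 chat messages, 2 actions
```

The appeal of a single shared stream is that the conversation and the agent’s visible actions stay in sync by construction — each panel is just a filtered view of the same timeline.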

If you’ve tried OpenAI’s Operator, you’ve seen this two-panel approach in action. In Operator’s interface, your chat with the agent remains on the left, and a live web browser window that the agent controls is on the right. As the agent clicks and scrolls, you literally watch it work. This provides transparency – you can intervene if something looks off, or provide additional input if needed. 

Other agent systems are converging on similar layouts. Manus, for instance, in its demos also shows a console or browser view alongside the task description, letting the user see the agent’s steps. Even open-source projects like “Open Operator” (a community clone of Operator) emphasize combining AI-driven automation with human oversight, often via an interface that lets users see what the agent is doing in real time. The consistency of this pattern across different products suggests it’s quickly solidifying as the dominant UI design for agentic AI applications.

It’s worth noting why this design is likely to stick. Users need confidence when handing control to an AI agent. A well-designed split-screen agent UI inspires trust by keeping the user in the loop. The message history on the left provides a familiar conversational feel (so you can interact with the agent naturally, as if chatting), while the activity viewer on the right provides accountability (so you’re not left guessing what the AI is doing with your request). 

In usability testing, I suspect this kind of interface will strike the best balance between autonomy and user control – much like the full-page chat did for pure text-based chatbots. As more companies roll out agentic AI features, expect the two-pane “chat + live view” layout to become the norm. We may see some variations (e.g. resizable panels, or multi-step workflow visualizers), but the core concept of side-by-side communication and action is poised to be the dominant design for AI agent UIs.

Recommendations for Companies Building AI Agents

For CEOs, CTOs, CIOs, and product leaders looking to build AI agent capabilities — whether for internal automation or customer-facing products — UI design is central to user adoption and trust. Based on these trends, here are my recommendations on how to adapt to this new UI paradigm:

  1. Don’t Reinvent the UI from Scratch: Leverage the emerging dominant design (chat on the left, live action on the right) to capitalize on user familiarity. As with any dominant design, deviating without a good reason can confuse users. A familiar interface lowers the learning curve and increases trust, because users have likely seen similar AI agent UIs elsewhere. Following the pattern set by pioneers like Operator and Manus can accelerate user acceptance of your agent.

  2. Prioritize Transparency and Control: The UI should make it crystal clear what the AI agent is doing at any given time. Always show a trace or history of the agent’s actions (e.g. a log or narrative of steps taken) alongside the results. Provide controls for the user to pause, intervene, or adjust the agent’s actions if necessary. Transparency builds trust – users are far more likely to adopt AI agents if they feel they can monitor and control them when needed.

  3. Iterate with User Feedback: As you design your agent’s interface, conduct user testing focused on understanding and comfort. Observe how users react to the agent doing tasks on their behalf. Do they get anxious not knowing what’s happening, or are they overwhelmed by too much detail? Use these insights to fine-tune what information the UI shows and how it presents it. The dominant design gives you a great starting template, but the details (e.g. how you indicate progress, how and when you prompt the user for input) should be refined through real-world feedback.
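The transparency-and-control recommendation above can be sketched concretely. This is a minimal, hypothetical controller — the class name, method names, and step labels are all illustrative, not any real agent framework’s API — showing the two essentials: every step leaves a visible trace, and a user-triggered pause blocks further actions:

```typescript
// Hypothetical controller sketch: the agent records every step to a
// log the UI can render, and the user can pause before the next step.
class AgentController {
  private paused = false;
  readonly log: string[] = [];

  pause() { this.paused = true; }
  resume() { this.paused = false; }

  // Run one step only if the user hasn't paused; always leave a trace.
  runStep(label: string, action: () => void): boolean {
    if (this.paused) {
      this.log.push(`skipped (paused): ${label}`);
      return false;
    }
    action();
    this.log.push(`done: ${label}`);
    return true;
  }
}

// Example: the user pauses after the first step to intervene.
const ctl = new AgentController();
ctl.runStep("open checkout page", () => {});
ctl.pause();
ctl.runStep("submit payment form", () => {}); // blocked until resumed
console.log(ctl.log);
```

Rendering `ctl.log` in the UI gives users the action trace recommended above; wiring `pause()` to a visible button gives them the intervention point.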

Closing Thoughts

The emergence of a dominant UI design for AI agents is a clear sign that this technology is maturing. It’s reminiscent of how the graphical user interface (GUI), the web browser, or the smartphone interface became standardized foundations upon which further innovation flourished. When users start to expect a certain interface, businesses ignore that expectation at their peril. In 2025 and beyond, an AI application that still presents as a tiny chat bubble may feel as outdated as a flip phone in the age of touchscreens.

I’m convinced we’ll look back on these early days of AI agents as the period when the “agent UI” paradigm was established – much like the era when Windows and Mac popularized the desktop GUI. Companies that recognize and embrace this dominant design early will have an edge. They’ll deliver AI experiences that feel familiar and trustworthy, while competitors who try novel or unintuitive interfaces might find users reluctant to adopt them. 

For any organization building or deploying AI agents, now is the time to align with the emerging standard. Your users – and your future self – will thank you for not making them learn a whole new way to interact with AI.
