The Third Interface: Your Software Needs an Agent Layer
Desktop was the first interface. Mobile was the second. AI agents are the third. Your software needs to serve all three, and the most immediate impact is on your own team.
Your software has a screen interface. It probably has a mobile interface. It doesn’t have an agent interface. That gap is already costing you, and not just in lost customers. It’s costing your own team hours every single day.
Every major platform shift follows the same pattern. Companies that build for the new interface first capture the market. Companies that retrofit lose ground they never recover. We saw it with mobile. We’re seeing it again with AI agents.
But this time there’s a twist. The mobile shift was about reaching customers on new devices. The agent shift is about that, and something more immediate: unlocking your own people. Your sales rep who copies deal notes from the CRM into an email. Your finance team reconciling invoices across three tabs. Your operations lead manually updating project status in four different tools. All of that friction exists because your software can’t talk to an agent.
Gartner projects 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025.[1] That’s one of the steepest adoption curves in enterprise software history. And the impact isn’t theoretical. It’s already showing up inside teams that have figured this out.
The pattern you’ve seen before
Cast your mind back to 2010. The iPhone had been out for three years. Most businesses were still treating mobile as “make the desktop website smaller.” Pinch to zoom. Tiny buttons. Unusable forms.
The companies that won didn’t port their desktop experience to mobile. Instagram, Uber, and WhatsApp built for mobile first and let the desktop follow.
The same mistake is happening right now with AI agents. “We’ll just add a chatbot” is the 2026 equivalent of “we’ll just make the website responsive.” It fundamentally misunderstands the shift.
There’s a layer to this that the mobile era didn’t have. Mobile changed how customers reached you. Agents change how everyone works with your software, including your own staff. When mobile hit, your sales team didn’t suddenly become more productive because your website was responsive. But when your CRM has an agent interface, your sales team can plug it into Claude or Copilot and let the agent handle the admin that eats their day.
An agent doesn’t browse your website. It doesn’t click buttons. It doesn’t read your marketing copy. It connects, authenticates, executes, and moves on. If your product can’t be parsed, authenticated, and executed by an agent, you’re invisible to the fastest-growing software distribution channel. And your own team can’t wire it into the AI tools they’re already starting to use.
Andrej Karpathy put it simply in March 2026: “Build. For. Agents.”
Three interfaces, not one
Here’s how to think about it. Your software now needs to serve three distinct types of user:
Interface 1: Screen (Desktop)
The original. A human sits at a desk, uses a mouse and keyboard, navigates menus and dashboards. This is where most business software still lives.

Interface 2: Mobile
The 2010s shift. Same human, smaller screen, touch input. This forced responsive design, simplified workflows, and mobile-first thinking. Companies that treated mobile as an afterthought lost to companies that treated it as the primary experience.

Interface 3: Agent (AI)
The 2026 shift. Not a human at all. An AI agent acting on behalf of a human. It doesn’t need a visual interface. It needs structured data, clean APIs, machine-readable documentation, and predictable authentication. It needs to understand what your software does, what actions are available, and how to call them.
The critical insight: agents don’t replace screens or mobile. They add a third layer. Your human users still need a screen. Your mobile users still need an app. But increasingly, the first interaction with your product will come from an agent, not a person. Sometimes that agent belongs to a prospective customer. Sometimes it belongs to your own accounts manager, trying to pull a report without logging into three systems.
What agent-native actually means
Dan Shipper at Every.to laid out five principles of agent-native design.[2] Every one of them matters for internal use just as much as external distribution:
- State persists across sessions. Agents maintain context between interactions, like a colleague who remembers what you discussed yesterday. Your HR team’s agent remembers which candidates are at which stage without anyone updating a spreadsheet.
- Agents run the inner loop. They gather context, plan, implement, validate, and submit. Not a single prompt-response. An entire workflow. Your finance team’s agent doesn’t just look up an invoice. It reconciles it, flags discrepancies, and drafts the follow-up email.
- Improve without shipping code. Update prompts and instructions, and every user benefits immediately. No deployment. No release cycle.
- Everything accessible as an API. Every feature callable programmatically. If it exists in a menu, it exists as an endpoint. This is the hinge point. If your project management tool has a “change status” button but no API endpoint for it, no agent (internal or external) can automate that step.
- Agents as users. Other agents interact with your product the same way humans do. Your customer’s agent can use your software the same way your customer does. Your employee’s agent can use your internal tools the same way your employee does.
That fifth point is the one most businesses miss. It cuts both ways. Externally, your customer’s AI assistant needs to be able to use your product, or they’ll switch to a competitor whose product their agent can use. Internally, your team’s AI assistant needs to be able to use your tools, or they’re stuck doing manually what their competitors’ teams are automating.
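The “everything accessible as an API” principle can be sketched in a few lines. This is an illustrative pattern, not any particular product’s code, and the action names and handlers are hypothetical: every action a user can take in the UI is registered as a named, programmatically callable operation that an agent can discover and invoke.

```python
# Illustrative sketch: every UI action is also a named, callable operation.
# The action name and handler below are hypothetical, not a real product's API.

from typing import Any, Callable, Dict

ACTIONS: Dict[str, Callable[..., Any]] = {}

def action(name: str):
    """Register a handler so agents can discover and call it by name."""
    def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
        ACTIONS[name] = fn
        return fn
    return decorator

@action("project.change_status")
def change_status(project_id: str, status: str) -> dict:
    # A real system would update the database; here we just echo the change.
    return {"project_id": project_id, "status": status}

def list_actions() -> list:
    """What an agent queries to learn the available operations."""
    return sorted(ACTIONS)

def invoke(name: str, **kwargs: Any) -> Any:
    """Single programmatic entry point: if it's in a menu, it's callable here."""
    return ACTIONS[name](**kwargs)
```

Whether the registry is exposed over REST, GraphQL, or MCP matters less than the invariant it enforces: no button without a corresponding endpoint.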
The protocol stack is already here
This isn’t theoretical. The infrastructure is being built right now, and it’s moving fast.
MCP (Model Context Protocol) is the emerging standard for how agents call external tools and APIs. Launched by Anthropic in November 2024, it hit 97 million monthly SDK downloads within its first year. Over 10,000 active MCP servers exist. It’s now governed by the Linux Foundation, with OpenAI, Google, Microsoft, AWS, and Cloudflare as members.[3]
Think of MCP as the HTTP of the agent era. It’s how agents talk to your software. When your sales rep plugs their CRM into Claude, MCP is the protocol that makes that connection work.
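Under the hood, MCP is JSON-RPC 2.0 over a transport such as stdio or HTTP, so a tool invocation is just a small structured message. A minimal sketch of what an agent sends; the tool name and arguments here are hypothetical:

```python
import json

# What an agent sends to an MCP server to invoke a tool (JSON-RPC 2.0).
# The tool name and arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm.get_deal_notes",           # hypothetical tool name
        "arguments": {"deal_id": "D-1042"},      # hypothetical arguments
    },
}

wire_message = json.dumps(request)
print(wire_message)
```

The server replies with a matching JSON-RPC response containing the tool’s result, which is why any MCP-compatible client can drive any MCP server without custom integration work.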
AGENTS.md is the machine-readable alternative to README files. Already adopted by over 60,000 open-source projects.[4] It tells agents what your software does, how to use it, and what to avoid. OpenAI, Google, GitHub Copilot, and Cursor all read it.
AG-UI handles the agent-to-user interaction. When an agent needs to show something to a human, AG-UI standardises this with event-based, real-time protocols. Google, Amazon, Microsoft, and LangChain are all using it.[5]
A2UI is Google’s spec for agents sending rich UIs across trust boundaries.[6] The agent describes what it wants to show. The client decides how to render it using native components.
These four standards form the agent interface stack. They’re complementary. MCP connects agents to tools. AGENTS.md tells agents what your software does and how to use it. AG-UI connects agents to frontends. A2UI lets agents generate UIs. Together they create the plumbing that lets anyone, your customers or your own team, wire your software into an AI workflow.
Who’s already building for this
The list is growing weekly:
Stripe ships an official MCP server.[7] Agents can create charges, manage customers, process refunds, and search Stripe’s documentation, all programmatically. A finance team using Claude can now query Stripe directly without logging into the dashboard.
Notion hosts an MCP server with full workspace access.[8] AI tools read and write pages through OAuth. Their engineering team published a detailed teardown of how they built it for agent consumption. Any team using Notion as their knowledge base can now have an agent that searches, updates, and organises it.
Cloudflare built agent support directly into their Agents SDK. Any agent running on Cloudflare can connect to external MCP servers with built-in auth.
Postman rebuilt their entire product in March 2026 as “AI-native and built for the agentic era.”[9] Agent Mode can diagnose test failures and propose fixes.
These aren’t startups experimenting. These are infrastructure companies making a bet that agent interfaces are as fundamental as mobile interfaces were. And the pattern is clear: the companies shipping agent interfaces are the ones whose customers (and whose customers’ teams) can actually use AI effectively.
What this means for UK SMEs
If you build software, even internal tools, you need to think about this now. The external reason is obvious: your competitors’ customers will increasingly have agents that evaluate, compare, and interact with software on their behalf.
The internal reason is more urgent. Your own people are already using AI assistants. ChatGPT, Claude, Copilot. Right now those tools are cut off from your business systems. Your team uses AI to draft emails and summarise documents, but the AI can’t touch the CRM, the project tracker, the accounting software, or the HR platform. All the high-value automation is locked behind tools that don’t have agent interfaces.
The companies that fix this first get a compounding advantage. Their team members become meaningfully more productive because their AI assistants can actually do things, not just talk about doing things.
Three practical steps:
1. Make your API the product, not an afterthought. If your software has features that only work through a GUI, those features are invisible to agents. Every action a human can take should be available as an API call. Start with the workflows your own team repeats most often. That’s where agent access pays off fastest.
2. Add an AGENTS.md to your repository. It takes 30 minutes. Describe what your software does, what endpoints are available, what agents should avoid, and what authentication looks like. 60,000 projects have already done this. It’s a markdown file, not a six-month project.
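A starting point might look like the sketch below. The service name, endpoints, scopes, and limits are placeholders to adapt, not part of the AGENTS.md standard itself:

```markdown
# AGENTS.md

## What this service does
Acme Tracker is a project-management API. Agents can read and update
project status, tasks, and comments.

## Authentication
OAuth 2.0 client credentials. Request scope `tracker.write` for mutations.

## Key endpoints
- `GET  /api/v1/projects` — list projects
- `POST /api/v1/projects/{id}/status` — change a project's status

## Things to avoid
- Do not poll faster than once per 30 seconds.
- Never delete projects without explicit human confirmation.
```

Plain markdown is the point: the same file is readable by a human skimming the repo and parseable by an agent deciding how to call you.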
3. Consider an MCP server for your product. If you already have a clean API, wrapping it in an MCP server is straightforward. Tools like Speakeasy and Gravitee can auto-convert an OpenAPI spec into an MCP server. This makes your product instantly accessible to any MCP-compatible agent, whether that agent belongs to a customer evaluating your product or an employee on your team trying to get their job done faster.
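The conversion those tools automate is conceptually simple: each OpenAPI operation becomes one agent-callable tool definition. A toy sketch of the mapping, using a hypothetical spec fragment rather than any real product’s API:

```python
# Toy sketch of the OpenAPI -> tool-definition mapping that spec-to-MCP
# converters automate. The spec fragment below is hypothetical.

spec = {
    "paths": {
        "/projects": {
            "get": {"operationId": "listProjects",
                    "summary": "List all projects"},
        },
        "/projects/{id}/status": {
            "post": {"operationId": "changeStatus",
                     "summary": "Change a project's status"},
        },
    }
}

def openapi_to_tools(spec: dict) -> list:
    """Turn each path+method pair into a tool an agent can discover."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            tools.append({
                "name": op["operationId"],
                "description": op.get("summary", ""),
                "route": f"{method.upper()} {path}",
            })
    return tools

for tool in openapi_to_tools(spec):
    print(tool["name"], "->", tool["route"])
```

Real converters also carry across parameter schemas and auth requirements, but the shape is the same: a clean, well-described API is already most of the way to an MCP server.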
The cost of waiting
Every channel shift in software has produced the same outcome. The companies that built for the new interface first won the decade.
Microsoft won retail by getting Windows pre-installed. Google won the web by owning search. Apple won mobile by owning the App Store. The companies building agent interfaces today are positioning themselves for the same advantage.
I’m not suggesting you drop everything and rebuild your product around agents. But I am suggesting you start thinking about agents as a user persona. Two user personas, in fact: the external agent evaluating your product for a prospect, and the internal agent your own team wants to plug into their daily workflow.
What does your product look like to an AI that can’t see a screen? And what does your team’s productivity look like when their AI assistant is locked out of the tools they use every day?
Those agents are already here. The question is whether your software can talk to them.
Footnotes
1. Gartner, “40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026” (August 2025). https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025
2. Dan Shipper, “Agent-Native Architectures”, Every.to (January 2026). https://every.to/guides/agent-native
3. Anthropic, “Model Context Protocol” (November 2024). https://modelcontextprotocol.io. See also: The New Stack, “Why the Model Context Protocol Won” (December 2025). https://thenewstack.io/why-the-model-context-protocol-won/
4. AGENTS.md Standard. https://agents.md/
5. AG-UI Protocol. https://docs.ag-ui.com/introduction
6. Google Developers Blog, “Introducing A2UI” (December 2025). https://developers.googleblog.com/introducing-a2ui-an-open-project-for-agent-driven-interfaces/
7. Stripe MCP Documentation. https://docs.stripe.com/mcp
8. Notion MCP Documentation. https://developers.notion.com/docs/mcp
9. Postman Blog, “The New Postman is Here” (March 2026). https://blog.postman.com/new-postman-is-here/
Frequently asked questions
What is the third interface in software design?
The article defines three interface generations: desktop (first), mobile (second), and AI agents (third). Most software is designed for humans interacting via screen and keyboard. Agent-native design means the software also serves AI agents operating autonomously via APIs and protocols rather than graphical interfaces.
What is MCP and why is it significant for software businesses?
MCP (Model Context Protocol) is an open standard for connecting AI agents to external tools and data sources. With 97 million SDK downloads and over 10,000 MCP servers already available, companies including Stripe, Notion, and Cloudflare have shipped MCP support, making their services accessible to AI agents without custom integration work.
How should a software business prepare for the agent interface?
The article recommends three steps: treat your API as the primary product (agents cannot use graphical interfaces), add an AGENTS.md file explaining how AI should interact with your service, and evaluate whether an MCP server is worth building for your category. Companies that do this first capture agent-native users before competitors retrofit it later.
What is AGENTS.md?
AGENTS.md is a machine-readable documentation file, analogous to README.md for humans, that tells AI agents how to interact with your software. It describes available endpoints, required parameters, rate limits, and expected behaviours in a format that AI systems can parse and act on without human interpretation.