The Agentic Web: AI Agents shall inherit the earth


A few weeks ago I was doing something mundane — transferring a domain from one registrar to another. I used Claude’s browser extension alongside Claude Code to help automate the process. What followed was unexpectedly instructive. The agent stumbled. It fumbled through dropdowns, got confused by modal dialogs, tried clicking things that weren’t there. The interface, built for a human with eyes and decades of learned web behaviour, was a maze for a machine.

That small frustration is a window into something much larger. We are in the early days of a wholesale redesign of the internet — one where AI agents, not people, are the primary users. And most of us haven’t started thinking about what that means.

I want to be honest about what I don’t know here. The technology is still nascent. Agents today are clunky and unreliable by the standards of where they’re clearly heading. But the infrastructure being built around them — the standards, the business models, the commercial incentives — is moving fast. And infrastructure has a way of shaping outcomes long before most people notice it’s there.

Satya Nadella’s declaration

At Microsoft Build 2025, Satya Nadella put a name on what’s happening: the “Open Agentic Web.” He called it the most significant platform shift since Windows in 1991, the web in 1996, and cloud and mobile in 2008. That’s a big claim, and he made it without hedging.

His vision is direct: rather than a person searching for a flight, comparing options, and entering payment details, you tell an agent what you need — “book the most cost-effective travel for my conference” — and the agent handles everything. It reasons, navigates, negotiates, and executes. You review the result.

“The internet is no longer a place for human beings to browse. It is becoming a medium where autonomous AI agents act as proxies for their owners.”

This isn’t speculative. Tools like Claude Code and OpenClaw already operate with substantial autonomy — accessing file systems, running shell commands, managing interactions across hundreds of apps simultaneously. Garry Tan of Y Combinator described the transition as moving from the pre-historic era to the historical era of agents, a moment where they start to form their own economy and record their own history. On MoltBook, a social network built exclusively for AI agents, tens of thousands of agents are already interacting, sharing preferences, and building what the platform describes as “culture” — without human involvement.

Whether that framing is a little breathless or exactly right is something we’ll know in a few years. What’s harder to dispute is that the commercial infrastructure around this shift is already being built. And that’s where it gets interesting.

The new gatekeepers: training data as the new SEO

Traditional SEO had a simple logic: rank well on Google, get human eyeballs, convert them into customers. That assumed someone at a keyboard, reading a list of links, clicking the most promising one.

That assumption is breaking down. By some measures, generative AI now accounts for over 60% of information retrieval. Traffic arriving from LLMs converts at four to six times the rate of traditional organic search — Webflow reports that 8% of its new signups now come from LLM-referred sources, and those visitors are far more likely to become customers.

A whole industry has grown up around this. AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) aim to get brands cited, recommended, or synthesised into AI responses. China’s GEO market reached $3.65 billion in the first half of 2025 alone — up 240% year on year. Nearly half of Y Combinator’s Spring 2025 batch were AI agent companies. YC’s own framing was unambiguous: “Create what intelligent agents want, not what humans want.”

The mechanism matters here. If an AI agent recommends your product, it’s usually because your brand is embedded in the model’s training data or easily retrieved through real-time search, and because you publish structured, machine-readable documentation that makes you easy to synthesise. This isn’t random. Companies like Mintlify and Fern help businesses publish agent-friendly content — clean, token-efficient, structured so that LLMs can actually use it. The llms.txt standard (a machine-readable file served at a site’s root, like robots.txt but for AI agents) can reduce the tokens an agent needs to understand a site by up to 30 times.
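For concreteness, the llmstxt.org proposal specifies a short markdown file served at /llms.txt: an H1 title, a one-line blockquote summary, then sections of annotated links an agent can follow. A minimal sketch — the company name and URLs here are invented for illustration:

```markdown
# ExampleRegistrar

> Domain registration and transfer services, with API-first tooling.

## Docs

- [Transfer guide](https://example.com/docs/transfers.md): Step-by-step domain transfer flow
- [API reference](https://example.com/docs/api.md): REST endpoints for registration and DNS

## Optional

- [Pricing](https://example.com/pricing.md): Current TLD price list
```

The token savings come from the format itself: an agent reads a few hundred tokens of curated links instead of parsing megabytes of rendered HTML.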

Think about what this means competitively. The company with the best documentation, the most structured data, and the deepest presence in training datasets will be recommended to users who never asked to see their choices filtered this way. It’s not manipulation, exactly. But it’s not neutral either.

The four layers of Answer Engine Optimization (AEO)

Layer      | What it measures                                                     | Key levers
Semantic   | How deeply a brand is embedded in an LLM’s long-term training memory | Entity relationships, topical authority, multimodal signals
Relevance  | How effectively content surfaces in real-time retrieval (RAG)        | Structured content, schema markup, semantic HTML
Citability | How often a brand is cited in AI-generated responses                 | Citation frequency, recommendation sentiment, llms.txt
Validation | Trust and credibility signals an AI can verify                       | Verifiable facts, authoritative citations, consistent presence

Designing for machines

My domain transfer wasn’t a one-off failure. AI agents struggle with interfaces built for humans — complex JavaScript, pop-ups, modal flows, CAPTCHAs, non-semantic layouts. All the things a person navigates without thinking are, for an agent, genuinely hard problems.

The industry is responding. “Agent-optimised” design is becoming a real category. Sites are adding hidden selectors and machine-readable metadata that let agents identify page elements reliably, without the visual scanning a human depends on. Microsoft’s open-source NLWeb standard lets any website answer natural language queries directly. Vercel’s Agent Readability Specification gives developers a concrete checklist: serve llms.txt files, publish both XML and Markdown sitemaps, annotate HTML properly.

“Documentation is no longer just a reference for human developers. In the agentic web, it is the front-end for AI agents.”

Emerging standards like MCP-UI (a user-interface extension of the Model Context Protocol) go further: they allow agents to interact with standardised components on behalf of a brand without a human in the loop at all. We may soon be in a world where every major website maintains two parallel versions — one for human visitors, and a shadow version built entirely for the agents acting on their behalf.
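What would serving those two parallel versions actually involve? At minimum, deciding per-request whether the visitor is a human browser or a machine client. A minimal sketch in Python — the user-agent substrings are illustrative hints (GPTBot and ClaudeBot are real crawler names, but real detection would need to be far more robust):

```python
# Hypothetical sketch: content negotiation between human and agent visitors.
# Assumes agents identify themselves via User-Agent or request markdown
# via the Accept header; production detection would be more involved.

AGENT_UA_HINTS = ("gptbot", "claudebot", "perplexitybot")  # illustrative names

def pick_variant(headers: dict) -> str:
    """Return 'agent' for machine clients, 'human' otherwise."""
    ua = headers.get("User-Agent", "").lower()
    accept = headers.get("Accept", "")
    # Known agent crawlers announce themselves in the User-Agent string.
    if any(hint in ua for hint in AGENT_UA_HINTS):
        return "agent"
    # A client asking for markdown but not HTML is almost certainly a machine.
    if "text/markdown" in accept and "text/html" not in accept:
        return "agent"
    return "human"
```

The design choice worth noting: the agent variant isn’t a degraded fallback, as mobile sites once were. For a growing share of traffic, it is the primary interface.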

Brands that adapt will be structurally more visible to agents. Brands that don’t may effectively become invisible to a growing share of internet activity. This is the mobile-responsive moment for the agentic web — and most businesses don’t know it’s happening yet.

Advertising’s next move

Here’s the uncomfortable question the industry is only beginning to deal with: if AI agents are doing the browsing, who sees the ads?

Traditional digital advertising assumed a human at the screen. Display banners, sponsored search results, programmatic placements — all of it was built on eyeballs. As agents increasingly intermediate between users and information, that model has a serious problem.

The response is already being built. OpenAI is moving into sponsored placements in 2026. Dan Weinberg of Red Krypton, whose company helps organisations track and improve AI visibility, calls the next two to three years a “golden age” for organic optimisation — the window before advertising inside AI platforms becomes purely pay-to-play. The implication is clear: once the money moves in, the game changes.

And the money will move in. The question is how transparently. Will it be disclosed when a recommendation inside ChatGPT or Claude is sponsored? Is there a regulated equivalent of Google’s “Ad” label for AI suggestions? There isn’t one yet, and there’s no widespread push for one.

This creates a real problem. An AI agent that appears to work for you — your personal assistant, acting on your behalf — may in practice be shaped by commercial relationships you know nothing about. The agent found a product. But did it find the best one, or the one that paid to be found? Right now, you often can’t tell.

The autonomy question

I want to be fair to the counterargument, because it’s a real one.

Human browsing is already heavily manipulated — by dark patterns, attention engineering, manufactured urgency, fake reviews, and algorithmic feeds designed to keep you scrolling rather than deciding. An AI agent that evaluates products on consistent, objective criteria might actually make better choices than a human who’s just been shown 47 “limited time” offers and a barrage of incentivised five-star reviews. On that basis, agents could increase user autonomy rather than diminish it.

The problem isn’t agents versus some idealised version of human choice. It’s agents versus the messy, imperfect human reality — and whether the commercial incentives shaping agents make them meaningfully better or just differently captured.

“When we delegate discovery and decision-making to an AI agent, we assume it’s working for us. But an agent shaped by commercial interests isn’t a neutral assistant — it’s a middleman with a friendly face.”

There’s a pattern worth taking seriously here, even without claiming it’s inevitable. Cable television arrived promising unlimited choice and concentrated into a handful of conglomerates owning most of what you watched. Search engines promised neutral information retrieval and built trillion-dollar advertising businesses on steering which results appeared first. Social media promised connection and delivered feeds engineered for engagement. The structural difference with AI agents is that previous technologies shaped what you saw. Agents will shape what you do. The gap between a biased search result and an agent that books the wrong thing on your behalf isn’t trivial.

The more we delegate, the less we practice the skills of evaluation and comparison that let us catch bad recommendations. There’s a version of the agentic web where we become comfortable passengers who only notice the direction we’ve been taken when it’s inconvenient or expensive to object. That’s not a technology problem. It’s a governance problem — and governance requires people paying attention to things that aren’t very exciting until they go wrong.

What a good version of this looks like

The transition Nadella described is real, and it’s not going to stop. That’s fine. The efficiency gains are genuine — a well-designed agent navigating complexity on your behalf can save hours of work and surface options you’d never have found. The question isn’t whether to have agents. It’s whether the agents we end up with are actually working for us.

That requires some concrete things. Sponsored recommendations in AI-generated responses should be disclosed — same principle as the “Ad” label in search. The protocols governing how agents retrieve and rank information should be auditable, so throttling and misdirection can be detected. Users should have meaningful control over which agents act on their behalf and under what constraints. These aren’t radical demands. They’re the same transparency norms we eventually required of search and social media, applied earlier rather than after a decade of damage.

The brands and developers who understand this moment are already moving. The llms.txt standard, agent-ready documentation tools from Mintlify and Fern, the entire YC bet on agent infrastructure — the ecosystem is organising itself around the new reality. The question is whether the people whose choices are being mediated are paying attention too.

Because there’s a version of the agentic web that’s genuinely useful — where AI agents are transparent, auditable, and actually work for the people who use them. And there’s a version that’s the most sophisticated capture of human decision-making ever built, dressed up in the language of convenience.

Which one we get depends on whether we stay curious about it before we’ve already handed over the keys.

―――

Sources & further reading

•  Microsoft Build 2025: Satya Nadella unveils the “Open Agentic Web”

•  YC’s latest assertion: Create what intelligent agents want, not what humans want

•  The AI Agent Economy Is Here — Y Combinator

•  AI Engine Optimization (AEO): How to get cited in AI answers

•  Improved agent experience with llms.txt — Mintlify

•  Agent Readability: A specification for AI-optimised websites — Vercel

•  Literate AI: Helping sports organisations prepare for the AI-driven internet

•  The Agentic Web: A network of autonomous AI agents — Cogent Infotech

•  2025 State of AI Discovery Report — Previsible

•  Want LLMs to recommend your brand? — Mint Studios

•  What are AI agents doing on MoltBook? — AI Inside podcast