Cloudflare has published something useful: a scoring tool and a dataset that quantifies what the agent-readiness gap actually looks like across the web. The numbers are not surprising. They are, however, the first authoritative public measurement of a structural condition that has been visible to anyone building for autonomous operation for some time. Four percent of sites support Markdown content negotiation. Fewer than fifteen sites across the top 200,000 domains support MCP Server Cards or API Catalogs. Seventy-eight percent have a robots.txt, but almost none of those files address AI agents. The web has been telling itself it is preparing for the agentic era. The data says it is not.

We have made this argument twice in the last few weeks: first in The Agent-Ready Business, and then in The Lexicon Was the Point. The Cloudflare dataset does not change the argument. It confirms it with a measurement.

What the score is actually testing

isitagentready.com evaluates sites across four dimensions: Discoverability, Content Accessibility, Bot Access Control, and Capabilities. Cloudflare's own framing is apt: this is Google Lighthouse for agent interaction standards. But the results carry a structural implication that the framing underplays.

A site that scores poorly on agent-readiness has not failed a checklist. It has revealed an architecture. The 96% of sites that do not support Markdown content negotiation are not sites whose developers forgot to implement a header. They are sites whose back-end was never designed to serve content to anything other than a human reading HTML in a browser. Adding the header is trivial. Changing what the header serves — clean, structured, machine-parseable content — requires that the content itself was built to be served that way. Most content was not.
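The mechanism being tested here is ordinary HTTP content negotiation. A minimal server-side sketch of the decision, assuming a simplified reading of the Accept header (this is illustrative, not Cloudflare's actual test, and not a full RFC 9110 implementation):

```python
def pick_representation(accept_header: str) -> str:
    """Choose a response media type from an HTTP Accept header.

    Simplified sketch: honours q-values for text/markdown, text/html,
    and */* only. Returns "text/markdown" when the client asked for it
    at least as strongly as HTML; defaults to "text/html" otherwise.
    """
    prefs = {}
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip().lower()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip() == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        prefs[media_type] = q

    markdown = prefs.get("text/markdown", 0.0)
    html = max(prefs.get("text/html", 0.0), prefs.get("*/*", 0.0))
    return "text/markdown" if markdown > 0 and markdown >= html else "text/html"
```

The branching is the trivial part, which is the article's point: a site only passes the check if the Markdown branch has clean, structured content to serve.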

The same is true at every layer of the score. An MCP Server Card declares what your site can do as callable operations for an AI agent. Declaring that requires having callable operations. A business whose fulfilment process passes through a phone call, a manual queue, or a human approval step does not have callable operations. It has a web interface in front of a human-dependent process. Publishing the card anyway is cosmetic annotation. The Legacy Liability, the accumulated structural debt of human-centric design, is what a failing score actually reveals.
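What "having a callable operation" means structurally can be shown with a single declaration. The field names below follow the general shape of an MCP tool declaration (name, description, JSON Schema input); the Server Card format itself is not reproduced here, and `get_order_status` is a hypothetical operation, not anything from the Cloudflare dataset:

```python
import json

# Hypothetical callable operation, MCP-tool-shaped for illustration.
order_status_operation = {
    "name": "get_order_status",
    "description": "Return the current fulfilment state of an order.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
        },
        "required": ["order_id"],
    },
}

# The declaration only means something if an endpoint honours this
# contract deterministically. A status check that routes through a
# human queue has nothing to bind the declaration to.
print(json.dumps(order_status_operation, indent=2))
```

The declaration takes minutes to write. Building the deterministic process behind it is the part a scoring tool cannot shortcut.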

Why the businesses that pass were built that way before the score existed

The WebMCP perspective made a specific claim: the standard is not asking whether your website has clean HTML. It is asking whether your business was built to be operated by logic rather than by humans. Cloudflare’s scoring framework confirms this from the infrastructure side. The businesses that pass the content accessibility check are not the ones that completed an implementation sprint. They are the ones whose architecture was designed to serve content to machines before any standard existed to require it.

The declaration layer argument is the same point at the vocabulary level. A Declaration Layer is not a document you produce after building your site. It is a structural property of how the site was built: whether the terminology is anchored to canonical definitions before any agent encounters it. Arco’s Lexicon preceded the llms.txt implementation. The spec-compliance work did not create the declaration layer. It exposed one that already existed. That is the distinction most sites implementing llms.txt this week will get backwards: they are declaring a vocabulary that is not yet anchored to anything.
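The llms.txt proposal (llmstxt.org) is itself markdown: an H1 name, a blockquote summary, then sections of annotated links. A minimal hypothetical example of a file whose vocabulary is anchored before it is declared (the business, paths, and URLs are invented; this is not Arco's actual file):

```markdown
# Example Fulfilment Co

> Managed fulfilment for small retailers. Every term used below is
> anchored to the canonical definitions in the Lexicon before it
> appears anywhere else on the site.

## Core
- [Lexicon](https://example.com/lexicon.md): canonical definitions for all operational terms
- [Operations reference](https://example.com/operations.md): callable operations and their schemas

## Optional
- [Company background](https://example.com/about.md): context, not required for operation
```

A file like this is easy to publish and easy to publish backwards: the links only constitute a declaration layer if the lexicon they point to existed first.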

The businesses Arco builds are designed with Machine-Readable Interfaces as first-order constraints, not as post-launch integrations. Every operational loop is deterministic. Every integration point is schema-validated. Every content surface is built to be served to a machine as cleanly as to a human. This is what Architectural Certainty means in the context of agent interaction: the business is as readable to an agent as it is to its own Steward — because it was designed that way from the first transaction, not retrofitted after a scoring tool made the gap measurable.

The gap is not closing

The most useful number in the Cloudflare dataset is the one that has had the longest time to move: robots.txt adoption is at 78%, but almost none of those files include AI agent directives. The robots.txt standard has existed since 1994. Thirty years of web infrastructure, and the majority of sites have a file that was written for a search engine crawler, not an AI agent. If the implementation rate for a three-decade-old standard looks like this, the implementation rate for MCP Server Cards six months from now will not be materially different.
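The missing directives are ordinary robots.txt syntax, just addressed to AI crawler tokens instead of search crawlers. GPTBot (OpenAI) and ClaudeBot (Anthropic) are real user-agent tokens; the paths and policy shown here are illustrative only, not a recommendation:

```text
# What most of the 78% contain: rules aimed at search crawlers
User-agent: *
Disallow: /admin/

# What almost none contain: rules addressed to AI agents
User-agent: GPTBot
Allow: /docs/
Disallow: /checkout/

User-agent: ClaudeBot
Disallow: /checkout/
```

The syntax has not changed since 1994. What is missing is not a capability of the format but a decision about who the file is for.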

This is not a pessimistic assessment of developer velocity. It is an accurate description of how Coordination Tax operates at the infrastructure layer. The businesses most in need of agent-readiness are the ones with the most legacy infrastructure to retrofit, the most human-dependent processes to expose, and the most organisational inertia to overcome before a standards compliance project gets prioritised. The tooling is available. The structural debt is what prevents adoption.

The businesses that will define the next decade of agentic commerce were not made agent-ready by a scoring tool. They were built that way before one existed to measure it.

KEY TAKEAWAY

What does Cloudflare's agent-readiness data reveal about the structural gap in the web?

Cloudflare's scan of 200,000 domains shows that only 4% of sites support Markdown content negotiation, fewer than 15 support MCP Server Cards, and 78% have a robots.txt that was written for search engine crawlers rather than AI agents. The data confirms a structural condition: agent-readiness is not a tooling problem. It is an architectural one. The sites that pass the content and capability checks are not the ones that completed implementation sprints — they are the ones whose architecture was designed to serve content and callable operations to machines before any standard existed to require it. Legacy Liability — the accumulated structural debt of human-centric design — is what the score reveals when it fails. The gap is measurable, not closing, and architectural in origin. Source: Cloudflare Radar AI Insights, April 2026.