The businesses with agents in production are not the ones with the best models or the most sophisticated infrastructure. They are the ones that documented their operational processes with enough precision that an agent could execute them. That documentation is not a technical artefact. It is the Operational Ontology — the canonical vocabulary of every concept, exception, escalation condition, and success criterion that governs a task — and its absence is the reason that, despite almost every enterprise experimenting with agents, almost none are running them in production. The infrastructure gap closed. The design discipline gap did not.
What Vercel built and what the data shows
Tom Occhino, CPO of Vercel, opened the SaaStr Deploy Summit 2026 with four production deployments that together constitute the clearest commercial evidence yet of what the Arco argument has been predicting. A customer support agent now handles 93% of Vercel’s customer inquiries without human intervention — a 7% Escalation Rate across the full support function. A content agent converts Slack threads into blog post drafts and now generates 96% of Vercel’s marketing output, with a human reviewing and publishing rather than creating from scratch. A lead-qualification agent replaced what Occhino described as a large SDR team doing manual, repetitive work — with every team member redeployed into higher-value roles rather than eliminated.
The fourth deployment, Deal One, is the most structurally instructive. It is a multi-layer sales intelligence agent that ingests every sales call the moment it ends (via webhook to a Vercel Function) and runs the transcript through a 10-step durable pipeline. That pipeline generates structured summaries covering topics, objections, deal stage, sentiment, and stakeholders, then embeds them into a hybrid vector and keyword index alongside atomic objection records and cross-deal patterns. The entire knowledge base is exposed via a secure MCP server, and the intelligence surfaces to sales reps in Slack, exactly where they already work. The rep @-mentions the agent, asks what the top objections were or why a deal slipped, and receives a cited answer linked to specific moments in the call transcript. This is not a chatbot. It is a Context Architecture built to compound with every sales cycle.
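The shape of that flow can be sketched in miniature. The TypeScript below is a hedged, in-memory illustration, not Vercel's implementation: all names are hypothetical, a keyword filter stands in for the LLM summarization step, and a plain keyword index stands in for the hybrid vector and keyword store and the MCP server. What it preserves is the structural point: every ingested call adds atomic objection records that carry provenance, so a query returns a cited answer linked to a specific moment in a specific call.

```typescript
// Hypothetical, simplified sketch of the Deal One flow described above.
// Real pipeline: durable 10-step workflow, embeddings, MCP server.
// Sketch: keyword matching and an in-memory array.

interface CallTranscript {
  callId: string;
  segments: { timestamp: string; speaker: string; text: string }[];
}

interface ObjectionRecord {
  callId: string;
  timestamp: string; // citation back to the moment in the call
  text: string;
}

// Stand-in for the summarization step: extract atomic objection records.
// A real pipeline would call a model here; we keyword-match for the sketch.
function extractObjections(t: CallTranscript): ObjectionRecord[] {
  return t.segments
    .filter((s) => /too expensive|pricing|competitor|security review/i.test(s.text))
    .map((s) => ({ callId: t.callId, timestamp: s.timestamp, text: s.text }));
}

// Stand-in for the hybrid index: keyword-only, no embeddings.
class ObjectionIndex {
  private records: ObjectionRecord[] = [];

  // Called from the webhook handler the moment a call ends;
  // the index compounds with every ingested call.
  ingest(t: CallTranscript): void {
    this.records.push(...extractObjections(t));
  }

  // Returns cited answers: matching text plus call/timestamp provenance.
  query(term: string): ObjectionRecord[] {
    return this.records.filter((r) =>
      r.text.toLowerCase().includes(term.toLowerCase())
    );
  }
}

// Usage: one call ingested, one query answered with provenance.
const index = new ObjectionIndex();
index.ingest({
  callId: "call-001",
  segments: [
    { timestamp: "00:12:04", speaker: "buyer", text: "Honestly, this feels too expensive for our team size." },
    { timestamp: "00:31:40", speaker: "buyer", text: "We still need a security review before legal signs off." },
  ],
});

const hits = index.query("expensive");
console.log(hits[0].callId, hits[0].timestamp);
```

The design choice the sketch makes visible is that provenance is stored at write time, per atomic record, rather than reconstructed at query time; that is what makes the "cited answer linked to specific moments in the call transcript" behavior cheap.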
The structural argument this confirms
In Memo #04 — Why We Don’t Build MVPs, we argued that the discipline of documenting a process with enough precision to encode it into deterministic infrastructure is the same discipline that makes it autonomous-ready. An agent is only as good as the Operational Ontology it operates on: the canonical definitions of every concept it will encounter, the Exception Architecture that specifies what it resolves autonomously and what it escalates, and the success criteria that determine whether a given execution was correct. Occhino made this explicit in his advice for first-time agent builders: “Document exactly how that work should be done. An agent is only as good as the process it’s given. If you can’t write down what a human should be doing for that process, an agent won’t be able to do it either.” This is not a tooling observation. It is the precondition for Full-System Design.
The data confirms the Revenue-to-Headcount Advantage at its most direct. Vercel’s SDR team was replaced not by a software purchase but by a purpose-built agent whose operational process was documented, encoded, and deployed as proprietary infrastructure. The team members redeployed are the Stewards of the next capability — the pattern the Stewardship Model predicts. The 93% support resolution rate represents Headcount Decoupling at the service function: the support team’s cognitive load is now concentrated on the 7% of conditions that genuinely require human judgment, not the 93% that do not. This is the Intervention Threshold functioning as designed — and it is functioning at a 1:14 ratio for T1 support tasks, compared to Arco’s 1:100 target for mature autonomous systems, which confirms Vercel is early in the compounding cycle, not at its ceiling.
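The arithmetic behind those ratios is worth making explicit: an escalation rate of 7% means roughly one human intervention per 14 executions, and Arco's 1:100 target corresponds to a 1% escalation rate. A minimal sketch of that conversion (the function name is ours, not a term from the memo):

```typescript
// Converts an escalation rate into the "1 : N" Intervention Threshold
// ratio used in the text: one human intervention per N executions.
function interventionRatio(escalationRate: number): number {
  return 1 / escalationRate;
}

const vercelT1 = interventionRatio(0.07);   // ≈ 14.3, the "1:14" figure
const arcoTarget = interventionRatio(0.01); // = 100, the "1:100" target
console.log(Math.round(vercelT1), Math.round(arcoTarget));
```

Moving from 1:14 to 1:100 is therefore not an incremental tuning exercise: it requires cutting the escalation rate by a factor of seven, which is the sense in which Vercel is early in the compounding cycle rather than at its ceiling.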
Occhino’s “agents as a service don’t really work” observation confirms the De-SaaS-ing argument from a different vantage point. A one-size-fits-all agent has the same structural problem as a one-size-fits-all SaaS tool: it was built for a generic human operator, not for the specific operational vocabulary of a given business. Every enterprise that attempts to deploy an off-the-shelf agent onto a process it has never documented precisely will encounter the same failure: the agent cannot resolve the ambiguity that human workers resolved through judgment, and the Knowledge Debt accumulates from the first execution cycle. The businesses that have agents in production — Vercel included — built proprietary operational infrastructure whose Context Architecture is not available to any competitor that tries to purchase the same capability off the shelf.
Deal One’s architecture is also a commercial implementation of the Operational Ledger argument: a structured, continuously updated knowledge base that agents query at execution time, compounding with every sales call, every encoded objection, every cross-deal pattern. As documented in Actively and the Operational Ledger Argument, the same mechanism at the GTM layer produced Samsara’s 2x conversion lift from per-account context compounding. Vercel’s Deal One and Actively’s per-account agents are independent commercial confirmations of the same structural prediction: compounding context produces structurally different operational outcomes than agents that reset between sessions.
The Operator's Verdict
Almost nobody is running agents in production despite almost everyone experimenting with them. The gap is not the model. It is not the infrastructure. It is the absence of an Operational Ontology precise enough for an agent to execute without improvising. The businesses that close that gap first will not just run agents in production.
They will own the Operational Ledger that makes every subsequent cycle cheaper, faster, and harder to replicate. The window to be early is closing. The documentation is where it starts.
KEY TAKEAWAY
What does Vercel’s SaaStr Deploy 2026 data reveal about why most enterprises are failing to run agents in production?
Vercel’s CPO Tom Occhino confirmed at SaaStr Deploy 2026 that the gap between experimenting with agents and running them in production is not an infrastructure gap — it is an Operational Ontology gap. Vercel’s production deployments (93% support resolution rate, a replaced SDR team, 96% AI-generated marketing content, and the Deal One multi-layer sales intelligence agent) all share one precondition: the operational process was documented with enough precision to specify the exception conditions, escalation triggers, and success criteria before any code was written. Enterprises without this documentation deploy agents that fail where human workers succeeded, because human workers could resolve ambiguity through judgment while agents require explicit operational vocabulary. Occhino’s advice — “document exactly how that work should be done before building an agent to do it” — is the practitioner statement of the Operational Ontology requirement. The businesses that compound from the agent era are the ones that built this documentation as proprietary infrastructure rather than attempting to purchase agent capability as a service.
