
The dominant question in boardrooms right now is “how do we make our people more productive with AI?” It’s the wrong question. It assumes today’s workflows are the right workflows and that AI is a lubricant you squirt into the gears. Bolt a copilot onto a broken process and you get a faster broken process — and a bigger AWS bill.
The data tells this story without much ambiguity. McKinsey’s 2025 State of AI survey found that 88% of organizations now use AI in at least one business function, up from 78% a year earlier. Only about 6% qualify as “AI high performers” — organizations that attribute 5% or more of EBIT to AI use. MIT’s The GenAI Divide puts a starker number on the same gap: 95% of enterprise AI pilots produced no measurable P&L impact in the six months after launch, despite $30–40 billion in enterprise spending. That MIT finding has been fairly critiqued for defining success too narrowly — it ignores efficiency gains, churn reduction, and pipeline velocity. But even the softer reads of the data land in the same place McKinsey does. Mass adoption. Rare value.
The gap between adoption and value is not about model quality; nearly everyone has access to the same models. It's about what adopters ask the models to do. Winners are the ones who stopped lubricating old workflows and rebuilt them. McKinsey tested 31 organizational practices against AI impact, and fundamental workflow redesign came out on top, the strongest single correlate of any factor tested. High performers are roughly three times more likely than their peers to have redesigned workflows end to end.
The right question is not “how do we add AI to what we already do?” It’s “what does this business look like when an intelligent layer is sitting at the center of it?”
Open loops and closed loops
Every business operates on feedback. Decisions get made, actions get taken, outcomes happen — and the feedback that connects those three things is either tight or lossy. That is the only interesting distinction.
An open-loop business has lossy feedback. Status gets summarized up the chain. Customer signals pass through whoever happened to be in the room. Variance gets explained in a meeting three weeks after it happened. Decisions reach decision-makers after the information has already decayed.
A closed-loop business is different in kind, not degree. Every important event produces a structured artifact. Those artifacts flow into a consolidated surface that a central intelligence — agents, models, rules — can read. The intelligence synthesizes patterns, drafts actions, and flags exceptions. Humans judge at decision points and the outcomes of those judgments feed back upward as new artifacts. The loop closes.
The payoff is not efficiency. The payoff is correctness over time. Open-loop businesses drift. Closed-loop businesses self-correct.
If this sounds like business process reengineering in new clothes, the pattern-match is fair. What changed is the substrate. BPR foundered in the '90s because closing the loop required a human in every seat of the feedback path, and humans were expensive; most organizations ended up with redesigned processes and no way to actually run them. What's new is that the loops can now be closed at the cost of tokens rather than headcount.
The three primitives
The architecture of a closed-loop business strips down to three ideas. Every function is an application of them.
Artifact emission. Every important action produces a structured, machine-readable record. Meetings captured and transcribed. Decisions written down in a consistent format. Customer conversations logged against a common schema. Vendor interactions preserved. This is discipline before it is tooling. No amount of AI can read signals that were never emitted in the first place.
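To make "structured, machine-readable record" concrete, here is a minimal sketch of an artifact envelope. The field names (`kind`, `source`, `summary`, `body`) are illustrative assumptions, not a standard; the point is that every event, whatever function emitted it, carries the same envelope.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical artifact schema. Field names are illustrative, not a
# standard; what matters is that every event shares one envelope.
@dataclass
class Artifact:
    kind: str      # "meeting", "decision", "customer_call", ...
    source: str    # emitting system, meeting, or person
    summary: str   # human-readable one-liner
    body: dict     # kind-specific structured payload
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

# A decision, written down once, becomes queryable forever.
decision = Artifact(
    kind="decision",
    source="weekly-exec-sync",
    summary="Sunset the legacy onboarding flow by Q3",
    body={"owner": "COO", "rationale": "duplicates the new flow"},
)
record = json.loads(decision.to_json())
```

A schema this thin is deliberately cheap to adopt; the discipline is in emitting it every time, not in the tooling.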
Queryability. Those artifacts live in one logical surface, indexed and accessible — not scattered across fifteen SaaS silos. A question asked of the business — by a human or an agent — can be answered against the whole corpus, not the seven percent of it that happens to sit in whichever tool you checked first.
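A toy illustration of why one logical surface matters. In practice this would be a search index or warehouse; here an in-memory list stands in for the consolidated corpus, and the artifacts are invented examples.

```python
# One logical surface: artifacts from three different functions,
# all answerable by a single query. Contents are hypothetical.
corpus = [
    {"kind": "ticket", "source": "support",
     "text": "export to CSV fails on large files"},
    {"kind": "call", "source": "sales",
     "text": "prospect asked about CSV export limits"},
    {"kind": "decision", "source": "product",
     "text": "defer CSV export rework to Q3"},
]

def query(corpus, term):
    """Answer against the whole corpus, not one silo."""
    term = term.lower()
    return [a for a in corpus if term in a["text"].lower()]

hits = query(corpus, "csv")
# Three functions emitted signal about the same problem; a
# per-silo search would have surfaced only one of them.
```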

Closed loops. The intelligence layer reads the queryable surface and does something with it — drafts the follow-up email, flags the variance, clusters the complaints, proposes the next action. Humans judge. The outcomes of those judgments become new artifacts, and the loop compounds.
That’s it. Three primitives. Everything that follows is an application of them to a specific function.
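The three primitives compose into one loop, sketched below. `draft_action` is a placeholder for a model or agent call (here a trivial rule); the structure is the point: read the surface, draft, let a human judge, and emit the judgment back as a new artifact.

```python
def draft_action(artifact):
    # Placeholder for an LLM/agent call; a trivial rule stands in.
    if artifact["kind"] == "ticket":
        return f"Draft reply for: {artifact['text']}"
    return None

def close_loop(surface, human_review):
    new_artifacts = []
    for artifact in surface:
        draft = draft_action(artifact)
        if draft is None:
            continue
        # Human judges at the decision point.
        verdict = human_review(draft)
        # The judgment itself becomes a structured artifact.
        new_artifacts.append(
            {"kind": "judgment", "draft": draft, "approved": verdict}
        )
    # The loop closes: outcomes re-enter the queryable surface.
    surface.extend(new_artifacts)
    return new_artifacts

surface = [{"kind": "ticket", "text": "billing page times out"}]
judged = close_loop(surface, human_review=lambda draft: True)
```

Note that the human's verdict is captured, not just applied: the next pass over the surface can learn from it.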

Figure 1. The architecture reads bottom-up: artifact emission is the foundation, queryability is the plumbing, the intelligence layer is the core of the business, and closed loops are how it touches every function.
What this looks like across the business
The same pattern, applied eight different ways.
Sales. Today: optimistic forecasts, half-filled CRM, customer signals filtered through whoever owns the account. Closed loop: every conversation transcribed, signals synthesized across the pipeline, at-risk deals flagged before the human would have caught them, forecast calibrated on what customers actually said rather than what the sales team wants to believe.
Service. Today: tickets handled in isolation, patterns surface months after the damage is done. Closed loop: issues clustered as they happen, draft responses attached to tickets, recurring problems routed to operations and product with context, customer health scores updating continuously.
Operations. Today: weekly standups, monthly status decks, blockers discovered at milestone reviews. Closed loop: committed versus delivered work reconciled automatically, blockers surfaced before they cascade, leadership reviews exceptions instead of status.
Finance. Today: the books close monthly, variance analysis happens weeks after the quarter it describes. Closed loop: anomalies flagged in near-real-time, variance explanations drafted automatically for budget-owner review, forecasts updated on live signal. The monthly close becomes validation, not discovery.
People. Today: performance is anecdotal, engagement is measured twice a year, development is left to whoever remembers to have the conversation. Closed loop: structured signals from 1:1s, peer feedback, and project outcomes feed coherent development recommendations. The caveat here matters more than anywhere else. This function requires deliberate governance. The line between helpful visibility and surveillance is easy to cross, and once crossed, hard to uncross. Build the governance before you build the capability.
IT. Today: reactive service desk, siloed monitoring, Tier 1 ticket volume that swallows the team. Closed loop: unified observability, agent-drafted runbooks, Tier 1 handled by the intelligence layer, IT shifting from ticket-processing to system design and governance.
Product and engineering. Today: a backlog that is half-guessed, a roadmap built on whoever spoke loudest in the last planning meeting. Closed loop: customer signals, usage data, and delivery telemetry flowing into a single surface that grounds prioritization in what is actually happening, not what the loudest PM thinks is happening.
Strategy. Today: decisions made in offsites, built on whatever slides got prepared, evaluated at the next offsite. Closed loop: operational data as the substrate for strategy, strategic bets monitored with the same live signal as operational KPIs, mid-course correction as normal practice rather than exception.
Eight functions. One architecture.
The org shape that emerges
When every function runs as a closed loop, the organization’s shape changes. Not a wholesale flattening — something more specific.
The human-router layer compresses. Roles whose main job is moving information between people who could talk directly — coordinators, status-rollup managers, admin-heavy middle layers — lose their function because the intelligence layer does it better and without latency.
The builder-operator layer expands. More people in every function need to wield AI natively, composing agents and workflows the way previous generations wielded spreadsheets. These aren't developers; they're operators who are fluent in the tooling.
The accountability layer stays and matters more. Judgment, customer ownership, and regulatory responsibility do not disappear. The system amplifies whatever these roles point it at, which means pointing it at the right thing gets disproportionately more valuable.
Leadership reshapes. Your job stops being “get status” and becomes “ask sharper questions of a system that already has the answer, and then decide.” That is a different job. Not harder or easier. Different.
What makes this hard
A candid word on the obstacles, because the piece is worthless if it pretends this is easy.
Artifact discipline is culture, not tooling. Capture must be modeled by leadership or it never sticks. A CEO who writes their decisions down creates a company that writes decisions down. A CEO who doesn’t, doesn’t.
Data lives in silos by default. Making the corpus queryable takes real integration work. Standards like the Model Context Protocol are making this easier, but “easier” is not “free.”
Governance is the load-bearing wall. Who sees what. What agents are allowed to act on without human approval. What the audit trail looks like when an agent did something consequential. Who signs off when the draft the agent produced turns out to be wrong — because in a regulated industry, “the agent decided” is not an answer a regulator will accept. McKinsey’s 2026 AI Trust Maturity research framed the shift well: in the agentic era, organizations must contend not only with systems saying the wrong thing but with systems doing the wrong thing. That is a different governance problem than the one most companies have built for.
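One way to make "what agents are allowed to act on without human approval" concrete is a policy gate in front of every agent action, with a default-deny rule and an audit trail. This is a hedged sketch; the action names, the policy table, and the approval threshold are all illustrative assumptions.

```python
# Illustrative policy: which agent actions run autonomously and
# which require a human sign-off. Names are hypothetical.
POLICY = {
    "draft_email": "autonomous",
    "flag_variance": "autonomous",
    "issue_refund": "needs_approval",
}

audit_log = []

def gate(action, payload, approver=None):
    # Unknown actions default to needing approval (default-deny).
    mode = POLICY.get(action, "needs_approval")
    if mode == "needs_approval":
        approved = bool(approver and approver(action, payload))
    else:
        approved = True
    # Every attempt, allowed or not, lands in the audit trail.
    audit_log.append({"action": action, "mode": mode, "approved": approved})
    return approved

gate("draft_email", {"to": "customer"})
gate("issue_refund", {"amount": 4200},
     approver=lambda action, payload: payload["amount"] < 500)
```

The audit trail is the part regulators will ask for: not just what the agent did, but under which policy and with whose approval.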
Humans resist legibility. People whose value was in being the information router, or in owning private context, will experience this as threat. They are not wrong to. Name it. Handle it directly. Move people into roles where their judgment matters more, not less.
The response to all four is the same: start small. Pick one loop. Prove the model. Expand. Boil-the-ocean programs usually fail, and the failures are sticky — they poison the organization’s appetite for the next attempt.
Where to start
A practical first move, not a twelve-month roadmap.
Pick one high-leverage loop — usually customer or operational, because those are where artifact discipline is closest to the surface and where value shows up fastest. For the next ninety days: install capture discipline against that loop. Consolidate the data surface it draws on. Introduce one well-defined agent that drafts, flags, or summarizes against the loop. Measure two things — cycle time and human hours recovered.
That’s it. The second loop is dramatically easier once the first one is real. The organization has learned what capture discipline feels like, what integration plumbing looks like, and what it means for a human to judge a draft the system produced.
The competitive stakes
The closed-loop business doesn’t just run more efficiently. It knows itself in a way open-loop competitors cannot. It corrects faster, makes better bets, and captures institutional knowledge instead of losing it every time someone leaves. It scales without scaling headcount proportionally.
The window for this is open and the dynamic is compounding. The tools are capable. The patterns are known. The cost curve is favorable. McKinsey’s longitudinal data shows the gap between AI leaders and laggards widening year over year — “winners take most” is becoming the operating dynamic of this transition, not a future possibility.
Businesses that close their loops in the next eighteen to twenty-four months will compound that advantage against competitors still running on status meetings and monthly reports. The ones that treat AI as a productivity upgrade will spend another two years adding features to broken workflows and wondering why the P&L doesn't move.
This is not an AI project. It’s an operating model change. Build it like one.
Sources
- Y Combinator. "The Playbook for Building an AI Native Company" [Video]. YouTube, April 24, 2025. https://www.youtube.com/watch?v=EN7frwQIbKc
- Willison, Simon. "The 'software factory' metaphor for AI-assisted development." Simon Willison's Weblog, February 7, 2026. https://simonwillison.net/2026/Feb/7/software-factory/
- McKinsey & Company. "The State of AI in 2025: Agents, Innovation, and Transformation." November 2025. mckinsey.com
- MIT NANDA. "The GenAI Divide: State of AI in Business 2025." August 2025. Coverage: Fortune.
- McKinsey & Company. "The AI Transformation Manifesto: 12 Themes Driving Growth." April 2026. mckinsey.com
- McKinsey & Company. "State of AI Trust in 2026: Shifting to the Agentic Era." March 2026. mckinsey.com
