Finding Fault Lines within the Firm
Disruptive AI tools present an opportunity to observe business protocols that usually remain hidden. This article shares what has been revealed so far about tensions between production and authority.
Rafa Fernández is the host of the Protocols for Business SIG, which meets every two weeks on Discord to discuss protocols in business settings, and you’re welcome to join the next session. Link to join at the end of the article.
Seeing Business Protocols
If you ask a typical manager how their company works, they will usually begin with familiar answers. They describe the business model, sketch an organizational chart, or point to operational policies. Often they use these to explain how decisions are approved, how risks are reviewed, and how work is supposed to move from idea to execution. These descriptions are usually sufficient – until something goes repeatedly wrong or keeps going stubbornly right.
When coordination continues to break down or when performance remains unexpectedly stable despite adverse conditions, those accounts begin to feel lacking. Deadlines slip week after week. Review cycles stretch without resolution. Or a team continues to hit targets year after year in an unfavorable market. Managers reach for partial explanations: tooling issues, unusually strong execution, “good ops.”
Individually, these accounts are accurate. But they rarely explain why the same deviations – positive or negative – recur in the same places.
Recurrence is the clue to protocols, which provide a more precise explanation. Protocols do not solve problems or guarantee success. They lock in commitments that stabilize how variation is absorbed over time. In doing so, protocols keep friction within tolerable bounds, allowing organizations to continue operating – and in some cases, to build momentum – despite persistent strain. This is their quiet achievement: protocols hold recurrent tensions within ranges the organization can live with.
For this reason, protocols rarely appear as foregrounded processes. Once established, they recede into habit, embedding themselves in infrastructure, workflows, and expectations. They become “how things are done,” rather than objects of reflection or redesign. As Venkatesh Rao, Program Director of Summer of Protocols, observes:
“Though they arrive slowly, protocols typically install themselves in extraordinarily persistent ways, often turning into seemingly immortal and unconscious parts of our built environment. Their relative invisibility is a second major tell.”
Protocols do not announce themselves. They constrain motion quietly, with little conscious effort, shaping what can happen without drawing attention to themselves. This creates a methodological problem. If protocols are most effective when they are least visible, how should they be studied?
Our group, the Protocols for Business Special Interest Group (SIGP4B), has taken an oblique approach. Rather than attempting to inspect protocols through documentation or formal analysis, we have found success by studying moments when habituation fails in business contexts. Operational reinvention and technological shifts interrupt routine behavior and make protocols visible.
These moments of disruption show up as unresolved approvals, delivery delays, or persistent strategic confusion. In parallel, they appear as periods of sustained advantage: new categories forming, firms holding position through volatility, or business models that continue to function when others falter. Without attention to the underlying pattern, these episodes are often treated as isolated failures or market-cycle anomalies. When viewed through the lens of protocol studies, recurring business challenges in moments of volatility temporarily illuminate how an organization is actually structured.

In geology, fault lines are not identified by close surface inspection. They are discovered when accumulated stress forces the underlying structure to express itself. Our discussions highlighted how protocols behave similarly. Persistent problems and persistent advantages point less to local error or exceptional talent than to the protocols through which pressure concentrates and trade-offs are stabilized. The artifacts left behind by this work – extra reviews, HR policies, new roles, software controls, even new business models – offer a useful vantage point. Still, it is sustained strain and disruption that make the boundaries of protocol structures unmistakable.
Over the past year, our group has been watching one such disruption unfold across many organizations at once. AI software solutions, such as large language models and autonomous agents, are diffusing, often in the shadows, into everyday business operations. How work is generated, evaluated, and sequenced in time is changing rapidly. As a result, protocols that once operated in the background have become visible, creating an opportunity to examine what business protocols are actually doing when placed under pressure.
Protocols under Pressure
The dominant technologies of the past two decades – networked software, mobile devices, cloud infrastructure, remote collaboration – are entering a period of relative maturity. Their organizational effects are well understood, if not fully resolved, especially in today’s top enterprises. AI software, by contrast, has just arrived in the past few years. LLMs, copilots, and early autonomous agents are being adopted unevenly across functions, spawning new markets, but often failing at enterprise-level deployment.
AI is usually discussed in terms of automation, productivity, cost savings, or new markets. Those framings are not wrong, but they miss what makes AI adoption particularly revealing from a protocol perspective. Our SIG has been focusing on the pressure AI places on existing coordination systems by changing the speed and scale at which work is produced: it lowers the marginal cost of many forms of knowledge work while dramatically increasing total output. Drafts, analyses, summaries, and proposals can be generated continuously. This acceleration shows up everywhere at once; tools are being adopted simultaneously in product, marketing, legal, operations, and management functions. Work that once arrived in bounded increments now arrives as a flood, often carrying a significant amount of low-signal algorithmic output (“slop”) mixed in with it.
Many organizational protocols evolved under production paradigms that assumed a particular scale and pace of output. Documentation, like other forms of knowledge work, took time. Decisions moved through sequential approval gates. These constraints shaped how firms organized authority, pacing, and oversight.
Our group repeatedly returned to version control as a protocol case study. Today’s default workflow assumes that meaningful code change is mediated by human reviewers, which stabilizes quality and accountability by gating merges. This implicitly caps how much change a codebase can absorb over time, limiting throughput to the skill and capacity of its reviewers. Large refactors – such as migrating a core codebase to a new language – have historically been slow because review, coordination, and rollback had to proceed at human pace.
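To make that constraint concrete, here is a minimal sketch of such a merge gate. The reviewer roster, approval threshold, and class names are invented for illustration; this is not any particular platform’s configuration.

```python
# Minimal sketch of a human-review merge gate (illustrative only).
# The reviewer roster and approval threshold are assumed parameters.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    author: str
    approvals: set[str] = field(default_factory=set)

class MergeGate:
    def __init__(self, reviewers: set[str], required_approvals: int = 2):
        self.reviewers = reviewers
        self.required_approvals = required_approvals

    def approve(self, change: ChangeRequest, reviewer: str) -> None:
        # Only designated human reviewers (and not the author) may approve.
        if reviewer in self.reviewers and reviewer != change.author:
            change.approvals.add(reviewer)

    def can_merge(self, change: ChangeRequest) -> bool:
        # Throughput is capped by how fast reviewers can grant approvals.
        return len(change.approvals) >= self.required_approvals

gate = MergeGate(reviewers={"ana", "bo", "chen"}, required_approvals=2)
change = ChangeRequest(author="dev1")
gate.approve(change, "ana")
gate.approve(change, "bo")
print(gate.can_merge(change))  # True once two human approvals accumulate
```

The detail that matters is the cap the gate encodes: merge throughput scales with reviewer attention, not with how fast changes can be produced.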
Now, LLMs challenge these assumptions and workflows. The resulting strain mirrors an earlier protocol transition from centralized to distributed version control, which supported continuous deployment at scale. New protocol paradigms are necessary to address AI-enabled speed and scale.
Production constraints become easier to understand when protocols are treated as technologies in their own right. In “Constructing the Evil Twin of AI,” Venkatesh Rao describes protocols as deliberately anti-intelligent systems:
“What happens when you try to deliberately construct something both anti-intelligent and anti-agentic? You get protocols! Protocols lock in commitments, rigidly constrain behavior, and implacably resist the oozy qualities of intelligence…”
Protocols are designed to limit, to configure trade-offs, and to make actions repeatable under uncertainty. They create leverage through hardness, yet can be brittle under certain strains.
It is worth noting that AI compresses some constraints while bypassing others. Output multiplies while, at least for now, reviews and escalations are sometimes sidestepped. Production gains from AI adoption do not lead to smooth acceleration but to uneven load and jerky momentum.
Seen this way, the friction and failures accompanying AI adoption are not primarily symptoms of resistance to change or poor product quality. They are signs of protocol integrity under load. AI introduces fluidity where rigidity previously performed a stabilizing function. Existing coordination systems are being asked to absorb more output, faster, without having been designed for that operating range.
Similar to fault lines, coordination pressure accumulates along established paths. What has been routine begins to feel strained. What has been tolerable begins to feel tight. To understand the consequences of that pressure, it helps to look at where it concentrates first.
Tensions in Business Time
Across the SIG’s discussions, interviews, and readings, a consistent pattern has emerged. Under AI adoption, the first thing to stop working smoothly is unintuitive: time.
This became clear when our group reviewed Blake Scholl’s writing on Boom Supersonic. Here, Scholl distinguishes between at least two clocks operating inside the same organization. The first is the calendar: project timelines, milestones, and delivery dates. The second is what he calls the Slacker Index: the amount of time engineers spend waiting – on inputs, approvals, dependencies, or external constraints – rather than building. Even in well-run, safety-critical organizations, these clocks coexist.
Under stable conditions and in mature industries, this alignment is usually implicit. Engineering velocity, supplier lead times, regulatory review cycles, and internal decision-making rhythms evolve together. At Boom, hardware design, simulation, testing, and supplier manufacturing are paced to one another. Slower clocks constrain faster ones in predictable ways. Waiting is visible, expected, and priced into the system.
As Scholl points out, AI changes the speed and scale of production. Certain forms of work – design iteration, analysis, documentation, internal review – can suddenly accelerate by orders of magnitude. From the perspective of the Slacker Index, local waiting collapses. Yet the calendar will not automatically follow. Supplier lead times remain fixed. Certification processes still unfold at human and institutional speeds. External partners continue to operate on contractual and regulatory time.
The consequence of this AI-enabled acceleration is temporal divergence (a topic explored in depth by SIG member Sachin). Some clocks speed up sharply while others remain unchanged. At Boom, this would mean design teams outrunning suppliers, simulations outrunning manufacturing feedback, or internal decision cycles outrunning the capacity of external partners to respond. The Slacker Index may improve locally – less waiting to produce – but worsen systemically as downstream dependencies fall behind.
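To make the local-versus-systemic distinction concrete, here is a minimal sketch of how such a waiting measure might be computed. The stage names and hours are invented for illustration; this is not Boom’s actual metric.

```python
# Illustrative sketch of a "waiting vs. building" measure (not Boom's actual metric).
# Each record is (stage, hours_waiting, hours_building); the numbers are made up.
task_log = [
    ("design",          2, 38),   # AI-assisted iteration: little local waiting
    ("simulation",      4, 36),
    ("supplier",      320, 40),   # external lead times unchanged
    ("certification", 500, 20),   # institutional clocks unchanged
]

def slacker_index(records):
    """Fraction of total time spent waiting rather than building."""
    waiting = sum(w for _, w, _ in records)
    building = sum(b for _, _, b in records)
    return waiting / (waiting + building)

# Local index for the accelerated stages vs. the system as a whole.
local = slacker_index([r for r in task_log if r[0] in ("design", "simulation")])
systemic = slacker_index(task_log)
print(f"local waiting fraction:    {local:.2f}")    # low: AI removed local waiting
print(f"systemic waiting fraction: {systemic:.2f}") # high: slower clocks dominate
```

Accelerating the stages AI can reach drives the local number down, while the systemic number, dominated by slower external clocks, barely moves.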
AI systems further amplify this effect in two ways. First, they generate outputs without passing through the durations that normally situate work, which is disorienting. Large language models produce analysis and proposals instantly. Work arrives early and in excess of the organization’s ability to absorb it. Knowledge accumulates faster than it can be evaluated, integrated, or acted upon.
Second, LLM-based software can be contextually misaligned. It draws on data that is often years out of date (a model trained through 2024, used in 2026) and produced outside the local business context. From this lens, the recent focus on improving AI product memory seems intuitive. Efforts such as RAG, MCP, skills, and even “undo” prompt features become attempts to realign probabilistic software with business context, tempo, and authority.
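What that realignment amounts to in practice can be sketched simply. The in-memory document store, recency cutoff, and keyword matching below are stand-ins for real retrieval infrastructure; they are assumptions for the example, not any vendor’s implementation.

```python
# Minimal sketch of retrieval-style context realignment (illustrative only).
# The document store, recency cutoff, and prompt format are assumptions.
from datetime import date

internal_docs = [
    {"date": date(2023, 3, 1),   "text": "Refund approvals require a director sign-off."},
    {"date": date(2025, 11, 12), "text": "Refund approvals are delegated to team leads."},
]

def retrieve_current_context(query: str, docs, cutoff: date):
    """Keep only documents recent enough to reflect current policy and
    crudely matching the query (keyword overlap stands in for real search)."""
    terms = set(query.lower().split())
    return [
        d["text"] for d in docs
        if d["date"] >= cutoff and terms & set(d["text"].lower().split())
    ]

def build_prompt(query: str, docs, cutoff: date) -> str:
    # Prepend local, current context so the model answers against the
    # organization's own clock and rules rather than its training data.
    context = "\n".join(retrieve_current_context(query, docs, cutoff))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Who approves refund requests?", internal_docs, date(2025, 1, 1)))
```

The mechanism is modest, but the intent matches the efforts named above: answer against the organization’s current rules and clock rather than against whatever the model absorbed in training.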
Safety-critical organizations like Boom make these dynamics visible precisely because they cannot simply collapse time. Hardware, suppliers, and regulators enforce non-negotiable rhythms. When AI accelerates internal work without moving those external clocks, coordination strain surfaces quickly. Slack accumulates in unfamiliar places, with no protocols available to redistribute it.
When time regimes fall out of alignment, coordination problems and opportunities change form. Delays no longer appear as isolated errors that can be corrected locally. Instead, organizations experience escalating tensions: pressure to act without corresponding capacity to review, decide, or remember.
As with other systems under strain, previously hidden structures become easier to observe once alignment fails. AI adoption exposes how dependent coordination was on the quiet temporal alignment – rhythm – of existing protocols.
Yet work does not stop. Systems do not fail outright. Instead, protocols drift out of phase. Understanding how firms respond requires looking at how they attempt to restore coherence – often without redesigning the underlying structures that produced it.
Enterprise Management, Built to Last
When shared assumptions about time lose coherence, organizations first adapt within current structures. Work continues by absorbing friction rather than resolving its source.
One visible form of this absorption is Boom’s solution: integrate vertically. The critical move was purchasing its own large-scale manufacturing equipment rather than continuing to rely on external suppliers whose lead times dominated the schedule. Supplier queues and fabrication delays had become the governing clock for the entire program, producing a high Slacker Index: engineers were ready to iterate, but progress stalled while waiting on parts. By acquiring the equipment, Boom internalized that bottleneck and converted supplier wait time into an internal, controllable process. This collapsed a multi-month external dependency into a shorter, iterable internal cycle, allowing design, testing, and manufacturing to co-evolve rather than queue sequentially.
Another response was novel translation work. The SIG discussed the fast-growing Forward Deployed Engineer role, which is emerging to mediate between fast-moving demands and slower-moving infrastructure. The task is not to eliminate mismatch but to work across it and leverage it – adjusting scope, translating intent, and negotiating constraints as they appear. This work allows organizations to keep operating even as tempos diverge, and to gain a competitive advantage in the process. At its best, the work defines the operating model, as it does for Palantir and large AI labs like OpenAI and Anthropic.
Other adaptations the SIG encountered took the form of operational formalization: AI usage guidelines, governance documents, digitized ontologies. These measures make previously tacit constraints visible without altering the structures that produced the misalignment. They stabilize behavior at the margin while leaving underlying coordination regimes intact.
As the adaptive load from these new pressures increases, authority structures reassert themselves. Approval gates and prohibitions harden. Data confidentiality clauses are expanded. Hierarchies become more visible. These protocols surface because they are the firm’s fault lines: the structures that stabilize liability, escalation, and accountability when temporal coherence weakens.
Some of these protocols absorb the load effectively. But others begin to strain. In geological systems, stress redistributed after an initial shift often produces secondary movements elsewhere. Organizational responses to temporal misalignment follow a similar pattern. Processes adapt where they can, while pressure seeks relief where they can’t.
This adaptive pattern becomes more consequential as management itself is increasingly parameterized within software, paving a path to programmatic enforcement.
Warning: Brittle Software
Before returning to our initial question on business protocols, it is worth taking a brief detour. Alongside the recent diffusion of AI, the SIG has been considering a quieter shift: management controls themselves have been moving into software.
Approval flows, access rights, version control rules, and incident response procedures are increasingly encoded directly into business software systems. Authority becomes a software admin configuration setting. Expenses are automatically rejected when outside policy. These changes predate AI, but they shape how AI is experienced inside organizations.
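A minimal sketch of what such encoded policy looks like follows; the categories and limits are invented for illustration, not taken from any real system.

```python
# Illustrative sketch of an expense policy encoded in software.
# The limits and categories are invented for the example.
EXPENSE_POLICY = {
    "meals": 75,      # per-claim limit in dollars
    "travel": 1500,
    "software": 500,
}

def review_expense(category: str, amount: float) -> str:
    """Deterministic check: the configuration, not a manager, decides."""
    limit = EXPENSE_POLICY.get(category)
    if limit is None:
        return "rejected: category not in policy"
    if amount > limit:
        return f"rejected: exceeds {category} limit of ${limit}"
    return "approved"

print(review_expense("meals", 120))      # rejected: exceeds meals limit of $75
print(review_expense("conference", 40))  # rejected: category not in policy
```

What used to be a judgment call is now whatever the configuration says; changing the outcome means changing the configuration, not having a conversation.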
This matters because management subsumed in software behaves differently from practiced management. Protocols, when encoded in software, limit discretion to what is authorized and auditable. Management practice, by contrast, consists of whatever is necessary and effective in context. Under stable conditions, the gap between the two – protocol and practice – is manageable. A manager can override a rejected expense or ask a system administrator to update the software configuration.
AI changes the scale and speed at which this gap becomes visible. On one hand, employees and customers bypass formal systems using personal tools, custom scripts, or unregistered agents operating through employee accounts. On the other, software-encoded controls enforce protocol broadly and uniformly, at relatively low cost. A single configuration change can propagate across an organization instantly.
This shift produces a recognizable organizational pattern. Practices that once relied on informal judgment or situational flexibility are increasingly forced into deterministic, software-defined pathways. Work that used to be resolved through conversation – such as asking legal for a quick review or exercising discretion under time pressure – is now mediated through systems that require explicit inputs, permissions, and logs. These systems work especially well in mature, slow-changing businesses, but they can be brittle in volatile conditions such as the disruption now introduced by AI software.
A recent example comes from an airline: a customer relied on the airline’s AI chatbot for information about bereavement fare rules. The chatbot incorrectly stated that refunds could be claimed retroactively, advice that contradicted the airline’s formal policy. When the customer followed the guidance and was denied the refund, the airline argued in court that the chatbot was a separate tool and that customers were responsible for verifying information elsewhere on the site.
The court rejected this argument. It ruled that the chatbot was part of the airline’s customer-facing system and that the company was responsible for the commitments it made, regardless of whether those commitments were generated by an AI system. It is tempting to shape the story to focus on LLM shortcomings: “This type of mistake, in which generative AI tools present inaccurate or nonsensical information, is known as AI hallucination.”
What’s more important is that the failure occurred because a time-tested operations efficiency strategy (delegating frontline explanation to an automated system) collided with protocolized legitimacy (formal fare rules, auditability, liability). The business made an assumption that explanatory labor could be delegated to probabilistic software, decoupling it from the protocols that confer legitimacy and liability. The chatbot accelerated response and reduced staffing load, but it was not strictly aligned to the same review, approval, and accountability protocols that governed pricing policy. At sufficient scale, that gap became visible.
Understanding this trend increases the diagnostic value of the current moment. AI adoption, like many previous “digital transformations” since the advent of the internet and before, highlights where management has already been formalized, where practice has been carrying hidden load, and where current protocols are breaking.
With that context in view, it becomes easier to return to the original methodological question: how AI, not framed as a solution but as a protocol disturbance, makes the underlying structure of coordination inside firms visible.
Strange New Business
Since the summer, our group has treated AI adoption and similar business disruptions as an observational lens rather than an optimization target. This lens supports our SIG’s goal of researching, evaluating, and finding opportunities to improve business protocols under pressure. AI is useful here because it is widespread, fast-moving, and poorly aligned with existing organizational rhythms.
This approach reflects a broader pattern in protocol studies. As Venkatesh Rao notes in Strange New Rules, new coordination regimes often feel unfamiliar at first, then become banal. Over time, they stabilize; eventually, they disappear into habit. What becomes difficult is not living with protocols, but noticing and refactoring them.
AI has interrupted that disappearance. It has surfaced assumptions about time, authority, and review that were previously implicit. In short, it has made visible where management is already encoded, where practice has been carrying unacknowledged load, and where coordination depends on alignments that no longer hold reliably.
Although our recent work has focused on business management protocols – multisigs, version control, incident response – similar dynamics appear in transaction and production protocols as well, including market auctions and inventory management. Across cases, recurrent confusion, friction, and governance anxiety serve as signals of protocol stress; they point to coordination systems doing more work than they were designed to absorb.
The conditions described here are not temporary. As AI software continues to diffuse unevenly into business operations, the protocols that manage coordination trade-offs will remain under strain – and increasingly visible. Our group treats this moment as an opportunity to observe, compare, and refine how business protocols actually function under load. If you are interested in these topics, or are encountering protocol changes yourself, share your email here and join our next meeting!