In this issue: Latest happenings in our Special Interest Group in Formal Protocol Theory; report on the Protocol Foundations Workshop; the Tan Paper project; meditation on protocols as the evil twins of AI; and two new writing bounties.
A coffee vending machine is a device for turning coins into cups of coffee. A computer scientist is a device that turns coffee into papers. What can happen, and perhaps more importantly, cannot or should not happen, when you connect the two devices?
That’s the whimsical premise of a paper we read as part of a track of discussions on process calculi, curated by my co-host of SIGFPT, the Special Interest Group in Formal Protocol Theory, which meets every other Friday at 10AM Pacific (1700 UTC) on our Discord. The next call is this Friday, October 17, where we’ll be discussing “motion languages” developed in robotics, based on a formalism known as maneuver automata, and how they might be used to describe protocols. If these sorts of discussions interest you, join us!

If it sounds like we’re casting a wide net here, it’s because we are. The challenge of modeling and analyzing protocols in the broadest sense is a daunting one, and we’re still in discovery/cartography mode, four months in, gathering tools, techniques, and motivating examples from all over the place. Since my previous update (What is Formal Protocol Theory, August 20), things have been moving along briskly.
While the material we tackle can sometimes be intimidatingly mathematical, don’t let that scare you off! We make aggressive use of LLM ELI5ing, toy examples, paper-napkin diagramming, and simple back-of-envelope calculations, to make our discussions accessible even to participants from pure humanities backgrounds.
Whatever your technical aptitudes, there’s never been a better time to vibe-math your way into previously inaccessible territories. All you need is curiosity and irreverence.
Foundations Workshop
A major highlight of the last eight weeks was the Protocol Foundations Workshop held in September. More than 30 interested folks from a variety of technical and non-technical backgrounds gathered over four Zoom sessions across two days, with brisk activity in between the sessions on our Discord, to map out the contours of this emerging field.
We had some pretty intense and wide-ranging lightning talk sessions (see screenshot below from our Roam graph for a sense of the proceedings), and then attempted to synthesize the discussions into a coherent map of the territory, and an agenda for continuing discussions and work.
Tan Paper Project
One of the follow-through activities that has shaped up since the workshop is a collaborative effort to write a “tan paper” (our term for something between a non-technical white paper and a technical yellow paper), laying out a roadmap for research in formal protocol theory. This effort is just getting started, and will likely unfold over the next few months. So if you’re interested in contributing, this would be a good time to jump in. If our cunning plans unfold as we hope, this tan paper will lay the groundwork and define the agenda for an exciting new technical field, and trigger a flood of new research and technical tooling.
Currently, the project is shaping up to take a “protocol failure and repair” view of the domain, and to position itself as a bridge between engineering and sociological perspectives.
We will also draw from older fields like cybernetics, systems theory, science and technology studies (STS), and institutionalism, while remaining oriented towards the defining “true north” elements of protocol theory – observability, verifiability, temporality, tight grounding in empiricism, and real infrastructures – that are often under-theorized in adjacent fields. We hope to outline a “basket of protocols” approach to formalization, scoping in both a wide range of protocols (from handshakes and traffic to blockchains, AI regulation, and climate) and a wide range of tools and frameworks.
Some of the “ingredients” that might show up in the paper:
Automata theory
Process calculi
Decentralized stochastic control
Game theory
Statistical physics type models
Category theory
Computational modeling and simulation
Temporality and memory models
Cryptography
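To give a flavor of the first two ingredients on this list, here is a toy sketch (all states and events invented for illustration, not drawn from any paper) of the coffee vending machine from the opening, modeled as a finite automaton. The protocol, on this view, is exactly the set of event traces the machine permits.

```python
# Toy finite automaton for the coffee vending machine: a protocol as
# the set of event traces the machine allows. States and events here
# are illustrative inventions.

TRANSITIONS = {
    ("idle", "coin"): "paid",
    ("paid", "button"): "brewing",
    ("brewing", "coffee"): "idle",
}

def accepts(trace, state="idle"):
    """Return True if the event sequence is a legal run of the protocol."""
    for event in trace:
        key = (state, event)
        if key not in TRANSITIONS:
            return False  # the protocol forbids this event in this state
        state = TRANSITIONS[key]
    return state == "idle"  # a complete run ends back at idle

print(accepts(["coin", "button", "coffee"]))  # legal run -> True
print(accepts(["button", "coffee"]))          # no coin first -> False
```

Process calculi start from roughly this picture and add composition: what happens when you connect this machine to another process (say, a paper-writing computer scientist) and let them synchronize on shared events.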
It’s still very early in the process, so we don’t quite know where this will end up. If you’re impatient to learn more, you’ll just have to attend our sessions.
Mission: Evil Twinning
Reviewing the progress of our study group got me thinking about why getting this field going matters in 2025, and it struck me that the biggest reason is to develop an expansive and holistic view of technological society that is in some sense the natural antithesis to the one offered by the ascendant AI view.
If you’ll excuse an outrageously mixed metaphor, AI is the irresistible force eating everything, while protocols, especially modern computationally mediated ones, are emerging as the immovable objects that absolutely resist being eaten.
So FPT, in some ways, can be understood as the “Evil Twin” field of AI – the art and science of designing “immovable” objects. While I normally dislike negative and reactive definitions, in this case it’s not only a very fertile framing, but a poetically appropriate one. And it casts everything we’ve been doing in SIGFPT in a very interesting light.
An evil twin, unlike a nemesis, isn’t a true antipode. What makes for an interesting pair of evil twins is an unsettling combination of extreme similarity and extreme dissimilarity between two entities. The extreme similarities, in this case, are fairly obvious. Both AI and protocols, for example, play a role in “automation” of human functions. Both generate and govern complex behaviors. In typical modern examples, both rely on, and compete over, sophisticated computing capabilities. Both require complex human interfaces and attention to safety. Many actual engineered artifacts necessarily involve both AI and protocol design philosophies (including modern AIs themselves – protocols feature in training recipes, in “alignment” recipes, in the design of guardrails hard and soft, and in the interconnection fabrics of distributed AIs).
But it is the “evil” oppositions that are interesting, especially theoretically, and it’s easiest to see this in terms of the two defining qualities of the “AI view” of technological society: intelligence and agency, which increasingly suffuse everything around us.
Anti-Intelligence and Anti-Agency
“Intelligence,” however you define or design it, tends to make everything it touches fluid, contingent, ambiguous, unstable, and malleable. Intelligence “oozes” everywhere, and into everything, through the tiniest pores. Which is as it should be. That is the great power of intelligence, at once enlivening and corrosive – to root everything in a foundation of systematic doubt; to make it certain that nothing is certain. Everything is subject to movement and change once intelligent attention is directed at it.
The other dimension of AI is agency, the quality in behavior that lends it a restless, proactive, intentional character. A quality that causes endless change and becoming; that leads to willfully transforming and being transformed; that insists on nudging what is towards what could be. That is constantly drawn to the adjacent possible.
What happens when you flip these two qualities? What do you get when you try to deliberately construct something both anti-intelligent and anti-agentic? You get protocols!
Protocols are often deliberately designed to be “dumb,” even if the component elements are the same kinds of very powerful computers that run AI models, and must be artificially dumbed down to serve their functions. Protocols lock in commitments, rigidly constrain behavior, and implacably resist the oozy qualities of intelligence with hardness, immutability, and rigidity. With immovability.
Agency is subtler. In the world of AI, proactive, goal-directed, intentional behavior, within an evolving space of high optionality, is a highly desirable capability in all machines. In the world of protocols, the goal is usually to restrict the range of options, and limit autonomy in goal-directed behaviors. Not only do protocols themselves usually not have “goals” as such (an early finding from our research), they actively constrain the agency of actors operating within their confines, through both designed “artificial nature” type laws that cannot be broken, and incentives that reward or punish to constrain unbridled agency in softer ways.
To borrow a pair of terms from the postmodernists, protocols turn smooth behavior spaces (such as open, unbuilt terrain) into striated behavior spaces (such as a system of roads).
If the modern mantra of AI-inspired “agentic” culture is you can just do things, a mantra for protocol-inspired culture might be fish can’t see water, following David Foster Wallace’s famous speech, This is Water. The holy grail of protocols is perhaps cultural and technological design that silently, inscrutably, and immovably prevents specific classes of actions, to the point where you cannot even conceive of the possibility of going “against protocol.”
HAL 9000, the primary antagonist in 2001: A Space Odyssey, usually thought of as an AI, is perhaps better thought of as a supple intelligence wrapped around an immovable protocol. After all, its most memorable line was “I’m sorry, Dave. I’m afraid I can’t do that.”
Blockchains
Blockchains, the original motivating class of protocols behind this magazine and the Summer of Protocols programs, illustrate the evil-twin character most clearly, with their “code is law” design premise, which brings a cryptographically secured doomsday-machine character to more fluid human understandings of law, incomplete contracting, sovereign exception-making, and so forth. The immutability of blockchains, it appears, can not only resist randomness, humans, and AIs, but might even resist future quantum computers.
Many criticized the earliest blockchain, Bitcoin, based on “Proof of Work,” for embodying willfully engineered stupidity, designed to convert fossil-fuel joules into the most “useless” conceivable computations. Within the crypto world, however, Ethereum is, rather remarkably, criticized for shifting to a “Proof of Stake” system precisely because it is not dumb enough, allowing for (according to the criticism) insufficient rigidity of behavior. For being too useful in some sense (Proof of Stake embodies the sociology of stakes in a system in legible, manipulable ways, while Proof of Work embodies nothing beyond computation for the sake of computation, useless by design).
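To make the “useless by design” point concrete, here is a minimal sketch of the Proof-of-Work idea (a generic illustration, not Bitcoin’s actual consensus code; the difficulty parameter and block data are arbitrary). The only way to find a valid nonce is brute-force search, but verifying one takes a single hash.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce so the block's SHA-256 hash starts with
    `difficulty` hex zeros. The search is useless by design: finding
    the nonce burns compute, yet anyone can verify it with one hash."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work("block 42: alice pays bob")
# Verification is cheap and deterministic:
assert hashlib.sha256(f"block 42: alice pays bob{nonce}".encode()) \
    .hexdigest().startswith("0000")
```

The asymmetry is the point: work that is expensive to produce and trivial to check is exactly the kind of “dumb” immovability a protocol can build on.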
To make matters worse, Ethereum’s vision of a “world computer” (Ethereum is a Turing-complete computer, unlike Bitcoin, which deliberately avoids being Turing-complete), is viewed by many as being against the true spirit of blockchains. Too close for comfort to the “irresistible force” side of the dichotomy. Too simpatico with AI for a self-respecting protocol. Still, overall, Ethereum falls on the protocol side of the fence – more immovable object than irresistible force.
The dichotomy runs even deeper. Zero-knowledge proof technology, now a mainstay of blockchains, is increasingly recognized as the apotheosis of “AI resistance” in a mathematical sense. Many schemes and proposals for governing AI systems rely on the remarkable properties of ZK systems.
Robots
The deliberately anti-intelligent, anti-agentic character of protocols is not limited to blockchains of course. It is also evident in much simpler technologies, such as coffee machines. Many ordinary artifacts are made as dumb and rigid as possible for a variety of reasons, the most important being stability of technologically embodied commitments, and determinism and predictability in generated behavior.
An area where this tension between intelligence and anti-intelligence is particularly evident, and rapidly coming to a head, is robotics. Classical robotics, in domains like factory automation, aims to enable and embody highly controllable, verifiable, and deterministic behaviors. The very word robot is etymologically derived from the Czech “robota” for forced labor or drudgery. Until quite recently, “robotic” in English was an adjective pointing to rigid, mechanical behaviors, not uncanny machines that dance and perform acrobatics more fluidly than we can.
The pre-AI view of robots was: More expressive and complex than machine tools and traditional automation, but decidedly less expressive and complex – by design – than free humans. By analogy to blockchains, traditional CNC machine tools are rigidly limited by design, like Bitcoin, while traditional robots, like Ethereum, are more expressive and programmable, but still rigidly limited. Both CNC machines and traditional industrial robots are clearly protocol technologies, designed for reliably repeatable and safely constrained behaviors.
Modern robotics though, is drawing deeply from the well of modern AI. Reinforcement Learning (RL) based robots, rather than being designed from first principles starting with physics models and formal control theory, learn more like human babies, thrashing about getting a feel for their bodies, and exploring remarkably weird behavior spaces until they finally learn the (protocolized!) adult modes of movement. Though these pre-theoretical learning trajectories might eventually converge on the same behaviors as formal design techniques, they do so without the need for the scaffolding of elaborate mathematical theories. But, as with babies, they do so with a lot of stumbling and flailing along the way, requiring safe training environments.
The tension is illustrated in a hilarious episode of Futurama, where Bender the robot is puzzled by humans doing an 80s-style robot dance, which he is not robotic enough to imitate.
This tension, incidentally, will be the topic of our call this coming Friday (October 17) – maneuver automata are an old “protocolish” way (from ~2003) of designing robot behaviors, one that is now colliding with the emerging “AI” ways. Future robots that meld the two philosophies elegantly will likely lead to the most powerful robots imaginable. Thesis and antithesis converging in synthesis.
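A rough sketch of the maneuver-automaton idea (primitive names and transitions invented here for illustration, not taken from the robotics literature): the robot’s continuous behavior space is striated into a small library of motion primitives, and the automaton specifies which maneuver may legally follow which.

```python
# Toy maneuver automaton: motion primitives are nodes, and an edge says
# one maneuver may legally follow another. All primitives and edges are
# invented for illustration.

ALLOWED_NEXT = {
    "hover":      {"hover", "forward", "turn_left", "turn_right"},
    "forward":    {"forward", "turn_left", "turn_right", "hover"},
    "turn_left":  {"forward", "hover"},
    "turn_right": {"forward", "hover"},
}

def is_legal_plan(plan):
    """Check that each consecutive pair of maneuvers is an allowed transition."""
    return all(b in ALLOWED_NEXT[a] for a, b in zip(plan, plan[1:]))

print(is_legal_plan(["hover", "forward", "turn_left", "forward"]))  # True
print(is_legal_plan(["turn_left", "turn_right"]))                   # False
```

Any planner, whether hand-coded or RL-trained, can then be constrained to emit only paths through this graph – the “protocolish” layer wrapped around the “AI” layer.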
We are studying this in our group because we think very similar dynamics will play out in fields beyond robotics, anywhere AI and protocol ways of thinking and doing collide. Which is everywhere.
Teaching Birds to Protocol
For the field of AI, the possibility of pre-theoretical engineering is perhaps its most enticing feature, rather than a bug. You don’t need complicated theories and formalisms! You can just have AIs and robots fuck around and find out (FAFO) like biological organisms do!
Nassim Taleb’s old admonishment – do not attempt to teach birds to fly – now seems to apply not just to birds, but to robotic airplanes and helicopters too. And even to us theorizing creatures, humans! Taken to the limit of absurdity, the stance would be that FAFO all the way down is the only way to exist.
This is not a new idea. Back in 2008, before the birth of modern AI, Chris Anderson, in Wired, wrote the first clear AI-supremacist manifesto along these lines, The End of Theory, arguing that we no longer need theories. We just need tons of data and simple algorithms to chomp through them with massive compute.
It is now becoming clear that people like Anderson, and modern FAFO-maximalists, deeply misunderstand what theories and formalization are for. They are meant to constrain and control, not enable and unleash! They are protocol fuel!
This is a rather wild idea! The point of all theorizing is to discover impossibilities, symmetries, and conserved quantities that can be recruited to create structures that effectively constrain and confine the seemingly unbridled and unstoppable power of intelligence and agency. To lend to engineering some of the dependable inexorability of natural laws. To give intelligence and agency worthy adversaries to try to hack and get around. A great deal of ingenuity, for instance, has historically gone into efforts to make perpetual motion machines, which physics declares to be an impossible task.
Paradoxically, when we are able to find and harness a natural phenomenon that can serve as a hard constraint on certain behaviors, things get more creative, not less, because FAFO energies are more potently focused. Examples include asymmetric public-key cryptography (which rests on the intractability of reversing certain mathematical operations) and gravity (which, so far, we don’t know how to turn off). Look around. You’ll find that every significant technology in our lives rests on cleverly harnessed impossibilities and limits, configured into scaffoldings that limit rather than enable.
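As a toy illustration of such a harnessed intractability (parameters here are tiny and cryptographically useless; real systems use enormous primes): modular exponentiation is cheap to compute forward, while inverting it, the discrete logarithm, is believed intractable at real key sizes. Public-key cryptography is built on exactly this asymmetry.

```python
# Toy one-way function: easy forward, hard backward. The prime and base
# are toy-sized for illustration; at these sizes inversion is trivial,
# at real key sizes it is believed infeasible.

P = 2_147_483_647   # a Mersenne prime (2^31 - 1)
G = 7               # base chosen for illustration

def forward(secret: int) -> int:
    """Cheap: one modular exponentiation."""
    return pow(G, secret, P)

def invert_by_brute_force(public: int) -> int:
    """Expensive: the generic attack is to search for the exponent."""
    x, acc = 0, 1
    while acc != public:
        acc = (acc * G) % P
        x += 1
    return x

public = forward(123_456)
recovered = invert_by_brute_force(public)
assert forward(recovered) == public  # feasible only at toy sizes
```

The forward direction costs one `pow` call no matter how large the secret; the backward direction costs a loop proportional to the secret itself, which is the gap a protocol designer can widen into a wall.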
FAFO – the essence of not just AI but any evolutionary process – may help you figure out what you can do, but it takes theorizing to figure out what you cannot do, and use it to shape behaviors around what you should not do. And it takes both elements – unstoppable forces and immovable objects – to build not just technologies, but societies.
Civilizing the Machine
For protocols, limits and impossibilities are features, not bugs. The rigidity entailed by formally designed and (perhaps unbreakably) constrained behaviors is not just a feature, it is a highly desirable one. So much so that we humans, who historically evolved in the FAFO ways modern AIs do, consciously constrain our own behaviors through elaborately designed “anti-intelligent” social protocols that separate us from our nearest primate cousins, and call this deliberate self-domestication “civilization!”
We do this even when we cannot find suitable natural laws to harness, and are forced to make up arbitrary ones, and rely on less powerful forces than mathematical impossibility, such as guns or burned bridges, to enforce them. Before “code is law,” we had entrenched bureaucracies. Before immutable ledgers, we had inviolable religious doctrines.
Why would you want to design and voluntarily inhabit such anti-intelligent and anti-agentic structures of control and confinement, as humans have been doing since the dawn of civilization? Why might we similarly want to do this to the emerging “wild” ecology of AIs?
The answer is the same in both cases – by bringing a certain predictable order and discipline into unpredictably chaotic intelligent and agentic behaviors, we make it possible to socially scale, coordinate, and commit to actions in much more powerful ways. By giving up a little intelligence and agency at one locus, we gain vastly more of both at another.
And in 2025, we can increasingly do this in strikingly non-arbitrary ways. Indeed, we must do it using the most immutable, immovable phenomena we can find, since the force of AI is the most irresistible ever conjured. Nothing weaker than mathematical inevitability can resist.
Which is why we think formal protocol theory is such an exciting frontier. If you find this argument compelling and irresistible, you must join us this Friday. Resistance is futile.
Protocol Fiction Writing Bounties
There are two new protocol fiction bounties live on our Discord. Cyborgs vs. Rooms and Strange laws of robotics. These bounties operate on hackathon pitching rules. Head over to the #pitches-bounties-workshop thread and read the prompts and guidelines there. We encourage you to use LLMs creatively in your writing process, and selected pieces will be published as paid contributions to Protocolized.