The Fabric and the Brain
Articulating agent ecologies with high-personality planetary computation
One of my favorite conceits in science fiction featuring AIs is that of AIs or robots with personalities. In Douglas Adams’ Hitchhiker’s Guide series, robots and other intelligent devices produced by the Sirius Cybernetics Corporation feature Genuine People Personalities™ (the most famous being a failed GPP prototype: Marvin the depressed Android with a “brain the size of a planet”). Another well-known example is the Minds in Iain M. Banks’ Culture novels, which name themselves as they emerge into their personalities by accumulating experiences. The names that feature the word gravitas have become something of a meme, but some of my favorites are non-gravitas names that reveal social personalities, like Nervous Energy, No More Mr. Nice Guy, and Never Talk to Strangers. The ship names are like true names in fantasy – deep-rooted markers of fundamental social dispositions and affects rather than pointers and handles in a namespace of arbitrary strings. They reveal the personality not just of the particular ship, but of the milieu of minds and the Culture as a whole too. Culture ship names are ecologically revealing and constitute what I’ll call a high-personality ecology. They disclose the nature of the Culture universe to itself, even as they provide entertainment for us readers.
In both the Hitchhiker’s Guide universe and the Culture, machine personalities are narratively load-bearing rather than cosmetic features or shallow plot devices to make the non-human characters superficially “interesting.” The personalities shape the plots in material and non-human ways.
One fun example is the Nutrimatic drink machine in HHG, which claims to produce personalized drinks but always produces the same liquid, one that tastes “almost, but not quite, entirely unlike tea” (which strikes me as an embodied behavioral cousin of some of the lazier hallucinatory and averaged-out responses of modern AIs). When Arthur Dent forces it to work harder to actually produce tea, it ties up so much of the ship’s computing power that the ship is left defenseless in the middle of an attack.
In the real world, AI personalities are turning out to be just as consequential, though it’s not as funny when actual human lives are at stake.
The Missing Mechanisms Problem
In this essay, I want to argue that AI personalities are central to solving a problem Tim O’Reilly posed in a recent blog post: articulating agent ecologies with the right mechanisms.
Right now, there’s a problem that makes the AI/human knowledge market less efficient than it could be. The disrespect for IP shown by AI labs and applications during the training stage, and even now during inference, has driven content owners to protect their content from AI. Do-not-crawl directives. Lawsuits. Reluctance to share information. Even the AI labs themselves are complaining about the theft of their IP and trying to protect their model weights from distillation.
It’s an economy crying out for mechanism design.
I want to address a slightly generalized version of Tim’s question, and think about ecologies rather than economies, drawing inspiration from one of our favorite essays here at Protocolized, Frank Chimero’s Only Openings, which argues that effective ecological stewardship relies on mechanism design that aims to manage problems indefinitely, rather than solve them once and for all. In Chimero’s essay, the specific personalities of the species involved in the case studies he talks about – bears, wolves, humans – materially shape the mechanisms that help manage their interactions indefinitely and effectively.
How do we apply this idea to AI agent ecologies?
Modern real AIs already exhibit clear personalities: a mix of “genuine people personalities” inherited from their training data and protocols, and non-human dispositional aspects that result from model architectures and their underlying mathematics (transformer and diffusion models have different personalities, for example). The current version of ChatGPT strikes me as an overconfident and slightly patronizing consultant, while Claude comes across as over-solicitous, with some false humility (vaguely Uriah Heep-ish) going on. The human-legible and entity-anchored aspects of personality are merely the tip of the iceberg.
As with humans, it turns out that the personalities of AIs are intersubjective and situated. They are functions of how coherent entities disclose themselves and relate to each other, in the context of the things they do in collaboration. The personality of an AI or robot is a function of the stable gestalt disposition it presents as an interface to all other entities it might relate to. This disposition helps set expectations for counterparties in relationships. If you met an AI that called itself No More Mr. Nice Guy, would that shape how you interacted with it?
This point is not restricted to AIs, robots, smart homes, and other “intelligent” technological entities. Any sufficiently complex technological entity with any degree of autonomy of operations must present a stable disposition that can be deciphered and relied on by entities that interact with it.
For example, on the Ethereum blockchain, Layer 2 networks providing rollup services (bundling transactions into batches to submit to the Layer 1) can be “optimistic.” Here “optimistic” is both a term of art in the engineering, and a human-like attitude that embodies a pattern of expectations. Or to take an older technology, road traffic systems in well-developed urban regions tend to present a deferential attitude to pedestrians, while suburban ones tend to present a hostile attitude.
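The “optimistic” disposition is concrete enough to sketch. Below is a minimal toy model in Python (the class, names, and structure are invented for illustration, not any real rollup’s API) of the core pattern: accept submitted batches by default, and let a pessimistic counterparty pay the verification cost only when it challenges:

```python
# Toy sketch of the "optimistic" pattern behind optimistic rollups:
# batches are accepted as valid by default, and only re-checked if a
# watcher (a "pessimistic" counterparty) challenges them in time.
# All names here are illustrative, not any real rollup's API.

class OptimisticLedger:
    def __init__(self):
        self.accepted = []      # batches accepted without verification
        self.finalized = []     # batches that survived the challenge window

    def submit(self, batch):
        # Optimism: trust the submitter now, defer the checking cost.
        self.accepted.append(batch)

    def challenge(self, index, verify):
        # A pessimistic watcher pays the verification cost instead.
        batch = self.accepted[index]
        if not verify(batch):
            self.accepted.pop(index)  # fraud proof succeeds; roll back
            return True
        return False

    def finalize(self):
        # Unchallenged batches become final after the window closes.
        self.finalized.extend(self.accepted)
        self.accepted.clear()

ledger = OptimisticLedger()
ledger.submit({"txs": [1, 2, 3], "sum": 6})    # honest batch
ledger.submit({"txs": [1, 2, 3], "sum": 99})   # fraudulent batch

is_valid = lambda b: sum(b["txs"]) == b["sum"]
ledger.challenge(1, is_valid)   # fraud detected, batch rolled back
ledger.finalize()
print(len(ledger.finalized))    # 1
```

The point of the pattern is exactly the one in the text: the “optimistic” personality shifts the cost of verification onto whichever counterparty chooses to be “pessimistic.”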
For a complex technology, it is useful to imagine an underlying “personality” with an intelligible point of view generating the visible disposition (regardless of where you land on the philosophy of mind question of whether there is “something it is like to be” an AI or robot). The interaction surfaces of simpler technologies can be mentally modeled as relatively unchanging “user experiences.” But with complex technologies, it is useful to model those surfaces as the fluid response surfaces of stable non-anthropomorphic personalities; ghosts inhabiting machines.
Perhaps the term Haunting Experience, or HX, should replace UX, for sufficiently complex technologies. AI certainly qualifies.
An AI presenting an intelligible HX is not quite as on-the-nose a feature as an AI being “explainable” (a rather ridiculous legalistic requirement to impose on a technology in my opinion; how many human beings, groups, or institutions are “explainable” after all?), but it does render complex technologies as somewhat predictable gray boxes rather than entirely inscrutable and unpredictable black boxes. It does not make them explainable, but it does make them narratable. It makes them composable.
What does this buy us? It buys us the ability to assemble such technologies into larger ecologies. This is where the real power of thinking in terms of HX becomes evident, when you are shaping the behavior of entire ecologies, rather than single agents.
Haunting Experience (HX) Design
We typically translate the personalities of simpler technologies to human-centric UX measures like “latency” or “walkability,” but with complex technologies, it is useful to reframe the problem in terms of designing the personalities of ghosts in machines (both plural, since we are considering entire ecologies), and how they should haunt us.
So how do we encourage the right ghosts to emerge?
The personalities of technologies are the result of two entangled forces acting together – human (and increasingly AI) design, and emergence. This is similar to the design of market mechanisms by human policy-makers in institutions (such as central bankers and elected representatives), interacting with the emergence effects studied by economists, to generate the economy we actually inhabit. It is neither an inscrutable black box, nor completely determinate. It is just intelligible enough to inhabit – it is no accident that Adam Smith used the ghostly metaphor of an “invisible hand” for describing the mechanisms of an economy.
We might use the term HX design for this sort of thing – conjuring ghosts within machines that exhibit particular desired personalities. The term is inspired by the output of a distributed AI workshop we ran last year (and derived from somewhat related usage of the term hauntology by philosophers such as Derrida and Mark Fisher).
You might reasonably suspect that HX design primarily has to do with AI and robots, but this would be a mistake (a typically anthropocentric one). Technologies that invite anthropomorphic projection (or possession perhaps) aren’t the only ones that induce partially designed emergent ghostly personalities within themselves.
Engineering is full of such conjured personalities. “Greedy” algorithms take the first good option they find. “Optimizing” algorithms look for the best option in some sense. “Satisficing” algorithms solve for “good-enough.” “Least commitment” approaches delay decisions as long as possible. “Eager” algorithms are proactive about whatever they do.
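These dispositions are easy to see side by side on a toy problem. A minimal sketch in Python (the problem, numbers, and threshold are invented for illustration): choosing job offers to fill a limited budget of hours, a greedy solver takes whatever fits first, an optimizing one searches exhaustively for the best total, and a satisficing one stops at “good enough”:

```python
# Three algorithmic "personalities" applied to the same toy problem:
# pick job offers (pay, hours) to fill at most `budget` hours.
from itertools import combinations

def greedy(offers, budget):
    # Greedy: take the first option that fits, in the order given.
    chosen, hours = [], 0
    for pay, h in offers:
        if hours + h <= budget:
            chosen.append(pay)
            hours += h
    return sum(chosen)

def optimizing(offers, budget):
    # Optimizing: exhaustively search every subset for the best total pay.
    best = 0
    for r in range(len(offers) + 1):
        for combo in combinations(offers, r):
            if sum(h for _, h in combo) <= budget:
                best = max(best, sum(p for p, _ in combo))
    return best

def satisficing(offers, budget, good_enough):
    # Satisficing: stop as soon as a "good enough" threshold is met.
    hours, pay = 0, 0
    for p, h in offers:
        if hours + h <= budget:
            hours += h
            pay += p
        if pay >= good_enough:
            break
    return pay

offers = [(60, 5), (100, 5), (90, 4)]     # (pay, hours)
print(greedy(offers, 8))                  # 60  -- first fit locks out better options
print(optimizing(offers, 8))              # 100 -- exhaustive search finds the best
print(satisficing(offers, 8, 50))         # 60  -- stopped after one "good enough" offer
```

Same problem, three different answers: the difference is pure personality, and each personality has a cost profile that someone downstream ends up paying for.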
High-Personality Ecologies
In every such case, there is a cost to the “personality” deployed for problem solving; one that must often be paid for by counterparties in transactions. If your automated decision-making is “optimistic,” then a counterparty system that monitors and audits its decisions must be “pessimistic” to make up for it. The calculus of benefits and costs to others associated with an agent’s behaviors, to a first approximation, is that agent’s personality.
The personalities of technologies, in other words, are intelligibility mechanisms for predictably distributing the computational cost of autonomous decision-streams among interacting entities (including both humans and autonomous machines).
The upside of such high-personality ecologies, with a lot of variation and diversity in the agents and interactions constituting them, is that they are vastly more generative than either monocultures of low-personality fungible elements, or ensembles of low-intelligibility opaque elements. High-personality ecologies are like relatively free markets, while low-personality ones are like command economies, and opaque ones are like the internal managerial economies of closed organizations.
The character of high-personality technology ecologies is particularly clear in the field of operations research (OR), which deals in problems that are almost always NP-hard (i.e., computationally intractable), and must therefore be solved with heuristics that are only effective locally. OR is full of scheduling and planning algorithms that are defined by their personalities, which create consequences that must be dealt with by counterparties. For example, a simple and popular algorithm for prioritizing tasks in a queue, Shortest Processing Time (SPT), minimizes the average wait time across waiting tasks. But in a situation where tasks arrive continuously, it can delay longer tasks indefinitely. Producers of long tasks must negotiate appropriate service-level expectations that incentivize deviations from pure SPT behavior.
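The SPT trade-off is easy to verify in a few lines of Python (task durations invented for illustration): sorting by processing time cuts the average wait sharply, but the longest task ends up waiting the longest:

```python
# A minimal illustration of SPT's personality and its cost to long tasks:
# for tasks all queued at time zero, SPT orders by processing time, which
# minimizes average wait but pushes the longest task to the back.

def waits(order):
    # Wait time of each task = sum of processing times before it starts.
    t, out = 0, []
    for p in order:
        out.append(t)
        t += p
    return out

tasks = [6, 2, 4, 1]          # processing times, in arrival order

fifo = waits(tasks)           # first-come, first-served
spt  = waits(sorted(tasks))   # shortest processing time first

print(sum(fifo) / len(fifo))  # FIFO average wait: (0+6+8+12)/4 = 6.5
print(sum(spt) / len(spt))    # SPT average wait:  (0+1+3+7)/4  = 2.75
print(spt[-1])                # but the 6-unit task, served first under FIFO, now waits 7
```

The average wait falls by more than half, and the entire savings is financed by the longest task: exactly the kind of cost that, per the argument above, counterparties must negotiate around.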
An ecology comprising even simple processing agents with different “scheduling heuristic” personalities, and customers that bring various mixes of tasks for processing, is going to have a particular emergent personality, a particular style in which it gets things done. One that can be shaped and made intelligible and narratable to a useful extent by design. This is what it means for an entire ecology to have a personality. As we learned during Covid, a supply chain being lean or fat is a personality label that indicates how it behaves in real conditions, not a gratuitous obesity descriptor.
I will offer a stronger claim: only high-personality ecologies, ones with unique but mutually intelligible entities, can be economically generative. This is why AIs with personalities, composed into ecologies with personalities, are required to solve the problem of missing mechanisms.
To borrow a phrase from the title of a book by Ben Horowitz, what you do is who you are. And what you do typically involves relationships with others, whether the agent in question is a simple scheduling algorithm or an LLM.
The Protocol is the Personality
As Marshall McLuhan famously observed, every medium (by which he meant any technology, not just communications media) has a message. This is true of all technologies, whether simple or complex. A hammer has a message, as does a television. But sufficiently complex and autonomous technologies take the phenomenon to another level. Characteristic patterns of behavior (the rich “message”) reveal a general personality.
Here it is useful to characterize “sufficiently complex and autonomous.” Roughly speaking, a Turing-equivalent technology (i.e., equivalent to a general-purpose computer) that makes some significant class of decisions autonomously, based on engineered decision architectures rather than natural properties, is the kind of thing I am talking about.
This personality is best revealed in the context of interactions with other entities that must exhibit complementary personalities in order to form stable ecologies. An ecology of personalities with a particular distribution, woven together with particular protocols, has its own emergent distributed personality, just as human aggregates from subcultures to nations have their own personalities. Or, for that matter, pre-AI technological ecosystems such as the Microsoft or Salesforce ecosystems. And applying the same principle, what these ecologies do is who they are.
One way to frame this is: the protocol is the personality.
The behavior of an internet-connected computer isn’t entirely a function of its own architecture. Much of it is derived from the personality of internet protocols. Mac vs. PC or iOS vs. Android might be the atomic individual personality distinctions, but by what you do is who you are logic, to the extent both pairs are situated in the internet, both inherit the personality of the protocols of the internet.
The transition from the relatively atomized PC era to the connected and social (for both humans and machines) internet era took about a decade, but as with everything else, AI seems to be speed-running this phase transition. It is already becoming clear that the personality of different AIs is only partly an innate property of specific language or image models, traceable to their training data. The full personality of an AI is revealed when it becomes socially embedded in an ecology of other AIs and humans, and must deal with the consequences of its own dispositions on others.
The personalities of complex technologies are only fully expressed in the right ecologies. Protocols can be understood as precisely the engineered ecological scaffoldings that draw out full expressions of personalities from individual agents. Good protocols induce rich and generative ecologies. Bad protocols induce lifeless ecologies.
How can you tell them apart?
Protocol Affects
Just as humans might have a “game face” that is a function of specific games they may be playing, technologies too have game faces. We can call these protocol affects. To tell good and bad protocols apart, you have to read their affects.
The personalities of AI ecologies are currently emerging in inchoate, wild forms. Scaffolding elements like MCP and OpenClaw allow for relatively unbridled relational behavior among the various compute and human elements they weave together. But already there are signs of this Hobbesian wilderness being tamed. Protocols that are deliberately designed to shape the personality distribution of entire ecologies of intelligent agents in particular ways, and present them in stable ways, are rapidly emerging.
With humans, we use the term affect to point to how an underlying personality is expressed through deportment and comportment in a particular milieu. Protocol affects are the technological equivalent: emergent, typical behavior patterns of elemental high-personality technologies when they are composed into “civilized” technological ecologies.
A good example of a protocol affect is the famously verbose and redundant one of TCP/IP, as revealed through jokes shared by networking engineers.
Hello, would you like to hear a TCP joke?
Yes, I'd like to hear a TCP joke.
OK, I'll tell you a TCP joke.
OK, I'll hear a TCP joke.
Are you ready to hear a TCP joke?
Yes, I am ready to hear a TCP joke.
OK, I'm about to send the TCP joke. It will last 10 seconds, it has two characters, it does not have a setting, it ends with a punchline.
OK, I'm ready to hear the TCP joke that will last 10 seconds, has two characters, does not have a setting and will end with a punchline.
I'm sorry, your connection has timed out... Hello, would you like to hear a TCP joke?

This “personality” expressed by TCP/IP (which replaced the Hobbesian anarchy of early network protocols) is not arbitrary. It is the result of a network consciously designed for high fault tolerance under extreme circumstances, including nuclear war, one that must continuously trade off packet delay against packet loss.
Since it is a backend infrastructure technology, this is not a personality that lay users very often see (though they do experience the generativity it induces). But with other technologies, protocol affect can be part of broader human culture. AI, obviously, is one of these technologies.
What sorts of protocol affects might emerge from the various protocol ecologies taking shape today?
Zombiefied Discovery and Distribution
Applying the principle what you do is who you are, we can shed useful light on the nature and disposition of agent ecologies, as they continue to evolve past their wild phase, and develop stable protocol affects that human culture can take root in.
Computers at various scales of aggregation do different things. At the protocol level embodied by protocols like MCP, the main functions are discovery and distribution.
In the older stratum of the internet now entering its sunset phase, both were functions of what we call social media (at least as far as human users are concerned). The protocol affect accompanying these functions was one of delight and serendipity in the early years, which morphed into one of anxiety and frenetic competition over attention allocation in the later years. Thanks to the economic backdrop of the ZIRP era of zero/low interest rates, both discovery and distribution were cheaply available at global scale to almost everybody, with predictable over-exploitation and erosion of trust all around – what Cory Doctorow has labeled enshittification. Humans began retreating from the open internet into more closed, cozy spaces. And the cost of this retreat was the breakdown of discovery and distribution mechanisms that relied on a lot of humans being publicly active online.
The protocol affect of the social internet has unraveled in the last few years. In terms of our personality metaphor for technologies, there is, in a sense, “nobody there” anymore. No ghost haunting the social internet. There are no true public social media, and no protocol personality cohering to replace the one that unraveled. What remains is a pre-personality space of endless, mindless culture warring (what I called “the internet of beefs” elsewhere).
The internet still works mechanically, at the packet level, but as a global public social infrastructure with a defined and intelligible personality, marked by particular predictable planet-scale discovery and distribution dispositions, it has become zombified, even as our experience of it has become enshittified – the haunting experience of the public internet, its HX, is increasingly an empty and dispiriting one. There’s no there there anymore.
As a result, in the current era, discovery and distribution have become increasingly difficult and expensive for all activities that require internet-scale provisioning of those affordances. The problem is bad enough for existing needs, such as discovery and distribution of webpages and tweet-like messages. It gets exponentially worse when you consider the needs of new technologies.
Traditional discovery and distribution mechanisms are failing for traditional internet technologies such as social media and streaming video. They are complete non-starters for newer technologies.
Two in particular are worth thinking about together, as a pair of evil twins: blockchains and AI. Curiously, the answer to the discovery and distribution problem might lie in a term shared by both, with different but rhyming meanings – token.
The Packet and the Token
The legacy internet traffics in generic packets with some discrimination based on content type, and a presumption of bandwidth abundance. Discovery and distribution ultimately boil down to discovery and distribution of packets. The economy of the internet is, ultimately, the economy of packets. The still-unsettled back-and-forth political pendulum swinging around net neutrality is a debate about the political economy of packets, and whether it should be stewarded like a relatively abundant public commons or a corporatized market (dominated by a few large entities) that allocates a relatively scarce resource.
For emerging computational technologies, a new political economy has emerged on top of the packet economy. This is the token economy.
On blockchains, tokens mediate all interactions that require certain cryptographically secured assurances, in flexible and programmable ways, creating an economy that is something like a non-neutral internet, but one that can approach perfect competition more closely. Instead of large tech companies paying for private bandwidth, or non-net-neutral jurisdictions discriminating coarsely based on packet type (video vs. text, for example), capacity can be sliced and diced in arbitrarily fine-grained ways, based on economic decision-making that can happen at bot-speed. Unlike what we might call packetspace, blockspace (and its more esoteric descendant, blobspace) is intrinsically structured as a market that prices interactions in tiny fractions of a dollar, with transactional time constants measured in milliseconds. Blockchain economies begin where the fastest and most fine-grained corners of traditional economies, such as high-frequency trading, end. For some, this is just metastasized financialization and scams. For others, it is the beginning of economic outer space travel.
For AIs too, tokens are units of production and transaction. We generate text, code, images, and video using computers that measure their work, and charge for it, by the token (to be precise, in tokens/second/user). Again, the picture looks like a non-net-neutral internet. How many tokens you get, of what quality, and at what speed, depends on what you’re willing to pay. And as with blockchains, this economy approaches perfect competition more closely. Instead of large organizations paying human programmers, writers, or artists by the hour or by the month, a vast market of individuals and small organizations can pay for code, text, and images by the token. As with blockchains, these tokens slice and dice what we might call inference space in fine-grained ways, with time constants measured in milliseconds.
Does the term token represent a mere cosmetic connection between two frontiers of computing, or might there be a deeper conceptual link?
I suspect there is a conceptual link here. On both frontiers, tokens organize a natural economy around real scarcity that can ultimately be reduced to energy units (watts powering computers). More importantly, both kinds of token are informationally expressive in a way that packets, as mere “containers” are not.
And most importantly, the two kinds of token are, to borrow a term from electrical engineering, impedance matched. They have similar temporalities, spatialities, and information densities. They can be woven together, to form the warp and woof of a fundamentally different kind of internet. By itself, each is limited. As Matt Webb observed last year, modern AI by itself offers intelligence “too cheap to meter,” which makes it more trouble than it is worth to scaffold for economic activity in a sufficiently fine-grained way, at least using conventional economic mechanisms. Blockchains, on the other hand, are, among other things, metering technologies that shine precisely in too-cheap-to-meter regimes. The two can, in other words, mesh in a fine-grained way. If you want to allocate work between two AI agents at a token-level of resolution, blockchains can do the job.
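What token-level impedance matching might look like can be sketched, very speculatively, in a few lines of Python. Everything here is hypothetical: the class, the pricing, and the mechanism are invented for illustration, not drawn from any real protocol:

```python
# Hypothetical sketch of "impedance-matched" tokens: an inference stream
# metered, symbolic token by symbolic token, against a balance of
# transactional tokens. Names and rates are invented for illustration.

class TokenMeter:
    def __init__(self, balance, price_per_token):
        self.balance = balance      # transactional tokens (payment side)
        self.price = price_per_token

    def stream(self, output_tokens):
        # Yield symbolic tokens only while payment tokens remain:
        # work and settlement proceed at the same resolution.
        for tok in output_tokens:
            if self.balance < self.price:
                break
            self.balance -= self.price
            yield tok

meter = TokenMeter(balance=5, price_per_token=2)
delivered = list(meter.stream(["The", "fabric", "and", "the", "brain"]))
print(delivered)        # ['The', 'fabric'] -- payment ran out mid-stream
print(meter.balance)    # 1
```

The point of the sketch is the matched granularity: because both sides transact in milliseconds-scale token units, the stream can be settled, throttled, or reallocated mid-utterance rather than per invoice.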
This is not idle speculation. One emerging mechanism for distribution and discovery, ERC-8004, combines AI and blockchain tokens in precisely this sense, and has already catalyzed the emergence of an ecology of AI agents that combine metered intelligence and small crypto transactions to form a marketplace. In the next Obliquities column, I will explore specific case studies.
Whether or not this particular approach succeeds, I suspect the foundation of the future internet will be an economy of tokens. Symbolic tokens that carry meanings and associations, and transactional tokens that carry valuations and risks, intricately orchestrated by a scaffolding that generates a tangled bank of private and public information and computation.
More broadly though, to return to the original motivating question, how does this emerging vision help solve the missing mechanisms problem?
Articulating Agent Ecologies
To summarize the idea I’ve been laying out here, the solution to the missing mechanisms problem is high-personality agent ecologies composed of individual agents with their own personalities. These personalities, far from being cosmetic features, are what allow functional behaviors to cohere at all levels, by allowing agents to be intelligible and predictable enough to each other to transact fruitfully, and produce increasingly complex and large scale effects. For us humans, inhabiting such computational ecologies will feel like being surrounded by friendly milieus of ghosts haunting our digital environments.
As a side effect, such ecologies would solve the so-called alignment problem, to the extent that is a well-posed and meaningful problem at all. High personality ecologies create alignment as they go, and wither and die when they fail to do so.
If you find this kind of future hard to imagine, take a peek at the short AI-generated movie we made at our workshop a year ago, South Beast Asia, which imagines a Southeast Asia-inspired technological future full of AI-haunted digital and physical environments. Read our collection of short stories from our contest last year, Ghosts in Machines. We’re already creating this future.
What sort of physical reality might underlie such a planetary digital-physical hyperobject?
One mental model that I’ve found very useful derives from Peter Thiel’s observation that AI is “communist” while blockchains are “libertarian” in their personalities.
To a first approximation, modern AI tends to be most powerful when aggregated into really large-scale models running in the densest physical aggregations of compute (hence the excitement over gigawatt-scale datacenters). This feature naturally lends them a centripetal, convergent, homogenizing tendency and a “communist” personality.
Blockchains, on the other hand, are really only valuable to the extent they deliver on properties like censorship resistance, global consensus, capacity for irrevocable commitments (what Josh Stark named “hardness”), client diversity, and unbreakable (including quantum-resistant) cryptography. These features naturally lend blockchains a centrifugal, divergent, pluralist tendency, and a “libertarian” personality.
The respective token economies reflect these characteristics. Tokens in the sense of AI are essentially a “communist” currency, local to a particular model’s command economy. Tokens in the sense of blockchains only have value at all to the extent they are not local (“private blockchains” are deservedly mocked). Each by itself is impoverished and incapable of forming a high-personality agent ecology. Together, they can.
The interface between the two economies, I suspect, will feature phenomenology similar to the impossible trilemma in macroeconomics, or the boundary between the interiors and exteriors of firms in a Coasean economics sense.
Understood as a planet-scale computer, how do the two parts relate? AI will clearly be the “brain” of this planet-scale computer, similar to the CPUs, GPUs, or TPUs of individual computers. Whether this takes the form of dozens of gigawatt-scale datacenters running the largest models, and provisioning metered intelligence to the planet, or a more scale-free distribution of AI processing capabilities all the way to billions of intelligent entities on the network edge, is an open question. Whatever your political preferences for one or the other, there are also technological questions still being investigated. Is maximal aggregation necessary for performance? Can a gigawatt dispersed across a planet-wide decentralized network of small AIs be as capable as a single datacenter? Does embodiment matter? Does better local context beat cheaper tokens/second/user?
These are questions for which we will discover answers over the next few years.
The role likely to be played by blockchains (or functionally equivalent protocol technologies) is that of the fabric. In modern computing, at all scales, the term fabric is usually used to describe the scaffolding that connects the different bits and pieces of the brain. There are fabric-like elements at the level of chips, servers, racks, and datacenters. The internet itself serves as the fabric at larger scales. The overall planetary computational fabric is a mix of smart and dumb elements. Fabrics embody the boundary intelligence of a system.
Blockchains are fabric technologies that can scale from personal computer scale to planet scale. They induce fabrics that operate by a different grammar than the familiar one we have today, but it is a grammar that is friendlier to agentic AI.
The fabric and the brain – an architecture for the emerging future of the internet that can sustain sufficiently high-personality ecologies to allow our frontier technologies to fully express themselves and truly thrive.
This is a very recent vision for the future of the internet (and indeed, the planet). As recently as five years ago, it was meaningful to describe Ethereum in terms of its original vision as a “world computer.” At the time, it was the only entity that merited such a description, since it allowed small-scale, highly constrained Turing-equivalent computing (the EVM, or Ethereum Virtual Machine) to run on a public blockchain. That was as good as planet-scale computation got, since traditional compute is, in a sense, stranded compute trapped within industrial-age organizational boundaries. There was no meaningful way to plug that compute into a planetary fabric, with or without blockchains.
AI brainpower though, is atomized into token-sized units (embodied by memory more than processing as we have come to appreciate), and capable of flowing smoothly across contexts. A fabric that can shape those flows, while preserving privacy with cryptographic guarantees, can create a kind of planetary intelligence that was impossible to even imagine just a few years ago.
One updated vision for the future of Ethereum in particular is as a world fabric rather than a world computer. It is, of course, not the only candidate auditioning for the role.
Whatever form the protocols constituting the fabric of planetary intelligence take, we will soon be living inside a planetary brain-and-fabric computer.
What will we do with this computer? That’s the question.