From Destination AI to Intelligence Media
Introducing Obliquities, our new editorial column. In this first installment we propose a new idea – the social kernel – and begin to examine the logic of intelligence media.
Between approximately 2000 and 2010, the internet evolved from what used to be called the destination web (a largely forgotten name for “Web 1.0”) to what we now call social media. We went from maintaining “home pages” and “visiting” destination websites to inhabiting home feeds and processing the firehoses of notifications within them. The social web sedentarized the more nomadic destination web milieu, replacing an economy based on “visits” with one based on circulating social objects (tweets, blog link previews, images, and videos in particular), powered by sharing mechanisms and measured in sharing metrics (likes, shares, quotes, replies). The primary UX metaphor shifted from the document to the stream. Content increasingly came to the consumer as centrally aggregated, algorithmically tuned flows, instead of the consumer going to the content via random “browsing” walks fueled by search queries and non-feed clicks.
Behind the scenes, a new stratum of public and private infrastructure protocols, starting with RSS and the Facebook newsfeed, powered the shift. This was accompanied by a shift in hardware – from the desktop and laptop to the phone as the primary device for accessing the internet, with the camera and microphone replacing the keyboard as the primary input mechanisms.
In 2026, we at Protocolized are betting that a similar transition will begin in AI, from destination AI to intelligence media. A landscape shaped by “visits” to oracular destination AIs will be reshaped around intelligence circulating in intelligence media. Here we mean “intelligence” in the sense of a kind of content (similar to what “intelligence agencies” produce and transmit) rather than a kind of processing capability.
We have opinions on how this shift ought to play out. We would prefer it to play out in decentralized, capture-resistant ways, rather than through aggregation dynamics powering feed-like experiences.
Intelligence media need not themselves be particularly intelligent. Cutting and pasting an LLM chat link into a messenger, committing AI-generated code to GitHub, or downloading a set of weights all count as intelligence media operations. Last weekend, an important new class of intelligence media emerged with moltbook.com: social networks for AI bots.
What is important about all these emerging examples is that intermediate artifacts of AI processing move from one locus to another, in a permissioned, socially mediated way, jumping contexts in the process.
Intelligence media are media through which intelligence flows from one locus to another, primarily in disaggregated forms that get further metabolized as they flow, via interaction with shifting contexts corresponding to distinct loci.
We’ve already witnessed a shift from “prompt engineering” to “context engineering,” and we are about to discover that the most powerful way to (re)engineer context is to simply move work-in-progress to a new context. That is what intelligence media do. They achieve context engineering through context switching.
When Alice shares a ChatGPT link with Bob, who opens it and continues the chat, Bob’s fork of the chat can now draw on Bob’s memory context, which need not be shared with Alice (OpenAI, of course, remains a third party in the background whom both must trust).
Currently, we’re improvising with the limited intelligence media we already have (chat link cut-and-paste probably accounts for 80% of it), but dedicated intelligence media, adapted to the needs of moving live intelligence rather than information, are beginning to emerge. Claude Code, for instance, moves coding-assistant intelligence to a directory in your local filesystem. Moltbook moves that local assistant intelligence to a space whose context comprises other assistants.
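To make “context engineering through context switching” concrete, here is a minimal, entirely hypothetical sketch in Python. Nothing in it corresponds to a real API from OpenAI, Anthropic, or Moltbook; the names (ChatSnapshot, Locus, export, resume) are invented purely to illustrate how a work-in-progress artifact might travel “dumb” and only get re-contextualized on arrival.

```python
# Hypothetical sketch, not a real API: a work-in-progress chat snapshot travels as a
# dumb, serializable artifact, and the receiving locus re-binds it to its own memory.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class ChatSnapshot:
    """The circulating artifact: a frozen slice of an in-progress conversation."""
    topic: str
    turns: list[dict]


@dataclass
class Locus:
    """A place where intelligence gets metabolized: an owner plus a local context."""
    owner: str
    local_memory: list[str] = field(default_factory=list)

    def export(self, snapshot: ChatSnapshot) -> str:
        # Sharing is just serialization; none of this locus's memory travels with it.
        return json.dumps(asdict(snapshot))

    def resume(self, payload: str) -> ChatSnapshot:
        # Importing re-binds the same turns to a different local context.
        snapshot = ChatSnapshot(**json.loads(payload))
        snapshot.turns.append({
            "role": "system",
            "content": f"Continuing in {self.owner}'s context: {self.local_memory}",
        })
        return snapshot


alice = Locus("Alice", local_memory=["prefers terse answers"])
bob = Locus("Bob", local_memory=["is debugging a robotics pipeline"])

draft = ChatSnapshot(topic="handoff formats",
                     turns=[{"role": "user", "content": "Sketch a portable chat format."}])
payload = alice.export(draft)    # what a "share link" stands in for here
bobs_fork = bob.resume(payload)  # same turns, new memory to metabolize them against
```

The payload does not change in transit; what changes is the memory it gets metabolized against at each locus.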
Will we see a rise in intelligent intelligence media, which might do some sort of processing as intelligence moves through pipes from one locus to another?
Precedents from other domains suggest the answer is no. One precedent is the “dumb pipes” vs. “smart pipes” debate in telecom a couple of decades ago, which has largely been settled in favor of dumb pipes. Another precedent domain is containerization, an “intelligence” transformation of global supply chains where the actual media were “dumb” containers. The intelligence lay in the fact that the contents were increasingly work-in-progress artifacts (which often crossed borders multiple times) rather than either raw materials or finished products. These examples suggest that intelligence will primarily be metabolized in step-function ways, at discrete locations, as it circulates. Not continuously in transit. So we might hazard a prediction that intelligence media will not be particularly intelligent. AI will suffuse the contents more than the containers.
Keep in mind, though, that there might be invisible loci inserted between source and destination loci. We might see “context in the middle” attacks. Might browsers or operating systems on either end do things to links between Alice cutting-and-pasting and Bob clicking? Might ISPs sniff around at the behest of state and non-state actors? Ought we to use Signal for passing chat links around? What are OpenAI’s servers doing when you generate a share link? “Prompt injection” as understood today is a primitive class of attacks compared to what will be possible once intelligence media begin to mature.
The shift to intelligence media will be marked by the rise of an AI analogue to social objects – what we might call social kernels. Unlike social objects (such as gifs, videos, or podcasts), which are largely complete and ready for consumption when they enter social circulation (even if they trigger cascades of commentary, sampling, remixing, and meme-making), social kernels are primarily intermediate artifacts: snapshots of a process of progressive metabolism operating on information objects moving through a sequence of loci and coming into contact with different contexts.
Here is an initial definition:
Social kernels: Snapshots of evolving molecular human or centaur behaviors that shape each other at a low level, and contribute to low-level sociality norms, but do not necessarily catalyze sociality at the higher levels of complete “creators” or “content.”
We will develop the idea of social kernels more carefully in a later column, but a link to a partially complete LLM chat is a good prototypical example to keep in mind for now. It is not a complete artifact like a blog post but a few conversational turns on a theme that can be continued by Bob after it has been created and shared by Alice. Bob can then add a few more turns and share it again. The chat itself, or rather the particular moving instance of the original chat (an entity that repeatedly gets cloned, forked, and mutated as it gets passed along), is the social kernel.
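For readers who prefer to think in code, here is one more hypothetical sketch (again, not a description of any real system) of that lifecycle: the kernel as a forkable snapshot that accumulates turns and lineage as it is passed along.

```python
# Hypothetical sketch of the social-kernel lifecycle: not the original chat, but the
# moving instance that gets cloned, forked, and mutated as it is passed along.
from __future__ import annotations

import copy
from dataclasses import dataclass, field


@dataclass
class SocialKernel:
    """A snapshot of an in-progress exchange, plus the lineage of hands it has passed through."""
    turns: list[str]
    lineage: list[str] = field(default_factory=list)

    def fork(self, holder: str, new_turns: list[str]) -> SocialKernel:
        # Forking clones the snapshot, appends the new holder's turns, and records the
        # hop; the parent instance stays intact for anyone else who wants to fork it.
        child = copy.deepcopy(self)
        child.turns.extend(new_turns)
        child.lineage.append(holder)
        return child


seed = SocialKernel(turns=["Alice: what would an intelligence-media protocol need?"],
                    lineage=["Alice"])
bobs_fork = seed.fork("Bob", ["Bob: portable snapshots, permissions, provenance."])
carols_fork = bobs_fork.fork("Carol", ["Carol: and some notion of lineage."])

print(carols_fork.lineage)  # ['Alice', 'Bob', 'Carol']
```

The permissioned, socially mediated movement described earlier shows up here as the open design question: what is a fork allowed to see, carry, and mutate?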
The logic of this larger transition to intelligence media and social kernels, we believe, explains much of the frenetic action we’re seeing almost everywhere along the AI frontier, from the shift to so-called “agentic” AI, to the rise of non-chat UXs, to the sudden acceleration in robotics.
Last year, we at Protocolized paid particular attention to the emerging contours of distributed AI, and to early protocols like MCP and A2A which aimed to provide scaffolding for it. It is now clear that the protocolization of AI, both to “distribute” it and to do other things with it, will be much messier and richer than the architects of MCP and A2A anticipated. Intelligence media will likely be a tangled-bank protocol ecology rather than just a handful of dominant standards.
One of the threads we will track this year in this column is how this protocolization is progressing. Make sure to stay subscribed to Protocolized to follow along.
Intelligence media reorganize value around circulation rather than publication. What moves are partial thoughts and intermediate artifacts, which gain meaning as they encounter new contexts.