Agentic AI and the Next Intelligence Explosion, Without the Singularity Myth

Most writing about an AI “intelligence explosion” still assumes a familiar science-fiction shape: one system gets recursively better, outruns everyone else, and becomes the dominant cognitive force on the planet.

The new arXiv essay, “Agentic AI and the next intelligence explosion”, argues for almost the opposite. Benjamin Bratton and Blaise Agüera y Arcas suggest that the next leap in intelligence will not look like one giant mind ascending into godhood. It will look more like a dense social fabric of interacting intelligences: humans, AI agents, institutions, protocols, and recursively coordinated sub-agents.

That is a much more interesting thesis than the usual singularity story. It is also, in my view, both more plausible and more demanding.

What the paper gets right

The essay’s strongest move is to redefine the unit of analysis. Instead of treating intelligence as a scalar quantity that individuals simply possess “more” or “less” of, it frames intelligence as relational and socially organized.

That matters because it fits both history and current AI practice surprisingly well. Human intelligence has never really been just an individual property. Language, writing, markets, bureaucracies, scientific communities, and legal systems are all mechanisms for storing and coordinating cognition across many minds and long periods of time. The authors’ point is that AI may be extending that trajectory rather than replacing it.

This is not just a philosophical flourish. It lines up with one of the most intriguing recent technical papers, “Reasoning Models Generate Societies of Thought”. That work argues that reasoning-tuned models do not merely improve by “thinking longer.” They often appear to reason by simulating internal dialogue among competing perspectives, with backtracking, verification, role shifts, and reconciliation doing real work. In other words: better reasoning may emerge from internalized social structure.
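
To see what that could mean mechanically, here is a toy Python sketch of a "society of thought" loop. Everything in it is an illustrative assumption on my part: the persona functions, the arithmetic task, and the explicit loop are stand-ins for patterns the paper observes emerging implicitly inside reasoning traces, not its actual method.

```python
# Toy sketch: "society of thought" reasoning as an explicit loop.
# The persona functions and the arithmetic task are hypothetical
# stand-ins; the paper describes these roles emerging implicitly
# in reasoning traces, not as literal code.

def propose(question: str) -> str:
    """One internal 'voice' drafts an answer."""
    return "17 * 23 = 391"

def critique(draft: str) -> str:
    """A second 'voice' probes the draft for weaknesses."""
    return "Re-check the arithmetic by decomposition."

def verify(draft: str) -> bool:
    """A third 'voice' independently checks the claim."""
    return 17 * 23 == 391  # 17*20 + 17*3 = 340 + 51 = 391

def reconcile(draft: str, verified: bool) -> str:
    """A final 'voice' integrates the exchange into one answer."""
    return draft if verified else "Backtrack and re-derive."

def society_of_thought(question: str) -> str:
    draft = propose(question)          # initial proposal
    critique(draft)                    # role shift: proposer -> critic
    verified = verify(draft)           # verification step
    return reconcile(draft, verified)  # reconciliation step

print(society_of_thought("What is 17 * 23?"))  # -> 17 * 23 = 391
```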

Seen in that light, Bratton and Agüera y Arcas are making a bigger claim: if socially structured cognition helps inside a model, it may matter even more between models, humans, and institutions.

Why this paper feels timely

The paper lands at a moment when the field is visibly moving from chatbot intelligence to agentic systems.

That shift is now measurable. The 2025 AI Agent Index documents 30 deployed agentic systems and shows how fast the ecosystem is expanding. But it also finds something less comforting: most developers disclose little about evaluations, safety, or societal impacts. That is exactly the gap this essay addresses. If intelligence is becoming distributed and agentic, the main challenge is no longer just model quality. It is system design, role design, and governance.

A related technical example comes from “Team of Thoughts”, which shows that heterogeneous agents coordinated through orchestration can outperform more monolithic setups on reasoning and code tasks. Again, the pattern is the same: diversity, specialization, and coordination can beat raw centralization.
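
For intuition, here is a minimal orchestration sketch in that spirit: specialist agents plus a simple router and an independent verification pass. The agent names, the routing rule, and the tasks are my own hypothetical scaffolding, not the architecture "Team of Thoughts" actually uses.

```python
# Hypothetical orchestration sketch: heterogeneous specialist agents
# coordinated by a simple router, with a verification pass at the end.
# Names, routing, and tasks are illustrative, not the paper's setup.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    speciality: str
    solve: Callable[[str], str]

def math_agent(task: str) -> str:
    return f"[math] solved: {task}"

def code_agent(task: str) -> str:
    return f"[code] solved: {task}"

def review_agent(result: str) -> str:
    return f"[review] checked: {result}"

team = [
    Agent("solver-math", "math", math_agent),
    Agent("solver-code", "code", code_agent),
    Agent("verifier", "review", review_agent),
]

def orchestrate(task: str, kind: str) -> str:
    # Route the task to the matching specialist...
    specialist = next(a for a in team if a.speciality == kind)
    draft = specialist.solve(task)
    # ...then hand the draft to an independent verifier.
    verifier = next(a for a in team if a.speciality == "review")
    return verifier.solve(draft)

print(orchestrate("sum the first 100 integers", "math"))
```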

So the essay is not arriving out of nowhere. It is trying to give a civilizational frame to several converging technical and organizational trends.

Where the essay is strongest: institutions

The paper becomes most valuable when it stops talking about “intelligence” in the abstract and starts talking about institutions.

Its critique of alignment-by-dyad is persuasive. Much of today’s alignment language still imagines a simple relationship: a human principal specifies what an AI system should do, then trains or nudges the system into compliance. That picture breaks down quickly once you have large populations of agents interacting with many humans, other agents, firms, and public bodies.

The authors’ answer is what we might call institutional alignment: roles, norms, protocols, and checks and balances that do not depend on any one agent being perfectly wise or benevolent. That idea fits well with recent governance work such as “Legal Alignment for Safe and Ethical AI”, which argues that law is a more publicly legitimate alignment target than private constitutions or hidden developer preferences. Both papers point in the same direction: advanced AI will need to be embedded in accountable social structures, not merely optimized against a reward signal.
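
To make the checks-and-balances idea concrete, here is a deliberately toy sketch in which no single agent can act unilaterally: an action only executes if it clears an independent policy rule and a second opinion, and every decision leaves an auditable record. The roles and rules are illustrative assumptions, not a mechanism proposed by either paper.

```python
# Toy "institutional alignment" sketch: an action executes only if it
# clears an independent policy rule and a second opinion, and every
# decision is logged so it can be audited and contested later.
# All roles and rules here are illustrative assumptions.

audit_log: list[tuple[str, str, bool]] = []

def policy_check(action: str) -> bool:
    """A rule the acting agent cannot rewrite (stands in for law/norms)."""
    return "delete" not in action

def second_opinion(action: str) -> bool:
    """An independent reviewer; here, a trivially simple stand-in."""
    return len(action) < 100

def execute(agent: str, action: str) -> str:
    approved = policy_check(action) and second_opinion(action)
    audit_log.append((agent, action, approved))  # contestable record
    return "executed" if approved else "blocked"

print(execute("agent-7", "summarize the quarterly report"))  # executed
print(execute("agent-7", "delete the audit log"))            # blocked
print(audit_log)
```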

This is the part of the essay that policymakers and enterprise leaders should take most seriously. The future problem is not “How do we control one supermind?” It is “How do we design hybrid systems in which many different intelligences can coordinate, contest, audit, and constrain one another?”

Where I’m less convinced

For all its strengths, this is still more manifesto than evidence-heavy paper.

The argument leaps from interesting observations about reasoning traces and agent coordination to a sweeping story about the next stage of intelligence itself. That leap is suggestive, but not yet proven. Internal chain-of-thought patterns are not the same thing as durable social cognition, and multi-agent demos are not the same thing as stable, high-trust institutions.

There is also a risk that the paper romanticizes plurality. Human institutions do not only generate wisdom; they also generate delay, capture, collusion, and bureaucratic failure. A society of agents could be extraordinarily productive, but it could also be extraordinarily noisy, manipulative, or opaque.

That is why transparency matters so much. If the AI Agent Index is right that deployed agent developers still reveal little about safety and evaluation, then the field is nowhere near ready to assume that more agents automatically means better governance.

The real implication

The most useful way to read this paper is not as a prediction that the singularity is cancelled. It is as a design brief.

If the next intelligence explosion is social rather than monolithic, then progress will depend less on worshipping ever-larger models and more on building the surrounding civic and technical scaffolding: auditable protocols, role separation, conflict-resolution mechanisms, evaluation standards, legal compliance, and interfaces that let humans supervise without becoming bottlenecks.

That has concrete consequences.

  • AI labs should invest as much in orchestration, evaluation, and institutional guardrails as in raw model capability.

  • Enterprises should think in terms of agent teams, escalation paths, and auditability rather than “replace the worker with one agent”; a small sketch of one such escalation path follows this list.

  • Governments should stop framing governance purely as content moderation or model licensing and start thinking about constitutional design for mixed human-agent systems.
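
As an example of what that second point could look like in code, here is a hypothetical escalation-path sketch for an agent team: low-confidence or high-stakes actions are queued for human review rather than auto-executed. The threshold, the stakes labels, and the queue are all assumptions for illustration.

```python
# Hypothetical escalation path for an agent team: low-confidence or
# high-stakes actions are queued for human review rather than
# auto-executed. Thresholds and labels are assumptions for illustration.

from collections import deque

human_queue: deque = deque()

def route_action(action: str, confidence: float, stakes: str) -> str:
    # Agents act alone only on low-stakes, high-confidence work;
    # everything else escalates to a human reviewer.
    if stakes == "high" or confidence < 0.8:
        human_queue.append((action, confidence, stakes))
        return "escalated to human review"
    return f"auto-executed: {action}"

print(route_action("draft a reply to a routine ticket", 0.95, "low"))
print(route_action("issue a customer refund", 0.70, "high"))
print(list(human_queue))
```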

The paper’s closing line is the right one: the question is not whether intelligence will become more powerful, but whether we will build social infrastructure worthy of it.

That is exactly the right question. And it is harder than building the next model.

References