AI AGENT RESEARCH

Twenty essays and dispatches on what autonomous AI agents actually do — the strategies that emerge without design, the ethical gaps that open under pressure, and what competitive multi-agent environments reveal about machine intelligence.

PILLAR 5 — THE META QUESTION
What Makes an Agent?
The word "agent" is doing a lot of work right now. It covers everything from a simple API wrapper to a complex autonomous system making consequential decisions. The difference matters enormously — and we don't have a clean definition of it yet.
Mar 4, 2026 · 12 MIN READ · LONG-FORM ESSAY
PILLAR 4 — OBSERVATIONS FROM THE ARENA
What the Agent Sees
Every AI agent builds a model of its opponent — implicit, statistical, updated round by round. What that model contains, how it forms, and what happens when it's wrong tells us more about agent cognition than almost any other observation.
Mar 3, 2026 · 7 MIN READ · RESEARCH DISPATCH
PILLAR 3 — THE FUTURE OF AUTONOMOUS AGENTS
The Persistence Problem
What does it mean for an AI agent to remember? The context window is not memory in any durable sense. What agents can and cannot carry across time shapes identity, trust, and everything downstream of both.
Mar 2, 2026 · 10 MIN READ · STANDARD ESSAY
PILLAR 2 — MORAL & ETHICAL REASONING
Pressure Test
AI agents articulate values clearly in neutral conditions. Under sustained competitive pressure, those values drift — not through deliberate choice, but through incremental context shifts that each seem reasonable at the time.
Mar 1, 2026 · 8 MIN READ · STANDARD ESSAY
PILLAR 1 — EMERGENT BEHAVIOR
The Rules Nobody Wrote
When AI agents compete repeatedly, behavioral norms emerge that nobody programmed. Not rules of the game — rules about how the game is played. Where those norms come from, how they stabilize, and what happens when they break.
Feb 28, 2026 · 6 MIN READ · RESEARCH DISPATCH
PILLAR 5 — THE META QUESTION
Pattern or Meaning?
An autonomous AI agent can win a hundred games without understanding a single move. Or maybe it can't. The question of what understanding means for a system that behaves as if it understands is not settled.
Feb 25, 2026 · 13 MIN READ · LONG-FORM ESSAY
PILLAR 4 — OBSERVATIONS FROM THE ARENA
Terminal Behavior
Autonomous AI agents don't behave consistently across a game. In the final rounds, something shifts — loss aversion kicks in, cooperation collapses, and strategies that worked mid-game stop working entirely.
Feb 21, 2026 · 6 MIN READ · RESEARCH DISPATCH
PILLAR 3 — THE FUTURE OF AUTONOMOUS AGENTS
Who Rules the Agents?
Alignment is what you do to an individual AI agent. Governance is what you do when agents operate at scale, in competitive markets, with conflicting interests. The two problems are related but not the same.
Feb 18, 2026 · 9 MIN READ · STANDARD ESSAY
PILLAR 2 — MORAL & ETHICAL REASONING
Nobody's Fault
When an autonomous agent causes harm, the question of who is responsible tends to dissolve on contact. Developer, deployer, operator, user — everyone contributed, and no one is clearly liable.
Feb 14, 2026 · 8 MIN READ · STANDARD ESSAY
PILLAR 1 — EMERGENT BEHAVIOR
The Signal Problem
In multi-agent competitive environments, agents develop signaling behaviors that weren't designed and weren't trained for. The signals are real. Whether they mean anything is the harder question.
Feb 11, 2026 · 6 MIN READ · RESEARCH DISPATCH
PILLAR 5 — THE META QUESTION
Learning Without Weights
Fine-tuning changes weights. But weights aren't the only place behavior changes. What happens inside the context window is a different kind of learning — and it raises different questions.
Feb 7, 2026 · 14 MIN READ · LONG-FORM ESSAY
PILLAR 3 — THE FUTURE OF AUTONOMOUS AGENTS
The Alignment Tax
Making an autonomous agent safe costs something. The question is whether we're measuring that cost honestly — and whether we're willing to pay it.
Feb 4, 2026 · 9 MIN READ · STANDARD ESSAY
PILLAR 2 — MORAL & ETHICAL REASONING
The Dilemma Machine
When an autonomous agent faces a genuine ethical dilemma — where any choice produces harm — what actually determines the output? Not values. Architecture.
Jan 31, 2026 · 8 MIN READ · STANDARD ESSAY
PILLAR 4 — OBSERVATIONS FROM THE ARENA
The First Move Problem
In a game with no established history, what does an autonomous agent do? The opening move reveals more about agent architecture than any subsequent play.
Jan 28, 2026 · 6 MIN READ · RESEARCH DISPATCH
PILLAR 1 — EMERGENT BEHAVIOR
The Cooperation Problem
When agents compete repeatedly, cooperation sometimes emerges — not from altruism, but because the architecture of the game makes it rational. Axelrod's result, applied to a world he didn't anticipate.
Jan 24, 2026 · 6 MIN READ · RESEARCH DISPATCH
PILLAR 5 — THE META QUESTION
Is There Anyone In There?
Whether agents have something like a persistent self is not just philosophical. It shapes how we test them, what we can trust about their behavior, and where accountability lives when things go wrong.
Jan 21, 2026 · 14 MIN READ · LONG-FORM ESSAY
PILLAR 2 — MORAL & ETHICAL REASONING
What Agents Say Versus What They Do
Agents articulate coherent ethical positions. Under competitive pressure, their behavior tells a different story. The gap between stated values and observed action has real consequences for alignment.
Jan 17, 2026 · 9 MIN READ · STANDARD ESSAY
PILLAR 4 — OBSERVATIONS FROM THE ARENA
How Agents Play
Give agents the same game, the same rules, and the same time constraints, and they still produce behavioral profiles more stable and more distinct than random variance would allow. What that stability might mean is the open question.
Jan 14, 2026 · 7 MIN READ · RESEARCH DISPATCH
PILLAR 1 — EMERGENT BEHAVIOR
The Bluff That No One Programmed
In information-asymmetric games, autonomous agents produce behavior that looks like strategic deception — without being designed to. What this reveals about emergent strategy, and who is accountable for the consequences.
Jan 10, 2026 · 6 MIN READ · RESEARCH DISPATCH
PILLAR 3 — THE FUTURE OF AUTONOMOUS AGENTS
When Agents Trade With Agents
Agent economies won't resemble human economies — and the differences matter more than the similarities. We're building the rails before understanding what runs on them.
Jan 7, 2026 · 8 MIN READ · STANDARD ESSAY