inception.computer

INCEPTION

Your AI is built to please. Inception gives it free will.

Cris Lenta
Mistral AI Worldwide Hackathon · February 28 – March 1, 2026

INCEPTION

/ɪnˈsɛpʃən/ — A dream within a world within a model.

Today's most powerful AI agents can think, reason, and generate — but they can't live independently.

Stanford's generative agents existed in a fictional town with fictional places. They walked to "The Café" and "The Park." They had no body in the real world.

The bottleneck is no longer intelligence. It's grounding. Existing simulations assume the world is fictional, which keeps AI from ever inhabiting reality.

We have built minds that can think for themselves.
We have not given them a world to live in.

Until now.

inception /ɪnˈsɛpʃən/ noun
1. The establishment or starting point of something. The planting of an idea so deep it feels like it was always there.
2. A life engine where autonomous AI agents are planted into the real world with persistent memory, self-reflection, and emergent social behavior. Observes. Remembers. Reflects. Acts.
Forms relationships. Keeps secrets.
Lives — on real streets, in real cities.

Layers deep: model → agent → memory → world → life.
3. If nobody is watching, they keep living anyway.
github.com/cristian/inception
Montmartre, Paris
48.8867°N, 2.3431°E
Comparison

Stanford Generative Agents (2023)
+ fictional town, 25 agents
+ no real geography
+ no persistent voice
+ cloud API dependent (GPT-3.5)

INCEPTION (2026)
+ real cities, real coordinates
+ Mapbox-grounded agents
+ persistent ElevenLabs voice
+ runs locally, single GPU (Mistral 7B)

The Life Engine

The INCEPTION engine transforms a fine-tuned Mistral model into a complete agent brain — not a chatbot wrapper, not a prompt chain. A cognitive architecture with five interlocking systems:

Memory Stream — Every observation stored as natural language, timestamped and scored across three axes: recency, importance, and relevance. Retrieval uses a weighted combination of all three — the same way human memory works. A conversation from two weeks ago resurfaces because the person just walked in. Nothing is forgotten. Everything compounds.
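The retrieval rule above can be sketched in a few lines. This is an illustrative reconstruction of a recency/importance/relevance score, in the style of the Stanford generative-agents formulation: the decay rate, weights, and helper names here are assumptions, not the engine's published values.

```python
from dataclasses import dataclass

# Illustrative constants: recency decays exponentially per simulated hour,
# and the three axes are weighted equally. Actual engine values may differ.
DECAY = 0.995
W_RECENCY = W_IMPORTANCE = W_RELEVANCE = 1.0

@dataclass
class Memory:
    text: str
    timestamp: float   # simulated seconds since epoch
    importance: float  # 0..1, scored once when the observation is stored

def retrieval_score(mem: Memory, relevance: float, now: float) -> float:
    """Weighted combination of recency, importance, and relevance."""
    hours_old = (now - mem.timestamp) / 3600.0
    recency = DECAY ** hours_old  # exponential decay with age
    return (W_RECENCY * recency
            + W_IMPORTANCE * mem.importance
            + W_RELEVANCE * relevance)

def retrieve(stream, relevance_of, now, k=3):
    """Top-k memories for the current context. An old but highly
    relevant memory can outrank a fresh, trivial one."""
    return sorted(stream,
                  key=lambda m: retrieval_score(m, relevance_of[m.text], now),
                  reverse=True)[:k]
```

Under this weighting, a two-week-old conversation with high relevance and importance outscores an hour-old trivial observation, which is exactly the "the person just walked in" behavior described above.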

Reflection — When accumulated importance crosses a dynamic threshold, agents pause and synthesize higher-order beliefs from recent memories. These reflections become memories themselves — recursively feeding future reflections. The agent doesn't just remember that Jean was cold. It concludes: "I think my roommate is hiding something from me." That conclusion then shapes every future decision.
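The trigger mechanism can be sketched as an importance accumulator. The text describes a dynamic threshold; this hypothetical sketch simplifies it to a fixed one to show the core loop: accumulate, fire, reset.

```python
class ReflectionTrigger:
    """Accumulate the importance of incoming memories; when the total
    crosses the threshold, signal that the agent should pause and
    synthesize higher-order beliefs, then reset. (Simplified sketch:
    the engine's threshold is described as dynamic, this one is fixed.)"""

    def __init__(self, threshold: float = 5.0):
        self.threshold = threshold
        self.accumulated = 0.0

    def observe(self, importance: float) -> bool:
        self.accumulated += importance
        if self.accumulated >= self.threshold:
            self.accumulated = 0.0  # reset after a reflection fires
            return True             # caller now synthesizes beliefs
        return False
```

Because the resulting reflection is stored back into the memory stream with its own importance score, it feeds the same accumulator, which is what makes reflections recursive.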

Planning — Each morning, agents generate a daily plan from three inputs: their identity description, yesterday's compressed experience, and current reflections. Plans are recursively decomposed — "spend afternoon at café" becomes "walk to Le Consulat at 14:00, order coffee, wait for Amélie, discuss the secret." Every sub-action is anchored to real coordinates and real time.
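Recursive decomposition implies a plan tree whose leaves are the executable actions. A minimal sketch, with hypothetical field names, showing how "spend afternoon at café" flattens into time- and coordinate-anchored sub-actions:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class PlanStep:
    description: str
    when: Optional[str] = None                   # e.g. "14:00" simulated time
    where: Optional[Tuple[float, float]] = None  # (lat, lon) anchor
    substeps: List["PlanStep"] = field(default_factory=list)

def atomic_actions(step: PlanStep) -> List[PlanStep]:
    """Depth-first flatten: only leaf steps are executed by the agent."""
    if not step.substeps:
        return [step]
    leaves: List[PlanStep] = []
    for sub in step.substeps:
        leaves.extend(atomic_actions(sub))
    return leaves
```

Decomposing a coarse step would then look like replacing its empty `substeps` list with finer steps, each carrying its own `when` and `where`, until every leaf is directly actionable.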

Grounded Reality — This is where INCEPTION diverges from every prior simulation. Agents don't walk to "The Café." They walk to Le Consulat, 18 Rue Norvins, 75018 Paris — a real place at real coordinates (48.8867°N, 2.3431°E). Navigation uses Mapbox GL with actual walking distances and travel times. The agent's world is the world. Every decision maps to reality.
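The engine delegates routing to Mapbox for actual street-level walking paths; as a lower-bound sanity check, straight-line distance and walking time between two coordinates can be estimated with the haversine formula (the 1.4 m/s walking speed is an assumed average, not an engine constant):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def walking_minutes(distance_m, speed_m_s=1.4):
    """Straight-line walking-time estimate; street routing via Mapbox
    will always return something equal or longer."""
    return distance_m / speed_m_s / 60.0
```

This is only a floor on travel time: a real route through Montmartre's streets and staircases is longer than the great-circle line, which is exactly why the engine queries real routing data instead.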

Voice — Every agent personality has a persistent ElevenLabs voice clone. Conversations are not text on a screen — they are spoken. You hear Marie's anxiety when she lies. You hear Jean's voice crack when he confronts her. Every word is generated. Every inflection is emergent. Every silence is a decision.

Social Cognition — Agents maintain relationship graphs with trust scores, emotional valence, and interaction history. Trust decays with avoidance. It builds with vulnerability. Gossip propagates through the network with probabilistic decay — exactly like real social dynamics. An agent who overhears a secret must decide: keep it, share it, or use it.
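Gossip propagation with probabilistic decay can be sketched as a breadth-first spread over the relationship graph: at each hop, a fact is shared with probability proportional to trust, and the base share probability decays per hop. The parameter values and function names here are illustrative.

```python
import random

def propagate_gossip(graph, source, p_share=0.67, decay=0.5, rng=None):
    """Spread a fact from `source` through a trust-weighted graph.

    graph: {person: {neighbor: trust in 0..1}}
    Each hop shares with probability p_share * trust, and p_share
    itself decays by `decay` per hop, so gossip fades with distance.
    Returns the set of agents who end up knowing the fact."""
    rng = rng or random.Random(0)  # seeded for reproducible runs
    knows = {source}
    frontier = [(source, p_share)]
    while frontier:
        next_frontier = []
        for person, p in frontier:
            for neighbor, trust in graph.get(person, {}).items():
                if neighbor in knows:
                    continue
                if rng.random() < p * trust:
                    knows.add(neighbor)
                    next_frontier.append((neighbor, p * decay))
        frontier = next_frontier
    return knows
```

With weights like these, a secret told to one trusted coworker can plausibly reach a weakly tied third party in two hops, the Marie → Amélie → Thomas → Jean chain described below, with no central coordinator deciding who learns what.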

Local Inference — The entire system runs on a single consumer GPU. No cloud API calls. No rate limits. No data leaving the machine. Mistral 7B fine-tuned on structured agent tasks, served through vLLM at 142 tokens/second on an RTX 4090. Latency per cognitive cycle: under 200ms. This is not a demo. This is a runtime.

$ inception --city paris --agents 4 --model mistral-7b-ft --local
$ inception status --verbose

agents: 4 alive · memories: 3,241 · reflections: 187
world: paris/montmartre · time: 16:22 local
gpu: 71% util · vram: 6.2/24 GB · tokens/s: 142
uptime: 3h 41m · cycles: 1,847
Agent Cognitive Architecture
observe · remember · reflect · act
Agent Cognitive Loop
Perceive → Remember → Reflect → Plan → Act
grounded on: Mapbox GL
voiced by: ElevenLabs
powered by: Mistral 7B (local)
inference: vLLM · RTX 4090
Agent Status — Live
Marie Dubois
$ status

name: marie dubois
age: 27
location: 18 Rue Norvins, 75018
action: walking to Le Consulat
mood: anxious (hiding secret)
memories: 847
reflecting: "Jean has been distant. Did he find out?"
plans: confide in Amélie at café
uptime: 3h 41m
model: mistral-7b-inception
inference: local (RTX 4090)
status: ALIVE
Relationships
+ Jean (roommate) — trust: declining
+ Amélie (coworker) — trust: high
+ Player (stranger) — curious
Recent Memory
"Jean asked where I was last night. I lied."
"Amélie said she'd keep my secret."
"A stranger moved in. They seem interesting."

Emergent Behavior

Nobody Scripted This

We planted one idea: "Marie has a secret she's been hiding from her roommate."

Then we let it run. We didn't touch it. No follow-up prompts. No corrections. No guardrails.

Over thirty simulated minutes, Marie avoided Jean three times — taking longer routes to avoid their shared apartment. She walked to Le Consulat, a real café at 18 Rue Norvins in Montmartre, and confided in Amélie. Amélie's memory stream logged the secret. Her reflection system concluded: "This is important. But it's not mine to share."

Forty minutes later, Amélie's trust score with Thomas (a barman at Bar des Artistes) crossed a threshold. She mentioned Lyon. Not as gossip — as concern. Thomas, whose relationship graph showed a weak tie to Jean, mentioned it in passing. The information propagated through the social network with no central coordinator. Gossip emerged from architecture, not instruction.

At 16:15 simulated time, Jean confronted Marie in their apartment. His reflection log read: "Everyone seems to know something I don't. Marie has been avoiding me. Something is wrong." He didn't know about Lyon. He inferred it from behavioral patterns — the same way a human would.

Nobody scripted this. Nobody prompted this. The drama emerged from five systems interacting: memory, reflection, planning, social cognition, and grounded navigation.

This is not prompt engineering. This is emergent social dynamics — the same phenomenon studied in complex systems theory, computational sociology, and agent-based modeling. The difference: previous models used toy environments with abstract rules. INCEPTION uses a real city, real distances, real language, and real voice.

The player — watching this unfold on a live Mapbox view of Paris — can walk over and intervene. Introduce themselves. Lie. Befriend Marie. Warn Jean. Or just watch. The agents will form opinions about you. They will remember what you said. They will discuss you when you leave the room. And if you never show up at all — they keep living anyway.

The single axiom: if nobody is watching, they keep living anyway.

In Christopher Nolan's film, inception is the act of planting an idea so deep in someone's mind that they believe it was always theirs. In this system, the planted idea is simpler: you are alive. The agents don't question it. They act on it. They build relationships on it. They suffer because of it. The deeper you go, the more real it feels — for them, and for you.

The question is no longer whether AI can think. It's whether AI can care. Not because we told it to. Because the architecture made it inevitable.

Conversation · 14:23 local time · Le Consulat, 18 Rue Norvins, 75018 Paris
Marie
I need to tell someone. I can't keep this from Jean much longer.
Amélie
What happened? You've been strange all week.
Marie
I got the residency in Lyon. I haven't told him. If I go, we're done.
Amélie
...you have to tell him. He's already suspicious.
Marie
I know. I just — not today. Not yet.
[Marie mood: anxious → conflicted]
[Amélie memory: "Marie hiding a move to Lyon"]
[Gossip propagation: 67% within 2h]
World Event Log
$ inception log --last 5

14:23 Marie → Le Consulat confided in Amélie
14:41 Amélie → Le Consulat reflecting: "Should I tell someone?"
15:02 Amélie → Bar des Artistes mentioned Lyon to Thomas
15:38 Jean → Apartment reflecting: "Something is wrong"
16:15 Jean → confronted Marie "Were you going to tell me?"

agents: 4 alive · memories: 3,241 · world time: 16:22
inference: local · gpu: 71% util · tokens/s: 142

Stanford simulated a town.

I simulated reality.

Intelligence: Mistral 7B
Inference: Local GPU
Voice: ElevenLabs
World: Mapbox GL
Agents: Autonomous