How AiDrift Sees Your Session

A short, readable primer on the model behind the drift score. The last section names the math for readers who want it.

The question behind every drift alert

When an AI coding session goes wrong, it's rarely because a single turn was bad. It's because the conversation slowly walked away from what you asked for. Files you never mentioned start getting edited. Tools you didn't need start getting called. The agent is still helpful, still competent — just not about your problem anymore.

A scalar "drift score" can tell you this is happening. It can't tell you where it went or what it's working on now. That second question is the interesting one.

The idea: a session has a shape

Every AI coding session has two things it's about:

- What you asked for: the intent and the files named in your opening prompt.
- What the agent is actually doing: the files it reads and edits, the tools it calls, the concepts it keeps mentioning.

Write those two things down and you get a picture. Some items sit near your original ask. Some sit far away. Some are clustered tightly together (the agent is in the zone). Some are scattered (the agent is exploring — or lost). That picture is what we call a session map.
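To make the picture concrete, here is a minimal sketch of one way a session map could be represented: items become nodes, and items that show up in the same turn get an edge. All names here are illustrative, not AiDrift's actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class SessionMap:
    """Toy session map: items are nodes, same-turn co-occurrence is an edge."""
    nodes: set = field(default_factory=set)
    edges: dict = field(default_factory=dict)  # node -> set of neighbours

    def add_turn(self, items):
        # Register every item, then link everything mentioned in this turn.
        for a in items:
            self.nodes.add(a)
            self.edges.setdefault(a, set())
        for a in items:
            for b in items:
                if a != b:
                    self.edges[a].add(b)

m = SessionMap()
m.add_turn(["billing/invoice.py", "Invoice.total", "tax"])
m.add_turn(["billing/invoice.py", "rounding"])
```

Items that never share a turn stay unlinked, which is what lets distance and clustering mean anything later.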

A map gives you a language the scalar score never could.

Four things the session map gives you

1. An intent anchor

The session is anchored to what you originally asked for. We pull keywords and file references from your first prompt and plant them as a fixed point. Everything else is measured against this anchor.
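A minimal sketch of what anchor extraction could look like, assuming a crude regex for file paths and a small stopword list. The real extractor is surely richer; every name and pattern here is an illustrative assumption.

```python
import re

# Hypothetical stopword list for illustration only.
STOPWORDS = {"the", "a", "in", "to", "and", "fix", "please", "my"}

def extract_anchor(first_prompt: str):
    """Pull file references and bare keywords from the opening prompt."""
    files = re.findall(r"[\w./-]+\.\w{1,4}", first_prompt)  # crude path matcher
    words = [w.lower() for w in re.findall(r"[A-Za-z_]{3,}", first_prompt)]
    joined = " ".join(files)
    keywords = {w for w in words if w not in STOPWORDS and w not in joined}
    return {"files": set(files), "keywords": keywords}

anchor = extract_anchor("Please fix the tax rounding in billing/invoice.py")
```

The anchor never moves after this point; every later item is measured against it.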

2. Focal points

A session has a few items it revolves around. The file that keeps getting read. The function that keeps getting edited. The concept that keeps being mentioned. These focal points are the short answer to "what is this session actually about, right now." They're the items you'd put in a one-line summary.
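Focal points can be approximated with nothing more than a frequency count over session events. A sketch, assuming events arrive as (action, item) pairs; the event shapes and names are hypothetical:

```python
from collections import Counter

def focal_points(events, k=3):
    """Items the session revolves around: the most-touched files, symbols, concepts."""
    counts = Counter(item for _, item in events)
    return [item for item, _ in counts.most_common(k)]

events = [
    ("read", "billing/invoice.py"),
    ("edit", "billing/invoice.py"),
    ("read", "billing/tax.py"),
    ("edit", "billing/invoice.py"),
    ("mention", "rounding"),
]
tops = focal_points(events, k=2)  # repeatedly-touched items float to the top
```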

3. Scope distance

How far has the agent walked from your anchor? We measure the shortest path through the map from your intent anchor to where the agent is currently working. A small distance means the agent is doing what you asked. A growing distance is drift — but unlike a scalar score, it comes with a reason. "Agent moved from billing/* to auth/* over the last 14 turns" is a drift alert you can act on.
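Scope distance is a shortest-path computation. Here is a breadth-first-search sketch over a toy adjacency map, with illustrative file names: it returns the hop count from the anchor to the agent's current item, or None if there is no path at all.

```python
from collections import deque

def scope_distance(edges, anchor, current):
    """Shortest-path hops from the intent anchor to where the agent is now."""
    if anchor == current:
        return 0
    seen, queue = {anchor}, deque([(anchor, 0)])
    while queue:
        node, d = queue.popleft()
        for nb in edges.get(node, ()):
            if nb == current:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, d + 1))
    return None  # unreachable: the agent is off the map entirely

edges = {
    "billing/invoice.py": {"billing/tax.py"},
    "billing/tax.py": {"auth/session.py"},
    "auth/session.py": set(),
}
d = scope_distance(edges, "billing/invoice.py", "auth/session.py")
```

A distance that grows turn over turn is the "agent moved from billing/* to auth/*" alert, with the intermediate nodes as the explanation.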

4. Overlap regions

When a session spawns sub-agents, each child builds its own sub-map. When two children's maps overlap — same files, same concepts, same cluster — they're doing redundant work. Catching this early saves compute and cleanup. It's the single drift pattern that's almost impossible to spot from a scalar score alone.
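One simple way to quantify overlap between two children's sub-maps is Jaccard similarity over their node sets. A sketch, with illustrative item names; the alert threshold would be a tuning choice:

```python
def overlap(map_a: set, map_b: set) -> float:
    """Jaccard overlap of two sub-agents' node sets: 0 = disjoint, 1 = identical."""
    if not map_a and not map_b:
        return 0.0
    return len(map_a & map_b) / len(map_a | map_b)

child_a = {"billing/invoice.py", "billing/tax.py", "rounding"}
child_b = {"billing/tax.py", "rounding", "auth/session.py"}
score = overlap(child_a, child_b)  # 2 shared items out of 4 total
```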

Evidence tiers: not all signals are equal

A drift signal that comes from the agent actually editing a file is not the same as a signal that comes from a keyword showing up in a user turn. We label every item and connection in the map with an evidence tier:

- Observed: the agent actually did it. A file edit, a tool call, a command run.
- Stated: someone said it. A keyword or file name that appeared in a turn.
- Inferred: we connected it. A link we derived rather than saw, clearly marked as such.

Every alert the dashboard shows you carries its tier. You can filter to Observed-only if you want to be strict.
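Tier-based filtering can be as simple as an ordered ranking. The tier names and alert shape below are assumptions for illustration; the document itself names only "Observed" and "inferred" evidence.

```python
# Hypothetical tier names, ordered strongest to weakest.
TIER_RANK = {"observed": 0, "stated": 1, "inferred": 2}

def filter_alerts(alerts, strictest="observed"):
    """Keep only alerts whose evidence tier is at least as strong as `strictest`."""
    cutoff = TIER_RANK[strictest]
    return [a for a in alerts if TIER_RANK[a["tier"]] <= cutoff]

alerts = [
    {"msg": "agent edited auth/session.py", "tier": "observed"},
    {"msg": "'auth' keyword in user turn", "tier": "stated"},
    {"msg": "auth/* related to billing/*", "tier": "inferred"},
]
strict = filter_alerts(alerts, strictest="observed")
```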

Why this is honest

The session map is not a prediction model. It's a transparent accounting of things that happened and things you said. We don't invoke an LLM to decide what the session is about — we extract it from the logs you already have. We don't guess at connections — we label what we're sure about and clearly mark what we inferred.

If a drift alert is wrong, you can open the map, see the exact chain of observed events that produced it, and understand why. That's the bar. Nothing magical, nothing unverifiable.

Appendix: the math, named

For readers who want to know what's under the hood.

- The session map is a graph: items (files, functions, concepts) are nodes; co-occurrence in a turn is an edge.
- The intent anchor is a pinned set of nodes extracted from the first prompt.
- Focal points are the most frequently touched, highest-degree nodes.
- Scope distance is shortest-path distance from the anchor to the agent's current working set.
- Overlap regions are intersections between the node sets of sibling sub-maps.

All of this runs locally on the logs AiDrift already watches. No paid API calls. Tree-sitter for code structure, keyword extraction for user concepts, standard graph algorithms for everything else.
