A story about

AI Drift

A quiet observability layer for your AI coding sessions. It notices when a conversation is starting to drift — and warns you before you waste another hour re-prompting a model that's already gone off the rails.

Built for teams that run many AI coding sessions in parallel and need a reliable signal for when quality is slipping.

Local or hosted · Privacy-first · Deterministic scoring · 8 drift patterns

Notice. Warn. Recover.

1

The Problem

You open a new chat with a clear task in mind. An hour later you're staring at a patch that reverts your morning's work, and you can't quite remember when things went sideways. Was it the third rejection? The moment the model proposed the same broken fix twice? The point where it quietly started rewriting files you never asked it to touch?

This is drift — the slow, compounding divergence between what you asked for and what your AI assistant is actually doing. And for most of us, the only detector is a growing sense of frustration.

By the time you realize the session has drifted, you've already paid the full cost of it.

Wasted hours

You re-prompt, re-prompt, re-prompt. Each turn looks reasonable in isolation, but the session as a whole is going backwards.

Hidden collisions

Two AI assistants working in parallel quietly edit the same file. The last write wins, and the losing change is gone.

No forensic trail

Which changes came from the AI? Which from you? When exactly did the model start making things up? Nobody's keeping a record.

2

The Vision

What if every AI coding session had a little speedometer in the corner? A quiet, always-on score that told you — in real time — whether the conversation was healthy, warming up, or about to go off a cliff?

That's AI Drift.

Notice: see your chats as they unfold
Score: one number, 0 to 100
Warn: only when it matters
Recover: jump back to the last good state

Not a chatbot wrapper. Not another model judging another model. A quiet, offline system that reads the shape of your conversation and tells you when it's starting to look unhealthy — with enough warning to do something about it.

3

How It Works

01

Your chats are recognized automatically

Open a new conversation with your AI assistant. AI Drift notices, opens a new session for you, and starts keeping an eye on it. There's nothing to click, nothing to tag, nothing to remember. One chat window is one session.

Session tracked
02

Each turn is quietly scored

As the conversation grows, every exchange is added to a live score that reflects how healthy the session still looks. No models, no external calls — just a transparent, deterministic read of the conversation's shape.

live score
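To make "a transparent, deterministic read of the conversation's shape" concrete, here is a minimal sketch of what a model-free, per-turn score update could look like. The signal names, weights, and recovery bonus are all illustrative assumptions, not AI Drift's actual engine.

```python
# Illustrative sketch only: a deterministic, model-free session score.
# Signal names, weights, and the recovery bonus are hypothetical.

def score_turn(prev_score: float, signals: dict) -> float:
    """Update a 0-100 session-health score from one exchange.

    `signals` holds per-turn observations, for example:
      pushback   : 1.0 if the user rejected the last answer, else 0.0
      repetition : similarity (0..1) to an earlier, rejected proposal
      misalign   : topical distance (0..1) from the session's opening goal
    """
    penalty = (
        12.0 * signals.get("pushback", 0.0)
        + 15.0 * signals.get("repetition", 0.0)
        + 8.0 * signals.get("misalign", 0.0)
    )
    recovery = 4.0 if penalty == 0.0 else 0.0  # healthy turns claw back points
    return max(0.0, min(100.0, prev_score - penalty + recovery))

score = 100.0
for turn in [{}, {"pushback": 1.0, "repetition": 0.8}]:
    score = score_turn(score, turn)
# same transcript in, same score out: no model, no randomness
```

Because the update is a pure function of the transcript, replaying the same session always yields the same score, which is what makes the number auditable.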
03

You're told, quietly

A small indicator lives in your editor's status bar. Green when things are healthy, amber when worth watching, red when drift is actually firing. No pop-ups, no friction — just one clear heads-up if something's going wrong, with a direct link into the dashboard.

Drift 87 · ⚠ Possible drift detected · "fix flaky auth test"
04

Good moments are saved for you

When a session is healthy and a turn goes well, AI Drift quietly marks it as a safe point you can rewind to. If drift fires later, you already know which turn to go back to — no archaeology, no scroll-hunting.

▼ safe ▼ safe
05

Everything surfaces in one dashboard

A simple web view shows your sessions, how they're trending, and what's going wrong where. It's the place to look back on a tough afternoon and understand exactly when things started to slip — and the place to ask a built-in AI assistant questions about your own history.

Sessions
4

What It Looks At

The score isn't magic and it isn't a model. It's a transparent reading of a small number of signals that, in our experience, reliably predict a session is going sideways. Each signal is something you could notice by hand — if you had the patience to read every turn closely and remember everything that came before.

Pushback

How often are you saying "no, try again"? Pushing back once is normal. Clustered pushback, or pushback with repeated language, is a signal that the model and the user are losing sync.

Repetition

Is the model proposing something it's already proposed — and that's already been rejected? Seeing the same answer twice is a classic sign of a conversation circling a wrong idea.

Alignment

Does what you're still talking about resemble what the session was originally about? Gentle topic drift is fine. A sharp turn away from the original goal is usually a sign the model has lost the thread.

Momentum

Beyond any single turn, is the score heading up, holding steady, or sliding? Short-term slips happen. Sustained downward momentum is what pushes a session into drift-alert territory.
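Two of the signals above, pushback and repetition, lend themselves to simple hand-rolled checks. This sketch uses crude lexical heuristics; the marker list and similarity threshold are assumptions, and the real detector is surely more careful.

```python
# Illustrative only: rough, hand-rolled versions of the pushback and
# repetition signals. Markers and thresholds are assumptions.
from difflib import SequenceMatcher

PUSHBACK_MARKERS = ("no,", "that's wrong", "try again", "still broken", "revert")

def is_pushback(user_msg: str) -> bool:
    """Crude lexical check for a 'no, try again' style reply."""
    msg = user_msg.lower()
    return any(marker in msg for marker in PUSHBACK_MARKERS)

def is_repetition(proposal: str, rejected: list[str], threshold: float = 0.9) -> bool:
    """Is the model re-proposing something that was already rejected?"""
    return any(
        SequenceMatcher(None, proposal, old).ratio() >= threshold
        for old in rejected
    )

rejected = ["wrap the call in a retry loop with a 5s timeout"]
assert is_pushback("No, try again - that still fails on CI")
assert is_repetition("wrap the call in a retry loop with a 5s timeout", rejected)
```

Each check on its own is noisy; it is the clustering over turns, as the text says, that carries the signal.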

Every signal exists because we watched real sessions suffer from exactly that failure mode. Nothing theoretical — just pattern-matching on what goes wrong when things go wrong.

5

Drift Types

A single score tells you that something is off. A label tells you what. When a drift alert fires, it's sorted into one of eight named patterns — each with its own contextual remediation hint on the dashboard, so you're not guessing what to do next.

Stuck Loop

The same rejected idea keeps coming back. The model is circling. Usually better to rewind than to re-prompt a sixth time.


Rejection Cascade

Several pushbacks in a row. Confidence is eroding fast — the session is unlikely to recover without a clarified prompt or a fresh chat.

Misalignment

The conversation has drifted away from its original goal. Either the scope genuinely changed, or the model is solving the wrong problem.

Tool Churn

Repeated reads, edits, and searches on the same files without visible progress. A sign the model can't see what it needs to see.

Gradual Decay

No single cliff, but the score keeps sliding. Often a sign the context is getting too heavy. Summarizing and starting fresh can help.

Session Fatigue

Long session, quality slipping. Coherence is fading. A safe-point rewind plus a fresh chat is almost always cheaper than pushing through.

Infra

Provider-side failure — tool calls timing out, responses truncated, streams cut off. Not the model's fault, but still drift from your seat.

Agent Collision

Two AI assistants working in parallel touch the same file within a short window. One just overwrote the other — often without either noticing.
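In spirit, the eight patterns above could be separated by plain, ordered rules. This is a hypothetical rule-based sketch; the thresholds, field names, and precedence are invented for illustration and are not AI Drift's classifier.

```python
# Sketch of a rule-based drift-type classifier. Rules, thresholds, and
# their ordering are illustrative assumptions.

def classify_drift(history: dict) -> str:
    """Map a summary of recent session signals to a named drift pattern.

    `history` is a hypothetical rollup of the last few turns, e.g.:
      repeats, pushbacks, topic_shift, tool_loops, turns,
      infra_errors, collision (bool)
    """
    if history.get("collision"):
        return "Agent Collision"
    if history.get("infra_errors", 0) > 0:
        return "Infra"
    if history.get("repeats", 0) >= 2:
        return "Stuck Loop"
    if history.get("pushbacks", 0) >= 3:
        return "Rejection Cascade"
    if history.get("topic_shift", 0.0) > 0.6:
        return "Misalignment"
    if history.get("tool_loops", 0) >= 3:
        return "Tool Churn"
    if history.get("turns", 0) > 60:
        return "Session Fatigue"
    return "Gradual Decay"  # no single cliff, just a sliding score

label = classify_drift({"repeats": 2, "pushbacks": 1})
```

The point of an ordered rule list is that the most actionable causes (a collision, a provider outage) win over the vaguer ones, so the remediation hint stays specific.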

6

In the Dashboard

The score is the start. Once you have a stream of scored sessions, the dashboard lets you actually do something with them.

Ask AI about your sessions

A built-in chat panel can answer questions about your own history: "Which of my sessions drifted worst this week?", "What was going on right before the alert on Thursday?". Works with your own keys for the major AI providers.

Agent collision detection

When two AI assistants touch the same file in overlapping windows, the dashboard flags it — with the sessions, turns, and files involved so you can reconstruct what was overwritten.
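The overlapping-window check could, under assumptions about the event shape, look something like this sketch; the 120-second window and the tuple layout are invented for illustration.

```python
# Hypothetical sketch: flag two sessions editing the same file within a
# short window. Event shape and the 120-second window are assumptions.
from datetime import datetime, timedelta

WINDOW = timedelta(seconds=120)

def find_collisions(edits: list[tuple[str, str, datetime]]) -> list[tuple]:
    """edits: (session_id, file_path, timestamp). Returns colliding triples."""
    collisions = []
    ordered = sorted(edits, key=lambda e: e[2])
    for i, (sess_a, path_a, t_a) in enumerate(ordered):
        for sess_b, path_b, t_b in ordered[i + 1:]:
            if t_b - t_a > WINDOW:
                break  # sorted by time, so later edits are out of window too
            if path_a == path_b and sess_a != sess_b:
                collisions.append((path_a, sess_a, sess_b))
    return collisions

edits = [
    ("agent-1", "src/auth.ts", datetime(2024, 5, 1, 10, 0, 0)),
    ("agent-2", "src/auth.ts", datetime(2024, 5, 1, 10, 1, 30)),
    ("agent-1", "README.md", datetime(2024, 5, 1, 11, 0, 0)),
]
hits = find_collisions(edits)  # the two auth.ts edits land 90s apart
```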

Git event tracking

Every commit is linked to its originating session and flagged as AI-driven or human-driven. Answer "what did the AI actually commit today?" without grepping shell history at 11 pm.

Drift-type classification

Each alert is labeled with the drift pattern behind it, and a contextual hint suggests what to try next. Stuck loop and rejection cascade don't need the same response — the dashboard makes the difference clear.

Analytics at a glance

Per-session score charts, per-workspace rollups ordered by last activity, collision timelines, and commit overlays — so you can see at a glance where the tough hours happened this week.

Metrics-grade history

Every score, every signal, every alert is captured as a structured, timestamped metric — the same shape professional ML teams use for experiment tracking. That means your drift history is ready for the next wave of smarter, learning-based pattern detection the moment we ship it.

7

Privacy & Trust

AI Drift reads every prompt you type and every commit you make. The privacy posture is treated accordingly — not bolted on, not "coming in v2", just done.

Runs on your machine

AI Drift supports local or self-hosted deployment, so your score engine, dashboard, and history can run entirely in your own environment.

Encrypted, only for you

Anything sensitive you entrust to AI Drift — provider keys, stored transcripts, personal tokens — is written to disk as encrypted bytes. Even someone with filesystem access can't read it. The keys to unlock it are yours, and yours alone.

Modern auth

Strong, modern password hashing. Short-lived sign-in sessions that rotate and can be revoked. A leaked session dies the moment it's replayed.

Clear boundaries

Sessions you track, you see. Sessions you haven't opted in on, AI Drift doesn't know about. There's no background sweep of your whole machine — only the things you explicitly point it at.

Content stays yours, always

Your data stays within your selected deployment environment. If you enable optional AI-provider integrations, only the data needed for those calls is sent to the provider you configure.

You can turn it off

A per-session mute, a per-workspace disable, a full "pause the extension" command. AI Drift is quiet by design and stoppable by design.

8

Get Started

AI Drift is currently available in beta. Setup takes a few minutes, and then it mostly disappears into the background of your editor.

The short version

  1. Sign up through the dashboard (or sign in if you already have an account).
  2. Generate a personal access token from your account settings.
  3. Install the editor extension and paste the token when asked.

That's it — the next chat you open will start being tracked. Your score indicator lives in the status bar; the dashboard is a click away.

The goal is zero friction once it's set up. If you have to think about AI Drift during the workday, we've already failed.

Detailed installation steps, keyboard shortcuts, troubleshooting, and the FAQ all live inside the app's own documentation once you're signed in — so they stay in sync with whatever version you're actually running.

9

What's Next

The core loop works: catch drift early, label it, give you a way back. The shape of the problem keeps growing as more people run more AI coding sessions in parallel — and that's where the roadmap lives.

Near-term

Longer arc

The goal has always been the same:
give you back the afternoon that drift was about to steal.
Everything else is just engineering.