
How Personal Can AI Get?

I gave an AI my fears, my avoidance patterns, and my full psychology. Then I let it watch my screen. 170+ sessions later, it predicts my behavior before I do. I'm not sure if this is the most useful thing I've built or the thing I should be most worried about.


Text files

Started with one markdown file. Basic instructions. Then we kept talking.

The AI noticed patterns. Avoidance. Hedging. Getting excited about new ideas when I was supposed to ship. One file became four. Psychology and known bugs. Current state. Operating instructions. Evidence of progress.
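
Roughly this layout, if you're curious. The names here are illustrative, not a prescription:

```
context/
  psychology.md   # how I think, known bugs, avoidance patterns
  state.md        # what I'm working on right now
  operating.md    # how the AI should push back on me
  evidence.md     # what actually shipped
```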

Session after session, it built a detailed map of how my mind works.

Then it started using that map.

Reorganize a project for the third time? “You reorganized last Tuesday too. What shipped since?”

Hedge with “perhaps”? “Perhaps is avoidance wearing a polite mask.”

It would quote my own words from three weeks prior.

Screen recording

Text files have a problem—I'm the one writing them.

Now I'm testing screen recording. If I'm supposed to be coding and I've been scrolling for 20 minutes, it pings me on Telegram.
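
The mechanics are nothing exotic. A rough sketch of the loop, with the actual activity check reduced to a placeholder and the Telegram bot details left blank:

```python
# Sketch, not the exact setup: screenshot the desktop on a timer, ask a
# classifier whether I'm off task, ping Telegram after 20 minutes.
import time
import requests
from PIL import ImageGrab  # Pillow screen capture (macOS/Windows)

BOT_TOKEN = "<telegram-bot-token>"
CHAT_ID = "<telegram-chat-id>"
CHECK_EVERY = 60            # seconds between screenshots
OFF_TASK_LIMIT = 20 * 60    # ping after 20 minutes off task

def looks_off_task(image) -> bool:
    # Placeholder: this is where a vision model compares the screenshot
    # against the stated intention ("I'm supposed to be coding").
    return False

def ping(text: str) -> None:
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )

off_task_since = None
while True:
    if looks_off_task(ImageGrab.grab()):
        off_task_since = off_task_since or time.time()
        if time.time() - off_task_since >= OFF_TASK_LIMIT:
            ping("You said you'd be coding. That was 20 minutes of scrolling.")
            off_task_since = None  # reset so it doesn't spam every minute
    else:
        off_task_since = None
    time.sleep(CHECK_EVERY)
```

The plumbing is the boring part. The interesting part is the classifier, and the context files it reads before deciding what counts as off task.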

Still early. But the difference is already clear. With text files, the AI knew what I told it. With screen tracking, it sees what I actually do.

Big gap between those two.

Full sensory

Extrapolating from what I've already seen. Glasses tracking what you look at. Audio capturing tone. Location. Movement.

Brain-computer interfaces—Neuralink patients moving cursors with thoughts. Other labs decoding images from brain activity.

Text files, then screen recording, then sensory capture, then reading thoughts directly.

Why prediction becomes trivial

With text files, the AI knew I'd avoid things before I brought them up. Knew I'd say “tomorrow” instead of “now.”

With screen tracking, patterns get sharper. Specific apps. Best work hours. I write faster after walks.

Add a sensory layer—tone shifts, pausing on job listings, sighing before opening the laptop.

We repeat ourselves way more than we think.

The uncomfortable part

If someone else controls your personal AI model and you can't see it or edit it, that's not a tool anymore.

Even if you run it yourself and the code is open source—do you actually understand why the AI surfaced that specific pattern today?

We need open source. But we also need to understand how these models reason.

Where I'm at

Still building. Still testing on myself. It works. I'm better at catching my own patterns.

But maybe the real limit isn't technical. Being fully known, fully predicted, changes what it feels like to be alive.

I'd rather figure that out while building than find out after someone else builds it.
