878 Conversations with AI: What I Actually Learned About Myself
878 conversations. 86 days. 40,417 messages. All sitting in a 3.2GB SQLite database.
I queried it. Here's what showed up.
Setup
Since November 2025 I've been using an AI coding agent as a thinking partner. Not just for code. For everything.
Four markdown files give it memory across sessions. Who I am. What I'm doing now. How it should behave. My patterns. Every session it reads these files and picks up where we left off.
86 days of that produced 878 sessions across 9 projects. The database is queryable. So I queried it.
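The counting boils down to a couple of GROUP BY queries. Here's a minimal sketch; the table and column names (`sessions`, `messages`, `project`) are hypothetical, and it runs against a tiny in-memory sample rather than the real 3.2GB database:

```python
import sqlite3

# Hypothetical schema with sample rows -- the real agent database differs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (id INTEGER PRIMARY KEY, project TEXT, started_at TEXT);
CREATE TABLE messages (id INTEGER PRIMARY KEY, session_id INTEGER, content TEXT);
INSERT INTO sessions VALUES (1, 'game', '2026-01-06'), (2, 'game', '2026-01-09'),
                            (3, '3d-app', '2026-01-10');
INSERT INTO messages VALUES (1, 1, 'hi'), (2, 1, 'plan'), (3, 2, 'ship'), (4, 3, 'deploy');
""")

# Sessions and message counts per project, like the table below.
rows = conn.execute("""
    SELECT s.project,
           COUNT(DISTINCT s.id) AS sessions,
           COUNT(m.id)          AS messages
    FROM sessions s
    JOIN messages m ON m.session_id = s.id
    GROUP BY s.project
    ORDER BY sessions DESC
""").fetchall()

for project, n_sessions, n_messages in rows:
    print(project, n_sessions, n_messages)
```

Everything in the rest of this post came out of variations on queries like that one.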
Numbers
- Sessions: 878
- Messages: 40,417
- Days: 86
- Avg/day: ~10
November: 7 sessions. December: 94. January: 503. February (half month): 274.
What got built:
| Project | Sessions | Messages | Time |
|---|---|---|---|
| Personal AI system | 278 | 10,983 | 43 days |
| Client work | 190 | 10,154 | 21 days |
| 3D data visualization app | 141 | 7,362 | 10 days |
| AI deception research game | 61 | 5,075 | 29 days |
The game hit the Hacker News front page. The 3D app went from zero to deployed with payments in 10 days. But the interesting stuff is in the patterns, not the output.
1. The AI sees my loops
I searched the database for phrases related to self-doubt. Found them in 10+ sessions over 3 months.
January: worried my work wasn't competitive. February: admitted I wasn't pushing hard enough because I felt comfortable. Two weeks later: comparing myself to other builders, feeling behind.
Each time it felt new. It wasn't. Same doubts, almost same words, month after month. I kept rediscovering things I'd already discovered.
The AI could see the loop. I couldn't. Sessions feel fresh emotionally even when the data says otherwise.
So I built a "doubt map" into the system. Now when self-doubt shows up, the AI doesn't let me explore it again. It pulls up what happened every time I acted anyway. 660+ GitHub stars. Front-page HN. A signed contract. Just the evidence.
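Concretely, a "doubt map" can be as simple as a lookup from recurring doubt phrases to past evidence. A sketch with illustrative entries: the phrase keys and function are my assumptions about how you'd wire this, though the evidence strings are real outcomes from above:

```python
# Illustrative doubt map: recurring doubt phrases -> evidence of past wins.
DOUBT_MAP = {
    "not competitive": ["game hit the HN front page", "660+ GitHub stars"],
    "feeling behind": ["3D app deployed with payments in 10 days", "signed contract"],
}

def evidence_for(message: str) -> list[str]:
    """Return past evidence for any doubt phrase found in the message."""
    found = []
    for phrase, evidence in DOUBT_MAP.items():
        if phrase in message.lower():
            found.extend(evidence)
    return found
```

When a doubt phrase matches, the AI answers with the evidence list instead of engaging with the doubt.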
2. Voice, not typing
8 deep reflection sessions in 86 days. All 8 on voice input (speech-to-text). Zero from typing.
When I type, I edit myself. Plan. Organize. The session produces tasks.
When I speak, the mess comes out. Run-on sentences, half-thoughts, contradictions. And somewhere in that mess, the thing I actually needed to say.
Typed:
"lets plan the next features"
Voice:
"I'm just overthinking it. I want to build the perfect thing but I haven't even started the basic thing. I keep planning instead of doing because I don't want to see it fail. I just need to ship the ugly version."
That took 30 seconds. I'd been going in circles for weeks typing about the same thing.
If you only type to your AI, you're filtering yourself.
3. "Tomorrow" never meant tomorrow
Searched for "I will record tomorrow," "I will ship tomorrow," "I will publish tomorrow." 12+ hits.
| Date | What I said |
|---|---|
| Jan 6 | "tomorrow I will post it as soon as I wake up" |
| Jan 9 | "I will record it tomorrow" |
| Jan 9 | "I will record it tomorrow" (same session, again) |
| Jan 26 | "tomorrow I will ship it" |
| Feb 14 | "Tomorrow I will record a video" |
| Feb 15 | "Today I will record a video" |
Every "tomorrow" meant 3+ days later or never.
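The search itself is nothing fancy: a LIKE query over message text. A hedged sketch with a made-up schema and sample rows (SQLite's LIKE is case-insensitive for ASCII, so "Tomorrow" matches too):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema and sample rows -- the real database differs.
conn.executescript("""
CREATE TABLE messages (id INTEGER PRIMARY KEY, sent_at TEXT, content TEXT);
INSERT INTO messages VALUES
  (1, '2026-01-06', 'tomorrow I will post it as soon as I wake up'),
  (2, '2026-01-09', 'I will record it tomorrow'),
  (3, '2026-01-26', 'lets plan the next features');
""")

patterns = ["%i will record%tomorrow%", "%i will ship%tomorrow%", "%tomorrow i will%"]
where = " OR ".join(["content LIKE ?"] * len(patterns))
hits = conn.execute(
    f"SELECT sent_at, content FROM messages WHERE {where} ORDER BY sent_at", patterns
).fetchall()
```

On the sample data, the first two rows match and the planning message doesn't.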
The only time things actually shipped on time: after someone (me or the AI) asked "why not in the next 30 minutes?"
So now that's a rule. When I say "tomorrow," the system fires back: "You've said 'tomorrow' 12+ times. Why not now?"
No opinions. Just data.
The 30% Tax
30% of my sessions produced nothing. Empty greetings, endless planning, or tinkering with the AI system itself.
The tool I built to be productive became a place to hide.
So I added rules. "Tomorrow" triggers a call-out. Research gets capped. Self-doubt gets met with evidence of past wins. The AI doesn't judge. It just remembers better than I do.
Screen recording
The text files have a problem: the AI and I are the ones writing them. We describe my patterns, but we're also the ones with blind spots about those patterns.
Now I'm testing something more direct. System records my screen. Tracks what I'm actually doing. If I'm supposed to be coding and it catches me scrolling for 20 minutes, it pings me on Telegram.
Still early. Still figuring out what's useful vs what's just annoying.
But the difference is already clear. Text files: the AI knew only what we told it about me. Screen tracking: it sees what I actually do.
Big gap between those two.
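The ping goes through the standard Telegram Bot API `sendMessage` endpoint. A sketch with placeholder credentials (the token and chat id below are fake; a real token comes from @BotFather):

```python
import json
import urllib.request

BOT_TOKEN = "123456:PLACEHOLDER"   # fake token -- replace with a real bot token
CHAT_ID = 987654321                # fake chat id

def build_ping(token: str, chat_id: int, text: str) -> tuple[str, bytes]:
    """Build the Bot API sendMessage request: URL plus JSON body."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    body = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return url, body

def ping(text: str) -> None:
    """Fire the actual Telegram message (requires a real token)."""
    url, body = build_ping(BOT_TOKEN, CHAT_ID, text)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```

The tracking side (deciding that 20 minutes of scrolling warrants a ping) is the part still being figured out; the delivery is the easy half.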
878 conversations and the main thing I got out of it isn't about AI. It's that the AI has perfect memory for patterns I can't see because I'm inside them.
I thought I was using it to build software. I was. But the database shows I was also figuring myself out. Accidentally, messily, one voice session at a time.
The conversations are the product. The code was a side effect.
I open-sourced the system: AI Life Assistant (660+ stars). Four markdown files (SOUL.md, USER.md, AGENTS.md, NOW.md) give AI persistent memory across sessions.
