Oct. 5, 2025
After months of building infrastructure and hitting walls, after documenting why Personal AI is harder than it looks, after questioning what it even means for an AI to “know” someone, I feel stuck - at a crossroads with multiple paths forward, none clearly superior, each with trade-offs that only become apparent after traveling far down it.
This paper is an inflection point in my series. My previous papers were written after the fact; they documented what I had already discovered. Now we have caught up: this paper documents the decision point itself - the messy middle where theoretical understanding meets practical choices, where vision meets implementation constraints, where perfect becomes the enemy of good enough.
Sep. 21, 2025
Every employment contract contains a comfortable fiction: when an employee leaves, they take their “experience” but leave the “work product”. The employee’s brain somehow perfectly segregates patterns learned from documents produced, insights gained from data analyzed, capabilities developed from problems solved. Companies have always accepted this fiction because human memory is imperfect, degrades over time, and can’t be audited.
We pay senior employees more precisely because they carry this accumulated experience. We value “10 years of experience” on resumes. We conduct exit interviews trying desperately to capture some fraction of what is walking out the door. We even created an entire knowledge management industry that largely fails at preventing expertise from leaving with people. The professional economy is built on individuals owning their cognitive development while companies own their work output.
Sep. 7, 2025
During the months of building infrastructure for personal AI - the memory systems, the conversation persistence, the architectural insights documented in my previous posts - one aspect seemed like a straightforward task: feed my daemon my context so it could understand me.
I had backups of my blog posts; I had scraped decades of forum discussions to extract my contributions; I had (some) work documents, presentations, and emails; I had a good chunk of my library as ebooks on my hard drive. The equation felt simple: process this content through an AI, and it would understand who I am.
Sep. 6, 2025
I am not an AI researcher. I am a systems architect who tinkers with technology, and I have spent months trying to build something that shouldn’t be this hard: an AI that actually knows me. Not in the creepy surveillance way, but as a genuine cognitive companion that learns and grows through our interactions.
What I discovered was that I kept hitting the same walls regardless of approach - I may be bad at implementation, but that doesn’t account for everything. These walls felt fundamental, not incidental. Eventually I realized I wasn’t failing at building personal AI - I was discovering that current architectures might not support what I was trying to build.
Aug. 31, 2025
Every time I launch an AI assistant, the same ritual of rebuilding context begins. “Remember, I’m working on improving my fitness level for that trip to the mountain.” “The constraint is that I can’t modify the timeline for that project.” “My team has three junior developers who need mentoring through this.” Each conversation starts from zero, despite having had similar discussions dozens of times before. There are features that help with this - “Projects”, “Memory”, etc. - but they depend on your subscription level and come with their own set of limitations.
Aug. 23, 2025
I wanted to build my daemon.
Not just another chatbot, but something closer to a cognitive companion that knows me completely while maintaining its own perspective. Something that understands my thinking patterns, remembers my intellectual journey, and challenges me from a position of deep familiarity with who I am.
The concept felt achievable. I’d been self-hosting services for years, running things such as media servers, home automation, and other tinkering projects. Adding AI infrastructure seemed like a natural extension. Privacy wasn’t just a nice-to-have - it was fundamental: building something that truly knows you requires exposing your most intimate cognitive patterns. That data couldn’t live on someone else’s servers.
Aug. 17, 2025
Imagine an AI that knows you intimately, completely, growing with you through life. Not a servant that follows commands, but a companion that understands your thinking patterns, remembers your intellectual journey, and challenges you from a position of deep knowledge. This isn’t science fiction. We have the technology to begin building true cognitive companions today. But we’re drifting toward a future where these intimate systems serve the interests of others, not our own. This is a correctable trajectory, if we recognize what is at stake and act with intention.