Every time I launch an AI assistant, the same ritual of rebuilding context begins. “Remember, I’m working on improving my fitness level for that trip to the mountains.” “The constraint is that I can’t modify the timeline for that project.” “My team has three junior developers who need mentoring through this.” Each conversation starts from zero, despite dozens of similar discussions before it. There are features that help with this - “Projects”, “Memory”, and the like - but they depend on your subscription tier, and they come with their own limitations.
Meanwhile, the world’s best AI researchers are consumed with a different problem entirely. They’re racing to build artificial general intelligence - systems that can match or exceed human capability across every possible domain. Billions of dollars flow toward making models that can simultaneously excel at quantum physics, ancient Sumerian translation, and competitive Pokemon strategies. The underlying assumption is that more capability, more knowledge, more parameters will eventually solve everything.
But here is what I have come to understand after months of trying to build something genuinely useful with current AI: this is the wrong target. The gap between what individuals actually need from AI and what the AI labs are building is not just large - it is categorical. We don’t need AI that knows everything. We need AI that knows us.
Consider what actually limits AI’s usefulness in your daily life. Is it that ChatGPT doesn’t know enough about obscure mathematical theorems? That Claude can’t write marginally better poetry? That Gemini’s reasoning about edge cases in quantum mechanics isn’t quite sophisticated enough?
Or is it that every AI interaction treats you as a stranger?
When I discuss a technical problem with AI, the bottleneck is not the model’s intelligence. These systems can already reason about complex architectures, suggest sophisticated solutions, and spot patterns I missed. The bottleneck is context - spending the first twenty minutes of every conversation explaining my constraints, my goals, my team’s capabilities, our technical debt, our past decisions and their rationales. The AI is plenty smart. It just doesn’t know me.
This pattern repeats across every domain where AI could theoretically help. Career decisions require understanding not just job markets but your specific journey, your actual skills versus your paper credentials, your unstated preferences, what energizes versus drains you. Health discussions need medical knowledge, of course, but that knowledge is worthless without your history, your adherence patterns, your lifestyle constraints, the interventions you have already tried. Creative work benefits not from generic inspiration but from understanding your aesthetic, your influences, your half-formed ideas that have been percolating for months.
The current AI paradigm treats every interaction as stateless, every user as generic, every conversation as isolated. We rebuild context endlessly, like consulting a brilliant professor with perpetual amnesia. No matter how intelligent that professor becomes, the amnesia remains the binding constraint.
There is a different vision of AI’s relationship with humanity, one that does not make headlines or attract VC funding. Instead of building systems to replace human intelligence, we could build systems to augment each human. Instead of creating generic superintelligence, we could create personalized amplification.
Imagine an AI that has been with you for years. Not a succession of stateless conversations, but a continuous relationship. It knows your intellectual journey - not just what you know, but how you came to know it. It remembers not just your decisions, but the reasoning behind them, the alternatives you considered, the factors that mattered to you. When you face new challenges, you face them together, combining your human intuition with its perfect recall and pattern recognition.
This is not AI as brilliant stranger. It is AI as cognitive companion - what I have described as a daemon, borrowing from Philip Pullman’s conception of an external soul that knows you completely while maintaining its own perspective. Your daemon would not need to be smarter than you in some absolute sense. It would need to be smart enough to engage meaningfully with your thinking while knowing you deeply enough to actually help.
The key insight is that your daemon plus you, together, form a generally intelligent system. But it is a very specific general intelligence - one that is generally capable across the domains that matter to your life. Your daemon does not need to know everything about ancient Sumerian. It needs to know everything about you, including your interest (or lack thereof) in ancient languages.
This reframing changes everything about how we evaluate AI progress. The question is not “can this model score higher on graduate-level physics exams?” but “can this model help me think better about my actual problems?” Not “does it know more facts than me?” but “does it know which facts matter to me?”
Here is what seems contradictory but is not: building AI that knows everything is harder than building AI that knows you. The surface area of “all human knowledge and capability” is infinite and expanding. The surface area of “what matters to one person’s life” is bounded and specific.
Current models struggle with the universal challenge. They must simultaneously handle poetry and programming, medicine and music, history and hypotheticals. Every capability must coexist without interference. Every new domain risks degrading performance in others. The training process becomes an impossibly complex optimization problem, trying to satisfy millions of conflicting constraints.
Personal AI faces a simpler challenge with a more complex solution. The challenge: deeply understand one person. The solution: infrastructure that allows continuous learning from interactions rather than trying to pre-train all possible knowledge. Your daemon does not need to know about subjects you will never encounter. It can learn alongside you when new interests emerge.
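To make that concrete without pretending it is a design: here is the shape of the idea in toy Python. Everything in it - the `Memory` record, the `Daemon` class, the crude substring-matching retrieval - is an illustrative placeholder of my own invention, not a proposal. The point is structural: each interaction appends knowledge; nothing is retrained.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Memory:
    """One durable fact learned from an interaction, not baked in at training time."""
    content: str
    learned_at: datetime
    source: str  # which conversation it came from

@dataclass
class Daemon:
    """Accumulates knowledge about one person across interactions."""
    memories: list[Memory] = field(default_factory=list)

    def learn(self, fact: str, source: str) -> None:
        # Incremental update: append a memory instead of adjusting model weights.
        self.memories.append(Memory(fact, datetime.now(), source))

    def context_for(self, topic: str) -> list[str]:
        # Naive relevance filter; a real system would rank memories, not substring-match.
        return [m.content for m in self.memories if topic.lower() in m.content.lower()]

daemon = Daemon()
daemon.learn("Training for a mountain trip; knee injury in 2023.", "chat-014")
daemon.learn("Team has three junior developers who need mentoring.", "chat-027")
print(daemon.context_for("mountain"))
```

Even this toy captures the inversion: knowledge about you arrives continuously and cheaply, after deployment, rather than being frozen in at training time.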
This focused approach also solves the evaluation problem that plagues AGI development. How do you measure progress toward general intelligence? Every benchmark becomes a target for optimization, every evaluation suite gets gamed. But personal AI has a clear success metric: does it actually help this specific person think better? The evaluation is continuous, contextual, and impossible to game because it’s grounded in real utility.
The continuous learning aspect fundamentally changes the economics. Current models require massive retraining to incorporate new knowledge. Personal AI could adapt incrementally, learning from every interaction. The computational cost gets amortized across the relationship’s lifetime rather than front-loaded into training. A model that has been with you for ten years knows things about you that no amount of initial training could capture - not just facts, but patterns, preferences, the evolution of your thinking.
Building toward personalized AI rather than generic AGI requires fundamentally different infrastructure. The current race optimizes for scale - bigger models, larger training runs, more powerful data centers - all to pack more knowledge into one model and serve it to millions of users. Every improvement benefits everyone equally, which sounds democratic but is not. When OpenAI makes GPT better at reasoning, every user gets the same improvement applied to the same blank-slate starting point.
Personal AI inverts these requirements. Instead of one massive model training run, we need systems that support millions of individual models continuously learning. Instead of optimizing for the largest possible context window, we need memory architectures that preserve nuance across years. Instead of centralizing compute in massive data centers, we need approaches that can run on hardware individuals actually control.
The infrastructure needs are real but achievable. We need architectures optimized for incremental learning from interaction rather than batch training on massive datasets or inference on millions of parallel interactions. We need memory systems that don’t just store information but understand relationships between memories across time. We need privacy-preserving approaches that keep personal data under user control while still enabling sophisticated reasoning.
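Here is a toy of what “relationships between memories across time” might look like structurally: a small graph where a later decision can point back at the earlier one it revised. The class names and relation labels below are invented for illustration; a real memory architecture would need far richer semantics, but the contrast with a flat log of snippets should be visible.

```python
from collections import defaultdict

class MemoryGraph:
    """Memories as nodes; typed edges capture how they relate across time."""
    def __init__(self):
        self.nodes: dict[str, str] = {}  # memory id -> content
        self.edges: dict[str, list[tuple[str, str]]] = defaultdict(list)  # id -> [(relation, id)]

    def add(self, mem_id: str, content: str) -> None:
        self.nodes[mem_id] = content

    def relate(self, src: str, relation: str, dst: str) -> None:
        # e.g. "revises", "caused_by", "same_project" - so a later decision
        # can be read in light of the earlier one it superseded.
        self.edges[src].append((relation, dst))

    def trace(self, mem_id: str) -> list[str]:
        """Follow relations outward so retrieval returns a thread, not an isolated snippet."""
        out = [self.nodes[mem_id]]
        for relation, dst in self.edges.get(mem_id, []):
            out.append(f"({relation}) {self.nodes[dst]}")
        return out

g = MemoryGraph()
g.add("m1", "2022: chose a monolith for speed of iteration.")
g.add("m2", "2024: splitting out the billing service; the monolith bottlenecked deploys.")
g.relate("m2", "revises", "m1")
print(g.trace("m2"))
```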
These are not impossible problems. They just are not the problems anyone is seriously working on, because they do not lead toward AGI. Every research lab is adding more parameters instead of figuring out how to make models truly personal. Every breakthrough makes models more capable, not more adaptive to individuals.
Think of it like the difference between building faster mainframes and inventing personal computers. The mainframe approach keeps pushing toward more powerful centralized systems. The PC approach required different thinking entirely - not “how do we make the mainframe smaller?” but “what architecture serves individual users?” Personal AI needs its own architectural revolution.
The AGI race has characteristics of a coordination trap. Every major player knows current approaches have limitations, but no one can afford to pursue alternatives while competitors push toward AGI. The incentives all point the same direction: be first to build the most capable model, capture the market, then figure out what comes next - while trying to avoid world domination.
The standard objection to personal AI runs: “Current architectures can’t support millions of individual models. Continuous learning causes catastrophic forgetting.” This puts the cart before the horse. We are not solving these problems because we are not trying to solve them. The entire field optimizes for generic capability rather than personal adaptation.
When the AI community does tackle personalization, it is usually through fine-tuning - taking a large model and slightly adjusting its weights - or RAG - retrieving relevant fragments of past interactions and stuffing them into the prompt. This is like trying to build personal computers by slightly modifying mainframes. Real personal AI might require fundamentally different approaches - architectures designed from the ground up for continuous learning, modularity that allows capabilities to be added without disrupting existing knowledge, protocols rather than platforms.
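To see why RAG is a modified mainframe rather than a new machine, strip it to its skeleton. In the sketch below, simple word overlap stands in for the embedding similarity a production system would actually use; the scoring function and memory strings are illustrative. Notice what never changes: the model. Only the prompt does.

```python
def score(query: str, memory: str) -> float:
    """Word-overlap stand-in for embedding similarity (real systems use a vector index)."""
    q, m = set(query.lower().split()), set(memory.lower().split())
    return len(q & m) / max(len(q), 1)

def build_prompt(query: str, memories: list[str], k: int = 2) -> str:
    # RAG in miniature: retrieve the k most relevant past snippets and
    # prepend them; the model itself stays frozen and generic.
    top = sorted(memories, key=lambda m: score(query, m), reverse=True)[:k]
    context = "\n".join(f"- {m}" for m in top)
    return f"Known about this user:\n{context}\n\nUser: {query}"

memories = [
    "Cannot modify the project timeline; the ship date is fixed.",
    "Prefers strength training over cardio.",
    "Three junior developers on the team need mentoring.",
]
print(build_prompt("How should I plan the project timeline?", memories))
```

However good the retrieval gets, this is still a stateless model being handed a briefing document at the start of every conversation. That is patching the amnesia, not curing it.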
The opportunity cost is staggering. We could have AI that grows with children through their education, adapting to their learning styles and building on years of interaction. We could have AI that helps people navigate career transitions by deeply understanding their skills, preferences, and patterns. We could have AI that supports mental health through continuous relationship rather than crisis intervention.
Instead, we’re building toward a future where a few massive models serve everyone generically. It is not that AGI is impossible or undesirable. But we are sacrificing achievable augmentation for speculative replacement. We are choosing generic over personal, scale over sovereignty, capability over relationship.
Personal AI isn’t science fiction. The core technologies exist. Language models are already capable enough for meaningful augmentation - they just lack the infrastructure for personalization. The remaining challenges are engineering problems, not fundamental barriers.
The timeline could be surprisingly short. A focused team could build a proof-of-concept personal AI in months, not years. The first version wouldn’t need to be perfect - just clearly useful in ways that generic models can’t match. Once people experience AI that actually knows them, that learns from every interaction, that maintains relationship across time, the demand will become undeniable.
This is not about rejecting advances in generic AI. Those capabilities could bootstrap personal systems. But instead of using powerful models as monolithic services, we’d use them as foundations for individual adaptation. The intelligence is sufficient; what is missing is the infrastructure for making it personal.
Your daemon doesn’t need to be generally intelligent about everything. It needs to be intelligently general about your life. That is a fundamentally different target, requiring fundamentally different infrastructure, enabling fundamentally different futures.
The race toward AGI will continue. The billions will keep flowing. The models will keep growing. But perhaps, quietly, on different tracks, we can start building what people actually need: AI that knows us, grows with us, and serves us alone.
That race hasn’t started yet. It is waiting for someone to recognize that it is the one worth running.