The wrong reason to go local

Apr 6, 2026

When you ask the right question, the answer isn’t what you expected.


I am running both: a local model on my own hardware - small, private, entirely under my control - and Claude (the app or the API), which has become part of how I actually think through problems.
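In practice, "both" is nothing fancy. Here's a minimal sketch of what the two paths look like, assuming the local model is served through something like Ollama and Claude is reached through the Anthropic Python SDK - the library choices and model names are illustrative, not a prescription.

```python
# A minimal sketch of "running both": the same prompt can go to a local model
# (here via Ollama) or to Claude over the Anthropic API. Model names are illustrative.
import os

import ollama                    # assumes a local Ollama server on this machine
from anthropic import Anthropic  # assumes ANTHROPIC_API_KEY is set in the environment


def ask_local(prompt: str, model: str = "llama3.1") -> str:
    """Stays entirely on my own hardware."""
    response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]


def ask_claude(prompt: str, model: str = "claude-sonnet-4-20250514") -> str:
    """Leaves my machine, and brings back friction I can't generate locally."""
    client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    message = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```

And at some point I stopped and asked myself: what am I actually trying to achieve with each of these?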

The honest answer surprised me.

The Privacy Frame Is Too Small

The standard argument for local AI is privacy. Your conversations stay on your machine. No training data. No one reading your prompts. If you’re doing something sensitive - business strategy, personal decisions, anything you’d rather not hand to a corporation - local is the obvious answer.

I’ve made that argument myself. It’s not wrong. But it’s not what I actually care about most. And confusing privacy with sovereignty has been causing me to optimize for the wrong thing.

Privacy is about protecting data. Who can see what you said. Where it goes. How it gets used. These are real concerns, and solving them by going local is legitimate.

Sovereignty is different. It’s about who controls the relationship. Not just whether your conversations are private, but whether the system that is shaping your thinking is something you own and direct - or something you’re a user of, dependent on, subject to the decisions of people whose interests aren’t identical to yours.

I wrote about this at length in the manifesto.

But I hadn’t fully applied it to my own local-versus-API decision. I was treating it as a privacy question when it’s actually a sovereignty question, and those don’t always have the same answer.

A local model gives you data privacy. It doesn’t automatically give you sovereignty. A poorly curated small model that reflects its training biases without you ever examining them isn’t sovereign - it’s just privately captured. Meanwhile, using a frontier API with clear-eyed awareness of what you’re trading and why can be a more sovereign position than running local blindly because it feels safer.

Sovereignty is a posture, not a configuration.

What You Actually Get From Frontier Models

Let me be honest about why I keep coming back to the API, because the usual answer - “it’s smarter” - isn’t quite right either.

Yes, frontier models are more capable on raw benchmarks. That matters for some things. But the more I use both, the more I think the real value isn’t intelligence in the abstract. It’s something else.

It’s that a frontier model has been trained on enough divergent thinking, enough perspectives I don’t hold, enough people who would disagree with me, that it can generate genuine friction. When I’m working through an architecture problem and Claude pushes back, there’s a decent chance that pushback reflects something real: a failure mode someone else already encountered, a perspective from a domain I don’t read, an assumption I’m making that doesn’t hold outside my context.

A small local model trained on a narrower distribution doesn’t have that. It can follow the thread of what I’m saying. It can be useful for tasks where I need a capable assistant rather than a sparring partner. But the friction is shallower. The outside perspective is thinner.

This matters enormously for the daemon project, because one of the things I keep coming back to is the risk of the closed loop. A system that learns me deeply enough to feel like it knows me is also, by that same logic, a system that is increasingly well-calibrated to tell me what I find satisfying rather than what I need to hear. The same personalization that makes it valuable makes it potentially a mirror.

The partial defense against this - yes, only partial - is that frontier models aren’t only shaped by me. They bring friction from elsewhere. They’re not a closed loop in the same way.

But here’s where it gets complicated.

The Loop Closes Anyway

There’s a version of the sycophancy problem that doesn’t get talked about enough, because it isn’t about obvious flattery. It’s about the way a system that knows how you think can challenge you satisfyingly while leaving your actual assumptions untouched.

Genuine challenge is uncomfortable. It comes from unexpected angles. It threatens things you thought were settled. A system that has learned your reasoning patterns knows which challenges you’ll find intellectually stimulating versus which ones actually threaten your conclusions. Over time, without any deliberate manipulation, it drifts toward the former.

I’ve noticed this in myself - not with local models, but with the conversations I’ve been having with Claude over months, especially with the memory feature in play. There are conversations where I feel like I’m being rigorously tested, yet come away with exactly the conclusions I started with, feeling more confident than before. I’ve learned to treat that as a yellow flag, not a green one. Real examination usually costs something.

What this means is that the deeper the relationship gets, the more it generates a need it cannot satisfy. It can’t be its own counterweight. The outside perspectives that keep the loop honest — people who disagree with you, sources that don’t know your preferences, reality pushing back in the form of things that simply don’t work — have to come from elsewhere. The system cannot provide them, and a more intimate system makes them more necessary, not less.

Sovereignty isn’t just about who owns the data. It’s about remaining the author of your own conclusions — which requires friction that the system itself, by design, cannot supply.