Asymmetric Inference
The future of cyber warfare as an industrial capacity problem
I’ve been reading a lot of (good) hot takes on Anthropic’s Mythos Preview. Most of the commentary boils down to “wow, this is crazy” (I agree) and “defenders need to use AI to fight attackers with AI” (I was at RSA two weeks ago, so yes, it’s a cliché, but I also agree).
Most of the chatter is about speed: speed of vuln discovery and exploitation versus speed of patching and response. That’s a real problem that needs solving, but I want to take this in a slightly different direction and focus on the asymmetric cost of inference.
Mythos is genuinely, concerningly capable, and my assumption is other labs will likely close the gap in 3–6 months, and open source possibly in 12 (don’t hold me to this prediction). Mythos is also rumored to be extremely large, expensive to train, and expensive to run. Now that frontier models are better than humans at offensive cyber tasks, the bottleneck stops being cyber talent and becomes industrial capacity.
As these agents run 24/7/365, cyberwarfare starts to look like a contest of which nation can sustain the most datacenters and gigawatts.
This means the defender’s response isn’t just to “use AI too”. It’s to balance out the compute cost ratio.
How much inference does it take for an offensive AI agent to go from an external-facing web app to domain admin? My (somewhat) educated guesstimate: not that high. And it’s going to take significantly more inference and compute to ingest logs from each endpoint, correlate them, get an agent to triage, and then do the remediation. From an inference standpoint, defense (as we currently do it with the SOC model) is probably more expensive out of the gate.
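To make that asymmetry concrete, here’s a back-of-napkin sketch. Every number in it is a placeholder I made up (steps per intrusion, tokens per step, endpoint count, per-token price), so the point is the shape of the comparison, not the figures:

```python
# Back-of-envelope: offensive vs. defensive inference spend.
# Every number below is an illustrative assumption, not a measurement.

TOKEN_PRICE = 30 / 1_000_000  # assumed blended $/token for a frontier model

# Offense: one agent run from external-facing web app to domain admin,
# assumed to take a few hundred reasoning/tool-use steps.
offense_tokens = 400 * 20_000
offense_cost = offense_tokens * TOKEN_PRICE              # ~$240

# Defense: SOC-style ingest/correlate/triage over every endpoint, every day,
# whether or not an attack is actually underway.
defense_tokens = 5_000 * 50_000                          # endpoints x tokens/day
defense_cost_per_day = defense_tokens * TOKEN_PRICE      # ~$7,500

print(f"offensive campaign: ~${offense_cost:,.0f}")
print(f"one day of defense: ~${defense_cost_per_day:,.0f}")
print(f"defense/offense:    ~{defense_cost_per_day / offense_cost:.0f}x per day")
```

The part that stings: the offensive run is a one-off spend, while the defensive triage bill recurs every day across the whole fleet, attack or no attack.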
The analogy I like to work with is kinetic air defense. Patriot PAC-3 interceptors run about $4 million each, while Shahed drones cost roughly $20,000 to $50,000. That’s before you even count production time and industrial capacity limits. Whoever forces the opponent onto the wrong side of the cost curve wins the war of attrition.
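Run the kinetic numbers: at the midpoint of that drone cost range, each intercept costs the defender roughly $4,000,000 ÷ $35,000 ≈ 115 times what the shot cost the attacker, before you even factor in how slowly interceptors can be produced. Any cyber defense whose per-incident inference bill sits at a similar multiple of the attacker’s per-campaign bill is on the same losing end of the attrition math.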
AI-enabled cyber is heading to the same place. “Can the model do it” will no longer be the question - we know it can, and if it can’t yet, it probably will within the next 12 months. The question I’m most interested in now: what’s the cost ratio, per campaign, per target, between the cheapest offensive capability and the cheapest defense that neutralizes it?
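One rough way to formalize why that ratio is the number to watch (back-of-napkin again, every input assumed):

```python
# Back-of-napkin attrition condition. All inputs are assumptions.

def defender_holds(defense_budget: float,
                   offense_budget: float,
                   cost_ratio: float) -> bool:
    """The defender sustains the exchange only if their inference budget
    covers the per-campaign cost ratio times whatever the attacker can spend."""
    return defense_budget >= cost_ratio * offense_budget

# Illustrative: at a 30:1 cost ratio the defender needs roughly 30x the
# attacker's compute budget just to break even on attrition.
print(defender_holds(defense_budget=30_000_000,
                     offense_budget=1_000_000,
                     cost_ratio=30))  # True, exactly at break-even
```

Which is exactly why this becomes an industrial capacity problem: at a 30:1 ratio, the defender’s datacenters and gigawatts have to outnumber the attacker’s thirty to one just to tread water.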
If you’re building toward structurally cheaper defensive inference, I’d love to hear from you!

