The Algorithm Went to War. Someone Forgot to Ask What For.
The prelude: the Pentagon’s clash with Anthropic over Claude’s safeguards foreshadowed the war’s worst miscalculations. The military pushed to strip ethical guardrails and deploy AI for autonomous targeting; Anthropic warned of hallucinations and escalatory risks. Neither side fully yielded, and the result was strikes on civilian schools, driven by stale data and algorithms trusted beyond their limits. The war did not cascade toward the closure of Hormuz because AI was used. It led to strategic failures because humans stopped questioning it.
There is an old distinction in military theory between the map and the territory. The map is the abstraction: clean, legible, bounded. The territory is the thing itself: rough, resistant, ambiguous, alive. Every military disaster in history involves, at some level, a headquarters that fell in love with its map and stopped looking at the territory. The United States military went to war against Iran with the most sophisticated map in the history of warfare. That, it turns out, was a major problem.
The story of Operation Epic Fury will be studied in war colleges for a generation. Not primarily as an air campaign, not primarily as a test of long-range strike capability, but as the first large-scale combat demonstration of what happens when artificial intelligence is used to compensate for the absence of genuine operational planning rather than to enhance it.
The distinction matters enormously, and it is one that the assessments have so far been reluctant to state plainly. AI-assisted targeting, AI-generated logistics modelling, AI-synthesised intelligence fusion: these are legitimate force multipliers when they are layered onto a foundation of coherent strategic thought and human judgment. When they are substituted for that foundation, they produce something uniquely dangerous: high-confidence answers to the wrong questions, delivered at a speed that forecloses the long deliberations that might have caught the error.
The first failure was in targeting. The AI systems used to generate the initial strike packages were trained on decades of signals intelligence, satellite imagery, and open-source data. They were extraordinarily good at identifying what was visible. They were structurally incapable of accounting for what Iran had deliberately made invisible: the hardened redundancies, the dispersed reconstitution capacity, the civilian-military integration of the defence industrial network that meant striking a logistics node was simultaneously striking a hospital supply chain or a school.
This is not a criticism of the technology. It is a criticism of the epistemology, the science of knowing. When an algorithm returns a target list with confidence intervals, the human tendency, especially under time pressure, especially in a command culture that had been rewarding data-driven decision-making for two decades, is to treat the confidence interval as a measure of reality rather than a function of the model’s internal consistency. The artificially produced “map” became more real than the rough territory of Iran. The percentage became more persuasive than the analyst who had spent years inside Iranian operational doctrine and was raising her hand at the back of the room. Speed felt right, and the lure of winning with HAL became too tempting because it was easy.
The second failure was in escalation modelling. The campaign planners had access to sophisticated AI tools for predicting Iranian response behaviour. These tools were trained on historical deterrence data, on game-theoretic frameworks, on decades of Iranian signalling. What those tools could not model was the political logic of a regime calculating not against an abstract adversary but against its own domestic audience, its proxies’ credibility calculations, and the personal survival imperatives of a leadership cohort that had watched what happened to every regional actor that blinked.
Escalation is not a data problem. It is a judgment problem. No artificial judgment exists; there are no models for wisdom. Judgment requires not the ability to process ten thousand variables simultaneously, but the wisdom to know which three variables actually drive the decision. That wisdom lives in human expertise accumulated over careers, in the kind of granular cultural and political intelligence that cannot be scraped from open sources and cannot be inferred from behavioural patterns alone. When you hollow out that expertise, as the US intelligence community had been progressively doing, reorienting careers around data science rather than regional mastery, and replace it with algorithmic bias and over-confidence, you have not upgraded your analytical capacity. You have replaced slow, uncertain wisdom with fast artificial ignorance.
The third failure was the deepest, and it is the one that Thucydides would have recognised instantly. The campaign lacked a clear answer to the question that must precede every operational plan: what does victory actually require?
AI is extraordinarily good at optimising toward a defined objective. It is useless at interrogating whether the objective is the right one. The systems used in Epic Fury were tasked with maximising degradation of nuclear programme infrastructure within defined escalation parameters. They performed that task with remarkable precision. What no system flagged, because no system had been asked, was whether infrastructure degradation was the correct operational objective given the actual strategic goal, which was presumably some durable change in Iranian behaviour and regional posture, not merely the temporary physical setback of a programme that had already demonstrated reconstitution capacity.
This is the oldest mistake in warfare, dressed in new technological clothes. Clausewitz wrote that war is the continuation of policy by other means, meaning that military operations are always and entirely in the service of a political purpose, and that the moment military logic becomes self-referential, the campaign has already begun to fail. The AI systems had no Clausewitz module. They optimised beautifully within the problem as defined. Nobody had thought hard enough about whether the problem was defined correctly.
None of this is an argument against artificial intelligence in military planning. It is an argument against magical thinking about what artificial intelligence can do.
The US military’s institutional drift over the past decade has been toward treating data as a substitute for judgment, and algorithmic confidence as a substitute for strategic clarity. This drift was not driven by the technology. It was driven by a bureaucratic culture that rewards measurable outputs over unquantifiable wisdom, by a procurement cycle that favours expensive technical systems over the slow, unglamorous investment in human expertise, and by a political environment in which the ability to show a senator a graph is more valuable than the ability to explain why the graph is asking the wrong question.
AI did not fail in the Iran campaign. What failed was the institutional culture that reached for AI to fill gaps that AI cannot fill. The algorithm went to war. It simply had no idea what it was fighting for.
The implications extend well beyond American defence procurement. Europe is in the early stages of a rapid military build-up, constructing defence industrial capacity, reconstituting operational planning depth, investing in the technologies of future warfare.
The temptation will be strong, as it always is, to buy the most sophisticated tools and trust them to compensate for the harder, slower, more expensive work of rebuilding a genuine culture of strategic thinking.
Finland knows something about this. The Finnish Defence Forces have maintained, and should continue to cherish, through decades of pressure to modernise cheaply and quickly, a stubborn commitment to the primacy of human operational judgment.
The territoriality of that approach, its insistence on knowing the actual ground rather than the model of the ground, its cultivation of commanders who can think rather than commanders who merely manage, is not nostalgia. It is, as the American experience has now demonstrated at painful cost, the irreplaceable foundation on which all legitimate military technology rests.
The lesson of Epic Fury is not that AI is dangerous. It is that competence cannot be outsourced. Not to algorithms. Not to contractors. Not to allies. Not to anyone. It rests, in the end, on well-educated strategic wisdom.
The roughness of territory and the complexities of the human domain will always outlast the map.
– Mika Aaltola
3 Comments


The U.S. mistakes in the Iran conflict are not primarily technological—they are epistemic and strategic.
They stem from:
– overconfidence in models
– underinvestment in human strategic competence
– political misalignment with allies
– managerial rather than strategic command culture
– failure to respect the friction of real terrain and human behavior
Your framework captures this with unusual precision.
What you are telling us seems so obvious. But it takes deeper understanding to make it look that obvious. When you think back over the last hundred years of history, there are so many cases where maps directed the planning instead of the terrain, and where the goal was not clear. Even before AI. Thank you for an excellent analysis.
An interesting piece, with ideas well worth applying amid today’s business-world AI enthusiasm, so that the use of plain common sense is not forgotten.