National Security

Spinning The OODA Loop Faster: How Can AI Help Military Decision-Making To Be Faster & Better?

Published on October 28, 2025
An immediate priority for force transformation should be a shift towards greater use of autonomy and Artificial Intelligence within the UK’s conventional forces. As in Ukraine, this would provide greater accuracy, lethality, and cheaper capabilities
– Strategic Defence Review 2025: Making Britain Safer: secure at home, strong abroad.

Of the many common themes between the Strategic Defence Review 2025 (SDR25)1 and its predecessor, the Integrated Review (IR21)2, perhaps the most significant is the need to harness rapid technological developments, particularly Artificial Intelligence (AI), to gain operational advantage. For UK Defence, establishing a steady rate of progress has been challenging and, outside its support to Ukraine and Cyber and Specialist Operations Command3, very patchy. Events in the Middle East and Ukraine have demonstrated, with the clarity that only conflict can confer, how incorporating AI into a range of capabilities can deliver military advantage on the battlefield or compensate for numerical overmatch. The area of greatest potential advantage is command and decision-making, made possible by iterative improvements in ‘compute’ that support sophisticated large language models (LLMs) and Agentic AI4. Given its ability to work through multiple problems simultaneously without prompting, Agentic AI offers very significant enhancements to military decision-making5 in the very near term. If appropriately trained, the analytical insights of Agentic AI are moving from speculation (however well informed) towards prediction6.

Agentic AI refers to artificial intelligence systems capable of pursuing complex, multi-step goals autonomously, interpreting human intent, and taking adaptive actions across dynamic environments without constant human supervision.

From Speculation to Prediction

In matters of strategy, operations and tactics, prediction has not sat easily with military practitioners. UK and NATO military doctrine describes a combination of science and art in the development of concepts and plans and the execution of operations. Military training and education have traditionally placed a high premium on operational lessons, battlefield studies and ‘the rhymes’ of military history. This has meant that the human aspects of military decision-making, such as intelligence assessments, are inescapably subjective. The capabilities of computer modelling have hitherto been constrained to measurables such as logistics, finance and weapons effects; people and social constructs, military or civilian, are treated as random and unpredictable elements. Even with effective Operational Analysis based on scientific methodology, selecting (for example) an Enemy Course of Action is ultimately a judgement call based on all available facts, intelligence analysis and the commander’s own experience and intuition: in other words, speculation, or the best guess. This is not pejorative, as speculation has been a feature of all social interactions, for example in commercial markets where financial investment is literally ‘speculation’. But the ability of multiple AI agents to simulate decision-making processes in real time, test hypotheses, learn from outcomes and adjust their internal models can deliver a probabilistic precision about likely human behaviours and responses that is far more than speculative. In military terms, this should mean modelling the ‘human terrain’ with the same degree of accuracy as stock consumption and weapon effects on targets.

Prediction is as significant a concept as modernism and postmodernism. It’s the defining reframe of our generation, and we’d better understand it:

  • Postmodernism was why we built businesses the way we currently do: why we treat innovation as “capital at risk”, why we have the “front end / back end” service model, and other themes that are so pervasive they’re invisible.
  • “Prediction” is a distinct break from postmodernism. It is what comes next.
  • It can be hard to tell, in the moment, whether you’re looking at the beginning of something or the end of something. Many early AI artifacts that we think are “the future” are actually the final form of the old thing, rendered perfectly at their moment of obsolescence.
  • All value creation now is about prediction.
  • Prediction markets give us a unit we can understand, like how patents represented progress a century ago, that organized our ambition and represented a stake in the system.

Alex Danco, Prediction: the successor to Post-Modernism

Whether or not we are living through an epochal social change7, it is undeniable that AI in general, and LLMs and agentic AI in particular, are fundamentally changing informational and commercial decision-making. It is axiomatic that UK military decision-making will undergo a similar generational change, ideally in partnership with industry. By adopting Agentic AI in human terrain mapping, wargaming and the use of virtual assistants, the UK military can move towards what Keith Dear recently termed Prediction Centric Warfare8.

From Simulating to Predicting Human Behaviour?

The UK military and commercial marketing bear close comparison, particularly given the UK military’s audience-centric approach9. Both commercial markets and battlespaces are complex adaptive systems, affected by how people behave and react to stimuli, where outcomes emerge from myriad interacting agents (people) who make decisions based on a host of cultural and personal biases. In marketing, agentic AI is having a profound impact on the industry, as it can predict consumer decisions by simulating millions of micro-decisions, identifying emergent patterns and testing hypotheses. Marketing companies’ ‘employment’ of thousands of AI agents (‘digital twins’) to predict consumer habits is already achieving 80–90% predictive accuracy10 and can reallocate advertising spend in real time based on AI-simulated responses11. Despite relatively high set-up costs12, the ability to iterate and improve accuracy makes the through-life costs of an agentic AI campaign very cost-effective over time.
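To make the mechanism concrete, the approach described above can be sketched as a toy agent-based simulation: thousands of synthetic ‘digital twins’, each with its own bias, make micro-decisions in response to a stimulus, and the aggregate yields a probabilistic prediction. This is a minimal illustration under invented assumptions (the `DigitalTwin` class, bias ranges and stimulus values are all hypothetical), not a representation of any commercial system:

```python
import random

class DigitalTwin:
    """A toy 'digital twin': one simulated decision-maker with a bias parameter."""
    def __init__(self, bias):
        self.bias = bias  # propensity to respond positively to a stimulus

    def decide(self, stimulus_strength):
        # Micro-decision: respond if the stimulus, weighted by personal bias,
        # clears a random threshold
        return random.random() < min(1.0, self.bias * stimulus_strength)

def simulate_population(n_agents, stimulus_strength, seed=0):
    """Run one pass over a synthetic population; return the predicted uptake rate."""
    random.seed(seed)
    twins = [DigitalTwin(bias=random.uniform(0.2, 0.9)) for _ in range(n_agents)]
    responses = sum(twin.decide(stimulus_strength) for twin in twins)
    return responses / n_agents

# Compare two candidate 'campaigns' (stimuli) before committing real resources
weak = simulate_population(10_000, 0.5)
strong = simulate_population(10_000, 1.2)
print(f"predicted uptake: weak={weak:.2%}, strong={strong:.2%}")
```

The real value, as the marketing examples show, comes from iterating: comparing simulated uptake against observed behaviour and retraining the agents so that accuracy improves over time.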

The UK's audience-centric approach. An audience-centric approach recognises that people are at the heart of competition; it is decisions and behaviours that determine how competition is conducted and resolved. Audiences are segmented into three general categories - public, stakeholders and actors - depending on their ability to affect our outcomes. A sophisticated understanding provides the focus for planning and executing activity to create or maintain the attitudes that constitute behaviour. Commanders, with an understanding of the strategic narrative, can then conduct target audience analysis to identify the effects that they wish to create.
– Extracted from JDP-01

The military intelligence use case is self-evident, whether modelling populations (such as the response of citizens in Gaza to the arrival of a peacekeeping force) or individuals (such as Russian commanders). Agentic AI companies have already developed and deployed ‘digital twins’ and upscaled them to model the behaviour of nominated communities. Given their sensitive intelligence use, much of this is happening in the classified space but, having witnessed the speed at which an agent can be trained and give accurate responses to focussed questions, the implications for the intelligence community are highly significant. War is ultimately about changing cognitive behaviour, coercing an adversary to conform to your will. So the insights that well-trained agents could give, individually or at scale, could be game-changing in terms of predicting an enemy’s next move or intention. Although it would be wise to adopt this technology gradually and measure agents’ performance against objective comparators, the potential advantage demands rapid experimentation and early incorporation into a commander’s toolbox.

From Wargaming to War Prediction?

The logical extension of this capability is wargaming in which friendly, hostile, civilian, governmental and NGO AI agents can interact within defined operational parameters. Although wargaming is well established in the UK military, it is still largely analogue, which trains the methodology into the user but does not fully exploit the potential for improving decision-making. The battlefield laboratory of the Middle East and Ukraine is demonstrating how LLMs and agentic AI can transform military wargaming into adaptive simulations capable of generating, evaluating and refining courses of action (COAs) at machine speed. The Israeli Defence Forces (IDF) have developed AI-enabled mission rehearsal environments using generative adversarial models to simulate responses to strikes or incursions, helping commanders assess escalation pathways before striking13. The Ukrainian Army’s Delta C2 system now incorporates digital twins in AI simulations that enable planners to run rapid ‘micro-wargames’ before approving operations, which has dramatically shortened planning cycles and improved tactical adaptability14.

We fight over forecasts more than over map squares. That is uncomfortable, because modelling human decision‑making under fire is damnably hard. But discomfort does not free us from the obligation. We cannot not predict. We either model and influence those decisions - or we leave outcomes to inadequate heuristics.
– Cassi Blog, Prediction Centric Warfare

Operational evidence, therefore, suggests that the benefits of accelerating the use of agentic-AI based wargaming will be considerable. As with predicting human behaviours, the opportunity to derive rich insights with great rapidity will allow planners to test COAs against thousands of AI-generated enemy adaptations before execution, as the US Air Force is doing with tactical air combat simulation15. An added benefit, assuming robust connectivity, is that the digitisation of wargaming means it can be distributed thereby reducing the need to gather large, command-heavy groups together that are vulnerable to enemy targeting. And finally, as the commercial marketing world is beginning to demonstrate, the accuracy of using agents at scale is driving out a significant degree of uncertainty in an industry which was previously at the cutting edge of human behaviour analysis. We ought to take heed.
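The idea of stress-testing a COA against thousands of enemy adaptations can be illustrated, in highly simplified form, as a Monte-Carlo evaluation: sample many adaptations, score the candidate COA against each, and report both the average outcome and a pessimistic tail. Everything here (the scoring function, its weights, and the COA names) is invented purely for illustration and stands in for far richer agent-driven models:

```python
import random

def outcome(coa_tempo, coa_mass, enemy_adaptation):
    """Toy scoring of one friendly COA against one sampled enemy adaptation.
    Returns a success score clipped to [0, 1]."""
    friction = random.gauss(0, 0.05)  # battlefield noise
    score = 0.6 * coa_tempo * (1 - enemy_adaptation) + 0.4 * coa_mass + friction
    return max(0.0, min(1.0, score))

def evaluate_coa(coa, n_runs=5_000, seed=1):
    """Run the COA against thousands of sampled enemy adaptations."""
    random.seed(seed)
    scores = [outcome(coa["tempo"], coa["mass"], random.betavariate(2, 5))
              for _ in range(n_runs)]
    scores.sort()
    return {
        "mean": sum(scores) / n_runs,
        "p10": scores[n_runs // 10],  # pessimistic tail: what a bad day looks like
    }

coas = {"rapid_flank": {"tempo": 0.9, "mass": 0.4},
        "deliberate_assault": {"tempo": 0.4, "mass": 0.9}}
for name, coa in coas.items():
    print(name, evaluate_coa(coa))
```

Reporting a tail statistic alongside the mean matters: a COA that looks best on average may carry an unacceptable worst-case, and surfacing that distinction at machine speed is precisely the advantage the operational evidence points to.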

Harnessing the Potential of Prediction through the AI Assistant

Adding an agentic layer onto a high-grade LLM has created proficient AI Assistants which can take a heavy cognitive load off the user and assist decision-making through enhanced comprehension, predictive judgement, versatility and by laying out the rationale for reaching judgements. The increasing number of commercial AI Assistants either in advanced stages of development or in delivery, such as Moveworks16, Amazon’s Seller Assistant17 and the proposed Compliance Brain Assistant18, are making a significant business impact on workforce efficiency and productivity. It might be tempting to assert (as some have) that human decision-makers can be circumvented or lack agency given the autonomy of agents. But as Anthony King argues, humans are the essential component of any AI capability19, and agentic AI is no different, including in the aforementioned commercial examples. These assistants have become effective through feedback loops that are designed and run by humans, and they are iteratively trained with human oversight based on whether their outputs bear scrutiny.

As ‘team mates’, agentic digital staff assistants have huge potential and, under the supervision of commanders and their staff, can add significant intellectual bandwidth. Project Maven is harnessing this capability to analyse ISR data to generate targets, presenting commanders with ranked recommendations and confidence scores rather than raw imagery. Again, the Ukrainian Army’s Delta system has decision companions advising brigade- and corps-level staffs on enemy movements, supply chain risks, and the optimal sequencing of drone or artillery strikes. As well as expanding cognitive bandwidth and adding confidence to decision-making, they have also helped to prevent the expansion of HQ staff, a priority for the UK military as it strives to minimise its C2 footprint and maximise its lethality. The adoption of AI assistants by course members at the UK Defence Academy is an encouraging step towards establishing human-machine teaming in the next generation of commanders.

How Can we Spin the OODA Loop Faster?

We are experiencing a generational shift in social and business frameworks where value added is no longer about the smart use of data but about the ability to model and predict human behaviours and interactions, violent or otherwise. As warfare is an inherently human endeavour, the UK military must do more than just keep pace. Although it is not yet clear whether agentic AI represents the perfect ‘final form’ of the old way or the start of the new normal, it is a technology that must be adopted with pace and rigour. To gain operational advantage, the UK must pair technological acceleration with in-service adoption, through professional military education and experimentation in the field and in barracks.  We need leaders who can interrogate AI products, integrate them ethically and legally, and remain accountable for the strategic, operational or tactical outcomes. The OODA loop of the future will spin fastest for whoever is able to harness best the predictive capability of LLMs and agentic AI within a seamless human machine team.  

If we are to win this competition, and keep winning it, we must work out how we do so, and a reasonable start-point is to focus on the following five questions:  

When prediction quality improves, outcomes improve. When it degrades, we pay in blood, treasure and time. We live in the exponential age – where technological change is continuously accelerating – creating profound disruption. The UK’s Strategic Defence Review tells us we live in an age of uncertainty. Uncertainty and change can only be bounded and managed with prediction. The science of prediction – human and algorithmic – has never been more important.
– Cassi Blog, Prediction Centric Warfare

How will it affect the nature of Command?

We are moving from ‘guessing what is on the other side of the hill’ to knowing in detail what is there and what it is likely to do next. The advantage for whoever best uses the technology is profound, but it will not come risk-free. The greater the degree of autonomy, the greater the risk of compromising the responsibility of Command, particularly in its legal and ethical dimensions. So commanders must understand how the AI or the agent works, its degree of training, its autonomous parameters and how to work with it. They must be more than ‘prompt engineers’; indeed, they should be experts able to create a process of continuous improvement in which humans build trust in the agent as it is trained and optimised. Rarely have the old saws of ‘show workings’ and ‘trust but verify’ been more relevant.

How will it change C2?

Human-machine teaming ought to have the most significant impact on military headquarters since the introduction of battle computation. The potential of AI agents to increase cognitive bandwidth and tempo should realign staff intellectual load away from generating staff products towards verification, data curation and quality assurance. In the immediate term, agentic AI can augment headquarters with advisory clusters comprising function-specific digital assistants, which should reduce the number of staff officers in a headquarters and allow for greater dispersion, both vital for survivability in the battlespace. This demands staff officers who are masters of their field. It also comes with a high training and education tariff, given the technical challenge of integrating agents and optimising their performance. So the human team should be a blend of civilian employees of AI companies, civilian experts and expert military staff.

How will it change Culture?

There are many commentators who will regard the notion of prediction in military affairs as so much Kool Aid. But the narratives and frameworks that aid human understanding remain just as relevant and the cognitive process involved in devising them just as useful in learning the military craft. This is not a choice between predicting and not predicting; it is between intuitive, untested bets and explicit, accountable ones. The ambition to strive for prediction in human behaviour is not a search for clairvoyance but the continuing effort to dispel the fog of war and improve response time. The cultural shift required to do so will doubtless require intrinsic and extrinsic stimuli, particularly strong leadership and career incentives. It will not replace military academic and conceptual study but enhance it.

How will it change data management?

The introduction of predictive-quality decision-making places an acute focus on data. Whether biases are trained in or out, data quality and curation will be critical to the accuracy of the output. Data literacy must therefore be elevated to data fluency, so that commanders and staff are able to detect anomalies or outliers as they are immersed in the planning process. This will require cross-reference points to be designed into the autonomous process to allow for human verification and data checking without compromising decision-making tempo. It will also require the continuous auditing and curation of data as it is verified, classified and stored. This is particularly important as agentic AI will also be a key adversary tool for creating cyber and algorithmic effects against us (e.g. algorithmic poisoning), so we must have a defence.
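One way to picture the ‘cross-reference points’ described above is a simple anomaly gate: routine data flows through automatically, while statistical outliers are held back for human verification, preserving tempo for the bulk of the pipeline. A minimal sketch, with invented thresholds and example data:

```python
def anomaly_gate(batch, history_mean, history_std, z_threshold=3.0):
    """Flag records that deviate sharply from historical norms for human review,
    passing the rest through without slowing the pipeline."""
    passed, flagged = [], []
    for value in batch:
        z = abs(value - history_mean) / history_std  # deviation in standard units
        (flagged if z > z_threshold else passed).append(value)
    return passed, flagged

# Routine readings flow through; the outlier is held for a human check
passed, flagged = anomaly_gate([10.2, 9.8, 10.5, 47.0],
                               history_mean=10.0, history_std=0.5)
print(f"auto-accepted: {passed}, held for review: {flagged}")
```

The design point is the placement of the gate, not its statistics: verification happens only where the data deviates, so human attention is spent on the anomalies that matter while tempo is maintained everywhere else.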

Are we deciding fast enough?

Many of the points in this final section have been recognised in UK Defence’s experimentation. Within Dstl’s programme ‘Building the Digital Targeting Web’, the potential of AI agents has been recognised in the ‘Deciders’ pillar20. But the pace of adoption in the battlespace and the proximity of the threat to UK interests in NATO suggest that it is neither extensive nor fast enough. Upscaling UK Defence project teams will increase the bandwidth of both experimentation and engagement with industry and academia. Hiring AI expertise into key headquarters and deciding bodies will create more intelligent customers. Including LLMs and AI agents in every command post exercise, particularly in military academies and schools, will force the pace on bottom-up innovation.

Adopting LLMs and AI agents to enhance command and decision-making at a speed that is relevant to current conflict is not without risk. But the potential for achieving decisive advantage far outweighs the risks and overheads for not doing so. The trick will be for the UK military to configure itself to allow rapid adoption through experimentation and continuous improvement.

References

1. SOURCE »

2. SOURCE »

3. UK Cyber and Specialist Operations Command (CSOC) was renamed on 1st September 2025 as an SDR25 measure. It was previously called Strategic Command.

4. The definition of Agentic AI in Belfer Centre for Science and International Affairs, 2024  

5. See, for example, Nadibaidze et al (2024), AI in Military Decision Support Systems, Autonorms Project, and Farnell, Richard and Offey, Kira, “AI’s New Frontier in War Planning: How AI Agents Can Revolutionise Military Decision-Making”, Field Artillery, e-Edition, 27 March 2025.

6. SOURCE »

7. Danco, A, SOURCE »

8. SOURCE »

9. See JDP 1 and NATO’s AJP-01.

10. For example, Unilever’s Consumer Twin, see https://www.technologymagazine.com/articles/how-unilever-is-reinventing-product-marketing-with-ai and Meta’s AI Sandbox, see SOURCE »

11. Google Performance Max, SOURCE »

12. Coders and compute time could cost up to £2.5M for a major campaign which models an entire market.

13. SOURCE »

14. SOURCE »  

15. SOURCE »

16. SOURCE »

17. SOURCE »

18. SOURCE »  

19. King, A., AI, Automation and War: The Future of Command, Cambridge University Press, 2024, p.157

20. Defence Science and Technology Laboratory

CONTRIBUTED by
Stuart Skeates
Lieutenant General (Retired) Stuart Skeates CB CBE served in the Army for 34 years and, after retiring, became a Senior Civil Servant. Educated at the Judd School, Tonbridge, King’s College London, and the Royal Military Academy Sandhurst, he was commissioned into the Royal Regiment of Artillery in 1988. During his career, he served in the United Kingdom, Germany, Saudi Arabia, Kuwait, Northern Ireland, the Balkans, Cyprus, Afghanistan, and Iraq. Stuart has extensive leadership experience on military operations and spent the latter 20 years of his career in joint operations, with coalitions, with the US Marine Corps and with NATO. He was also Commandant of the Royal Military Academy Sandhurst. Stuart holds degrees from King’s College London and the Cranfield School of Management, and completed the Advanced Command and Staff Course, the Higher Command and Staff Course, the NATO Pinnacle Course, and the Major Projects Leadership Academy. After leaving the Army, he worked on small boats and illegal migration, initially in the Cabinet Office and subsequently in the Home Office.