The Theory & Practice of Delivering AI-at-Scale in Defence

Published on July 15, 2025

In the last 30 years, nothing has quite captured the imagination – and anxiety – of business leaders, technologists, and the public like artificial intelligence (AI).  

In recent years, we've seen AI emerge from the shadows as a transformative force, promising to revolutionize business practices and drive significant economic benefits. However, when technology leaders’ wild (if unsurprising) claims are supported by economists, social commentators, and politicians, then it is time to sit up and take note.

This rush to take advantage of AI’s potential has spread to the whole of the defence apparatus: the Ministry of Defence (MoD), the Armed Forces, and defence industry. Over several decades we have seen widespread deployment of digital technologies in many areas of defence. The UK Defence AI Playbook, issued in January 2024, highlights a wide cross section of current uses of AI, from enhanced object detection in satellite images to predicting equipment failure to optimise the management of spare parts. AI has even been acknowledged as playing a part in collating the latest UK Strategic Defence Review.  

Ongoing military conflicts, notably in Ukraine, highlight how digital technologies are now embedded in every aspect of modern warfare, with AI increasingly influential. Indeed, the conflict in Ukraine has even been described as “a living lab for AI warfare”.

Yet, despite the enthusiasm from AI advocates, a troubling reality with broader AI adoption is emerging. There is, within some quarters, a growing disconnect between the initial excitement surrounding the theory of AI’s impact and the realities of its implementation. A UK government review on developing AI capacity and expertise in UK defence highlighted that “MOD needs to undergo a wider cultural change to adapt to a world where military advantage is increasingly delivered by digital capabilities and cheaper platforms that can be rapidly developed, deployed and iterated”.  

This raises fundamental concerns: are the current barriers to AI adoption temporary stumbling blocks, or should we expect another disappointment in large-scale digital transformation?

In Defence of AI

The defence sector's relationship with AI is both inevitable and complex. As Kenneth Payne explores in his influential work "I, Warbot", we are witnessing a seismic shift from human-centred to machine-driven decision-making. AI systems are increasingly capable of processing vast amounts of data and making rapid decisions that were traditionally the domain of human strategists and commanders.

This transformation is accelerated by the sector's unique operational environment. Defence organizations operate under conditions of extreme volatility, uncertainty, complexity, and ambiguity (VUCA) – precisely the conditions where AI's capabilities for pattern recognition, predictive analytics, and rapid decision-making can provide decisive advantages. From enhanced object detection in satellite imagery to predicting equipment failures for optimal spare parts management, AI is already demonstrating its value across multiple defence applications.

Christian Brose, in "The Kill Chain", emphasizes the urgency of this transformation. Traditional approaches that rely on legacy systems and processes are becoming not just outdated but potentially dangerous. In the face of rapidly evolving threats and technological capabilities, organizations that fail to adapt quickly risk becoming obsolete – a consequence that in the defence sector extends far beyond commercial considerations.

The AI Adoption Paradox

Despite the hype and notable successes in specific areas, such as image recognition, language translation, and forecasting, widespread AI adoption in areas of critical operational impact faces several formidable obstacles. Led by integration challenges, security concerns, and privacy issues, doubts about the cost-effective deployment of AI are emerging.

Overcoming these barriers to broad adoption is critical in such a complex area as defence. In recent years, the UK MoD has highlighted the challenges of realizing the benefits of AI across its domain. It has recognised that digital transformation of the UK defence capability is one of the most critical strategic challenges of our time. According to a 2022 policy statement, the UK government’s goal is “to adopt and exploit AI at pace and scale, transforming Defence into an ‘AI ready’ organisation and delivering cutting-edge capability”.  

The challenge, MoD has recognized, is how to achieve this goal in a diverse, complex organizational setting, facing a wide range of growing threats in an uncertain world where operational needs must be balanced against unappealing budgetary choices. It is a challenge made even more difficult by the disruptive nature of AI and the consequences of its use in decision making where lives and livelihoods are at stake – a difficulty clearly identified in the policy statement:

…the issue may not lie in ‘what’ the capability is designed to do, but ‘how’ it does it, and how we ensure that AI is used effectively and appropriately.

This is a recognition that, in addition to operational issues, more fundamental questions about ethics, bias, and safety arise when deploying AI at scale in the defence sector. This leaves many people feeling trapped in a cycle of limited experimentation and unpalatable strategic choices. They struggle to identify where AI fits within their current operational constraints, who has responsibility for broad AI deployment, which policies and practices now require adjustment, and how to demonstrate AI’s tangible value to its users and stakeholders.

This gap between AI's promise and its practical implementation is what I call the "AI Adoption Paradox".

The Two Faces of AI Adoption

To understand this paradox and chart a path forward, we need to recognize that AI adoption typically unfolds in two distinct phases.  

The Experimental Phase

In this initial stage, researchers, technologists, and data scientists lead the charge, focusing on small-scale, isolated use cases. Data access might be limited, robust infrastructure services may not be in place, and quality isn't always perfect, but that's manageable at this stage – the goal here is proof of concept. Success is measured by technical performance, and funding tends to be project-based and incremental.
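To ground this in the article’s earlier example of predicting equipment failure, here is a minimal sketch of what an experimental-phase proof of concept often looks like in practice – a few dozen lines of Python, synthetic stand-in data, and success judged purely on technical metrics. Everything in it (the features, the failure rule, the model choice) is hypothetical and illustrative.

    # A minimal experimental-phase sketch: a proof-of-concept equipment-failure
    # predictor trained on synthetic sensor readings. Illustrative only; real
    # defence data, security constraints, and evaluation would look very different.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(42)

    # Synthetic stand-in for sensor telemetry: vibration, temperature, hours in service.
    n = 5000
    X = rng.normal(size=(n, 3))

    # Hypothetical failure rule with noise: high vibration and temperature raise risk.
    risk = 0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=n)
    y = (risk > 1.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Experimental-phase success criterion: technical performance on held-out data.
    print(classification_report(y_test, model.predict(X_test)))

Note what is absent: integration with live data feeds, security accreditation, user workflows, and any measure of operational value – precisely the concerns that dominate the next phase.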

Recent surveys indicate that almost half of defence organizations globally have already implemented AI solutions at this experimental level, with another quarter running pilot projects. This widespread initial adoption demonstrates both the sector's recognition of AI's potential and its capability to execute technical implementations.

The Enterprise-wide Phase

Following this, broader financial, strategic, and political concerns dominate as organizations consider the step up to large-scale, integrated systems. The focus shifts from technical novelty to strategic implementation for competitive advantage. Success criteria evolve to emphasize operational outcomes and ROI. Funding becomes strategic and continuous. Most critically, this phase demands deep integration across a variety of functions and systems, requiring a broad set of cross-functional collaboration skills.

A recent Boston Consulting Group (BCG) survey found that 65% of aerospace and defence AI efforts are still only in the proof-of-concept phase. Only one in three is improving the business in measurable ways. Making progress requires a strategic approach that recognises and addresses the key barriers to delivering AI-at-Scale – an approach that most organizations remain ill-prepared to implement.

The AI Scale-Up Challenge

Overcoming the “AI Adoption Paradox” means moving from experimental AI implementations to enterprise-wide deployment. This requires simultaneous progress across six critical areas, each presenting unique challenges in the defence context.

1. The Data Foundation Challenge

Scaling AI requires a robust, accessible, and high-quality data foundation. Many organizations struggle with data silos, inconsistencies, and privacy concerns. In the defence domain, for example, a report on the MOD’s data strategy concluded that despite a rising volume of data from an increasing number of sensors, it is finding it harder than ever to isolate the insight from the information.
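To illustrate the starting point, the sketch below shows the kind of basic data-quality audit a foundation effort typically begins with: surfacing missing values, duplicates, and dead fields before any model is built. It is a minimal Python illustration over a hypothetical flat extract of sensor records; a real defence data estate would add classification handling, lineage tracking, and cross-silo reconciliation.

    # A minimal data-quality audit sketch using pandas. Column names and the
    # sample records are hypothetical; the point is that silo problems surface
    # as measurable defects long before AI modelling starts.
    import pandas as pd

    def audit(df: pd.DataFrame) -> pd.DataFrame:
        """Summarise basic quality signals per column: missing values,
        cardinality, and constant fields that often betray broken feeds."""
        return pd.DataFrame({
            "missing_pct": df.isna().mean() * 100,
            "n_unique": df.nunique(),
            "constant": df.nunique() <= 1,
        })

    # Hypothetical extract from one sensor silo.
    records = pd.DataFrame({
        "sensor_id": ["A1", "A1", "B2", None],
        "reading": [0.9, 0.9, 1.4, 2.2],
        "unit": ["g", "g", "g", "g"],
    })
    print(audit(records))
    print("duplicate rows:", records.duplicated().sum())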

2. The Tussle for Talent

The defence sector's talent requirements extend beyond the typical AI team composition. While data scientists and AI engineers remain crucial, defence applications demand additional expertise: domain specialists who understand operational contexts, security professionals who can navigate classification requirements, and ethicists who can address the moral implications of AI in life-and-death decisions.

An IBM survey into AI in defence revealed significant skills shortages limiting AI expansion across defence organizations. Moreover, the sector must compete with commercial organizations for scarce AI talent while offering a different value proposition – mission purpose and national security impact versus purely commercial incentives.

3. The Culture Change Conundrum

Military culture values human judgment, experience, and leadership in critical decisions. Integrating AI systems that may recommend or even make autonomous decisions requires a fundamental shift in how personnel at all levels understand their roles and responsibilities.

Furthermore, integrating AI across the enterprise requires a significant cultural shift. Building AI literacy and addressing resistance and fear across all levels of the organization is a crucial starting point. All too frequently, however, such efforts are superficial attempts to justify management strategies without engaging sufficiently with the difficult human aspects of change management. An EY study into the role of AI in military asset management found that this shift requires a multifaceted approach combining education, practical experience, and a supportive innovation culture.

4. The Infrastructure Imperative

Defence AI systems face infrastructure requirements that extend far beyond commercial applications. They must operate across diverse environments – from sophisticated command centres to forward operating bases with limited connectivity. They require computational power that can function in contested environments where adversaries actively seek to disrupt or exploit AI systems.

Added to that, AI consumes massive amounts of data as it is trained, tuned, and applied. It demands significant computational power and advanced infrastructure to process that data through algorithms that encode complex, deep analytics. Many organizations find their legacy systems strain under the weight of AI requirements. Modernizing IT infrastructure to cope with these demands is a prerequisite for AI-at-Scale, yet it is often too readily overlooked in favour of other, more attractive tasks. In domains such as defence, such concerns are compounded by a myriad of security and interoperability issues.

5. The Governance & Accountability Framework

The prominence of AI in defence has created unprecedented governance challenges. Defence leaders must develop comprehensive frameworks that address not just data management and algorithmic bias, but also questions of accountability in autonomous systems, compliance with international humanitarian law, and the ethical implications of AI-enabled weapons systems.

If nothing else, the prominence of AI today has thrown down the gauntlet to leaders and decision makers everywhere: face up to the obligations of broad data collection, management, and use, or suffer the legal, financial, and reputational consequences. Developing a comprehensive AI governance strategy is no longer optional – it’s essential. But more than that, the principles embodied in governance strategies must become a fundamental part of daily thinking and action. Efforts such as the UK’s Dependable AI in Defence directive (JSP 936) provide a good first step. Yet deciding where and whether AI can move beyond being a human-focused tool to take on an autonomous decision-making role requires addressing significant issues.

6. The Technical Debt Trap

Defence organizations operate substantial technology stacks that have evolved over decades to support critical missions. While highly capable, this accumulated technical debt creates significant hurdles for AI-at-Scale implementation. Unlike commercial organizations that can sometimes replace entire systems, defence organizations must integrate AI capabilities with existing platforms while maintaining operational continuity.

Across defence, substantial effort, time, and resource is devoted to maintaining increasingly complex technology stacks. In 2022, the UK’s National Audit Office (NAO) reported that Defence Digital estimated it would spend £11.7 billion over 10 years updating or replacing legacy systems. Yet organizations launch into new AI projects without addressing underlying technical issues within their software infrastructure. This accumulated “technical debt” creates significant hurdles when moving toward AI-at-Scale. Building on such a shaky foundation undermines the robust, resilient, and reusable framework required for AI-at-Scale delivery.

Delivering Responsible AI for Defence

The defence sector's adoption of AI-at-Scale carries unique responsibilities that extend beyond technical implementation. The "three R's" of AI – Responsibility, Reliability, and Robustness – take on heightened significance when AI systems operate in contexts where human lives and national security are at stake.

Bias mitigation becomes critical when AI systems influence decisions about threat assessment, resource allocation, or targeting. Defence organizations must implement rigorous testing protocols while recognizing that traditional bias detection methods may not capture the full range of challenges in operational environments.
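One narrow but concrete form such testing can take is comparing a model’s error rates across operational subgroups. The Python sketch below uses a hypothetical “region” attribute and toy predictions to show the shape of the check; it is a single signal within a much broader assurance protocol, not a complete bias audit.

    # An illustrative subgroup-disparity check: compare false positive and
    # false negative rates across subgroups. Groups, labels, and predictions
    # here are toy values chosen purely to demonstrate the mechanics.
    import numpy as np

    def error_rates_by_group(y_true, y_pred, groups):
        out = {}
        for g in np.unique(groups):
            m = groups == g
            out[g] = {
                "fpr": float(np.mean(y_pred[m][y_true[m] == 0] == 1)),
                "fnr": float(np.mean(y_pred[m][y_true[m] == 1] == 0)),
            }
        return out

    y_true = np.array([0, 1, 0, 1, 0, 1, 0, 1])
    y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 1])
    groups = np.array(["north"] * 4 + ["south"] * 4)
    print(error_rates_by_group(y_true, y_pred, groups))

A large gap between subgroups – say, a markedly higher false positive rate in one region – is the kind of disparity that would trigger deeper investigation before deployment.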

Reliability and robustness requirements in defence exceed those in most commercial applications. AI systems must maintain performance under adversarial conditions, with limited connectivity, and in high-stress environments where failure is not an option. This demands comprehensive testing and validation processes that simulate the full range of operational scenarios.
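A simple instance of such validation is a degradation probe: perturb inputs with increasing sensor noise and measure how quickly performance falls. The sketch below illustrates the idea on a toy classifier; genuine defence validation would extend to adversarial attacks, distribution shift, and degraded-connectivity scenarios.

    # A minimal robustness probe: add Gaussian noise of increasing strength to
    # test inputs and track the accuracy curve. Model and data are toy examples.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(7)
    X = rng.normal(size=(2000, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    model = LogisticRegression().fit(X[:1500], y[:1500])
    X_test, y_test = X[1500:], y[1500:]

    for sigma in [0.0, 0.5, 1.0, 2.0]:
        noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
        acc = accuracy_score(y_test, model.predict(noisy))
        print(f"noise sigma={sigma:.1f}  accuracy={acc:.3f}")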

Transparency and explainability present particular challenges in defence applications. While stakeholders need to understand AI decision-making processes, operational security requirements may limit the level of detail that can be shared. Defence organizations must balance the need for explainable AI with security considerations and the fast-paced nature of operational decision-making.
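One widely used compromise is to share coarse, model-agnostic explanations rather than model internals. The sketch below illustrates permutation importance – scramble each input feature and observe how much performance drops – on a toy model; the features and data are purely illustrative, and scikit-learn’s implementation is just one of several available approaches.

    # An explainability sketch using permutation importance: features whose
    # shuffling hurts performance most are the ones the model leans on.
    # This gives a shareable summary without exposing the model itself.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(3)
    X = rng.normal(size=(1000, 4))
    y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: importance drop = {imp:.3f}")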

The Road Ahead: The Future of AI-at-Scale in Defence

As we look to the future of AI, there is significant cause for optimism about the opportunities it presents and the responsibility this brings. We're on the cusp of AI systems that can dramatically enhance human capabilities, streamline complex operations, and unlock insights from vast stores of data. But realizing this potential in domains such as defence requires more than just technological advances. It demands a holistic approach that addresses the technical, organizational, and ethical dimensions of AI implementation in contexts that are often uncertain and ambiguous and that require balancing a variety of competing risks. As leaders, practitioners, and academics in this field, we have a duty to guide this transformation responsibly.

In domains such as defence, the challenge can be overwhelming. Recent reviews have found that even though our understanding of the military applications and implications of AI is growing, it still rests on a relatively weak foundation. Discussions often overemphasize certain high-profile concerns, such as lethal autonomous weapons systems (LAWS), while neglecting other crucial areas such as strategic planning and pre-emptive equipment maintenance. The focus on tactical issues overshadows strategic considerations. Furthermore, the short-term consequences of AI in defence often take precedence over the longer-term effects that could have the most significant impact.

Hence, the path to AI-at-Scale in defence is not an easy one, but it is undoubtedly one of the most important journeys now taking place. By working together, sharing knowledge, and maintaining a commitment to responsible approaches, we can harness the transformative power of AI to drive meaningful change in the defence domain and more broadly across society.

Contributed by
Alan Brown
Alan is Professor in Digital Economy at the University of Exeter and Research Director at the Digital Policy Alliance, specialising in AI and digital transformation. With several decades of experience in the USA, Spain, and UK, he has held senior roles at Carnegie Mellon University, Texas Instruments, and IBM, where he was a Distinguished Engineer. He co-founded the Surrey Centre for the Digital Economy and the Defence Data Research Centre. Alan has authored six books, including "Surviving and Thriving in the Age of AI" (2024), and consults with leading organizations in the public and private sectors. He is a Fellow of the British Computer Society.