Defence & Security

Revolutionising Defence Intelligence, Surveillance & Reconnaissance (ISR) with AI

Published on February 13, 2024

AI has the potential to perform tasks that normally require human intelligence, such as perception, reasoning, learning, decision-making and communication. While it cannot yet perform some of these tasks reliably without human supervision and correction, the rate of progress has been truly astounding.

Even the existing capabilities of AI offer a force multiplier, augmenting and automating many aspects of defence ISR by enhancing speed, accuracy and efficiency while reducing the risks and costs to human personnel and resources. AI can provide a competitive edge and a strategic advantage over adversaries, who may themselves be using AI or other emerging technologies.

With complex systems it can be useful to abstract down to the elemental components: inputs, processing and outputs. These align with the stages of the OODA loop.

The OODA loop is a four-step decision-making model that helps one act faster and more effectively than one’s competitors. The four stages are:

  • Observe: gather and organise data about the current situation and environment.
  • Orient: analyse the data and update your understanding of reality.
  • Decide: choose the best course of action based on the available information.
  • Act: implement the chosen plan and monitor the results.

The OODA loop can help you solve problems, adapt to changes, and learn from feedback in complex and uncertain situations. Any opportunity to improve the accuracy and speed of our own OODA loop could enable us to react faster, disrupt the adversary’s OODA loop and reduce any first-mover advantage an adversary might have.
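The four stages above can be sketched as a simple control cycle. The following is purely an illustrative skeleton, not a real system: the class, field names and threshold are invented for the example, and each method just stands in for the far richer logic a real ISR pipeline would contain.

```python
from dataclasses import dataclass, field

@dataclass
class OODAAgent:
    """Illustrative skeleton of one pass through an OODA cycle."""
    beliefs: dict = field(default_factory=dict)

    def observe(self, raw_inputs: list[dict]) -> list[dict]:
        # Observe: gather and organise data, discarding invalid readings.
        return [r for r in raw_inputs if r.get("valid", True)]

    def orient(self, observations: list[dict]) -> None:
        # Orient: update our model of reality with the new observations.
        for obs in observations:
            self.beliefs[obs["source"]] = obs["value"]

    def decide(self) -> str:
        # Decide: choose a course of action from the current beliefs.
        return "engage" if self.beliefs.get("threat", 0) > 0.8 else "monitor"

    def act(self, decision: str) -> str:
        # Act: execute the plan; the result feeds the next Observe step.
        return f"executed:{decision}"

agent = OODAAgent()
obs = agent.observe([{"source": "threat", "value": 0.9}])
agent.orient(obs)
print(agent.act(agent.decide()))  # executed:engage
```

In a real system each stage would run continuously and feed the next, which is exactly where AI promises the speed gains the article describes.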

More Inputs = Increased Observation

AI can process and analyse large volumes of data from various sources, such as sensors, satellites, drones and computer networks, and extract relevant information to provide faster and more accurate insights, leading to better situational awareness and operational effectiveness.

With more processing power and capability, we can feed more data into the process. This reduces reliance on any single data source and makes the system more accurate and objective. SIEM technology in cybersecurity works similarly, combining data from different sources for correlation and enrichment to enhance detection and response; AI takes this ancient concept to new heights. As Aristotle said, “The whole is greater than the sum of its parts.”
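As a toy illustration of that correlation-and-enrichment idea, reports about the same track from several sensors can be fused into a single, higher-confidence assessment. The sensor names, track IDs and confidence values below are invented for the example; the fusion rule is the standard "noisy-OR" assumption that sensors err independently.

```python
from collections import defaultdict

def fuse_reports(reports: list[dict]) -> dict:
    """Fuse per-sensor confidence scores per track ID.

    Noisy-OR rule: the fused confidence is the probability that
    at least one independent sensor is correct.
    """
    p_all_wrong = defaultdict(lambda: 1.0)
    for r in reports:
        # Multiply together the probabilities that each sensor is wrong.
        p_all_wrong[r["track_id"]] *= (1.0 - r["confidence"])
    return {tid: round(1.0 - p, 3) for tid, p in p_all_wrong.items()}

reports = [
    {"track_id": "T1", "sensor": "radar", "confidence": 0.6},
    {"track_id": "T1", "sensor": "eo_camera", "confidence": 0.7},
    {"track_id": "T2", "sensor": "sigint", "confidence": 0.5},
]
print(fuse_reports(reports))  # {'T1': 0.88, 'T2': 0.5}
```

Note how two mediocre sources on T1 fuse into a stronger assessment than either alone — the whole exceeding the sum of its parts.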

Computers, mobile devices, smart devices, and the networks that connect them are expanding rapidly in number and complexity. These devices and networks generate and store vast amounts of data, which can be valuable sources for intelligence, surveillance and reconnaissance. By exploiting this data, we can gain insights into the activities, capabilities, intentions, and vulnerabilities of adversaries and allies. However, exploiting this data also poses challenges, such as ensuring its accuracy, security, and legality.

Increased Accuracy and Speed = Accelerated Orientation

By detecting and identifying objects, patterns, and anomalies in complex and dynamic environments, AI can enable more precise and reliable target recognition and identification, reducing errors and collateral damage. This can help generate actionable intelligence products that can inform and guide the orientation of the decision-makers and operators.
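A minimal sketch of the anomaly-detection side of this, using a simple z-score over a stream of sensor readings. The signal values and threshold are invented for illustration, and a real system would use far richer statistical or learned models than this crude stand-in.

```python
import statistics

def flag_anomalies(readings: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of readings more than `threshold` standard
    deviations from the mean of the series."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []  # A perfectly flat signal has no outliers.
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / stdev > threshold]

# A steady signal with one spike at index 5.
signal = [10.1, 9.9, 10.0, 10.2, 9.8, 42.0, 10.1, 10.0]
print(flag_anomalies(signal, threshold=2.0))  # [5]
```

The point of applying AI here is precisely that it can run this kind of screening continuously over volumes of sensor data no human team could review, surfacing only the anomalies for a person to judge.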

According to a UK MOD report, AI can reduce the time required to analyse satellite imagery by 95% and increase accuracy by 30%. In 2020, the US DoD used AI to detect and track ballistic missiles in a live-fire test, demonstrating the potential of AI to enhance missile defence systems.

Human + AI = Better Decisions & Actions

AI can enhance the decision-making and planning process by providing recommendations and predictions based on data-driven models and simulations. This can improve the outcomes and impacts of operations in complex and uncertain situations. AI can also support the execution of actions by providing the necessary information and guidance to the operators and assets, help monitor and assess the effects of the actions, and provide feedback and learning to improve the OODA loop.

AI can also improve human-machine teaming by enabling seamless communication, coordination, and collaboration between humans and AI-enabled systems. This can give human operators greater capability, skill and confidence, while supporting their creativity, curiosity, and intuition. AI can also help human operators evaluate the possible courses of action and their consequences, as well as anticipate the adversary’s reactions and countermeasures.

What are the challenges and risks of AI in defence ISR?

When we take decisions and actions based on evaluations from AI models, we need to consider the accountability, transparency and trustworthiness of the AI systems and fully understand their logic. Only then can we interrogate them, ensure they align with our intentions, and consider the wider potential impact on human dignity, rights, and societal values.

This can also raise ethical, legal and moral issues, especially around the use and regulation of autonomous weapons systems (AWS), and the possibility of human error, bias, or mindless deference to AI analysis rather than thinking for ourselves.

AI can also pose technical and operational challenges, such as the reliability, robustness and security of AI systems and their data, as well as the compatibility and interoperability of different AI platforms and standards.

Moreover, AI can create new threats and vulnerabilities, such as the possibility of AI misuse, abuse, or error. Adversaries with a deep understanding of these systems may seek to exploit or manipulate them. In seeking to accelerate decisions and actions, if AI is enabled to make decisions itself, a miscalculation or factors these models cannot consider could create the potential for unintentional escalation of conflicts.

How can we develop and adopt AI in defence ISR in a responsible and trustworthy manner?

AI is a rapidly evolving and disruptive technology that has the potential to transform the defence sector and the international balance of power. However, it also poses significant challenges and risks that require careful and collaborative management and governance.

Therefore, it is important to develop and adopt AI in defence in a responsible and trustworthy manner, involving multiple stakeholders from government, industry, academia, and allies.

It is important to establish and implement ethical principles and guidelines for the design, development, and deployment of AI systems, such as the UK MOD's “Defence AI Ethical Principles” or the US DoD's “Ethical Principles for AI”.

Defence has some exceptions on transparency compared with other industries applying these principles. Even so, we should ensure and enhance the accountability and explainability of AI systems and their decisions. This includes using audits, reviews, and feedback mechanisms, and providing clear, understandable information and communication to users and, where appropriate, to the tax-paying public.

To strengthen the resilience and robustness of AI systems, we need to follow the highest standards of information security, and test and validate the systems and their performance under different scenarios and conditions. We also need to provide adequate training, education, and support to users. Ensuring the quality, accuracy, and reliability of the data will foster trust and confidence in AI systems among their users.

Finally, we must seek to cultivate a collaborative and cooperative approach among different stakeholders, partners and allies in AI, by sharing best practices, standards, and experiences, and engaging in dialogue, consultation, and coordination on AI-related issues and challenges.

References

  1. Ministry of Defence. Defence Artificial Intelligence (AI) Playbook. GOV.UK, 2024.
  2. Toffoli J. What is Intelligence, Surveillance, and Reconnaissance (ISR)? Clarifai, 2022.
  3. Bruegel. Artificial Intelligence in Defence, Diplomacy, and Decision-Making. 2024.
  4. Thompson A. How China is Using AI for Warfare. CSET, Georgetown University, 2022.
  5. U.S. Department of Defense. DOD Adopts Ethical Principles for Artificial Intelligence. 2024.
  6. Joint Air Power Competence Centre. Speeding Up the OODA Loop with AI. 2021.
  7. Johnson J. Automating the OODA Loop in the Age of AI. Nuclear Network, 2022.
  8. Anderson W, Husain A, Rosner M. Why Timing Is Everything. 2017.
  9. Dewalt K. The OODA Loop is the Foundation of Your AI Strategy. Medium, 2023.
  10. National Security Commission on Artificial Intelligence (Chair: Eric Schmidt). Final Report.
Written by
Graeme Manzi
Graeme is a Senior Security Engineer at Resilience based in London. He has extensive experience in cyber security consulting and risk management with a focus on critical national infrastructure and financial services. Prior to his time in consulting, Graeme served as a Royal Marines Commando specialising in secure data communication, and received a commendation in this role while on Operations in Afghanistan. He holds an MSc with distinction in Information Security from Royal Holloway, University of London and is GIAC & ISACA certified in industrial control system security (GICSP) and secure cloud computing architecture and audit (GCLD, GPCS, CCAK).