AI in the military is advancing, of that there is no doubt, but it is crucial to understand what works in each solution area, so that customers do not assume that what is done in the civilian market translates directly to the military. Mikael Grev, Avioniq's CEO, delves into some of the challenges of incorporating AI into the military space, and the novel approach Avioniq is taking to harness its potential for improving air combat operations.
AI is proving to be an excellent tool for solving certain types of problems in civilian life, particularly where it is acceptable for the results to be a bit fuzzy. In many civilian applications, accuracy, repeatability, and insight into the system's underlying reasoning are not primary requirements, so the possibilities of AI appear boundless.
In the complex and sensitive military environment, where accuracy can mean the difference between life and death and humans will need to remain “in the loop” for the foreseeable future, we cannot simply copy over civilian AI solutions to military applications.
The type of AI that has garnered the most general interest is generative AI, which includes ChatGPT and image and video creation services. It essentially involves feeding in data, such as a text query, an audio file, or an image, and receiving a response as data, such as text, audio, or an image.
A characteristic of generative AI is that the number of distinct combinations of inputs and outputs is virtually infinite. When you generate an image from text, it will be different every time. This makes it challenging to verify that the result is correct – or even good – since these AI models rarely have a reference point for what the correct answer actually is. To know if the answer is “good”, a domain expert must evaluate it, which is very resource intensive.
In the military sector, if you applied generative AI to a mission set such as air combat, the answer could likewise vary each time, and what constitutes a “good” response would be just as unclear. The difference is that the experts who can tell a “good” response from a “bad” one (combat pilots) are in short supply – and using their skills as domain experts to train AI systems is low on the priority list.
To complicate things further, generative AI can only answer questions when the entire domain problem is contained within the question itself. In that case, domain experts can at least evaluate whether an answer is more or less correct in relation to the question. But when the solution is part of a larger context – with implicit information that is not included as input data, or with information and conditions that exist in other systems – it becomes far trickier.
A simple example is that it is relatively easy to create an AI that plays chess incredibly well but challenging to create AI that plays equally well in cooperation with a human. Similarly, it is relatively simple to create an AI that is incredibly efficient and almost unbeatable in air combat if the system can do as it pleases and there are only known, clear rules. But it quickly becomes very complex and requires significant domain knowledge from pilots if you want to create an AI that conducts air combat in collaboration with an operator, where the latter might have conditions that the AI is not aware of.
The root of both problems lies at the boundary where humans and machines must understand each other to work together. Today's AI cannot – and, except in specific cases, probably never will be able to – explain why it acts the way it does in a manner that gives the operator the ability to change priorities and conditions that were never programmed in.
In air combat, winning is a continuous trade-off between different solutions. It is easy to underestimate the complexity of the training that prepares a fighter pilot for combat, and of his/her role within the fight. Pilots undergo rigorous, multi-year training and selection processes, and develop an intrinsic ability to estimate threats and engagement timing as they gain experience in the cockpit.
To replace this intrinsic fighter pilot knowledge with AI, even during certain combat sequences, the AI would need to be fed virtually the entire pilot training syllabus (as has been suggested), risk assessments (difficult to quantify), and all mission information (constantly evolving) before it could generate a solution that does not require human understanding.
Therefore, for the foreseeable future, the operator-in-the-loop will need to understand and continuously be able to correct what is happening, using their experience and applying all the “soft” information that changes as a mission unfolds.
The consequences of AI losing a chess game are manageable, but for military applications the stakes are sky-high. This is especially true in the air arena where events unfold extremely rapidly and mistakes have significant consequences.
This does not mean we cannot use AI in the military air arena. Far from it. But nor does it mean that all the approaches currently being applied in the market are equal.
At Avioniq, we have been using AI for air combat applications since 2016. The caveat is this: from the start, we have been aware of AI's limitations in air combat, so instead of asking “how do we create an AI that is unbeatable in air combat?” (a poorly defined problem), we have broken the challenge down into smaller pieces. We call this Verifiable AI.
We ask smaller, simpler questions and use simulation to directly verify the output. So, we ask:
“How many g's must the aircraft pull to avoid the incoming missile, if the missile is an AA-10C?”
This is an exact question with an exact answer. If we have a model of the missile and can verify the output, the answer is a simple number between one and nine.
Knowing the minimum number of g's the aircraft needs to pull to evade an incoming missile is an important parameter in air combat. The pilot does not want to turn too little (getting hit) or too much (becoming passive and losing speed). Verifiable AI delivers the solution, and in doing so takes the cognitive load of this specific decision off the operator, leaving him/her free to focus on decisions at a higher level of abstraction.
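To make the pattern concrete, here is a minimal sketch of the idea in Python. Everything in it is illustrative: the toy missile model, the engagement parameters, and the function names are our own assumptions for this article, not Avioniq's implementation. The point is the structure: a narrow, exact question whose answer can be checked directly against a simulation.

```python
# Minimal sketch of the "Verifiable AI" pattern: a narrow question
# ("how many g's to evade?") answered and then verified against a
# simulation. The missile model below is a toy stand-in.
from dataclasses import dataclass

@dataclass
class Engagement:
    range_km: float           # distance to the incoming missile
    closing_speed_ms: float   # closure rate in metres per second
    aspect_deg: float         # angle off the missile's nose

def simulate_evasion(eng: Engagement, g_load: float) -> bool:
    """Stand-in for a validated missile/aircraft simulation.

    Returns True if a sustained turn at `g_load` defeats the missile in
    this toy model. A real check would fly out the full engagement
    against a verified model of the specific missile type."""
    # Toy rule: harder shots (closer, faster, more head-on) need more g.
    required = 1.0 + (8.0 * (eng.closing_speed_ms / 1000.0)
                          * max(0.0, 1.0 - eng.range_km / 40.0)
                          * (1.0 - eng.aspect_deg / 180.0))
    return g_load >= required

def min_evasion_g(eng: Engagement, lo: float = 1.0, hi: float = 9.0) -> float:
    """Bisect for the smallest g-load the simulation confirms as safe."""
    if not simulate_evasion(eng, hi):
        return float("inf")   # no kinematic escape within the 9 g limit
    for _ in range(20):       # 20 halvings give ample precision here
        mid = 0.5 * (lo + hi)
        if simulate_evasion(eng, mid):
            hi = mid
        else:
            lo = mid
    return hi

if __name__ == "__main__":
    shot = Engagement(range_km=25.0, closing_speed_ms=900.0, aspect_deg=30.0)
    print(f"Minimum evasion load: {min_evasion_g(shot):.1f} g")
```

The same shape – an exact question, a bounded answer, and a simulation that can confirm it – is what makes the output verifiable rather than merely plausible.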
In this way, we say that we augment – not replace – the operator. We aim to enhance pilot effectiveness by handling complex tactical computations, providing the means to visualize critical tactical opportunities, and allowing pilots to leverage their training and experience for nuanced judgements. The skill in developing this capability lies in knowing the right questions to ask, so that the AI produces output that enhances the pilot's own performance by freeing up cognitive load for the big mission questions. We believe the best domain experts for this development are combat pilots or air battle managers with technical backgrounds.
We are applying this technique to many questions across the air combat mission, as sketched below. Each of these AI systems can then be combined and integrated into combat aircraft, air defense systems, and C2 systems without replacing the human with a machine. This allows iterative development over time, without the Big Bang problem that the dream of comprehensive AI entails.
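As one way to picture that composition, the sketch below (our own illustration, not Avioniq's architecture) shows how narrow, independently verifiable advisory functions could be registered and combined into a larger decision-support loop, one module at a time. All names and numbers are hypothetical.

```python
# Structural sketch: small, independently verifiable advisors composed
# into one decision-support loop. Illustrative only.
from typing import Callable, Dict

# Each advisor answers one exact, simulation-verifiable question
# about the current tactical picture (a dict of track data here).
Advisor = Callable[[dict], float]

ADVISORS: Dict[str, Advisor] = {}

def advisor(question: str):
    """Register a narrow advisory function under the question it answers."""
    def register(fn: Advisor) -> Advisor:
        ADVISORS[question] = fn
        return fn
    return register

@advisor("min_evasion_g")
def min_evasion_g(track: dict) -> float:
    # Placeholder: in practice this would call a model verified
    # against a missile simulation, as in the earlier sketch.
    return 4.2

@advisor("time_to_impact_s")
def time_to_impact(track: dict) -> float:
    # Simple kinematic placeholder: range over closing speed.
    return track["range_m"] / track["closing_speed_ms"]

# A display or C2 system pulls whichever answers it needs; new advisors
# can be added and verified one at a time, with no Big Bang.
if __name__ == "__main__":
    track = {"range_m": 20_000.0, "closing_speed_ms": 800.0}
    for question, fn in ADVISORS.items():
        print(f"{question}: {fn(track):.1f}")
```

Because each module answers a question that can be verified in isolation, integration risk stays local: adding or improving one advisor never requires re-validating the whole system.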
Another question we might ask is: why now? There are a number of AI firms targeting fully autonomous fighter jet programmes that could become viable in the post-2040 timeframe at the earliest. On the other hand, we are targeting today’s combat aircraft, and in fact our technology is flying in operational aircraft already.
To understand why we – and our partners – consider it critical to unlock Verifiable AI capabilities on today’s 5th generation combat aircraft, we must consider some stark realities facing Western air forces in the real-world context.
For better or worse, the West had not faced a viable peer or near-peer threat for decades until recently. The effect has been a lack of urgency around next-generation aircraft, to the point that development has been allowed to slow dramatically. Airframe numbers have dropped almost across the board, as have pilot numbers. In short, many Western nations have fewer aircraft, and fewer pilots to fly them. To make matters worse, that very lack of a real-world threat has further eroded our ability to face one in the future, simply because the pilots we do have are less experienced in the kind of fight they may need to undertake in the coming decades.
At the same time, the threat itself is changing. Today's Beyond Visual Range (BVR) weapons reach far beyond the ranges for which current in-service decision support systems (such as the Weapon Engagement Zone, or WEZ) were designed. Decades ago, an air-to-air missile such as the AIM-120A AMRAAM offered a range of around 75 km (40 NM), increasing to roughly 150 km (81 NM) with the latest versions. Today's BVR missiles eclipse these ranges, with systems from both China and Russia known to exceed 300 km (162 NM).
A missile with that kind of range takes minutes to arrive, and a lot can happen in flight during that time – both inside the missile and its guidance systems, and for the aircraft being targeted, which has substantial opportunity to evade if it can manoeuvre early enough. The issue is that today's systems only show the pilot of the targeted aircraft the enemy's maximum engagement envelope, which makes it difficult to know exactly when or where to manoeuvre: as the length of the missile's flight increases, the space of possible outcomes grows exponentially.
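A back-of-the-envelope illustration of that growth (the manoeuvre counts, decision intervals, and flight times below are our own illustrative assumptions, not measured values): if the targeted aircraft can commit to one of a handful of manoeuvres every few seconds, the number of distinct trajectories multiplies at every decision point.

```python
# Illustrative only: how the outcome space grows with missile time of
# flight if the target can pick one of a few manoeuvres per interval.
MANOEUVRES_PER_INTERVAL = 5   # e.g. hold, climb, dive, turn left, turn right
DECISION_INTERVAL_S = 10.0    # assumed time between meaningful commitments

def outcome_space(time_of_flight_s: float) -> int:
    """Number of distinct manoeuvre sequences during the missile's flight."""
    intervals = int(time_of_flight_s / DECISION_INTERVAL_S)
    return MANOEUVRES_PER_INTERVAL ** intervals

# Assumed flight times: ~60 s for a 75 km-class shot, ~240 s for a
# 300 km-class shot (both rough, for illustration only).
for range_km, tof_s in ((75, 60), (300, 240)):
    print(f"{range_km} km shot, ~{tof_s} s flight: "
          f"{outcome_space(tof_s):.3g} possible sequences")
```

Even with these crude numbers, quadrupling the flight time takes the outcome space from tens of thousands of sequences to tens of quadrillions – far beyond what a static maximum-envelope display can help a pilot reason about.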
The result is a widening gap between the threat and the pilot's ability to counter it based solely on the experience he/she has gained in combat or in live/simulated training. This is the gap we believe is best served by Verifiable AI. Systems that take information from available sensors, keep track of historical data – such as when the enemy may have fired a weapon – mathematically predict the likely outcomes, and present them visually to the pilot, showing the available (and unavailable) tactical options, offer an extremely compelling solution to the myriad challenges facing Western air forces. The result would be improved capability for today's combat aircraft and improved situational awareness for those who fly them.
Independent tests of our solutions show a several-hundred-percent increase in air combat effectiveness simply by introducing the right Verifiable AI. When one aircraft becomes worth three, that translates into billions in increased customer value per air force, so there is no doubt that software harnessing AI has a bright future in military systems.