Are a company’s values a fixed point of reference that provides the foundation to weather all change? Or should they be understood as principles that can and should adapt to a changing reality, providing the flexibility an organisation needs to grow rather than be left behind?
There are few changes as profound as those connected to the adoption of AI and machine learning in the business environment.
AI offers enormous leaps in the ability to process huge amounts of data quickly and to reach conclusions based on that information, and this capability will only increase as quantum computing comes into use. For everything from HR and logistics through to marketing strategy, customer interactions and help desks, AI is already affecting the way that people within an organisation relate to each other and to their wider environment.
Incremental process changes
Many of the changes appear incremental rather than revolutionary - a process change here, a new source of information or way of calculating risk there - but change is clearly happening. These changes are certainly not limited to the business world, and in many ways are more visible in the military environment. AI is already starting to transform military activities, including logistics, intelligence and surveillance, and even weapons design. AI applied to games has demonstrated that novel, game-winning strategies can be developed that, if applied to a contested military environment, might give an edge over an adversary. For the US Navy, uncrewed systems such as the Sea Hunter - a robotic warship - are being deployed with the 4th Fleet to demonstrate their potential.
As far as autonomous weapons are concerned, the official UK position is that there will always be a person in (or at least on - monitoring and able to intervene in) the loop making any life-or-death decisions. However, this does rather ignore the defensive uses of AI in many integrated systems, where any human operator monitoring and intervening would render the system too slow to be effective. It is clearly counterproductive to have human response times involved in processes that unfold at near-light speed. This inevitably means that autonomy is likely to creep in through defensive systems, whatever the stated position of any government. And since there is no hard-and-fast definition of a defensive weapon, and a powerful incentive not to be left behind, the spread of autonomous systems is most unlikely to stop there.
As entities that claim to be values-based organisations, how are militaries responding to these challenges? Specifically, how are traditional organisational values (and standards) being considered in light of the changes that are taking place?
The military's values-based foundation
The values-based foundation of all western militaries represents a virtue ethics approach to military training and education. Like many professional militaries, the British armed forces invest a huge amount of effort in ensuring that those they promote into positions of authority have the character to rise to the challenge of their new position. For the British Army, the identified virtues (or values, as they are referred to here) are Courage, Discipline, Respect for Others, Integrity, Loyalty and Selfless Commitment. This virtue ethics approach concentrates on the importance of character and how the right types of behaviour can be nurtured, with the institutional expression of those virtues drilled into each person so that they are clear what virtues the organisation wishes to see manifested in its people. The idea is that these are internalised through conscious training and unconscious institutional diffusion: ‘This is what we should do’. The more we do the right thing, the more it becomes habit, and therefore part of the stable disposition that informs one’s character. The hope is that, by ‘fostering such behaviours, and promoting those who consistently demonstrate them, people will be able to do the right thing when the situation demands it’.
But what do those values look like when something fundamental changes? Back in 2017, Dan Maurer presented a paper at the Royal United Services Institute in which he asked “Does K-2SO deserve a medal?”
An example droid scenario
Imagine a droid that, while in no way looking human, is assigned to an infantry squad. Let’s say this one is named “J.O.E.” for Joint Omni-oriented e-Soldier. Joe has two arms and two legs, and can keep up with Spec. Torres in a foot race, and can—if need be—qualify on a rifle as a marksman… Thanks to a generation of “Google.ai” improvements, Joe can translate languages, is well versed in local history and cultural studies of any area to which he is deployed, and is able to connect to this far future’s version of the Internet. Joe is a walking, talking library of military tactics and doctrine. Joe is programmed to engage with each soldier on a personal level, able to easily reference the pop culture of the time, and engage in casual conversation… Aside from being made of ink-black metallic and polymer materials and standing a foot taller than everyone else, Joe is a trusted member of the squad. And Joe can tell a dirty joke.
What moral obligations do the rest of the squad owe Joe? “He’s” one of the team, so would they sacrifice themselves for “him”? Or is Joe simply a tool? If so, what stops them from all being considered as mere instruments? What does being part of the team actually mean? What does this mean for a values-based organisation?
The interpretation of values in changing circumstances
While the values themselves may not need to change, the way they are interpreted will of course adapt in light of changing circumstances - context is important, or you deny the ability to respond to a changing reality. That doesn’t mean that the values themselves are watered down. A healthy organisation needs to reflect on its own values and what they mean as part of an ongoing process, and this is an essential part of any claim to belong to a genuine profession.
Values will always need to be interpreted in context, and therefore they have to be far more than just a slogan on the wall. For example, ‘courage’ is a value (or virtue) that is supposedly easy to understand, but what courage looks like on a patrol in Afghanistan may be very different from the courage required by an administrator who wants to question the receipts submitted by a Commanding Officer, or by the Chief of the Defence Force when faced with a questionable direction from the Prime Minister (an argument made clearly by Whetham in the Brereton Report). Thus, the organisation at every level needs to be party to an ongoing discussion about its values and what they mean as things inevitably change.
Ensuring new technology doesn't undermine the principles of war strategy
It is also important not to get carried away with the potential of new ways of doing things, particularly where technology appears to give you an edge over a competitor. As Whetham and Payne argued in ‘In Defence of Uncertainty’ back in 2019, there is some merit in not having all of the answers (or at least in not believing that you do). Military planners have long sought the Holy Grail of being able to see through the fog of war that introduces so much uncertainty and doubt into military decision making. Whether through the Revolution in Military Affairs or Network Centric Warfare, the tantalising promise of reducing friction has been extremely attractive.
AI appears to offer the same thing today – machine learning and the ability to crunch huge amounts of data can generate innovative answers, and do so quickly, so one can get inside an opponent’s OODA loop in a way that provides a distinct relative advantage. However, caution is not always a bad thing (see, for example, the chapter on patience in Military Virtues). Even setting aside the concerns about seeing inside the black box to identify inherent biases and potentially skewed assumptions in an AI’s calculations, one of the potential risks is not so much the generation of a wrong answer as the overwhelming temptation to act with confidence and certainty in response to ‘an answer’, even when caution is the appropriate course of action. This isn’t about the machines going rogue and making decisions themselves, but about people dialling down their critical faculties and acting on the information presented as if it were somehow infallible.
Staying true to proven systems and processes
Without good systems and processes in place to provide checks and balances, the answers generated by an AI system are likely to be taken as definitively correct, even when they are very clearly wrong from any objective position. This kind of machine bias is demonstrated every time we see someone drive into a river or get stuck in a narrow lane while following their SatNav. That is problematic enough on our roads, but it is a problem of a higher order when translated into the military domain. This misplaced confidence risks removing the psychological uncertainty that introduces caution into planning - the removal of which could be extremely dangerous in a conflict that ultimately has the potential to escalate into a nuclear exchange (see, for example, the Cuban missile crisis).
AI is often presented as a way of reducing costs by eliminating expensive people. If so, the people who remain will need to be trained and educated to a higher level to ensure that AI remains a tool rather than an accidental master. While the stakes may be different where it is stock price or market share at risk rather than lives, these are all lessons that should be borne in mind in the business world as well as the military one.