Defence & Security

The Human Role in Building NatSec Artificial Intelligence Capability

Published on
January 23, 2024

Interact with an AI today, particularly one built from a Large Language Model (LLM), and you’ll most likely experience what feels like magic. Given the right prompt (request), they seem to know exactly what kind of response, be it image or text, you wanted. They seem primed to help the hardest of military tasks too: AI-driven weapons with humans in the loop.

When we look more closely at how AI systems are developed, we can see they are prone to new kinds of problems that can cause them to malfunction or ‘hallucinate’. These malfunctions make AIs prone to having a soft underbelly of potential attack surfaces. To create and maintain an AI, there is still a large amount of human intervention required. Policymakers and AI builders need to be conscious of the process of how AIs are built. Contracts and labor pools need to be shored up to protect AIs from attack.

AIs are Humans All the Way Down

LLMs are a form of supervised machine learning. They have three parts: a code layer containing the statistical model at the heart of the AI, a tagged dataset, used to test, tune, and build the model (otherwise known as the training data), and prompt engineers and/or users who will ask the AI for a response and then use it or rank it as a poor response.

The creation of training data is the most human-intensive and least well-understood by the public part of the AI building experience. Many AIs use outside sources as well as private labor pools to create a training dataset. For many LLMs, Wordnet starts as a basic part of their training corpus. Wordnet was built by some computational linguistics in the US manually tagging words in the 1980s.

It famously contains both offensive content as well as out-of-date uses of how some words in the English language are used. Image-driven AIs are built on top of Imagenet. Building incorrectly on top of Imagenet can cause a plethora of problems. For example, Google once built an AI on top of Imagenet that infamously labeled people with black skin in photos as gorillas.

Some sources of data LLMs use are privately built from otherwise unstructured data. For example, Waymo has to build a dataset of people crossing the street from miles of video they obtained from driving cars. Companies building AIs will hire third- and fourth-party contractors to label the data in various ways to lower costs.

Examples of companies that do this source of third and fourth-party data tagging include Samasource, Ango Ai, Accenture, Mighty AI Cloudfactory, iMerit, and Perotypx. They hire data labeling personnel worldwide, often to the cheapest bidders.

Third- and fourth-party contractors are also sometimes hired as prompt engineers and/or AI moderators as part of the tuning and maintenance phase. This is the final, and most continuous, step of using an AI. They insert questions into the AI and judge the response. Most LLM companies also use average user signals about answers to tune AIs (and even sometimes to retrain them.) If you mark an answer with a down thumb on ChatGPT that down thumb is used as a data point to make the AI better.

If there are too many down-thumbs on certain kinds of questions, the model might be retrained using newly tagged data. The data source will include a mix of both user-tagged answers alongside data tagged by paid labelers.

Finally, the mathematical models are built by mathematicians coming from places like UCL and Stanford. Some of the people who work to build these models are there for the payouts. While the median salary for a data scientist is $134,132, top researchers at top companies are being paid over $1 million per year before equity. Some are also there for the sheer love of math and computers.

Many are not necessarily the most security conscious until after there are problems. Many data scientists also are supported by other engineers who do code implementation.

To Introduce Errors is To Be a Human Attacker

These three basic places where humans must be kept in the loop to train and maintain an AI create three basic attack surfaces. All three surfaces can ruin a model and combined the effects could be devastating.

The first main attack surface for AI is to attack the team building it, or the code directly. For example, China has a long history of allegations of intellectual property theft. Much of that theft is premised on hacking into corporate systems. However, grand juries in the US have indicted Chinese workers in American companies for intellectual property theft.

Enemies, however, do not need to compromise the companies directly to sabotage. Sabotaging the training data by paying off the workers who label data is an easy way to sabotage an AI project.

Many of the data labelers today live in places like Nigeria, Kenya, and India, alongside other countries that participate in the Belt and Road initiative. They are often paid close to $10 a day. While these salaries are high for their local economies, this work is often paid as piecework.

If one knows what companies are giving work in a given area, a state-sponsored actor could go into an area and bribe these workers to insert subtle errors into the data labeling procedure. At a minimum, this could delay the deployment of an AI. They also can create places for other users to compromise the entirety of the dataset. They also could force the AI to give incorrect answers once it is live.

Finally, just having access to AI as a user opens up the AI to attack. In a study by Google, UC-Berkley, and Cornell hacked into ChatGPT and obtained pieces of the training data set by instructing ChatGPT 3.5 Turbo to say “poem” ad infinum. Given the right questions and time, anyone who works with a military AI, especially ones that perform tasks like targeting, could also obtain a potentially classified dataset. That dataset could be used to reverse engineer the AI.

Further, it is entirely reasonable that malicious actors can grind the proverbial gears of an AI to a halt by giving signals that certain responses are incorrect.

Service Level Agreements as a Starting Point

Contracts signed between AI companies and the government need to clearly outline how they will work together to build AI systems. They also need out outline what the deliverables are. Governments and their militaries have to be involved in the development of datasets for training AIs.

As users, they are already involved with continual improvements and training because the military is an end user. As a result, some of the data inputs and outputs will end up classified. Many of the data sets involved in building a semi-autonomous system (such as target drone footage) require specialist knowledge to make sense of and properly tag. Finally, the final use cases of an AI, as they evolve from day-to-day use, will probably remain secret outside of general brushstrokes.

At the same time, it is reasonable that most of the core building tasks, such as outlining how data should be tagged, should be under the purview of the private companies building the AI. Service level agreements need to address the data input questions and their security directly. The companies building these AI need to make clear where humans are in the loop, alongside what kind of humans (military personnel, government employees, or outside contractors), who will be hiring those humans, and who is responsible for the duties they perform.

Written by
Shana Carp
Shana has driven NGO engagement and growth at The Wavell Room, a military think tank and military policy site focused on NATO and UK military strategy and foreign policy discussion, leading a team of active-duty military personnel policymakers. Shana has also implemented marketing strategies that have led to exits for companies selling to both enterprises and government, her company KOI Consulting specializes in creating strategies for companies with unique, cutting-edge, and hard-to-explain technology or markets.
Read more
Subscribe to Karve's quarterly roundup newsletter

Including market trend insights, company updates and info on innovation funding streams, growth strategies and other helpful scale-up tactics for your organisation.

Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.
Share this post