"Hey Siri – stop the war"
As the information war enters a dangerous new phase, the UK is outnumbered, outgunned, and out of time. We need a Secretary of State for Resilience to coordinate our response to the pervasive threat posed by information disorder.
_________________________________________________________________________
If, as you scrolled through YouTube, Instagram or TikTok, you were served a video in which a Russian organisation called 715 Team was crowdsourcing financial support for the invasion of Ukraine, what would you do about it?
What could you do? Call the police?
If you’re in the UK, then this content may or may not represent a breach of the Online Safety Act by the platform operator. If you’re in the EU, it might fall under the Digital Services Act. Will your emails to Ofcom or the European Commission elicit the robust response you’re after?
‘Punishable by a fine means legal for a price’
Ideally, the content would be taken down and the account removed. The relevant social media company would be issued with a fine.
In practice, you know to a moral certainty that the authorities, like the platform operator, would do very little. That’s assuming you knew how to report the offending content in the first place.
So you’d probably do what I did. Something that is, on reflection, deeply significant: nothing.
Welcome to the information war
The UK is engaged in – but is not fighting – an ongoing, undeclared information war.
Our adversary is bypassing traditional defence capabilities, targeting not just our critical infrastructure but something more fundamental: our critical faculties. In every home, on every phone, we are being exploited, bullied, scammed, coerced and deceived. We cannot ‘unsubscribe’ from this attack or hit ‘unfollow’.
While this information war might not pose an immediate threat to our physical safety in the same way that an armoured column would, the implications for national security are every bit as real and ultimately more consequential. The war is being fought over what we believe: who we trust, who we follow, what we’re willing to do or endure just so we can be left to get on with our lives.
How we fare is a question of our resilience. Resilience, however, can be almost as hard to define as it is to defend.
Rethinking resilience
For decades, Britain has conceived of national resilience primarily in narrow, mostly physical terms: the ability of infrastructure and services to withstand or rapidly recover from shocks.
The UK government’s formal apparatus is structured accordingly: witness the little-known, all-but-invisible Resilience Directorate within the Cabinet Office, the Resilience Framework, and the National Risk Register.
Formal resilience efforts are almost exclusively focused on ensuring the continuity of essential systems and services in the face of civil emergencies like terrorist incidents, cyber attacks and natural disasters.
This understanding of resilience is not wrong. It’s just woefully incomplete and dangerously narrow. Authorities focus on these difficult but well-defined problems at the expense of more ambiguous, more significant issues.
Simultaneously, in the context of the general conversation around defence and national security, the definition of resilience is too broad to be useful. If a threat isn’t very clearly a matter for MoD, GCHQ, MI5, MI6, the NCA or the FCDO, it goes in the ‘resilience’ bucket.
Defining resilience matters because when we lose it — when citizens do not trust their government, institutions or each other — even the most physically or cryptographically resilient infrastructure becomes vulnerable. More importantly, the free, open society that this infrastructure exists to support will have ceased to exist.
This deeper understanding of resilience as trust-dependent has profound implications for national security policy. Information integrity – the ability of our information environment to support public debate based on accurate, consistent, reliable information – emerges as a critical strategic concern as vital to UK sovereignty as our airspace or territorial waters.
The evolving information battlefield
The character of war in the information space is changing even faster and more profoundly than it’s changing on land, at sea and in the air. In terms of pace, scale and sophistication, it’s accelerating at machine speed.
State-sponsored companies have established elaborate networks of inauthentic accounts (bots) that spread misinformation across and between every social media platform. Outfits like the Social Design Agency and Company Group Structura LLC offer, in effect, disinformation-as-a-service, augmenting automated social media campaigns with custom websites designed to pass as legitimate government portals and mainstream news sources.
This is not new. This is not news. In fact, these methods are starting to seem almost analogue in the AI era. The character of information warfare is changing, in part, because its nature is also changing. Until now, it’s been a matter of people using machines to try to influence other people. Today, it’s machine-on-machine. Humans are merely the instigators and the victims. Relatively straightforward misinformation campaigns on social media platforms have morphed into something far more sophisticated and insidious.
Today, thanks to undervalued and underfunded organisations like the American Sunlight Project, we’re beginning to understand how Large Language Models are being used by the likes of the Pravda network to generate and publish misleading content at scale.
This content isn’t designed to fool you. It’s not even designed to be seen by you. It’s hosted on global networks of websites that aren’t intended for human consumption. These videos and articles are optimised to be read and ingested by search engine algorithms and AI data scrapers.
This is LLM Grooming. It affects the results of your Google searches. It influences the AI summaries you get when you ask a question online. It skews the information you get from the optional-not-optional AI chatbot you see whenever you open WhatsApp.
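To see how mechanical this is – and how mechanically it could be countered – consider the tell-tale signature of a grooming network: the same article, lightly reworded, republished across dozens of supposedly independent domains. The sketch below is purely illustrative (the domain names, article snippets and similarity threshold are all hypothetical), but it shows that detecting this signature is not exotic work:

```python
# Purely illustrative sketch: one signature of an LLM-grooming network is the
# same article republished, lightly edited, across many "independent" domains.
# We flag suspiciously similar article pairs using word-shingle overlap.
# All domain names, texts and thresholds below are hypothetical.

from itertools import combinations

def shingles(text, n=5):
    """Break text into overlapping n-word sequences for fuzzy comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity: the share of shingles two articles have in common."""
    return len(a & b) / len(a | b) if a | b else 0.0

# In practice these would come from a crawler; here they are stub examples.
articles = {
    "daily-truth.example":  "kyiv regime collapses as western support evaporates overnight",
    "free-news.example":    "kyiv regime collapses as western support evaporates say sources",
    "cycling-club.example": "local council approves new cycle lanes after public consultation",
}

fingerprints = {domain: shingles(text) for domain, text in articles.items()}

# Flag any pair of domains publishing near-identical content.
for (d1, f1), (d2, f2) in combinations(fingerprints.items(), 2):
    score = jaccard(f1, f2)
    if score > 0.4:  # hypothetical threshold
        print(f"possible coordinated content: {d1} <-> {d2} ({score:.0%} overlap)")
```

The systems used by researchers tracking the Pravda network are vastly more sophisticated, but they rest on the same principle: coordination leaves statistical fingerprints.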
The UK's current regulatory framework for dealing with harmful and illegal content is centred on the Online Safety Act (OSA). As a piece of legislation, it’s as typical of the UK as the British bobby: sober, deliberative and – potentially – devastatingly competent. Thanks to the OSA, we now have 17 clear, concise definitions of harmful and illegal online content rooted in well-established, real-world legislation.
The regulator mandated with enforcing the OSA is Ofcom. It, too, is not unlike the British bobby: conspicuously unarmed.
In theory, Ofcom could turn X, YouTube, Facebook or TikTok off at the wall. In practice, the typical penalty for serving up harmful or illegal content will almost certainly be a financial one. This is why, when you or I encounter one of 715 Team’s many interlinked channels on YouTube, Instagram or TikTok, when we’re invited to join its Telegram channel, when we’re directed to a page where we can donate cash to the Russian war effort, we do nothing.
A defence in disarray
This learned helplessness stems from the fact that Britain’s institutional response to the threat posed by information disorder remains fragmented, uncoordinated and without leadership. Ofcom may have been handed responsibility for the OSA, but there’s a stark lack of political accountability.
So, yes – Ofcom has a mandate to regulate social media platforms and AI companies. And yes – Ofcom reports to the Secretary of State for Culture, Media and Sport. But this arrangement, whereby a crucial component of national security falls under the remit of the ‘Minister of Fun’, reflects a fundamental misunderstanding of the true nature of the threat and of the stakes involved.
In the UK, no single organisation holds responsibility for identifying, attributing and reporting foreign influence and misinformation campaigns at scale. We have the necessary talent and technologies, but we have no equivalent of, for example, the French agency VIGINUM. Nor do we have the mechanisms needed to coordinate a response.
This scattered approach contrasts sharply with the coordinated strategies of Britain's adversaries.
Russia's information warfare efforts are integrated with conventional military operations, diplomatic initiatives, and economic pressure. China's approach to information control domestically, like its influence operations internationally, demonstrates similarly rigorous strategic integration.
The sovereignty paradox
When considering the issue of foreign influence operations, we should resist the temptation to frame them as ‘problems’ because ‘problems’ suggest ‘solutions’. The information war is an ongoing, insoluble predicament, and this perspective should inform our response.
This is because one straightforward ‘solution’ – adopting the authoritarian approach of countries like China, with its Great Firewall and comprehensive censorship apparatus – would undermine the very values we want to defend. To quote Peter Pomerantsev, the Soviet-born British journalist and author, a little out of context: “Here ain’t going to be here if you take that attitude; here is going to be there.”
Another straightforward ‘solution’ is to do, as a matter of policy, nothing. To deregulate, and then put our faith in a deeply perverted, deeply disingenuous definition of ‘free speech’, and trust that the truth will out.
This bind explains Britain's hesitancy to act decisively. Political leaders fear appearing to establish a Ministry of Truth that determines what information citizens can access. This legitimate concern, however, has produced political passivity rather than a sophisticated, proactive response. We can break this bind if we accept that our response, whatever it may be, will inevitably be partial and imperfect.
If nobody leads, nobody can follow: the case for a Secretary of State for Resilience
Britain needs a coherent, coordinated, up-to-date and genuinely strategic plan for securing its digital sovereignty, just as it has commissioned no shortage (to put it mildly) of strategic reviews dedicated to the defence and security of its physical sovereignty. It needs to produce this plan fast, then find the money and the political will to put it into action.
This requires institutional innovation. The distributed responsibilities currently spread across government departments must be consolidated under clear leadership with the authority and resources to coordinate an effective response.
A Secretary of State for Resilience, positioned at the cabinet level, could provide this leadership.
This role would not involve establishing a censorious authority that dictates truth, but rather creating conditions where truth can be discovered, discussed, and debated. Its responsibilities would include:
- Coordinating government activities relating to ‘conventional’ risks as defined by the National Risk Register, including the necessary collaboration with industrial and academic partners
- Integrating interdepartmental responses to threats and risks that fall outside or across the remits of MoD, the security services, and the regulator
- Holding social media platforms and AI developers meaningfully accountable for mis- and disinformation, AI misuse, and information disorder
- Developing early warning systems for emerging information threats. The necessary tools exist: the same technologies that are being used to generate and disseminate misinformation can be used to identify, attribute and report it
- Investing in education initiatives to build ‘cognitive resilience’ against manipulation, working with schools, universities and media organisations to engage and equip the public with the skills needed to navigate the ever-evolving, increasingly perilous information space
- Supporting independent research from organisations such as the Centre for Emerging Technology and Security into the detection and attribution of influence operations
The Secretary would need the technical expertise to understand evolving threats. They would also need the political will to make difficult, often imperfect judgement calls about appropriate interventions. This will invariably mean being accountable for decisions that balance the need for openness, transparency and free speech against the need to give explicit backing to Defence, Intelligence and law-enforcement organisations that are able to engage in the information war but currently lack ‘top cover’.
More importantly, the Secretary will need to be accountable for defining and communicating a clear strategic direction when it comes to restoring and safeguarding the country’s information integrity, as well as its institutional, infrastructural, industrial and social resilience.
And this, inevitably, leads to the most important aspect of the job. Not only will the Secretary of State need to be accountable, they will need to be visible.
‘…the police are the public and [...] the public are the police’
Let us imagine that there is, in fact, a coordinated, effective apparatus that addresses the issues of resilience and, in particular, information disorder. Let’s imagine that this system integrates the efforts of MoD, the Security Services, the police and the regulator, and then reports via the National Security Council or Joint Intelligence Committee to the Prime Minister.
Let’s also imagine that, for entirely legitimate operational reasons, the Government elects not to comment on the workings of this apparatus. Even if such a system exists, a vital issue remains unresolved: the maintenance of information integrity, like the maintenance of law and order, relies on trust. The reason British police are so effective despite being unarmed – the reason they are the envy of almost every other country – is that they operate with the consent and cooperation of the public. But the public can neither consent to nor cooperate with a system that it doesn’t know exists.
This, above all, is why we need a visible, publicly accountable figure such as a Secretary of State for Resilience. It’s a matter of public confidence, so that when we encounter harmful or illegal activity, or when we see an unambiguous, state-sponsored attempt to undermine our national security on our way to work – or to school – we’re more inclined to spend the half second it takes to hit the ‘report’ button.