There are arguably fewer stars in the sky these days than there are articles about AI flying around, so it is fair to ask: "What on earth can this author add to the body of texts other than noise?" In short, I hope to provide a clear articulation of what matters when it comes to the security of AI. More specifically, ensuring it is Secure by Design so that it can be readily deployed and adapted in the National Security space.
Gamekeeper turned poacher
I am fortunate to have begun my career in the realm of software development, literally writing code for five years. The subsequent twenty years have been spent on the security side of the technology space, seeking to ensure the confidentiality, integrity and availability of organisations’ data. More recently I was asked to help modernise the UK MOD’s approach to cyber resilience and accreditation, which included drafting its Secure by Design Policy. Having published the policy in 2022, I now find myself as the Head of InfoSec at an AI company focused on defence, faced with having to implement Secure by Design AI-empowered capabilities. Karma perhaps, as I find myself gamekeeper turned poacher or, as the industry likes to say, forced to “eat one’s own dog food”. This hopefully gives some context, and credibility, to the approach advised in this article.
When asked "So what do you think of AI?", I have found myself leaning on an analogy from quantum mechanics of all places, explaining that AI is subject to what I have termed "Schrödinger’s Hype": the state of being both overhyped and underhyped at the same time. Apologies to proper students of quantum mechanics. I go on to explain that predictions of ‘Terminator-style’ destruction of humanity fall into the overhyped bucket, while the appreciation of just how much AI will impact our daily lives is starkly underestimated by most, even many of the journalists churning out the countless AI articles.
Take the smartphone, launched 10 years ago, as a good example of something which has become all-pervasive, and yet few at its launch would have predicted how much it would come to shape the overall economy and our daily lives of work and leisure. Even Steve Jobs’ seminal launch focused on the combination of phone, music player and internet communicator in one. The emphasis was still on the convenience of those three facets, not on its ultimate ability to help us navigate the world and enable e-commerce.
New Tech <> Old Problem
AI is a new and often novel technology, applied to long-intractable problems (whether developing next-gen Electronic Warfare or generating pictures of owls as generals!). Yet when you strip away the hype, the security industry can rely on its simple, long-standing principles and practices to ensure AI’s safe use. It is, after all, the “information” part of the “Information Security” business. To elevate it as anything else risks losing sight of the basic objectives: confidentiality, integrity and availability. Having said that, it will require security professionals to understand the world of AI in order to appreciate the threats that manifest from something which will provide so much opportunity. For a good insight into the wider types of considerations, the AI Red Team at Google gave a good talk at DEF CON recently.
Secure by Design
What is Secure by Design (SbD), and how does it factor into how AI is being developed for defence and wider government? In short, SbD is a risk-based approach to managing the cyber risk of capabilities delivered to defence (and government as a whole). Implied in that statement is the need for a greater understanding of the capability’s context, use and threats, which in turn enables meaningful management of its risks: they are either designed out, or the necessary controls are designed into what is ultimately delivered. Easier said than done. But at the heart of it all is one core aspect up front: accountability. Proper implementation of the appropriate roles and responsibilities, with the related governance, does more to ensure a programme succeeds than any of the other principles can achieve in their own right.
Practising Secure by Design
The best way to approach SbD is to make it a collective effort by hosting a workshop that involves all the key players in the capability delivery. Understanding and defining the context is first and foremost a programme 101 objective, not a security one. What have we been asked to deliver, and how? What data will our “widget” handle, who will work with it, and where will it be in use? On a recent programme, this step helped us determine a rapid and effective means of securing the capability. More importantly, it meant everyone not only understood the importance of the security work subsequently required, but was also able to contribute collectively to the delivery of the outcome.
By doing the above, we were able to rapidly develop and deploy a system enabling highly sensitive work utilising AI, and to do so without an onerous regime of security documentation. The focus was on ensuring that understanding was achieved first and foremost amongst the team. This aligns with recent thoughts from MOD’s Hugh Tatton Brown regarding the application of security principles.
Yes, there are plenty of new facets to consider regarding AI, but fundamentally there is the need to understand the context of how it will be trained and used, and to plan for the threats those scenarios present. Conduct a workshop to ensure the context and threats are understood and to answer “how do we deal with them?”, and much of the work needed for a Secure by Design capability will flow from there. You will already have smart and technically capable people in the room. Posing the basic but right questions should lead to the basic but necessary answers, and to a secure outcome.