Happy New Year, KRCP! We have quite a few new subscribers, so I thought it would be a good idea to start the New Year with a quick post about us and what we do here. KRCP aims to have an unfiltered conversation-style discussion about all things AWS-related. We always welcome new voices, especially if you want to respond to one of our posts and tell us why we’re wrong (or right). The goal is a friendly, stimulating conversation.
You may have noticed I’ve been pretty quiet on KRCP lately. Thanks to Brad for doing a lot of the heavy lifting. 2024 was a hectic year for my family as we had multiple trans-Atlantic moves and welcomed a baby girl in early October. So, I have been in the throes of newborn life. But I am ready to shake the intellectual cobwebs off and share a bit about what I’ve been working on lately.
When I’m not helping a baby learn to laugh, I’ve been working on a book chapter about national security governance and military AI. As many of you know, my research mainly concerns human control questions (policy, legal, operational) of AI-enabled autonomous weapons. Don’t worry – I'll discuss that again soon. Still, I was asked to write this chapter on an interesting topic, and I thought trying something new might be fun.
So, I’ve been thinking through issues of AI governance, why national security is so often excluded from global governance initiatives and the limits of international law in the governance discussion. Let’s get into it.
There is no shortage of calls for AI governance. We have seen initiatives at the international and national levels to identify and regulate the uncertainties and risks related to AI. These initiatives often point to the lack of accountability and transparency in AI, the unequal distribution of its benefits, and consequential risks such as bias, surveillance, and disinformation. For a truly global perspective, check out the UN HLAB on AI’s final report, which lays out a roadmap for AI governance.
However, some experts, particularly at this global level, have noted the absence of military applications from many of these governance initiatives. The UN HLAB report does not substantively address defense or security applications of AI (aside from noting security concerns), despite some of the members’ expertise in military AI. And certainly, global efforts still call for a ban on AWS, with some state support in international forums.
There are obvious reasons for excluding military applications from governance initiatives. Bringing national security concerns and applications into scope would likely limit what these regulatory and policy initiatives could achieve. And experts calling for national security governance often do not engage with the many (many) reasons that traditional governance pathways cannot adequately govern the national security space.
A few reasons that come to my mind:
No international oversight. States would not want to subject a rapidly developing capability to external oversight and potential limitations. Particularly as AI can offer:
Competitive advantage in combat. AI-enabled systems can provide a qualitatively new capability and a strategic advantage over adversaries, particularly near-peer competitors.
Dual-use nature. Governance initiatives have struggled to delineate civilian from military applications, as these capabilities easily blur the lines. For this reason, efforts tend to focus on concrete civilian applications and leave most potential military uses out of scope.
Lack of global consensus. Governance initiatives for defense and security struggle because international consensus is nearly impossible to reach: legal obligations (or interpretations of those obligations) differ, ethical red lines and preferences differ, and security priorities and interests differ and shift rapidly in an evolving security landscape. In short, more factors work against a consensus on military AI than for one.
Geopolitical landscape. Related to the challenge of reaching global consensus, tensions between the US and China (among others) center in part on emerging technologies and AI-enabled systems, especially weapon systems. States locked in that kind of competition are unlikely to accept (or seek out) external governance measures.
Enforcement capacity. Even if governance frameworks were established, ensuring compliance and enforcing security measures would be challenging. If a state has a security interest in breaking these measures, it is difficult to build trust that agreed-upon restrictions will be honored.
For these reasons and others, formal governance initiatives for military AI are unlikely to be implemented. That is not to say, however, that nothing will shape how military AI is developed and deployed.
A few years ago, I co-authored a paper with the brilliant Matthijs Maas, arguing that strategic partnerships centered on the development of military AI will be a strong pathway for governance. The article explores the concept of strategic technology partnerships and discusses the uses, practices, and distinct operational requirements involved in military AI strategic partnerships. We examined four cases: (1) the US-led AI Partnership for Defense (PfD); (2) the AUKUS partnership; (3) robust China-Russia cooperation on military AI; and (4) transatlantic (especially US-UK) cooperation. The article discusses the implications of these partnerships for broader AI governance stakeholders, standing military alliances like NATO, and the development and usage of military AI itself.
Others have recently made similar arguments. For example, Brianna Rosen made some great points in a WotR post. Like Rosen, Maas and I argue that international law alone is insufficient to regulate military AI or to function as a primary governance mechanism. To take one example of a legal distinction that carries different obligations, international law requires vastly different behaviors depending on whether actions occur within an armed conflict. Ultimately, international law cannot regulate across the full portfolio of use cases, even within the military realm alone.
Strategic partnerships will offer substantial guidance for states as they develop these capabilities. The importance and utility of interoperability and strategic integration create strong incentives to work alongside trusted partners, particularly for states outside the US. These partnerships support governance by defining and delineating interests and agendas, promoting cooperation among partners toward a shared goal, (potentially) curating industry and academia to support co-development, and establishing principles and frameworks for partner implementation and use.
These partnerships are an essential tool in the governance arsenal. While they don’t solve many of the issues laid out above, they are, in my opinion, the best tool for guiding development and implementation, with more potential than many global policy forums.
What do you think? Did I miss key challenges to military AI governance? Have I put too much promise in strategic partnerships?
Let me know in the comments. Preferably before I finish my book chapter. :)
There's one other quasi-governance pathway: Track II dialogues (with the potential to lead to Track 1.5 or even Track 1). When governance of national security-related AI is not feasible, for all the reasons you list, Track II offers bilateral and multilateral options for states to discuss crucial AI topics in settings that avoid the limitations inherent in other AI global governance forums.
These dialogues have their own limitations and, absent follow-on action and agreements at the Track 1 level, will not result in concrete national commitments to AI governance. Yet they offer candid conversations, an opportunity to establish a baseline understanding of concepts and terms, and a way to build a path toward state-to-state official dialogues.
Great piece, Lena. Two additional points on governance that I find are often overlooked/underappreciated are 1) what I'd call the "Sense test" and 2) what might be considered the "SIrUS test" (Superfluous Injury or Unnecessary Suffering), inspired by the 1868 St. Petersburg Declaration banning explosive anti-personnel rounds.
To 1), there are certain AI capabilities that would seem to have very limited (if any) military value, but raise a number of concerns about how AI systems function, whether they are biased, brittle, or otherwise compromised, etc. Facial recognition is one example: it is hard to see a compelling military need for it, yet it raises a host of ethical and legal concerns pretty quickly. Given that such capabilities don't make a lot of sense but raise a lot of hell, they (and others failing the "Sense test") could be candidates for discrete governance measures, even between adversarial powers. More generally, asking "Does this thing make sense?" can be a powerful way to limit the development of weapons that have very niche use cases but create particularly thorny ethical and legal challenges.
To 2), as AI capabilities increase, some design architectures and uses may come to be seen as inherently imposing superfluous injury and/or unnecessary suffering. For example, if AI-enabled systems used in anti-personnel roles become highly capable and can reliably aim for discrete parts of the human body, then designing them to aim, say, for the head may be deemed to violate the SIrUS test, by virtue of implying massively higher fatality rates than is the norm now (see, e.g., https://tile.loc.gov/storage-services/service/ll/llmlp/SIrUS-project/SIrUS-project.pdf, or my own article at https://www.tandfonline.com/doi/full/10.1080/15027570.2020.1849966). Grounding certain discrete pieces of governance in accepted norms of war offers a way to establish some governance, especially when one can show that a weapon does the job of neutralizing an enemy *and then goes beyond that*. The governance question is then not whether a weapon provides an advantage (let's assume it does), but whether it imposes more harm than necessary while conferring the same advantage. Some AI-enabled systems likely will, and these seem the sort of thing that could be governed more easily, especially if the governance takes the form of tempering the development of systems rather than seeking to ban them outright.
Just a couple of thoughts which I hope are helpful!