Greetings partygoers!
My colleague Lena and I just attended a really good Track 1.5 dialogue at the University of Edinburgh Law School in Scotland. The event was sponsored and run by the Paul Tsai China Center at Yale Law School. A quick shout out to that team, because they did a fantastic job of planning, organizing, and executing this dialogue. Absolutely top notch in every way.
Because of the nature of the dialogue I won’t be naming names or attributing thoughts or ideas to anyone, but I feel the need to capture my impressions because I realized some things perhaps worth sharing.
Scholars from Europe, the U.S., and China attended to discuss autonomous weapon systems and the role of military legal advisors in the context of International Humanitarian Law. Predictably we ended up all over the place, which is good. When you’re drinking crazy Chinese sorghum liquor in Diagon Alley with a bunch of law nerds, things can get surprisingly sporty.
So let me go over a couple of my “hmmmm” moments.
In my experience in discussions of this kind with this type of audience, the Western reaction to China’s policies on lethal autonomous weapons could be described as politely mystified. Unsurprisingly, I think some of this comes down to language, but it is also a function of how both sides understand and communicate about the technology that enables lethal autonomy.
China is adamant that no one should construct fully autonomous weapons, and there should be a legally binding instrument to prevent that. I’m not sure that China believes the U.S., and the West in general, agree.
Unsurprisingly the U.S. thinks that China is trying to get the U.S. to abandon all development of autonomous weapon systems while China secretly continues to develop them. Elsa Kania calls this “Strategic Ambiguity” and suggests that it is an intentional strategy to constrain the West.
There is probably some truth to this, maybe even a lot of truth, but I am coming around to thinking that perhaps our views on autonomous weapon regulation are pretty similar. We are just misunderstanding each other because we are communicating poorly.
The unofficial U.S. response to China’s position on fully autonomous weapons is often, “of course we wouldn’t build FULLY autonomous weapons, that’s crazy.” But the policy documents are not as clear. If you look at prominent policy documents on both sides, the Chinese definition of a fully autonomous weapon and the U.S. definition of an autonomous weapon are not incompatible.
The unofficial Chinese definition of a fully autonomous weapon:
“LAWS should include but not be limited to the following 5 basic characteristics. The first is lethality, which means sufficient payload and for means to be lethal. The second is autonomy, which means absence of human intervention and control during the entire process of executing a task. Thirdly, impossibility for termination, meaning that once started there is no way to terminate the device. Fourthly, indiscriminate effect, meaning that the device will execute the task of killing and maiming regardless of conditions, scenarios and targets. Fifthly evolution, meaning that through interaction with the environment the device can learn autonomously, expand its functions and capabilities in a way exceeding human expectations”
The U.S. definition of an autonomous weapon (my emphasis in italics):
A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.
Given a definition that broad, you could forgive the Chinese for thinking that the U.S. is being strategically ambiguous to constrain them.
During the dialogue, a question kept going through my mind: if the Chinese think a fully autonomous weapon is something that, by definition, humans cannot control, why do they think the U.S. and the West would build one? To answer that I started to think about how we communicate about the underlying technology.
To me, part of this problem comes down to the crazy stuff people say about artificial intelligence. The knowledge gap about artificial intelligence in policy circles, though decreasing, is a major obstacle to effective regulation of lethal autonomy.
While listening to my esteemed colleagues from China last week I became reasonably convinced that for them, artificial intelligence in the context of full autonomy is literally a non-human intelligence. It is not a probabilistic software model that classifies hot dogs on the internet. This is something that is not only intelligent, it is sentient. It has a subjective experience and a will to pursue its own ends.
In their minds, humans have no business arming a fully autonomous (sentient) machine for any purpose. This machine would be capable of deciding when, where, and to what purpose lethal force should be used. Unsurprisingly, I think we all agree that sounds terrible. But why do the Chinese suspect we might not agree with them?
When communicating policy options about lethal autonomy, the West does a terrible job of separating current AI technology like computer vision and natural language processing from aspirational AI like artificial general intelligence (AGI) and artificial superintelligence (ASI). They say things like “the AI decides this, or the AI thinks that.” There is little real understanding of the fact that most autonomous systems use the “AI” only for object classification or perhaps an interface (an LLM). The “decisions” are made by deterministic software: if an object meets criteria A, follow it, and so on. Policy people are not dumb, quite the opposite, but they’re fed this kind of language for many reasons.
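To make that concrete, here is a deliberately toy sketch (purely illustrative; the labels, thresholds, and function names are hypothetical and not drawn from any real system). The only learned component is the classifier that labels what the sensor sees; the “decision” is a few ordinary if-statements an engineer wrote and can audit.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "vehicle", as output by a vision model
    confidence: float # the classifier's probability estimate
    distance_m: float # from a rangefinder, not from the "AI"

def classify_object(sensor_frame) -> Detection:
    # Stand-in for the machine-learning component: a vision model that only
    # labels objects in a sensor frame. (Hypothetical; no real model shown.)
    ...

def decide_action(det: Detection) -> str:
    # The "decision" layer: plain, deterministic rules, reviewable line by
    # line. Nothing here "thinks" or "wants" anything.
    if det.label == "vehicle" and det.confidence > 0.9 and det.distance_m < 2000:
        return "track"   # follow the object and report; a human stays in the loop
    return "ignore"

print(decide_action(Detection(label="vehicle", confidence=0.95, distance_m=1500)))  # -> "track"
```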
Policy wonks are often heavily influenced by pundits and tech celebrities who give a warped sense of what the technology can do now and what it is about to do next. The week prior to this dialogue, Geoffrey Hinton received the Nobel Prize in physics for his work on machine learning and neural networks. Hinton is one of the biggest personalities in discussions of lethal autonomous weapons and apocalyptic AI, and is on record earlier this year stating that there is a 50% chance that an AI will try to “take over” in 5-20 years.
Sam Altman of OpenAI has made a few vague predictions for AGI, ranging from “soon-ish” to “end of the decade.” And of course, you can’t talk about AI hype without Elon Musk, who seems confident that AGI will arrive and be smarter than the smartest human by 2026.
These expert opinions are ingested by the policy community and repeated in dialogues about autonomous weapons, and this one was no exception. When the Chinese hear this I suspect they could be thinking “holy crap, can they build that already? Are they putting it in weapons?” To me it makes sense that they would immediately say “No fully autonomous weapons!” and then want the U.S. to commit to that more clearly in policy documents like DoDD 3000.09.
Ultimately I think the real solution is a legal and policy community that is better educated on what the technology can actually do and what is reasonable to expect in the next decade. This is happening, but I’m always surprised how apocalyptic AI predictions make their way into serious discussions. I think this partly explains why there is so little movement in the international community toward regulating autonomous weapons. However, it is getting better.
There has been a slow turn back to reality happening in the International Humanitarian Law community. When I read the latest by the ICRC, I can’t help but think “welcome back to where the development and operational community was five years ago.” Both the activist and policy communities are sorting through all the hype and starting to bring their policy recommendations back into reality, meaning using IHL to prevent unnecessary suffering rather than to prevent the development of apocalyptic weapons that will destroy humanity. Don’t get me wrong, I don’t want those either; I’m just saying they are not imminent, so let’s focus our efforts on what’s real.
The other major observation I had last week was seeing the difference in perspective between those who own the mission and those who own the law.
My impression of the corpus of the laws of armed conflict through the ages is that it was drafted by people who had fought wars and decided to codify the idea that you can win wars without destroying everything and everyone, that much of the violence of some wars is capricious and unnecessary. You can still fight to win without being a bunch of murderers. The laws of armed conflict always struck me as practical and reasonable.
As autonomy has arrived on the scene, it seems like much of the discussion around the laws of armed conflict is oriented more toward making it harder to wage war. Autonomy seems like it will make waging war easier and cheaper but no less destructive, probably more so. So I get the instinct to say we should avoid that. But if history has taught us anything about war, it’s that humans will always find a way to wage it, and that it is always better to win a war than to lose it. Nations will always do what they have to do to win.
If law seeks to deny nations the tools they think they need to win, nations will not adopt those laws. This is why the U.S. and China will not forgo autonomous weapons. The mission requires them. The owners of the mission will not give them up. The owners of the law will probably have to bend and make progress where they can. Which is a noble pursuit.
Thanks for reading. We’re working on our series about what it takes to actually field autonomous weapons. Hopefully our first guest post will be here soon to talk about object classification challenges.
Until then…
Very interesting that the Chinese understanding is basically "LAWS are Terminators". The conclusion that "Terminators are bad" makes a lot of sense, given that understanding. I find it odd that they would import that definition to discussions with US policy-makers though (or people following the US/ICRC definition), as our definition is pretty obviously *very* broad. It also seems odd to me that the Chinese view assumes indiscriminateness but then argues that LAWS have to be regulated/prohibited; if they're indiscriminate, then they *already are* prohibited by IHL. Knowing these nuances of other states' positions is critical to moving forward though!
To the need for clarity and mitigating silly hype, I could not agree more (for a shameless plug, I actually had something published on this in spring this year: https://link.springer.com/article/10.1007/s43681-024-00448-z). It often seems to me that the biggest obstacle to sensible regulation is tech bros and activists who want to weigh in but haven't carefully considered either the technologies themselves or the ways these are/will be used by militaries (or both). Grounding our debates in careful consideration of real warfighting scenarios, real practiced doctrines of state militaries, and real capabilities/limitations of systems is the only way we're going to have a hope of finding sensible regulations that enough parties will sign onto for them to be meaningful.