Sorry, I’ve been remiss in keeping up my end of the discussion. Let’s just pretend I had to step away to refill my drink. It’s a fitting metaphor, because over the last several weeks there have been many autonomy- and AI-related events and commentaries in the news best discussed over beer. The White House released the Executive Order on AI: lots to like and lots to improve. The UK had its AI symposium: technocrats in charge again. Kissinger keeps raising the stakes by invoking nuclear arms control regimes. Michele Flournoy has reminded us that the military is using AI to accomplish military tasks but ultimately isn’t very good at it. T.X. Hammes is telling us to get over our squeamishness and just do it already. And we’ve seen some updates from Human Rights Watch, where you get incredibly insightful ideas from people like Bonnie Docherty alongside really terrible, inflammatory, and mostly inaccurate media pieces like this one.
I confess to finding it a little exhausting because in the public consciousness we’re not making great progress towards consensus on where to start an informed and circumspect discussion on AI-enabled military capability. The only consensus seems to be that any time the term “AI” is used in conjunction with anything, we should be afraid, very afraid. Fear can be useful, but it rarely generates good policy outcomes.
In your last post you wondered about geopolitical issues raised by AI-enabled autonomous systems. First, I think that everyone should check out the paper by one of our subscribers, Lt Gen (ret) Jack Shanahan, on the security dilemma triggered by AI. That is a great place to start, and maybe we can get him to do a quick piece for us if we ask nicely. But before I dig too far into that I want to relate an observation from the past couple weeks.
One of the reasons I didn’t publish this post on time last week is that I’ve been sitting on several AI-related panels over the last couple of weeks. The topics included the ethics of developing and employing autonomous weapons, the dangers posed by generative AI, and “AI and Truth.” The funny thing about these panels is that I ended up saying a few things over and over again just to try to frame the discussions. In the spirit of trying to frame our discussion on AI, autonomy, and geopolitics, I’ll repeat them here. You may be thinking “yeah, yeah, tell me something new,” but a common set of assumptions may help us pour some water on the fires of doom.
So, here are three assumptions that I think must be made to have a productive conversation about autonomous weapon systems (AWS). Maybe we need more, but let’s start with these. First, war is bad. It’s killing and destroying for political purposes, and that is not something humanity should be doing. Wow, that was really insightful! Most rational people know and accept that. But discussions of AWS often drift toward constraining a new weapon simply because it has increased killing potential: killing is bad, so we shouldn’t build things that make it easier. If this were our only assumption, that might be okay. But it isn’t.
Our second assumption is that, despite war being universally bad, humans will continue to make war on each other, and most human societies face the potential for some kind of war, whether civil or state-on-state. Do I need to cite examples?
Let’s ignore Russia/Ukraine and Israel/Palestine for the moment. Let’s look at the list of ongoing conflicts. How about the Myanmar civil war? 11,846 deaths this year. The Ethiopian civil war? Somewhere between 10,000 and 109,000 deaths last year. Sudan? 11,000 deaths this year.
A war between major powers will be much worse. We can imagine what the numbers will be like if the U.S. and China start fighting in the Western Pacific. Estimates in a CSIS wargame suggest tens of thousands killed in the first few weeks. And that’s just the combatants. Unfortunately, a war between the U.S. and China no longer seems far-fetched. It now seems almost inevitable. But the idea that wars will occur in the future is not new or surprising. The likelihood is built into international law.
In the face of reality, international law assumes that war will occur despite being outlawed by Article 2(4) of the United Nations Charter. How do I know? Because of the existence of international humanitarian law (IHL), which governs the conduct of combatants in war and decides what is legal in war and what is not. There is a whole industry built around governing the conduct of combatants. This is done because governments know that war will occur, and we should probably try to limit the harm to what is necessary to win, that is, to restore a just peace.
If we assume that war is likely, possibly inevitable, what is to be done? What past wars have taught us is that losing a war is not good. In the best cases, losing means social upheaval. In the worst cases, genocide. Considering the significance of consequences like social upheaval, genocide, and everything in between, we are pushed to our third assumption: It is better to win a war than to lose it.
Let’s compare post-war Germany and Japan to post-war America and Europe. The societies that existed in Germany and Japan prior to World War II essentially no longer exist, while the Allies gained 70 years of economic dominance and prosperity. Disclaimer: I’m not bemoaning the loss of Nazi Germany and Imperial Japan here. Good riddance. I’m just noting that the result of losing that war was not the desired outcome for the losers.
On a smaller scale, look at what happened to the losing sides in Sudan, Rwanda, Myanmar, and Yemen to name a few. Losing is not good, therefore states and societies will do what is necessary within their means to win.
I can distill these three assumptions into a process that is easy to describe but really complex in the details: risk management. When preparing for the possibility of war, states and societies will decide how to act within their means to buy down the risk of failure by considering a trade-off between harm and benefit. In simplest terms, states will estimate the probability and severity of harm that an action might generate, and then weigh that estimate against the probability and significance of the benefit that might result from that action. I’ll symbolize this in a formula so I can show my kids that I do actually use math at work. Where H is harm, B is benefit, p is probability, se is severity, and si is significance: (H_p × H_se) : (B_p × B_si)
If the math comes out significantly in favor of the benefit side of this equation, particularly if the estimated severity of harm is high in any case, most rational actors will pursue that course of action: in this case, the development and employment of AWS.
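To make that comparison concrete, here is a minimal sketch in Python of the harm-versus-benefit calculation. Everything in it is an illustrative assumption on my part: the 0-to-1 probabilities, the 1-to-10 severity and significance scores, and the example numbers are placeholders, not anything a real planning staff would use.

```python
# A minimal sketch of the harm/benefit risk comparison described above.
# All names, scales, and numbers are illustrative assumptions:
# probabilities on a 0-1 scale, severity and significance on an arbitrary 1-10 scale.

def expected_harm(p_harm: float, severity: float) -> float:
    """Expected harm: probability of harm times its severity (H_p * H_se)."""
    return p_harm * severity


def expected_benefit(p_benefit: float, significance: float) -> float:
    """Expected benefit: probability of benefit times its significance (B_p * B_si)."""
    return p_benefit * significance


def favors_action(p_harm: float, severity: float,
                  p_benefit: float, significance: float) -> bool:
    """True if the benefit side of (H_p * H_se) : (B_p * B_si) dominates."""
    return expected_benefit(p_benefit, significance) > expected_harm(p_harm, severity)


if __name__ == "__main__":
    # Hypothetical inputs for illustration only.
    harm = expected_harm(p_harm=0.4, severity=7)                # 2.8
    benefit = expected_benefit(p_benefit=0.6, significance=8)   # 4.8
    print(f"Expected harm: {harm}, expected benefit: {benefit}")
    print(f"Pursue the course of action? {favors_action(0.4, 7, 0.6, 8)}")
```

The arithmetic is trivial; the hard part, as the next paragraph suggests, is that nobody yet has reliable estimates to feed into it.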
This process is occurring right now. The problem is that we don’t really know when or if the benefits from deploying AWS are going to be good enough to outweigh the potential harms. We think they will increase the speed, tempo, resilience, and flexibility of warfighting functions to the point where winning a conflict is much more likely. We also think that we are close to building systems that will outperform humans both tactically and ethically, or perhaps improve humans’ tactical and ethical performance. But that capability has not yet been demonstrated in any meaningful way, including in Ukraine.
So where does that leave us? Probably with a bunch of people wanting to write in and tell me all the things I didn’t mention. Fair enough. But anecdotally, in every instance where I’ve briefed these basic assumptions, the discussion seems more productive. This includes discussions with activists who would prefer a total ban. We often end up finding common ground, because the risk calculation above just won’t allow states to agree to a total ban. What states might agree to is minimizing unnecessary suffering through constraints and guardrails, which is really what IHL is all about anyway.
As always, thanks for reading. Lena and I would love for readers to throw ideas out in the comments or submit for a full post if you’ve really got a lot to say! Have a great week.
Thanks Brad. Today I submitted my 3,500 word piece on AI and geopolitics to the Italian geopolitics journal Limes. I hope to be able to share it once published. I would like to believe it will be a useful contribution to this dialogue.
I made a few points on LI about T.X. Hammes’ AWS article. While I agree with many aspects of it, I disagree strongly with his decision to drag DODD 3000.09 into his argument. A non sequitur as far as I’m concerned.
I would also like to hear Lena’s (and Heather Roff’s and Dave Barnes’s) views on it. While I understand his “we have a moral obligation to build AWS” argument, I’m still somewhat uncomfortable with that framing. You could just as easily counter-argue that we have a moral obligation not to kill fellow humans.