There's one other quasi-governance pathway: Track II dialogues (with the potential to lead to Track 1.5 or even Track 1). When governance of national security-related AI is not feasible, for all the reasons you list, Track II offers bilateral and multilateral options for states to discuss crucial AI topics in settings that avoid the limitations inherent in other AI global governance forums.
These dialogues have their own limitations, and, absent follow-on action and agreements at the Track 1 level, they will not result in concrete national commitments to AI governance. Yet they offer candid conversations, an opportunity to establish a baseline understanding of concepts and terms, and a way to build a path towards state-to-state official dialogues.
This is a great point. Would you think of Track II (or 1.5) as falling under the umbrella of strategic partnership? It is a different forum and less formal than other partnerships I discuss in the paper with Maas. But I wonder if these dialogues don't fit under some version of a strategic partnership. I'll have to think about this some more, too.
Great piece, Lena. Two additional points on governance which I find are often overlooked or underappreciated are 1) what I'd call the "Sense test" and 2) what might be considered the "SIrUS test" (Superfluous Injury or Unnecessary Suffering), inspired by the 1868 St. Petersburg Declaration banning explosive anti-personnel rounds.
To 1), there are certain AI capabilities that would seem to have very limited (if any) military value, yet raise a number of concerns about how AI systems function, whether they are biased, brittle, or otherwise compromised, and so on. Facial recognition technologies are one example where it is hard to see a compelling military rationale, but a host of ethical and legal concerns arises quickly. Given that such capabilities don't make a lot of sense but raise a lot of hell, they (and others failing the "Sense test") could be candidates for discrete, reasonable governance measures, even between adversarial powers. More generally, asking "Does this thing make sense?" can be a powerful way to limit development of weapons that have very niche use cases but create particularly thorny ethical and legal challenges.
To 2), as AI capabilities increase, some design architectures and uses may come to be seen as inherently imposing superfluous injury and/or unnecessary suffering. For example, if AI-enabled systems used in anti-personnel roles become highly capable and can reliably aim for discrete parts of the human body, then designing them to aim, say, for the head may be deemed to violate the SIrUS test, in virtue of implying massively higher fatality rates than is the norm now (see, e.g., https://tile.loc.gov/storage-services/service/ll/llmlp/SIrUS-project/SIrUS-project.pdf, or my own article at https://www.tandfonline.com/doi/full/10.1080/15027570.2020.1849966). Grounding certain discrete pieces of governance in accepted norms of war offers a way to establish some governance, especially when one can show that a weapon does the job of neutralizing an enemy *and then goes beyond that*. The governance question is then not whether a weapon provides advantage (let's assume it does), but whether it imposes more harm than necessary while conferring the same advantage. Some AI-enabled systems likely will, and these seem the sort of thing that could be governed more easily, especially if the governance takes the form of tempering development of systems rather than seeking to ban them outright.
Just a couple of thoughts which I hope are helpful!
Thanks for these excellent comments, Nathan, and for the link to your paper; I will check it out.
In your mind, is governance the same thing as law? For your second point, the questions you pose strike me as more of a legal issue. When I think of governance, I see it as something broader that aims to impose boundary conditions on development and use, taking into account legal, ethical, political, and operational considerations. Does it mean something different to you? That would be interesting to know.
But I like these two tests you propose -- I will look more into it! Thanks!
Happy to help, Lena!
To your questions, I don't see governance as being simply (or only) legal regulation. "Governance" arguably includes any and all sets of norms that limit, circumscribe, guide, or otherwise shape end results or deployments. Professional codes of ethics, for example, are not legal rules, but they certainly place a hefty load of governance on medical professionals, engineers, military personnel, etc. The U.S. not being party to the Additional Protocols but having them almost entirely incorporated into U.S. doctrine is another good case, where we have shown a clear interest in the "governance" aspect of the treaty, even if historical or structural factors continue to prevent the U.S. from joining the legal regime outright.
However, though I wouldn't say that "governance" is only law, law certainly informs and creates a large amount of rather concrete governance. Many of the core laws (especially everything to do with "superfluous injury and unnecessary suffering") are moreover rooted in deeply set moral and martial norms, and focusing on the law can serve as a nice shorthand for all that historical development. The law having a precise formulation one can leaf back to also has pragmatic benefits, as it allows for a common(ish) language about the norms that parties can engage with.
Hope the added info is of some use!
Thanks for this, Nathan!