Some great stuff here, Brad!
I would go so far as to say that the target nomination/ID process is "easier" for autonomous systems since it remains a binary determination. Someone/something either is, or is not, a valid military objective. Some ambiguity can be injected around those taking a "direct part in hostilities", but even so, to your point, errors will be ex post facto knowable.
The real challenge comes when we allow systems to make qualitative proportionality analyses. The jurisprudence on this is limited, and only the most egregious cases have ever been found to violate treaty and customary legal obligations (leaving aside the fact that the States that wage the most war are not party to Additional Protocol I or subject to ICC jurisdiction).
The "accountability gap" has entered the chat... 🤖
Is this a moral objection to the idea of AI target nominations in general, or could there be a future where AI efficacy is proven greater than a 95th percentile human, in which case it’s ok?
For the record, I’m always opposed to it, for the same reasons I would always oppose replacing a human jury with an ensemble of AI models.
The moral objectors fall into what's known as the "deontological" camp. They believe that non-humans making use-of-force decisions and taking human life is inherently morally repugnant.
Your latter question, imagining a future in which machines outperform humans and usher in newfound efficiencies in the battlespace, describes the "consequentialist" camp. They believe that the ostensible operational improvements and better adherence to the Law of Armed Conflict outweigh any moral concerns.
There is an honest debate that humanity must/should have about this. But I suspect the frog is being boiled one degree at a time, and the water is getting ever hotter thanks to the war in Ukraine.
Thanks for jumping in! I don't think it's a moral objection. I think it's an engineering objection, but not in the way you suggest. I'm not concerned about the efficacy of culling through large datasets and producing an output that seems logical. Instead, I'm skeptical that any model can be engineered to "be human". And because it can't "be human", the model cannot understand human purpose sufficiently to dictate human action, or agent action in the service of human purpose and intent.

Even if we were to construct a superintelligence, I'm skeptical that, by definition, it could ever sufficiently model what it means to be human, so I don't believe it could generate or simulate sufficient perspective or empathy to decide why humans should do something. It might have an external view of "this is what's best for humans", but it would never have the skin in the game that's required to be human. What I mean by that is that it would not face the same risks and realities that humans do, so I'm skeptical it could ever share our perspective sufficiently.

Could it get close? Maybe. Could it be good enough? In a lot of cases, probably. But good enough isn't good enough. Humans have to be in control of their own destiny, for better or worse. Turning over target nomination is one of the early steps towards allowing machines to be in control of why things happen. I think we should stay away from that.

Slight caveat: if you want to go down the road of superdeterminism to challenge the idea of human control in the first place, I'm game.