Discussion about this post

Lane Odom:

Is this a moral objection against the idea of AI target nominations in general, or could there be a future where AI efficacy is proven greater than a 95th percentile human, in which case it’s ok?

For the record, I’m always opposed to it, for the same reasons I would always oppose replacing a human jury with an ensemble of AI models.

Michael M. Smith, LL.M.:

Some great stuff here, Brad!

I would go so far as to say that the target nomination/ID process is "easier" for autonomous systems, since it remains a binary determination: someone or something either is, or is not, a valid military objective. Some ambiguity can be injected around those taking a "direct part in hostilities," but even so, to your point, errors will be ex post facto knowable.

The real challenge comes when we allow systems to make qualitative proportionality analyses. Not only is the jurisprudence on this limited, but only the most egregious cases have been found to violate treaty and customary legal obligations (leaving aside the fact that the States that wage the most war are not party to Additional Protocol I or subject to ICC jurisdiction).

The "accountability gap" has entered the chat... 🤖

