Killer Robot Cocktail Party has grown substantially in the last few weeks, so to those who are new - we are glad you’re here! I want to remind everyone that KRCP welcomes submissions from anyone who would like to respond to posts or offer a completely different topic (within reason) that should be on the table.
Today, we have our first guest post. Major John Tramazzo is an active-duty Army judge advocate and a military professor at the Stockton Center for International Law in Newport, Rhode Island, where he co-teaches a course on the Law of Armed Conflict.
John previously served as the Regimental Judge Advocate for the Army's 160th Special Operations Aviation Regiment (Airborne) at Fort Campbell, KY. He has also served as a legal advisor within the Joint Special Operations Command at Fort Bragg, NC, and the Army's 10th Mountain Division at Fort Drum, NY. He has deployed to Afghanistan and to Jordan multiple times and has traveled to the EUCOM and AFRICOM areas of responsibility for temporary duties.
He holds an LL.M. from the Judge Advocate General’s Legal Center and School in Charlottesville, VA; a J.D. from the University of Baltimore School of Law; an M.A. in Defense and Strategic Studies from the U.S. Naval War College; and a B.A. from the University of Richmond.
Without further ado… here is John’s response.
A few weeks ago, I posed a question to Lena during lunch in Newport. What is the difference between an AI system nominating a target for destruction and a non-Department of Defense analyst (e.g., an Intelligence Community detailee or a foreign analyst) nominating a target for destruction? In other words, does it matter who (or what) nominates a target, so long as the Target Engagement Authority vets, validates, and adjudicates the target in accordance with the law of armed conflict (LOAC), applicable policy, and commander’s intent?
I don’t think so.
But Brad disagrees. On February 19, he published this piece arguing that AI systems should never, ever nominate a target. In this post, I offer a different perspective. Specifically, I lay out three reasons why I support deliberate, AI-driven target nominations during armed conflict. I conclude with thoughts about automation bias, including a hypothesis that future warfighters will be more skeptical of AI than their predecessors.
Three Points
First, there is no obligation for a military commander to understand the specific methodology that a nominating source (human or machine) relied on to reach a factual conclusion about a potential target. Target nomination is merely a starting point, albeit an important one. What matters is that a military commander understands the logical and causal link between nominated targets and the overall objectives. As Brad pointed out, in the modern era, “there is so much data available for analysts to consider when nominating targets that sifting through it all might take so long that an adversary could act more quickly and decisively.” In my view, commanders need not perfectly understand an AI system’s functionality to rely on its recommendations. In fact, they might even consider using AI to develop a no-strike list (NSL); of course, analysts should vet and validate AI-driven NSLs to ensure no valid military objectives make the list.
Second, AI-nominated targets do not present legal issues, per se. The LOAC imposes an obligation on people to discriminate between military objectives and unlawful targets (e.g., civilians, civilian objects, protected things and places) and to ensure that attacks are conducted in accordance with the rules governing attacks (e.g., proportionality, precautions). Paragraph 6.5.9.3 of the DoD Law of War Manual sums this concept up nicely. When an AI system like “The Gospel” nominates a target, it does not guarantee that the proposed target is a military objective. The Gospel does not determine whether an attack may be expected to result in incidental harm that is excessive in relation to the concrete and direct military advantage expected to be gained. Only humans can reach those conclusions of law. This is a red line, but AI-driven target nominations do not cross it.
Third, I acknowledge that AI systems do not engage in reasoning. In my view, that doesn’t matter. Indeed, algorithms lack operational experience, moral intuition, purpose, compassion, and…humanity. But consider the original question. Suppose that a friendly foreign analyst were to nominate a target for destruction in the context of an armed conflict. The battle staff would receive the information and investigate it on behalf of the commander. The staff may never see or fully understand why the analyst nominated the target. For various reasons (e.g., political sensitivities, protection of sources and methods), the root reasoning may remain secret or opaque. Target nominators from the Intelligence Community or foreign governments can be “black boxes” in the same way AI systems are.
Thus, the staff seeks to answer questions like:
- Does the nominated target meet the higher commanders’ objectives, guidance, and intent?
- Is the nominated target consistent with the law of war?
- Is the desired effect on the nominated target consistent with the end state?
- Is the nominated target politically or culturally sensitive?
- What may the effect be on public opinion (enemy, friendly, and neutral)?
- What are the risks and likely consequences of collateral damage?
- Is it feasible to attack this nominated target? What is the risk?
- Is it feasible to attack the nominated target at this time?
- What are the consequences of not attacking the nominated target?
- May attacking the nominated target negatively affect friendly operations due to current or planned friendly exploitation of the target?
Depending on the answers, the staff may approach the commander with a recommendation to strike. The staff may also determine that the target is operationally unworthy of inclusion on a target list or that it presents legal issues.
My point is that target nomination is merely one factual step in a process loaded with human reasoning and legal judgments. If you replaced the words “friendly foreign analyst” above with “AI system,” the battle staff and the commander would behave in exactly the same manner. Staffs and commanders will (and must) always vet and validate targets, regardless of the nomination source. In my experience, commanders do not blindly trust anyone or anything. Even the most well-designed AI systems will not replace the contextual inquiries we always make or the precautions we always take.
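For readers who think in code, the shape of that argument can be sketched in a few lines. The sketch below is purely notional; the fields, checklist keys, and function names are my own invention and describe no real targeting system. It simply shows vetting logic that never consults the identity of the nominator.

```python
from dataclasses import dataclass
from enum import Enum


class NominationSource(Enum):
    """Who (or what) proposed the target; purely illustrative."""
    FRIENDLY_FOREIGN_ANALYST = "friendly foreign analyst"
    IC_DETAILEE = "intelligence community detailee"
    AI_SYSTEM = "AI system"


@dataclass
class TargetNomination:
    description: str
    source: NominationSource
    # The nominator's underlying reasoning may be opaque, whether because of
    # sources and methods, political sensitivities, or a model's internals.
    rationale_available: bool = False


def vet_and_validate(nomination: TargetNomination, staff_answers: dict) -> str:
    """Hypothetical staff checklist; `staff_answers` stands in for the human
    judgments captured in the question list above."""
    if not staff_answers.get("meets_commanders_intent"):
        return "reject: inconsistent with objectives, guidance, and intent"
    if not staff_answers.get("consistent_with_law_of_war"):
        return "reject: legal issue"
    if staff_answers.get("expected_incidental_harm_excessive"):
        return "reject: expected incidental harm excessive"
    if not staff_answers.get("feasible_to_attack_now"):
        return "defer: not feasible at this time"
    # Note: nomination.source is never consulted; only the staff's human
    # judgments about the nominated target drive the recommendation.
    return f"recommend '{nomination.description}' for the commander's engagement decision"
```

Whether `source` is a friendly foreign analyst or an AI system, the same human judgments about intent, law, proportionality, and feasibility produce the recommendation, and the engagement decision stays with the commander.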
Automation Bias and Multi-Generational Workplaces
That said, Brad made some compelling points. First, he noted that AI systems will nominate more targets than humans have the capacity to vet and validate. We must resist the temptation to skip the inherently human targeting functions (i.e., vetting, validating, and making the target engagement decision). Commanders who demonstrate laziness in this regard, or blind trust in AI, shouldn’t be permitted to serve as Target Engagement Authorities.
Second, he discussed automation bias and how humans tend to trust computers, even in the face of evidence that they should not. Excessive trust in computer systems is a real problem. We’ve all read the stories about drivers who follow navigation apps over collapsed bridges or into lakes. Humans over-rely on technology. In the context of Israeli operations in Gaza, he argued that The Gospel system may be leading the Israel Defense Forces into “a strategic hallucination cycle from which they can’t escape until it’s too late.”
However, as more digital natives enter the workforce, I anticipate that automation bias will dull. Hear me out. Unlike older generations in the workforce, Generation Z generally trusts technology (see here) but understands that AI systems essentially exploit their behavioral data. They have only lived in a world in which computers and smartphones constantly surveil them and force-feed them targeted advertisements. They use Siri and Alexa, but they do not trust them as omnipotent virtual goddesses.
Even Generation X understands the limits of powerful AI. Brad, for example, recognizes that his Facebook account feeds him baking videos as if he were a “70-year-old white lady from Alabama,” which he is not, I assure you. In my case (I’m a Millennial), social media has me wired: bourbon, saltwater fishing, CrossFit, military humor, black labs. I am impressed with the algorithmic accuracy, but I don’t do everything my iPhone suggests that I do. I make independent decisions about how to act and what to buy based on my schedule, my budget, and my circumstances.
Younger people will continue to engage with AI systems, of course (do they have a choice?), but they understand that Instagram and TikTok only encourage them to purchase cowboy boots because they recently searched “flights to Nashville” (or whatever). I predict that consumer awareness of, and skepticism about, AI-driven advertisements will translate into healthy skepticism of AI-driven target nominations. Here are some studies that I think support my intuition:
- The Rise of the Digital Native: How the Next Generation of Analysts and Technology Are Changing the Intelligence Landscape
- The Kaleidoscope: Young People’s Relationships with News
- Fake News Reaching Young People on Social Networks: Distrust Challenging Media Literacy
- Study on the Perception of Generation Z in Relation to Robotized Selection Processes
- Human Trust in Artificial Intelligence: Review of Empirical Research
- Rethinking Technological Acceptance in the Age of Emotional AI: Surveying Gen Z (Zoomer) Attitudes Toward Non-Conscious Data Collection
- Gen Z, Explained: The Art of Living in a Digital Age
The gist (from page 20 of this report), which I agree with, is this:
“In the future environment, end-users, who have grown up using AI-enabled, immersive, and mobile technologies their entire lives, will inherently trust those applications to accomplish assigned tasks. That said, these same users will very likely be skeptical of information they engage with in the digital world and will seek out both disconfirming and confirming information as they form opinions or make decisions.”
Of course, I could be wrong. But that’s my working hypothesis and my sincere hope.
Finally, I note that Brad concluded, “AI Nominating Targets is Bad.” While I appreciate his judgment and his clear stance on the issue, I think it’s important to recall Melvin Kranzberg’s First Law: “Technology is neither good nor bad; nor is it neutral.” Allowing AI to nominate targets is not inherently illegal, unethical, or immoral. However, it will certainly change the way we fight, and change is uncomfortable.
The thoughts and opinions expressed are those of the author and not necessarily those of the U.S. government, the U.S. Department of the Army, the U.S. Department of the Navy, or the U.S. Naval War College.
Thanks, John. Brad and I see eye-to-eye on almost everything related to AI, but I fall squarely in your camp on this one. I was waiting for someone to weigh in with a different opinion.
I have quite a bit of experience with targeting, as both a consumer (fighter aircraft, AOC) and a producer (Intel Group Commander, with the Group comprising two active-duty and several National Guard targeting squadrons...almost the entire Air Force's targeting expertise in one Group).
I agree with everything you say! I'm glad Brad quoted the Air Force targeting 'bible', AFDP 3-60, Targeting. However, I believe he didn't quite capture the point you so rightly highlight: it's not the nomination that's the issue; it's the leap from nomination to target approval and strike. The entire purpose of the AOC Target Effects Team (TET) is to review target nominations and propose a targeting list (JIPTL) to the C/JFACC for approval. That process will always involve humans reviewing target noms, with lawyers right there alongside the targeteers/TEA.
Time-sensitive targeting (TST) is a different animal, of course. But even in that case, the ROE and SPINS will be clear about the level of automation permitted before a target may be struck. I remain confident that AI will *not* be allowed to nominate and approve kinetic attacks without a human in the process. (Where that human resides is a legitimate question, one that leads directly to some of the concerns Brad highlighted. But that's for a different post.)
As you so rightly underscore, IHL/LOAC + ROE/SPINS still apply, regardless of the level of automation in the targeting cycle. Your list of questions is excellent. The only thing I can add is that all of those questions distill down to one: "What is the risk (including risk to mission and risk to force) of striking, or not striking, this nominated target?" That is followed by determining who accepts that risk, and at what level.
All that said, we need to start thinking now about the implications of using GenAI in the targeting cycle. Despite the incredible potential of LLMs, integrating GenAI into the targeting process will be fraught. The DoD will need to tread carefully when deciding how much GenAI is used and exactly what it's used for.
I'm many years removed from being Gen Z, but I'm with you on this one!
Great post!
(Fellow JAG here - Canadian)
What we're talking about, short of fully delegating use-of-force decision-making to autonomous systems, is human-machine teaming (HMT). Inherent in this concept, particularly as regards lethal autonomous weapons systems (LAWS), is the concept of meaningful human control (MHC).
I, like many others I'm sure, have come to loathe this amorphous term, but it's really just a collective exercise in trying to effectively describe the arcs of responsibility in HMT.
This gets us to Trust vs. Trustworthiness.
Trustworthiness and trust are intertwined yet distinct concepts with significant implications for LAWS development and deployment. While trustworthiness (as expressed through reliability) is vital if human operators are to rely on LAWS and thereby establish MHC, trust itself can be dangerous; a human who acts as nothing more than a rubber stamp removes the necessary meaningfulness from control. This tension is sure to be heightened by the ever-increasing speed of the battlespace.
I agree that AI could, and should, play an important role in deliberate targeting. Dynamic targeting will present a real challenge, however. I also agree that it will be interesting to see how future end-users interact with systems in HMT environments.