On November 30, 2023, the Israeli outlets +972 Magazine and Local Call reported on an AI decision-support tool the IDF currently employs, called 'the Gospel.' The investigation found that "the Israeli army's expanded authorization for bombing non-military targets, the loosening of constraints regarding expected civilian casualties, and the use of an artificial intelligence system to generate more potential targets than ever before, appear to have contributed to the destructive nature of the initial stages of Israel's current war on the Gaza Strip…"
According to The Guardian in December 2023, the Gospel system "significantly accelerated a lethal production line of targets that officials have compared to a 'factory.'"
Media outlets have reported different numbers for the pace of the Gospel's decision-making and target recommendations. The Guardian claims that before employing the Gospel, the IDF produced roughly 50 targets a year, and that with the Gospel it can now produce 100 targets a day. NPR, on the other hand, reported that the system could produce around 200 targets within 10-12 days (roughly 17-20 targets a day). NPR's more conservative estimate seems the more plausible of the two.
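For what it's worth, the gap between the two reports is easier to see once the figures are put on a common per-day basis. The short Python sketch below does nothing more than that back-of-the-envelope normalization; the only inputs are the numbers reported by The Guardian and NPR.

```python
# Back-of-the-envelope normalization of the publicly reported figures to a
# common targets-per-day rate. The inputs come from The Guardian and NPR as
# cited above; nothing else is implied about the system itself.

guardian_pre_gospel = 50 / 365            # ~50 targets a year before the Gospel
guardian_with_gospel = 100.0              # 100 targets a day, per The Guardian
npr_low, npr_high = 200 / 12, 200 / 10    # ~200 targets over 10-12 days, per NPR

print(f"Pre-Gospel (The Guardian):  ~{guardian_pre_gospel:.2f} targets/day")
print(f"With Gospel (The Guardian): ~{guardian_with_gospel:.0f} targets/day")
print(f"With Gospel (NPR):          ~{npr_low:.0f}-{npr_high:.0f} targets/day")
```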
According to NPR, the Gospel system is relatively new – the earliest mentions date back to 2020, when the system received a top innovation award. This is not surprising. The IDF has a reputation as a technologically advanced fighting force, so it makes sense that it would field relatively new systems in real time. AI tools like the Gospel can compress the decision cycle and provide significant data to targeting analysts. We can see a similar reliance on emerging tech in the Russia-Ukraine war, where some observers describe a race between the two sides to 'out-innovate' each other.
The Gospel system can identify static and moving targets. Based on data we know nothing about, it produces a recommendation that human intelligence and targeting analysts review to determine whether to authorize it for further review along the appropriate channels.
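To make that human-in-the-loop pattern concrete, here is a minimal, purely hypothetical Python sketch of this kind of review loop: a system emits a recommendation, and nothing moves forward unless a human analyst explicitly authorizes it. None of the names, fields, or scores here are drawn from the Gospel itself – those details are not public – the sketch only illustrates the general decision-support pattern described above.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical illustration of a human-in-the-loop decision-support pattern.
# Nothing here reflects the Gospel's actual interfaces or data, which are not
# public; it only shows the general shape: the machine recommends, the human decides.

class Decision(Enum):
    FORWARD_FOR_FURTHER_REVIEW = "forward_for_further_review"
    REJECT = "reject"

@dataclass
class Recommendation:
    recommendation_id: str
    summary: str        # whatever the system surfaces to the analyst
    model_score: float  # a confidence score, if one is even exposed

def analyst_review(rec: Recommendation, analyst_authorizes: bool) -> Decision:
    """The human analyst, not the system, makes the authorization call."""
    return (Decision.FORWARD_FOR_FURTHER_REVIEW
            if analyst_authorizes
            else Decision.REJECT)

# Example: a single recommendation passes through human review and is rejected.
rec = Recommendation("rec-001", "system-generated candidate", model_score=0.87)
print(analyst_review(rec, analyst_authorizes=False))  # Decision.REJECT
```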
Certainly, anyone who follows the military AI space will have a thousand questions about the inner workings of the Gospel system and its performance. But, frustratingly, there is a great deal about this system that we do not know and are not likely to know any time soon. We do not know what data the system uses or the quality of the other AI systems informing Gospel target recommendations; we do not know which strikes originated from the Gospel; and we do not know how accurate the system is.
There are other concerns typical of any AI system, namely explainability and automation bias, which are particularly acute in this case. The explainability problem stems from uncertainty regarding the data and decision protocols that led to a particular recommendation. The analyst evaluating the final product does not know whether the system made errors or whether biased or inaccurate data led to a flawed target recommendation. The inability to open the black box and examine the decision-making pathways is one of the central concerns with using AI in this way – even more so if the system is autonomous (which the Gospel is not).
There are also legitimate concerns about automation bias. This refers to over-reliance, sometimes called overtrust, on the system's output. If the Gospel has previously produced good targets, analysts may be more willing to rely on its output without holding the information to the highest degree of scrutiny. It is a phenomenon we are all familiar with. There is also the sheer volume of information the Gospel produces for human analysts to review. If we take NPR's estimate, the system can produce roughly 200 targets in 10 days; that is a lot of information to validate. It is easy to imagine analysts feeling overwhelmed (especially under the stress of conflict), a track record of accurate recommendations breeding overtrust in the output, and mistakes following from both. It seems imperative to have a protocol for handling information overload so that this workload is managed responsibly. It may be that the IDF has implemented such protocols – it is just not public information.
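To give a rough sense of why the reported pace matters for the humans doing the validating, here is a small, entirely hypothetical back-of-the-envelope calculation. Only the recommendation rates come from the reporting cited above; the review time, analyst count, and shift length are invented assumptions and say nothing about actual IDF staffing or procedure.

```python
# Hypothetical workload arithmetic. Only the recommendation rates come from
# reporting (NPR: ~200 targets per 10 days; The Guardian: ~100 per day); the
# review time, analyst count, and shift length are invented assumptions.

minutes_per_review = 45       # assumption: thorough validation takes ~45 minutes
analysts = 3                  # assumption: a small review cell
hours_per_analyst_day = 8     # assumption: nominal shift length
hours_available = analysts * hours_per_analyst_day

for source, recs_per_day in [("NPR (~200 per 10 days)", 200 / 10),
                             ("The Guardian (100 per day)", 100)]:
    hours_needed = recs_per_day * minutes_per_review / 60
    status = "backlog grows" if hours_needed > hours_available else "keeps pace"
    print(f"{source}: {hours_needed:.0f}h of review needed vs "
          f"{hours_available}h available -> {status}")
```

Under these made-up assumptions, NPR's reported pace is manageable while The Guardian's is not, which is exactly why the discrepancy in the numbers matters for the overtrust and overwhelm concerns above.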
For a while, news about the Gospel system flooded my social media (the algorithms know me well) with worrying headlines claiming the IDF is using a "data factory" to commit "mass assassinations." Hyperbole aside, it is a giant leap to blame the AI for the now over 20,000 Palestinian deaths. Perhaps the outrage is warranted – the issue is that we simply do not know. And I guess that is the point. While global discussions about 'responsible AI' and 'trustworthy AI' are happening within our communities, we must consider how quickly we jump to conclusions in both directions – assuming a system is either perfect or terrible. At the end of the day, humans are responsible for the output of the Gospel system. As new tools enter our toolbox for making data-driven decisions, we must not forget that humans are (for now) responsible, and we make loads of mistakes.