Hi all,
As I mentioned in the last posting, here is the full version of the article Tiffany Saade and I penned last month in the Republic Journal.
I was recently at a Track II dialog where I briefed some of these ideas and I received some really good feedback and some new ideas to develop. While there, several of my colleagues asked me to hurry up and post the full article, so here it is, all 6000 words of it.
Be careful what you wish for…
Human-centered warfare: Optimizing human cognitive autonomy and avoiding machines on the loop.
Sep 2024
Bradley L. Boyd
Tiffany Saade
Machines are helping humans think on the modern battlefield. In Ukraine and Gaza, AI-enabled decision-support tools are helping humans fight wars at greater speeds and higher tempos. This increase comes from automating parts of human cognition. Instead of hundreds of humans spending thousands of hours reading and analyzing reports, AI-enabled decision support tools sift through large amounts of intelligence data, including intelligence reports, video, and signals information to identify and recommend targets for action. Governments and militaries consider these tools important technology that improves human performance in war while the humans remain firmly in control.
Initial reporting from the war in Gaza suggests that the reality of human control over AI-enabled decision support tools may be more complex than keeping a human on or in the loop of decisions. Human operators may have become so cognitively dependent on these tools that they no longer have meaningful control over what the machines are recommending or the resulting actions. Human behavior when teaming with AI-enabled decision support systems suggests some difficult questions about whether humans can retain their own cognitive autonomy in an environment of sophisticated cognitive automation. In this article we consider some of those questions and provide a framework for how to think about maintaining human control through cognitive autonomy in the context of AI-enabled warfare.
The relationship between human and machine
The idea of machines thinking for humans seemed far-fetched only two decades ago, but the last ten years saw dramatic improvements in compute power and new methods of machine learning that allowed machines to perform some cognitive tasks as well as or better than humans—and in most cases much faster. Under the catch-all term artificial intelligence (AI), humans are creating tools that give machines more ability to perform what were previously human cognitive functions: AI-enabled machines are designing and testing new drugs to improve human health;[1] they are beginning to do open-ended scientific research;[2] they are making art;[3] and they are starting to help fight wars.[4]
In warfare, AI-enabled systems are identifying targets, looking for patterns in intelligence data, and helping prioritize efforts. The integration of more AI-enabled systems has militaries and governments considering what role machines can and should play in the cognitive functions of war and how to ensure human oversight.
As AI-enabled machines take on more cognitive tasks from humans, should we be worried about becoming cognitively dependent on them? Instead, should our goal be to maintain as much cognitive autonomy as possible in all circumstances? If we accept some cognitive dependence on machines because of real benefits, how can we know when we have become “too” dependent? Is there a right amount of cognitive autonomy? If so, how do we build a machine that improves outcomes but prevents dependence and preserves cognitive autonomy?
We argue that optimizing AI-enabled decision support tools for human cognitive autonomy, within a specific context, would ensure that outputs are aligned with human-defined ends while allowing maximum benefit from machine automation.
Machines on the Loop
In a 2016 article, Paul Scharre suggested future human-machine teaming would take the form of a centaur: human brain enabled by machine power. Humans guide and control the machines that increasingly execute complex warfighting tasks at beyond human speed. In centaur warfare, the human retains the roles of essential operator, moral agent, and failsafe, leaving the machine to help where feasible and appropriate.[5] The metaphor of a centaur is meant to illustrate that the human-machine team is improved by the machine, but the human retains cognitive and moral control of what the team does.
In 2023, Sparrow and Henschke responded that the machines are becoming so good at cognitive tasks that humans will turn cognitive control—and, by default, immediate moral control—over to machines. Instead of a centaur, human-machine teams will be minotaurs.[6] Sparrow and Henschke believe that AI-enabled cognition tools will become so good that armies will have AI Generals as the brains making decisions that the human bodies execute. They believe this is starting to happen now and that AI technology is improving so rapidly that we are likely to be confronted with minotaurs very soon. Instead of humans on the loop, we will have machines on the loop.
There may be pressure someday to intentionally turn cognitive control of warfare over to machines, but that seems unlikely in the near term. Most nations are insistent that humans will retain control over the actions of machines, and that the role of machines is to help the humans perform better. Turning over cognition to machines would seem to violate the principle of meaningful human control gaining consensus in the international legal community.[7] It would also seem to violate the U.S. Department of Defense’s requirement that any employment of autonomous weapon systems display appropriate levels of human judgment.[8] The problem is that the way those systems are designed, and the context in which they operate, increases human cognitive dependence on them—which means humans may be unintentionally giving up meaningful human control of AI-enabled systems.
Cognitive Dependence in war
There are times when cognitive dependence on a machine seems to be all benefit and very little, if any, risk. Consider a calculator. It takes most of us a lot of cognition to do math. Having a calculator do it for us is a great benefit. Plenty of humans would not consider doing even moderately complex math without it. A calculator almost always performs at 100 percent accuracy, and we do not worry about it enslaving us, or worse, hallucinating answers. But the stakes of cognitive dependence can be much higher when machines help solve cognitive problems in complex contexts like war.
Militaries around the world are racing to integrate artificial intelligence into decision-making systems to help humans make better decisions faster in war. These systems are called AI-enabled decision support tools. With these tools, machines sift through massive amounts of intelligence data to identify, select, and recommend targets for action, sometimes to lethal effect. These decision support tools perform tasks that previously took hundreds of human analysts; they do it in a fraction of the time, and they do it well—so well that there is an incentive to create more powerful systems that can perform ever more sophisticated human cognitive tasks. Those future cognitive tasks could include analyzing and recommending options for military forces, directing the actions of human forces in combat, coordinating multiple lethal systems to destroy targets, and optimizing which forces get which assets to do their missions.
AI-enabled decision support tools, like the U.S. military’s Maven Smart System, have been used in Ukraine and in the Middle East to help identify, track, and prioritize targets.[9] Similar systems have been used to more dramatic effect in the Israel Defense Forces’ (IDF) operations in Gaza. Reporting on the use of these systems suggests some troubling emerging behavior among the human operators intended to oversee these AI-enabled tools.
Use of AI-enabled decision support tools in Gaza suggests that under certain circumstances, human operators can become overly dependent on AI-enabled systems, and that dependency can result in actions that may violate policy and perhaps international humanitarian law. Reports have centered primarily around two IDF systems, Gospel and Lavender. Both systems scrape massive data repositories to identify targets for military action. Gospel predicts which structures are used by Hamas, while Lavender predicts who is a Hamas operative and where they might be located. That information is then used to strike those targets as part of the larger military operation.
Both tools have increased the speed and volume of IDF targeting operations in Gaza. Prior to the introduction of AI-enabled decision support tools, the IDF targeting process required 20 IDF intelligence officers to produce 50-100 targets in 300 days; Gospel produced 200 targets in 10-12 days.[10] The Lavender system identified 37,000 suspected militants in the first few weeks of the conflict and is reportedly at least 90 percent accurate.[11]
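To put those reported figures in perspective, a rough back-of-the-envelope comparison—using only the numbers cited above and treating them as approximate—looks something like this:

```python
# Rough comparison of reported target-production rates; figures are the
# approximate numbers cited above and are illustrative only.
human_targets, human_days = (50 + 100) / 2, 300    # ~75 targets in ~300 days (20 officers)
gospel_targets, gospel_days = 200, (10 + 12) / 2   # ~200 targets in ~11 days

human_rate = human_targets / human_days            # ~0.25 targets per day
gospel_rate = gospel_targets / gospel_days         # ~18 targets per day

print(f"Human-only process: {human_rate:.2f} targets/day")
print(f"Gospel-assisted:    {gospel_rate:.1f} targets/day")
print(f"Speedup:            roughly {gospel_rate / human_rate:.0f}x")
```

Even if the reported numbers are only roughly right, that is a change of around two orders of magnitude in tempo—the kind of pressure on human cognition that concerns the rest of this article.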
IDF leadership encouraged operators to maximize use of the machines for multiple reasons: political imperatives demanded striking as many targets as fast as possible; leadership considered the machines statistically accurate enough to remove direct human oversight; and the machines removed human emotional subjectivity when identifying targets.[12] Some operators admitted to spending only 20 seconds on approving targets, essentially verifying that the target was a male, while others acknowledged that they preferred to just follow the machine because it was statistically accurate and “…the machine did it coldly. And that made it easier.”[13]
The battlefield results from use of AI-enabled decision support tools are difficult to isolate. The introduction of Gospel and Lavender in Gaza corresponds with greater speed and volume of offensive kinetic strikes when compared to other recent conflicts. By one report, the war in Gaza has seen more daily casualties than any conflict in the last 24 years.[14] We probably will not know whether the increased speed and volume was a positive result for operational objectives until after the war. However, by comparing IDF press releases to unofficial investigations and reports, we can infer that use of Gospel and Lavender in Gaza may not have complied with IDF policy or the Laws of War[15].
The IDF made it clear that reports about operator use of Gospel and Lavender are not consistent with IDF policy or the IDF’s commitment to the Laws of War.[16] The IDF suggested that the reports of operator misuse are false; however, it does not dispute reports of what the tools are designed to do. If we assume the open-source operator reports are accurate, and that IDF policy as described in press releases is accurate, then we can infer that human use of the tools has diverged from what was intended when they were developed.
It is important to understand that the tools themselves did not violate any policies or laws. The existence and design of the tools as reported do not appear to violate any requirements for distinction, precaution, or proportionality in the targeting process. The issue lies with how humans use and interact with the tools. In the case of Gospel and Lavender, operators became dependent on AI-enabled decision support tools for approving targets for strike because the context they were in encouraged them to do so.
In the presence of machine speed and accuracy, and under cognitive pressure to do more, the operators of these systems became dependent on the machine. The humans stopped thinking and let the machines think for them. Why?
Dependency from increased cognitive bias and decreased cognitive capacity
When human cognitive capacity is reduced, meaning the ability to think is disrupted by physical or mental stress and exertion, the human mind becomes more susceptible to cognitive biases and shortcuts that let it keep thinking and acting with less expenditure of energy. These biases and shortcuts can be very useful; however, they may lead to cognitive dependency on machines with detrimental downstream effects.
One way humans reduce their cognitive load is through a process called cognitive offloading: using physical acts to alter the processing requirements of information, such as tilting your head to see a sideways image or using a calculator as in our previous example.[17] In times of mental stress and reduced cognitive capacity, offloading cognition to a tool is a rational strategy. Sustained offloading, however, may result in what is essentially a long-term dependency, particularly if the results seem to be exactly what is desired.
Humans tend to seek out or overweight information that aligns with their beliefs while avoiding or underweighting evidence that contradicts them.[18] A recent study suggests that when psychologists use an AI-enabled decision support system they tend to trust and accept AI recommendations that align with their initial intuition, and further, those with greater expertise were more skeptical of results when the AI tool suggestion deviated from their professional judgment.[19]
When interacting with machines, particularly machines that perform some level of human cognitive task, humans can also bias towards automation if they perceive it to be particularly performant or if the results seem to be what is expected or desired.[20] Automation bias means that a human favors a machine’s output over their own. Automation bias can be particularly prevalent in times of reduced cognitive capacity.[21]
In particularly stressful contexts there could also be bias for action. When a human prefers action over inaction, even if inaction may mean higher probability of reaching desired outcomes, they are succumbing to action bias. Research on action bias suggests that there is a preference for action under high stress in professional sports, but data is not yet available on human interaction with decision support tools.[22] The pressure reported by IDF intelligence personnel would seem consistent with conditions that encourage action bias: an available tool that enables more action, reduced cognitive capacity in operators, demands and rewards for increased action, and consequences for perceived inaction.[23] It appears likely that IDF operators could not sustain the operational tempo and results demanded by their leadership if they retained more of the cognitive load in the targeting cycle. Under these conditions operators outsourced cognition they were supposed to retain for themselves and became dependent on the machines.
Under conditions of stress and reduced cognitive capacity there could come a point where the human is completely dependent on the machine for a cognitive process. This dependency may not be acceptable in all circumstances, particularly when authorizing lethal force. One way to deal with this would be to work to reduce the stress that is affecting cognitive capacity, but in warfare that may not be a practical goal. Instead, we may consider ways to reduce cognitive dependence on AI-enabled tools in warfare by optimizing human cognitive autonomy.
Imposing meaningful control
In the United Nations Convention on Certain Conventional Weapons (UN CCW), a consensus is forming around the need for humans to retain meaningful control when employing autonomous weapons. While defining meaningful human control (MHC) is difficult, the concept can help steer us towards knowing when humans have become overly dependent on AI-enabled systems. A general summation suggests that MHC ensures adherence to the laws of war (LoW) and international humanitarian law (IHL) as well as global moral consensus. These laws and norms focus on employing force in a way that is: 1) militarily necessary; 2) proportionate to the context; 3) distinguishes between combatants and noncombatants; and 4) takes proper precautions to avoid unauthorized harm. We could consider a human cognitively dependent on a decision support tool, and not applying meaningful human control, if the actions derived from the tool’s recommendations violate LoW and IHL without the human operator intending it. But LoW and IHL violations are not the only circumstances where cognitive dependence is problematic; the deeper issue is who or what is in control of the conditions and actions in a particular context.
Control is the capacity to exert influence over internal states, external states, and desired outcomes. There is evidence that control is a biological imperative because it is associated with organisms ensuring their own survival.[24] However, the amount of control we want can depend on the context or situation we are in. For example, a low-stakes situation (e.g., choosing what cereal to eat in the morning, especially if you are late for work) encourages you to accept less control because most of the available choices are reasonable, and while there may be a preference that varies from day to day, the risk of accepting a low-preference selection is minimal. Giving up control could allow you to eat faster and spend less energy on cognition. If the situation has high risk (e.g., targeting adversaries in a village full of noncombatants), you will likely prefer more control to mitigate risk, at greater cost in speed and cognition.
We may also consider that control of some parts of a situation may be more necessary than others. Control of defining the desired state may be critical, while control over the process to achieve the desired state may be a place to assume risk to gain speed. We may be willing to make this trade-off by assuming that we have control over validating that the output state matches the desired state. Problems arise when we cannot tell if the process is in service of the desired state, or if the output state resembles the desired state.
The aforementioned examples suggest that humans may have given up direct control of significant parts of the targeting process in Gaza for speed and cognitive efficiency, while still maintaining the illusion of control through human-on-the-loop requirements. Despite human intelligence personnel on the loop, significant portions of the targeting outcomes appear to be linked to the outputs coming from the AI-enabled decision support tools rather than to human cognition. The operators became cognitively dependent on the machines, and the outcomes became most closely tied to machine rather than human control.
Discussions on how humans can maintain control of automated and autonomous systems are long-standing. CCW meetings on MHC have considered ways that humans can maintain control over machines by ensuring: 1) the technology is predictable; 2) it is reliable; 3) it is transparent; and 4) the user has accurate information on what the tech does and what it is for.[25] These requirements are good steps towards ensuring MHC but they do not take into account human psychology and how humans interact with machines even when these conditions are present.
Human operators must be able to maintain control over the outcomes in war. To do that, they must manage their dependence on machines so that they retain sufficient cognitive autonomy to guide the war towards human-defined outcomes. We propose examining the trade-off between cognitive dependence and cognitive autonomy to better ensure human control of warfare when humans employ AI-enabled decision support tools.
Cognitive autonomy as a means for meaningful control
We define cognitive autonomy as the ability to perform the mental process of acquiring, storing, manipulating, and retrieving information in an independent and authentic way.[26] However, we do not consider cognitive autonomy an absolute. Instead, it exists on a spectrum. At either end of the spectrum are opposing states that are neither achievable nor desirable: we do not want complete cognitive autonomy, where human reason exists only in relation to itself, nor do we want complete dependence, where we no longer think at all. We want our cognition to sit somewhere on the spectrum in between, a position likely dictated by a desire for general global autonomy while we selectively sacrifice some autonomy for an acceptable benefit.
Global cognitive autonomy is what philosophers consider when they discuss distinct personhood and free will. Autonomy in this context refers to the states of a person and assumes that most adults who are not suffering from debilitating pathologies or oppression are autonomous.[27] We confine our use of the word autonomy to cognition in a specific, narrow context. As an example, philosopher John Christman describes a situation where a person can be an autonomous being with complete personhood but still lack control of their behavior in circumstances like drug addiction.[28] Similarly, we consider the human operator of AI-enabled decision support tools to be cognitively autonomous in a global sense even if, in the context of using the tool, they have become cognitively dependent and thus are no longer in control.
Loss of control in a narrow context does not necessarily mean that the situation is deteriorating. The machine could be doing a fine job. Instead, the nature of that control is changing. In a situation where a human and machine are both exerting influence, control of that situation leans toward being either human- or machine-defined, depending on which is exerting the greater influence. If a human operator relies on a machine to automate and amplify cognitive functions, the human becomes more dependent and control of the situation becomes more machine-defined. The opposite is true: if human cognition is the primary influence, control of the situation becomes more human-defined. This means that meaningful human control is at least partially a function of cognitive autonomy and could be achieved by ensuring that human operators of decision support tools maintain sufficient cognitive autonomy. The difficulty is in determining how much autonomy is sufficient in any given context.
Evaluating cognitive autonomy and dependence
Applying the idea of cognitive autonomy to human-machine interaction is fairly novel, but the concept has a history in developmental psychology, where it is used to study how humans seek to increase control over their environment as they age into adulthood. Developmental psychologists use a framework called cognitive autonomy and self-evaluation (CASE) to determine when children become cognitively autonomous from their parents and to assess whether persons with mental disabilities can function on their own.[29]
The CASE framework includes five categories to determine cognitive autonomy: a) making informed, independent decisions; b) voicing educated and appropriate opinions; c) weighing the influence of others on thinking; d) considering consequences; and e) self-evaluating practices.[30] These categories inform a questionnaire researchers use to determine how much cognitive autonomy a person has. Research showed that CASE is a viable tool for assessing cognitive autonomy in children and young adults.[31] The CASE framework provides a good starting point for creating a method to measure human cognitive autonomy in human-machine teams, but it needs to be tailored for human interaction with AI-enabled decision support tools.
We suggest a modified framework of six categories for evaluating cognitive autonomy and dependence of humans on a given AI-enabled decision support tool: a) cognitive capacity; b) conceiving a state of being; c) applying value; d) forecasting outcomes; e) understanding influence; and f) controlling choice architecture. We refer to it as the Cognitive Autonomy Variable (CAV) framework.
In the diagram below, a human operator’s cognitive autonomy or dependence is a function of six variables. The primary variable is how much cognitive capacity the operator has at any given moment. Greater cognitive capacity enables greater capacity in the other variables. For each of the other variables, the human is either more or less reliant on the machine for that function. The composite relationship of those functions suggests whether the human is becoming more dependent on, or more autonomous from, the decision support tool.
Figure 1
Cognitive Autonomy Variables:
Cognitive capacity: Not to be confused with a measure of intelligence, this variable considers the resources available for cognition and indicates whether a human can exercise the capabilities described by the other variables. Capacity can be affected by caloric energy, stress, sleep deprivation, and many other factors.
The cognitive biases discussed above all involve reduced cognitive capacity as a factor in their appearance and effect. War demands high levels of cognition under circumstances of significant stress. A human’s cognitive limits, as well as their susceptibility to stress, will indicate how likely they are to become more dependent on a decision support tool to offset capacity limitations.
Conceiving a state of being: This category captures human creativity and the ability to perceive relationships between information in order to imagine and describe a state that differs from that which is currently in place.
The heart of any war is the belief that the current state of being is so undesirable that lethal force on a national scale has become necessary. Every war, so far, is therefore a function of human conceptions that things should be a certain way. This is the heart of control. Whether this conception is machine- or human-defined will indicate how dependent humans have become on the tool. The ability of a human to decide what could be and what could not be is the beginning of control and cognitive autonomy. If a machine is deciding what could be and could not be for the human, the machine could be said to be in control and the human cognitively dependent.
Applying value: Value is where human cognition determines desires and preferences. When conceiving of new states of being, the human applies value to determine which state is more desired than another, what is acceptable and what is not, and how to prioritize and sequence events. Applying value allows human cognition to go from what could be to what should be.
Applying value to the conception of a desired state is closely associated with why humans do what they do or want what they want. In warfare the why, or the purpose, of the violence must be tightly controlled, well understood, and closely tied to a larger human-centered purpose. Our ability to hold on to the purpose behind a decision or a choice helps us ensure that a system is on track to accomplish the correct objective function we have defined and reasoned through. We might think that preserving cognition only over the end state is enough; we would know why that completed end state is important and why we decided on it in the first place. However, when a deviation in the process leads to a different end state or output, holding on to the purpose becomes critical; it is what allows us to steer away from a generated end state we do not desire and back toward the one we initially wanted.
Forecasting future outcomes: An ability to predict outcomes from decisions and values, forecasting is essential to making choices that match desired states. Without forecasting, there is only random choice.
Where machines are starting to excel is in their ability to consider far larger amounts of information than humans can in order to forecast future outcomes of actions. This makes it useful to employ machines for predictive analytics to down-select options in favor of those most likely to yield the desired end state. While humans could retain control over the conception of what should happen, cognitive autonomy is degraded if they cannot forecast an outcome to decide which action best reaches the desired state. In warfare the temptation will be to rely on the machine to predict how to get to a desired state. In many cases this may be fine, but humans are sacrificing some autonomy for that benefit.
Understanding influence: There are external and internal influences on cognition. Understanding the nature and ubiquity of influences in any given context allows humans to impose as much cognitive autonomy as they can by tempering or enabling influence, by accepting some influences over others, and by understanding (to the extent possible) how influences are affecting cognition.
In warfare, not only will there be neutral influences from the information environment, there will also be influences designed to sway decision-making. An adversary may attempt to deceive by confusing data collection and poisoning collected data, or a decision-support tool may attempt to influence its operator to meet a misaligned reward function. Much the way a commercial recommendation algorithm might be influenced as much by advertiser funding as by your personal movie preferences, a military decision-support tool will be balancing competing priorities from the chain of agents throughout its deployment cycle. Its recommendations may not always align with operator preferences in a real-time context because it is near impossible to anticipate every context prior to deployment. The human may think decision support tool recommendations are aligned with their operation because it says so on the package. However, it would be hard for the operator to know the entire chain of influences affecting the machine.
Controlling choice architecture: Choice architecture is the array of available choices.[32] It is the structuring of the decision environment to guide, constrain, and influence decisions. A choice cannot be made if it is not available, and the volume and types of choices available can shift behavior toward specific outcomes.[33]
One’s ownership of a system is ultimately reduced as automation takes over reasoning rather than strictly automating the process by which an action is executed. This becomes a concern especially when systems have control over choice architecture. If a system controls the amount and variety of choices presented to us, it can steer us towards choices we might not have made independently—often without our awareness. As we lose control over choice architecture, it becomes increasingly challenging to preserve the reasoning behind a decision or a choice we make, because our options are limited. In other words, we infer that there is some sort of positive correlation between choice architecture and cognitive autonomy, whereby a broader choice architecture requires greater cognition to make a choice.
The inherent irony in high-stakes decision-making situations is that while the pressure often reduces cognitive capacity, the gravity of the decisions actually demands a higher level of cognition. This creates a critical tension: on one hand, there's a need to make decisions quickly to respond effectively to the situation. On the other, there's a competing need to carefully consider each decision, given the potentially severe consequences of any mistakes.
In such scenarios, the choice architecture plays a pivotal role. Simplifying this architecture can potentially speed up decision-making but might also limit the depth of cognitive engagement, potentially leading to oversights or errors. This raises a significant question about the extent to which we should rely on automated systems or AI in such contexts, and about how much we allow the choice architecture of a given situation to be determined by automated systems. Machines can process vast amounts of data rapidly and can be designed to operate under predefined rules, which is advantageous in time-sensitive situations. However, their ability to appreciate the nuances of human values and the broader context in which decisions take place might be limited.
The dilemma then becomes whether to delegate critical, high-stakes decisions to algorithms that can act faster but may lack deep understanding, or to maintain human oversight, accepting slower decision-making processes in exchange for potentially more considered and contextually aware outcomes.
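To make the framework concrete, the sketch below shows one hypothetical way an operator assessment could be represented and combined into a single gauge. The class name, the 0-to-1 scoring, the capacity-weighted average, and the threshold are all our own illustrative assumptions; the CAV framework itself does not prescribe any particular formula.

```python
from dataclasses import dataclass, fields

@dataclass
class CAVAssessment:
    """Hypothetical 0-1 scores for one operator's interaction with a decision
    support tool (1.0 = the human fully performs this function, 0.0 = the
    machine does). Names mirror the six CAV variables described above."""
    cognitive_capacity: float       # resources available for cognition right now
    conceiving_state: float         # who defines the desired end state
    applying_value: float           # who decides what is preferred and acceptable
    forecasting_outcomes: float     # who predicts the consequences of options
    understanding_influence: float  # awareness of influences on recommendations
    choice_architecture: float      # who controls which options are even presented

    def autonomy_index(self) -> float:
        """Illustrative composite: cognitive capacity scales the average of the
        other five variables, reflecting that reduced capacity drags every
        other capability toward dependence."""
        others = [getattr(self, f.name) for f in fields(self)
                  if f.name != "cognitive_capacity"]
        return self.cognitive_capacity * sum(others) / len(others)

# Example: a fatigued operator who lets the tool frame, filter, and forecast.
operator = CAVAssessment(0.4, 0.7, 0.8, 0.3, 0.2, 0.1)
index = operator.autonomy_index()
if index < 0.5:  # a context-specific threshold set before deployment
    print(f"Autonomy index {index:.2f}: dependence exceeds the desired limit")
```

However the variables are ultimately combined, the point is that the gauge is set and interpreted by humans for a specific context rather than baked invisibly into the tool.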
Applying the CAV framework
We propose that decision support systems could be constructed to consider the CAV framework in a way that allows human operators to know how dependent they have become on a particular decision support tool, and how they could adjust the tool to increase cognitive autonomy when desired. We offer three principles to be considered for decision support tool design, as well as some suggestions for what practical implementation might look like; and we propose that the CAV framework could be used in testing and evaluation prior to deployment.
Principles
1. Design of any AI-enabled decision support tool should consider how humans will interact with machines that can replace some human cognition—particularly how cognitive bias will manifest when humans interact with machines in the expected context.
2. The ratio of cognitive autonomy to dependence should be recognizable and measurable: We do not intend this as a binary marker but rather as a gauge to inform operators when their interactions with the tool indicate that reliance on the tool is increasing and may exceed desired thresholds.
3. Cognitive autonomy should be optimizable by human control of the tool. The human operator must be able to modify system functions when cognitive dependence exceeds desired limits and more cognitive autonomy is optimal for the context.
Practically, these principles could be implemented through tracking systems in the machine that translate into awareness for the operator. For example, activity logs and performance metrics that compare present user behavior to historic behavior—as well as to the behavior of peers with similar systems—might suggest to an operator that they have started to rely on the tool much more than they did previously, or much more than their peers do. At this point the operator or a supervisor could evaluate whether they have become too dependent. The logs could also provide a legal record for future use, which may have some benefit but carries the risk of encouraging timidity on the battlefield.
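As a hypothetical illustration of what such a gauge might look like—the metric, function name, and thresholds here are our own assumptions, not a description of any existing system—a tool could compare an operator's recent review times against their own history and against their peers:

```python
from statistics import mean

def reliance_flags(recent_review_secs, own_history_secs, peer_history_secs,
                   drop_threshold=0.5):
    """Hypothetical gauge: flag an operator when their average time reviewing a
    recommendation falls well below their own baseline or that of peers using
    similar systems. The threshold and the metric itself are illustrative."""
    recent = mean(recent_review_secs)
    own_baseline = mean(own_history_secs)
    peer_baseline = mean(peer_history_secs)
    flags = []
    if recent < drop_threshold * own_baseline:
        flags.append(f"avg review time {recent:.0f}s is far below own baseline {own_baseline:.0f}s")
    if recent < drop_threshold * peer_baseline:
        flags.append(f"avg review time {recent:.0f}s is far below peer baseline {peer_baseline:.0f}s")
    return flags

# Example: an operator now averaging ~20 seconds per target against a ~90-second history.
for warning in reliance_flags([22, 18, 20], [95, 88, 90], [80, 100, 85]):
    print("CAV alert:", warning)
```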
The system could also communicate uncertainty clearly in ways that encourage reflection in the operator. This means something more complex than “90 percent performant”—a metric that does not communicate what the other ten percent is or what exactly performance at 90 percent means. Rather than encourage reflection and cognitive autonomy, that type of communication encourages complacency and automation bias, and thus cognitive dependence. In the case of target identification, uncertainty could be communicated by modality rather than as a composite: for example, confidence from full motion video analysis is 80 percent, from the electromagnetic spectrum 50 percent, and from text sources 60 percent. This at least makes the human fully aware of the limitations in the system and prompts reflection on what those numbers mean.
Further, the user interface could be constructed to encourage engagement and cooperation with the tool rather than input and output. If the tool were to communicate in questions rather than statements, that could change how the operator thinks about recommendations. So an AI-enabled decision support tool might recommend a target in the form of a question: “Is target #2 with identification confidence levels x, y, and z suitable for action?” It is a small change but could have important implications for how the operator views their interaction with the tool and their responsibilities.
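A minimal sketch combining both ideas—per-modality uncertainty and question framing—might look like the following; the function name and values are purely illustrative and do not reflect any existing interface.

```python
def frame_recommendation(target_id, modality_confidence):
    """Hypothetical prompt builder: surface confidence per collection modality
    rather than a single composite score, and phrase the recommendation as a
    question so the operator is asked to judge rather than merely approve."""
    breakdown = ", ".join(f"{source} {conf:.0%}"
                          for source, conf in modality_confidence.items())
    return (f"Is target #{target_id} suitable for action, given identification "
            f"confidence by source: {breakdown}?")

print(frame_recommendation(2, {
    "full motion video": 0.80,
    "electromagnetic spectrum": 0.50,
    "text sources": 0.60,
}))
```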
There are many other ways these tools could be designed to encourage cognitive autonomy when we need it and allow some dependence when acceptable. Developers could start considering concepts now, but further research must be done to develop this CAV framework into a usable tool—particularly if we want to make a useful testing and evaluation regime that can validate meaningful human control and appropriate human judgment in operational systems.
References
(2015). "What is Cognition?" Insights https://cambridgecognition.com/what-is-cognition/ 2024.
(2023). DoD Directive 3000.09 Autonomy in Weapons Systems. D. o. Defense.
(2024). Daily death rate in Gaza higher than any other major 21st Century conflict - Oxfam.
(2024). "OpenArt." Retrieved August, 2024, 2024, from https://openart.ai/home?gad_source=1&gclid=Cj0KCQjw28W2BhC7ARIsAPerrcIXMSxZtla5TGboU6QNmS1wCMFlhRyW0CPhxzNSU6Xz9lrm9zr61dcaAvWxEALw_wcB.
(IDF), I. D. F. (2024). The IDF’s Use of Data Technologies in Intelligence Processing. www.idf.il, Israel Defense Force.
Abraham, Y. (2024) ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. +972 Magazine
Bashkirova, A. K., Dario; (2024). "Confirmation bias in AI-assisted decision-making: AI triage recommendations congruent with expert judgments increase psychologist trust and recommednation acceptance." Computers in Human Behavior: Artificial Humans 2(1).
Beckert, T. E. (2007). "Cognitive Autonomy and Self-Evaluation in Adolescence: A Conceptual Investigation and Instrument Development." North American Journal of Psychology 9(3): 579-594.
Brumfiel, G. (2023) Israel is Using an AI System to Find Targets in Gaza. Experts Say it’s Just the Start. National Public Radio (NPR)
Christman, J. (2020). Autonomy in Moral and Political Philosophy. The Stanford Encyclopedia of Philosophy. E. N. Zalta.
Goodfriend, S. (2024) Why human agency is still central to Israel’s AI-powered warfare. +972 Magazine
Kaanders, P. S., P.; Folke, T.; Ortoleva, P.; DeMartino, B.; (2022). "Humans Actively Sample Evidence to Support Prior Beliefs." Elife 28(11).
Leotti, L. A. I., Sheena S.; Ochsner, Kevin N.; (2010). "Born to Choose: The Origins and Value of the Need for Control." Trends in Cognitive Science 14(10).
Lu, C. L., Cong; Lange, Robert Tjarko; Foerster, Jakob; Clune, Jeff; Ha, David (2024). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. Arxiv.
Risko, E. G., Sam J.; (2016). "Cognitive Offloading." Trends in Cognitive Science 20(9): 676-688.
Roff, H. M., Richard (2016). Meaningful Human Control, Artificial Intelligence, and Autonomous Weapons. Convention on Certain Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons. Geneva, Switzerland.
Sanger, D. E. (2024) In Ukraine, New American Technology Won The Day. Until It Was Overwhelmed. The New York Times
Scharre, P. (2016). "Centaur Warfighting: The False Choice of Humans vs. Automation." Temple International and Comparative Law Journal 30(1): 151-166.
Schein, M. B.-E. O. H. A. I. R. Y. K.-L. G. (2007). "Action Bias Among Elite Soccer Goalkeepers: The Case of Penalty Kicks." Journal of Economic Psychology 28: 606-621.
Skitka, L. J. M., Kathleen L.; Burdick, Mark (1999). "Does Automation Bias Decision-making?" International Journal of Human-Computer Studies 51(5).
Sparrow, R. J. H., Adam (2023). "Minotaurs, Not Centaurs: The Future of Manned-Unmanned." The U.S. Army War College Quarterly: Parameters 53(1).
Thaler, R. H. S., Cass R. (2008). Nudge: Improving Deciions About Health, Wealth, and Happiness. New Haven, CT, Yale University Press.
Vora, L. K., et al. (2023). "Artificial Intelligence in Pharmaceutical Technology and Drug Delivery Design." Pharmaceutics 15(7).
[1] Vora, L. K., et al. (2023). "Artificial Intelligence in Pharmaceutical Technology and Drug Delivery Design." Pharmaceutics 15(7).
[2] Lu, Chris; Lu, Cong; Lange, Robert Tjarko; Foerster, Jakob; Clune, Jeff; Ha, David (2024). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. arXiv.
[3] (2024). "OpenArt." Retrieved August, 2024, 2024, from https://openart.ai/home?gad_source=1&gclid=Cj0KCQjw28W2BhC7ARIsAPerrcIXMSxZtla5TGboU6QNmS1wCMFlhRyW0CPhxzNSU6Xz9lrm9zr61dcaAvWxEALw_wcB.
[4] Abraham, Y. (2024) ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. +972 Magazine
[5] Scharre, P. (2016). "Centaur Warfighting: The False Choice of Humans vs. Automation." Temple International and Comparative Law Journal 30(1): 151-166.
[6] Sparrow, Robert; Henschke, Adam (2023). "Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming." The U.S. Army War College Quarterly: Parameters 53(1).
[7] Roff, Heather M.; Moyes, Richard (2016). Meaningful Human Control, Artificial Intelligence, and Autonomous Weapons. Convention on Certain Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons. Geneva, Switzerland.
[8] U.S. Department of Defense (2023). DoD Directive 3000.09, Autonomy in Weapon Systems.
[9] Sanger, D. E. (2024) In Ukraine, New American Technology Won The Day. Until It Was Overwhelmed. The New York Times
[10] Brumfiel, G. (2023) Israel is Using an AI System to Find Targets in Gaza. Experts Say it’s Just the Start. National Public Radio (NPR)
[11] Abraham, Y. (2024) ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. +972 Magazine
[12] Ibid.
[13] Ibid.
[14] Oxfam (2024). Daily death rate in Gaza higher than any other major 21st Century conflict.
[15] Goodfriend, S. (2024) Why human agency is still central to Israel’s AI-powered warfare. +972 Magazine
[16] Israel Defense Forces (2024). The IDF’s Use of Data Technologies in Intelligence Processing. www.idf.il.
[17] Risko, Evan F.; Gilbert, Sam J. (2016). "Cognitive Offloading." Trends in Cognitive Sciences 20(9): 676-688.
[18] Kaanders, P.; Sepulveda, P.; Folke, T.; Ortoleva, P.; De Martino, B. (2022). "Humans Actively Sample Evidence to Support Prior Beliefs." eLife 11.
[19] Bashkirova, A.; Krpan, D. (2024). "Confirmation bias in AI-assisted decision-making: AI triage recommendations congruent with expert judgments increase psychologist trust and recommendation acceptance." Computers in Human Behavior: Artificial Humans 2(1).
[20] Skitka, Linda J.; Mosier, Kathleen L.; Burdick, Mark (1999). "Does Automation Bias Decision-making?" International Journal of Human-Computer Studies 51(5).
[21] Ibid.
[22] Bar-Eli, Michael; Azar, Ofer H.; Ritov, Ilana; Keidar-Levin, Yael; Schein, Galit (2007). "Action Bias Among Elite Soccer Goalkeepers: The Case of Penalty Kicks." Journal of Economic Psychology 28: 606-621.
[23] Abraham, Y. (2024) ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. +972 Magazine
[24] Leotti, Lauren A.; Iyengar, Sheena S.; Ochsner, Kevin N. (2010). "Born to Choose: The Origins and Value of the Need for Control." Trends in Cognitive Sciences 14(10).
[25] Roff, Heather M.; Moyes, Richard (2016). Meaningful Human Control, Artificial Intelligence, and Autonomous Weapons. Convention on Certain Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons. Geneva, Switzerland.
[26] (2015). "What is Cognition?" Insights https://cambridgecognition.com/what-is-cognition/ 2024.
[27] Christman, J. (2020). Autonomy in Moral and Political Philosophy. The Stanford Encyclopedia of Philosophy. E. N. Zalta.
[28] Ibid.
[29] Beckert, T. E. (2007). "Cognitive Autonomy and Self-Evaluation in Adolescence: A Conceptual Investigation and Instrument Development." North American Journal of Psychology 9(3): 579-594.
[30] Ibid.
[31] Ibid.
[32] Thaler, Richard H.; Sunstein, Cass R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
[33] Ibid.