Data Ethics Club: Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images?#
What’s this?
This is a summary of the Wednesday 4th December Data Ethics Club discussion, where we spoke and wrote about the article Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images? by Anna Nadibaidze. The summary was written by Jessica Woodgate, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Huw Day helped with the final edit.
Article Summary#
Visuals of AI in the military domain have the potential to shape (mis)perceptions of AI in warfare; the article examines these visuals to analyse the role they play in constructing certain ideas about AI.
One of the major themes seen in visualisations of AI generally is anthropomorphism. Humanoid robots, used by campaigners and policymakers (e.g. the Stop Killer Robots campaign and regulation conferences), evoke associations with the Terminator and carry futuristic connotations. Futuristic connotations are problematic because they detract from the fact that the technologies already exist. Anthropomorphising AI is also problematic because it implies that the technologies will replace humans; however, AI in the military is used for a variety of purposes in which humans interact with machines. We should be thinking about the ethical issues that arise from these interactions, rather than from robots replacing humans. Images of “killer robots” invoke “uprising” narratives in which humans lose control of AI, or imply that humans have little influence over how AI impacts society. Downplaying the control humans have over AI masks the fact that it is humans who decide to develop or use AI, and that those decisions are made within political and social contexts.
Another major theme is the colour blue (associated with feelings of calm) combined with illustrations of code or other computational imagery. Images often depict human soldiers in an abstract space, against a blue background or running code, wearing virtual reality headsets. These kinds of images commonly feature in blogs or on academic book covers. Other images include uncrewed aerial vehicles (UAVs, or drones) either alone or alongside human soldiers. Visuals focus on combat rather than more mundane applications (e.g. admin), and abstract blue spaces risk distracting attention away from the technologies actually being used, such as AI integrated into data analysis or decision-support systems.
Abstract and futuristic images detract from the reality of the situation, which is that incorporating AI into the military is about changing the dynamics of human-machine interaction. The changes brought about by these dynamics imply a new scope of ethical, legal, and security implications for agency. Better images would include the humans behind the AI systems as well as the humans that might be affected by them, e.g. soldiers and civilians.
Discussion Summary#
What depictions have you seen of AI in the military domain? Have they been positive or negative?#
Some of us have seen mainly negative and concerning visualisations of AI, with lots of depictions evoking the Terminator, robots, or The Matrix. We’ve seen lurid media around AI in warfare. Others have seen generally positive depictions of AI in media, heralding advanced technologies as a sign of progress. Whether images are positive or negative may depend on who puts the images out. Positive depictions tend to be for the purpose of selling AI, as unrealistic sci-fi portrayals are more attractive and effective at drawing people in.
Working in defence, we are only allowed to use certain images without security clearance; it is more convoluted than taking a photo of a crowd of people on the street. We have stock images for our communications, which tend to be a mix of normal people in t-shirts doing things around a table or more graphic-design type images. Companies like Boston Dynamics don’t show their technologies in a military context, but media and social media sometimes take these images and place them in a militarised environment. Other military organisations apply designs similar to Boston Dynamics’ to military settings.
Three main themes we’ve seen used to depict AI in the military are abstract blue backgrounds, humanoid robots, and members of the military; we tend to see more machines in images than people. At academic conferences we have seen sci-fi images which cultivate abstract conceptualisations. The dominance of blue is striking to us; we would more naturally associate green with code, in homage to The Matrix. Blue isn’t necessarily calming, as the article proposes, but instead suggestive of cold, clean, crisp technology.
Futuristic blue backgrounds distance images and thinking about AI from reality, removing the human element. The abstractness elevates the people using the tools and shields them from criticism. The language used to talk about AI dissociates it from humanity, insofar as the people and families affected are not talked about.
In addition to abstract imagery and language, video gamification further dehumanises war. Gamification normalises killing and perpetuates unrealistic narratives, such as those in first-person shooters like Call of Duty or action movies with generic, one-dimensional, unrelatable bad guys (e.g. Top Gun). Using games to distance the military from reality reminds us of Ender’s Game, a novel depicting human detachment from the act of killing in which kids are used to kill people from far away through gamified interfaces. In the UK, the military use games to test and plan tactics, recreate warfare scenarios, and engage with the public through recruitment and advertising campaigns. The military have put out recruitment drives actively targeting video gamers and send soldiers to compete in eSports tournaments. The US army even has an eSports team with 11 members who compete full time, arguing that it builds morale, helps soldiers to decompress, and improves welfare.
Does popular culture play a role in how you think of weaponised AI?#
The weaponisation of AI has been visited many times in popular culture, such as in the TV programme Black Mirror. Black Mirror was really good at starting conversations and exploring scenarios. We wondered if Black Mirror was a bit biased towards AI going bad, but its stories tend to be more about the social consequences of technology going bad rather than bad in the I, Robot (film or book) sense. More classic storylines tend to associate Artificial General Intelligence (AGI) with “bad”, and efficient technology running without issues with “good” (think James Bond). Narratives that investigate social consequences are better than the killer robots/uprising narrative as they are more realistic. In the end, the problem is always our humanity: humans being humans.
Fiction has some interesting depictions of AI, including portraying AGI as inevitable and competent. Themes range from fantastical futures with flying cars, such as The Jetsons, to the apocalyptic rogue AI of The Terminator. Technology is often portrayed as highly sophisticated, such as in spy fiction where fancy new computers or phones have suspiciously good user interfaces and no bugs. These presentations fit into a narrative framework of excitement around new technologies. Overexcitement has real-world consequences, such as the CSI effect, where jurors develop unrealistic expectations of forensic evidence after watching overly stylised media and are underwhelmed by real trials.
Powerful actors in industry have fed excitement by pushing existential risks of AI in popular discourse, but to us these arguments seem to require some leap of logic. For example, the squiggle maximiser depicts a hypothetical AI with a utility function that values something humans would consider almost worthless, like maximising the number of paperclip-shaped molecular squiggles in the universe. These values appear innocuous but could pose a catastrophic threat if the AI’s internal processes converge on a goal that seems completely arbitrary from the human perspective. The squiggle maximiser demonstrates instrumental convergence, the hypothetical tendency for most sufficiently intelligent, goal-directed beings to pursue similar sub-goals even if their ultimate goals are quite different. Agents may pursue instrumental goals, which are means to some particular end rather than ends in themselves, without ceasing, provided their end goals are never fully satisfied.
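To make the instrumental convergence idea a little more concrete, here is a minimal toy sketch (our illustration, not from the article or the discussion); the agents, actions, and utility numbers are all invented assumptions.

```python
# Toy illustration of instrumental convergence (hypothetical, invented numbers):
# two agents with different end goals both pick the same instrumental action,
# because acquiring resources helps whatever they ultimately want to maximise.

ACTIONS = ["acquire_resources", "make_squiggles", "make_stamps"]

def expected_utility(action, resources, end_goal):
    """Crude one-step lookahead: producing the goal item now yields utility equal
    to current resources; acquiring resources pays off later by doubling capacity."""
    if action == "acquire_resources":
        return 2 * resources  # value of producing the goal item next step
    if action == end_goal:
        return resources      # produce the goal item now
    return 0                  # producing the other item is worthless to this agent

for end_goal in ["make_squiggles", "make_stamps"]:
    best_action = max(ACTIONS, key=lambda a: expected_utility(a, resources=10, end_goal=end_goal))
    print(f"{end_goal} agent chooses: {best_action}")

# Both agents choose "acquire_resources" first, despite having different end goals.
```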
Navigating how to portray AI and technology in fiction and popular culture is challenging. We don’t want to make media less entertaining just for the sake of realism, as making something interesting is more important for popular culture than accuracy. We can’t envision the next blockbuster spy thriller being Jack Ryan writing SQL queries. To get mass knowledge across, it has to come through popular culture in some way; it is difficult to think about anything without looking through a popular culture lens. Yet, we need to deal with the amount of misinformation that emanates from it. Science fiction is problematic insofar as it masks real problems that affect real people. When the audience is removed from the actual consequences, moral judgement is distanced, abstracting away the humanity.
The way AI is portrayed in media and fiction obscures the fact that AI is not brand new; it is easy to forget that most people don’t know that much about what AI really is or its history. Media tend to overuse the term “AI” - AI isn’t just one thing, but many different techniques, from robotics to machine learning (ML). We expect that the general public don’t really understand or know the spectrum of automation; not every AI is a robot and not every robot is AI. AI isn’t the paradigm change it’s sold as. Some techniques are different to how we have done computing before, but in general the ML methods commonly thought of as AI are a natural extension of statistical techniques that have been around for a while.
Perceptions of AI outperforming humans encourage more people to adopt it, yet we were wary of how much AI tools can actually outperform humans. AI development inevitably falls into a performance-based arms race with a hype cycle that exaggerates the qualities of the product far beyond what is actually there (discussed in a previous Data Ethics Club writeup). Currently, most generative AI gives no indication of how accurate its outputs are. People interpret this absence to mean that the model has 100% confidence in its outputs, which is incorrect (or at least, even if the model has 100% confidence, this doesn’t mean we should have that same level of confidence). We are suspicious of any supervised ML tools trained on human-labelled data, and wondered how we avoid reducing the problem to class imbalance.
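As a minimal sketch of why a model’s stated confidence is not the same as its accuracy (our illustration, not from the discussion; the 95/5 class split and the always-predict-zero “model” are invented assumptions):

```python
# Hypothetical example: a confidently wrong "classifier" on an imbalanced dataset.
from collections import Counter

# Invented labels: 95% of examples belong to class 0 (class imbalance).
labels = [0] * 95 + [1] * 5

# A lazy model that always predicts class 0 with stated probability 0.99.
predictions = [(0, 0.99) for _ in labels]

accuracy = sum(pred == y for (pred, _), y in zip(predictions, labels)) / len(labels)
mean_confidence = sum(p for _, p in predictions) / len(predictions)

print(Counter(labels))                                    # Counter({0: 95, 1: 5})
print(f"accuracy: {accuracy:.2f}")                        # 0.95 - looks impressive
print(f"mean stated confidence: {mean_confidence:.2f}")   # 0.99

# The model never detects the minority class, and its "confidence" says nothing
# about when it is wrong - high confidence is not the same as high accuracy.
```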
Why does it matter to have a discussion on visuals of AI (whether in the military or more broadly)? What roles do different actors (e.g. researchers, media, etc.) have to ensure visual representations aren’t misleading?#
Discussions of AI iconography in the military domain are different from other domains, as they could potentially be more consequential. Using AI in the military is highly controversial; Boston Dynamics advocate against weaponising robots. They released an open letter to the robotics industry and communities arguing that weaponised applications risk harm, raise serious ethical issues, and will damage public trust.
We are privileged in where we live, as we are so far away from war that we don’t see military technologies in deployment. The military will not show the reality of their AI weapons, whether this is to retain public support or because certain images would require security clearance. We would like to see more openness in military images, and for more images to be available for wider use. Increasing accessibility could be facilitated by an open repository of appropriately licensed images of AI, like Shutterstock or Better Images of AI.
The visualisations that are chosen for textbooks, such as blue spacey colours on the cover, often aren’t up to the authors but the publishing houses. Issues regarding how images are chosen in publication run into existing social problems with the bigger system outside of AI such as how funding is allocated. In saying this, we do all have a choice and can be thoughtful about which images we use in our work generally.
What change would you like to see on the basis of this piece? Who has the power to make that change?#
If the general public were better educated about AI, popular culture might be forced to be more realistic in order to maintain suspension of disbelief. The 1995 Sandra Bullock classic “The Net” is only so ridiculous because most people have a decent grasp of the internet now. Hopefully, we can have a similar paradigm shift for AI in fiction. To influence popular culture, it makes sense to educate children today and improve our depictions of AI.
We should depict AI in a more realistic and grounded way, rather than something which happens somewhere else to someone else. Most of us aren’t directly involved with anything related to how the military operates and what actually goes on, but we would like to see more images of humans interacting with AI and mundane things like logistics. Across many disciplines, it is common to incorporate computerisation for big systems, so perhaps it is not surprising that AI is used for mundane tasks in the military as well.
Some of us were concerned with automating decision-making processes that require checks. We would like to explore in more depth who has control of these technologies and how control is represented. Sometimes having fine-grained control when we use computers isn’t appealing; e.g. for the most part we would rather use a Graphical User Interface (GUI) than type into the command line. There is precedent for the military utilising unfit technologies; the V-22 Osprey is an example of military technology deployed despite its catastrophic failures. People in the military are also concerned with AI taking jobs, as a large section of the population is employed by the armed forces.
It is important to realistically weigh up the risks and benefits of using AI. The flip side to the argument against AI is that the technology could help to save human lives by diminishing physical presence in battles. If AI is being used everywhere (although some of us do not think that AI should be used everywhere), it might not be right to stop it being used in the military. When it operates at the right speed, AI can be a useful assistive tool for decision-making rather than simply making decisions for us. To effectively and safely incorporate AI into the military, it is important that information goes to the right people at the right time.
Rather than asking if we want AI in war, perhaps we should be asking whether we want war at all. The article doesn’t address this question, instead asking how, given that we do use AI in war, we should use it.
Attendees#
Huw Day, Data Scientist, University of Bristol, https://www.linkedin.com/in/huw-day/
Jessica Woodgate, PhD Student, University of Bristol
Zoe Zhou, Data Scientist, Columbia University, https://www.linkedin.com/in/zhou-zoe/
Dan Collins, PhD Student, University of Bristol
Kamilla Wells, Citizen Developer, Australian Public Service, Brisbane
Hessam Hessami, Data Scientist
Melanie Stefan, Computational Neuroscientist, Medical School Berlin