Data Ethics Club: Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images?#

What’s this?#

This is a summary of the Wednesday 4th December Data Ethics Club discussion, where we spoke and wrote about the article Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images? by Anna Nadibaidze. The summary was written by Jessica Woodgate, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Huw Day helped with the final edit.

Article Summary#

Visuals of AI in the military domain have the potential to shape (mis)perceptions of AI in warfare; the article examines these visuals to analyse the role they play in constructing certain ideas about AI.

One of the major themes seen in visualisations of AI generally is anthropomorphism. Humanoid robots, used by campaigners and policymakers (e.g. the Stop Killer Robots campaign and regulation conferences), evoke associations with the Terminator and carry futuristic connotations. Futuristic connotations are problematic because they detract from the fact that the technologies already exist. Anthropomorphising AI is also problematic because it implies that the technologies will replace humans; however, AI in the military is used for a variety of purposes, including human-machine interaction. We should be thinking about the ethical issues that arise from these interactions, rather than from robots replacing humans. Images of “killer robots” invoke “uprising” narratives where humans lose control of AI, or imply that humans have little influence over how AI impacts society. Downplaying the control humans have over AI masks the fact that it is humans who decide to develop or use AI, and that those decisions are made within political and social contexts.

Another major theme is the colour blue (associated with feelings of calm) combined with illustrations of code or other computational imagery. Images often show human soldiers in an abstract space with a blue background or running code, wearing a virtual reality headset. These kinds of images commonly feature in blogs or on academic book covers. Other images include uncrewed aerial vehicles (UAVs, or drones), either alone or alongside human soldiers. Visuals tend to focus on combat rather than more mundane applications (e.g. admin), and abstract blue spaces risk distracting attention from the technologies actually in use, such as AI integrated into data analysis or decision-support systems.

Abstract and futuristic images detract from the reality of the situation, which is that incorporating AI into the military is about changing the dynamics of human-machine interaction. These changing dynamics raise a new scope of ethical, legal, and security questions about agency. Better images would include the humans behind the AI systems as well as the humans who might be affected by them, e.g. soldiers and civilians.

Discussion Summary#

What depictions have you seen of AI in the military domain? Have they been positive or negative?#

Some of us have seen mainly negative and concerning visualisations of AI, with lots of depictions evoking the Terminator, robots, or The Matrix. We’ve seen lurid media coverage of AI in warfare. Others have seen generally positive depictions of AI in the media, heralding advanced technologies as a sign of progress. Whether images are positive or negative may depend on who puts the images out. Positive depictions tend to be for the purpose of selling AI, as unrealistic sci-fi portrayals are more attractive and effective at drawing people in.

Working in defence, we are only allowed to use certain images without security clearance; it is more convoluted than taking a photo of a crowd of people on the street. We have stock images for our communications, which tend to be a mix of normal people in t-shirts doing things around a table and more graphic-design style images. Companies like Boston Dynamics don’t show their technologies in a military context, but media and social media sometimes take these images and place them in a militarised environment. Other military organisations apply designs similar to Boston Dynamics’ in military settings.

Three main themes we’ve seen used to depict AI in the military are abstract blue backgrounds, humanoid robots, and members of the military; we tend to see more machines in images than people. At academic conferences we have seen sci-fi images which cultivate abstract conceptualisations. The dominance of blue is striking to us; we would more naturally associate green with code, in homage to The Matrix. Blue isn’t necessarily calming, as the article proposes, but instead suggestive of cold, clean, crisp technology.

Futuristic blue backgrounds distance images and thinking about AI from reality, removing the human element. The abstractness elevates the people using the tools and shields them from criticism. The language used to talk about AI dissociates it from humanity, insofar as the people and families affected are not talked about.

In addition to abstract imagery and language, video gamification further dehumanises war. Gamification normalises killing and perpetuates unrealistic narratives, as in first-person shooters like Call of Duty or action movies with generic, one-dimensional, unrelatable bad guys (e.g. Top Gun). Using games to distance the military from reality reminds us of Ender’s Game, a novel depicting human detachment from the act of killing, in which kids are used to kill people from far away through gamified interfaces. In the UK, the military use games to test and plan tactics, recreate warfare scenarios, and engage with the public through recruitment and advertising campaigns. The military have put out recruitment drives actively targeting video gamers and send soldiers to compete in eSports tournaments. The US army even has an eSports team with 11 members who compete full time, arguing that it builds morale, helps soldiers to decompress, and improves welfare.

Why does it matter to have a discussion on visuals of AI (whether in the military or more broadly)? What roles do different actors (e.g. researchers, media, etc.) have to ensure visual representations aren’t misleading?#

Discussions of AI iconography in the military domain are different from those in other domains, as they could potentially be more consequential. Using AI in the military is highly controversial; Boston Dynamics advocate against weaponising robots. They released an open letter to the robotics industry and communities arguing that weaponised applications risk harm, raise serious ethical issues, and will erode public trust.

We are privileged in where we live, as we are so far away from war that we don’t see military technologies in deployment. The military will not show the reality of their AI weapons, whether this is to retain public support or because certain images would require security clearance. We would like to see more openness in military images, and for more images to be available for wider use. Increasing accessibility could be facilitated by an open repository of appropriately licensed images of AI, like Shutterstock or Better Images of AI.

The visualisations chosen for textbooks, such as blue spacey colours on the cover, often aren’t up to the authors but to the publishing houses. Issues around how images are chosen in publishing run into existing social problems in the bigger system outside of AI, such as how funding is allocated. That said, we do all have a choice and can be thoughtful about which images we use in our own work.

What change would you like to see on the basis of this piece? Who has the power to make that change?#

If the general public were better educated about AI, popular culture might be forced to become more realistic to avoid breaking the audience’s suspension of disbelief. The 1995 Sandra Bullock classic “The Net” only seems so ridiculous because most people have a decent grasp of the internet now. Hopefully, we can have a similar paradigm shift for AI in fiction. To influence popular culture, it makes sense to educate children today and improve our depictions of AI.

We should depict AI in a more realistic and grounded way, rather than as something which happens somewhere else to someone else. Most of us aren’t directly involved with how the military operates and what actually goes on, but we would like to see more images of humans interacting with AI and of mundane things like logistics. Across many disciplines it is common to incorporate computerisation into big systems, so perhaps it is not surprising that AI is used for mundane tasks in the military as well.

Some of us were concerned about automating decision-making processes that require checks. We would like to explore in more depth who has control of these technologies and how control is represented. Sometimes having fine-grained control when we use computers isn’t appealing; for the most part we would rather use a Graphical User Interface (GUI) than type into the command line. There is precedent for the military utilising unfit technologies; the V-22 Osprey is an example of military technology deployed despite its catastrophic failures. People in the military are also concerned about AI taking jobs, as a large section of the population is employed by the armed forces.

It is important to realistically weigh the costs and benefits of using AI. The flip side to the argument against AI is that the technology could help save human lives by reducing physical presence on the battlefield. If AI is being used everywhere (although some of us do not think that AI should be used everywhere), it might not be right to stop it being used in the military. When used to make decisions at the right speed, AI can be useful as an assistive tool rather than simply making decisions for us. To effectively and safely incorporate AI into the military, it is important that information goes to the right people at the right time.

Rather than asking if we want AI in war, perhaps we should be asking whether we want war at all. The article doesn’t address this question, instead asking how, given that we do use AI in war, we should use it.

Attendees#

  • Huw Day, Data Scientist, University of Bristol, https://www.linkedin.com/in/huw-day/

  • Jessica Woodgate, PhD Student, University of Bristol

  • Zoe Zhou, Data Scientist, Columbia University, https://www.linkedin.com/in/zhou-zoe/

  • Dan Collins, PhD Student, University of Bristol

  • Kamilla Wells, Citizen Developer, Australian Public Service, Brisbane

  • Hessam Hessami, Data Scientist

  • Melanie Stefan, Computational Neuroscientist, Medical School Berlin