AI in Warfare#

What’s this?#

This is a summary of Wednesday 9th March’s Data Ethics Club discussion, where we discussed AI in Warfare, a Reith Lecture by Stuart Russell.

The summary was written by Huw Day, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara and Natalie Thurlby helped with the final edit.

Introduction#

We discussed the Reith Lecture AI in Warfare by Stuart Russell, Professor of Computer Science and founder of the Centre for Human-Compatible Artificial Intelligence at the University of California, Berkeley.

“Stuart Russell warns of the dangers of developing autonomous weapon systems - arguing for a system of global control. Weapons that locate, select, and engage human targets without human supervision are already available for use in warfare. Some argue that AI will reduce collateral damage and civilian casualties. Others believe it could kill on a scale not seen since Hiroshima and Nagasaki. Will future wars be fought entirely by machines, or will one side surrender only when its real losses, military or civilian, become unacceptable? Professor Russell will examine the motivation of major powers developing these types of weapons, the morality of creating algorithms that decide to kill humans, and possible ways forward for the international community as it struggles with these questions.”

We voted for this lecture before Russia invaded Ukraine, and in light of this, the topic may have additional relevance, emotional weight, or both for some audiences. We encouraged our participants to remember that the topic of war might have personal relevance for their fellow attendees, and may be related to any armed conflict across the world. Whilst the discussions remained considerate and thoughtful, this is still quite a dark topic and this writeup reflects that.

To what extent were you aware that lethal autonomous weapons were already purchasable?#

Unfortunately, some of us who have worked in the government sector were quite aware, including of how easy it is to purchase such weapons, or to build something at home. This promptly brought us to the discussion of 3D printing - some schools even have access to 3D printers now. Others of us were not really aware of the full extent of their availability. For some of us it was surprising to hear that a lot of the weapons that already exist are autonomous (i.e. can identify and attack targets on their own).

We agreed that the possibility of small, cheap, killer robots that could target individuals was a frightening prospect. Some of us felt that it was presented as a current reality, when the killer robots that are for sale are currently much larger and more expensive than the speaker’s narrative suggested. Others felt that this wasn’t an unreasonable projection, given other military technology.

Science reporting on this issue (and on similar ethical problems) is typically very poor. This in turn distorts the mental models that the public forms, and can lead to either hysteria or apathy.

The speaker mentions the short sci-fi film “Slaughterbots”.

“The 7 minute film opens with a Silicon Valley CEO-type delivering a product presentation to a live audience a la Steve Jobs. The presentation seems innocuous enough at first—the presenter seems to be unveiling some new drone technology—but takes a dark turn when he demonstrates how these autonomous drones can slaughter humans like cattle by delivering “a shaped explosive” to the skull. The audience eats it up, clapping and laughing along with the CEO as if they hadn’t witnessed anything more dangerous than the unveiling of the iPhone X. The CEO goes further, showing videos of the tiny killer drone in action. “Let’s watch what happens when the weapons make the decisions,” the CEO says, as the bot executes a number of people on the massive screen behind him. “Now trust me, these are all bad guys.” What follows is a deeply unsettling portrait of a dystopian world where these small weaponised drones use their onboard technologies—“cameras like you use for your social media apps, facial recognition like you have on your phones!”—to make autonomous decisions about who lives and who dies.”

We were briefly sidetracked by the relative weakness of the speaker’s definition of autonomy in comparison to the definition as it relates to humans. With humans, autonomy is more than just the ability to identify and attack a target. For humans, morale is a factor, whereas for autonomous robots it is not. Perhaps this is a good thing? Defining “autonomous” as “the AI chooses who to target” could also apply to propaganda/advertising as well as weapon targeting.

Each new development in technological warfare is supposed to “end war”. Is this just another weapon which can have more disastrous outcomes? What is going to be the effect of using these weapons? Perhaps they will be better at targeting soldiers rather than civilians. Wars tend to only stop when the number of losses is great enough. If these systems are not human, surely they won’t have morality, won’t get tired and won’t need motivation, and so might wage more unethical wars. This leads to the problem of AI alignment and of deciding what objectives are coded into the AI.

We’ve not even considered the ethics of war, let alone of different types of it. Can we really say one type of war is better than another? That one type is more or less ethical? Theorising about war boils down to deducing the conditions under which it is okay to kill someone.

Is the inclusion of AI into this discussion actually interesting or relevant?

In your opinion, is there a valid argument that lethal autonomous weapons could reduce negative outcomes? Is this different for civilians versus soldiers?#

It’s clear that several things are wrong with this argument. For the most part, wars only stop when enough people have been killed.

Will autonomous weapons be able to distinguish/discriminate between combatants and civilians? What about so-called grey zone conflicts? There are a lot of situations where you don’t really know if it’s a conflict. Categorising anything qualitative is problematic, but here the stakes are considerably higher. Is someone carrying a gun a soldier? A knife? Will the robots detect armbands correctly? Maybe we’ll end up using our fitbits to declare which side we’re on. If drones were accurate at telling the difference between military personnel and civilians, that could be a positive thing. But civilians also take up arms. This would change warfare - the military would wear civilian clothes (you also see civilians wearing camo). Humans are bad at distinguishing between civilian and military targets. Is it better to have a human behind it, so that at least they might feel some guilt if something goes wrong?

Putting anything into numbers runs into this problem. This isn’t an optimisation problem, this is life or death. What training data are used for these machine learning systems? How do you come up with performance metrics? You almost encode some sort of morality by setting objective functions. It will lead to very imbalanced wars. Again we fall back on the problem of trying to decide which wars are more or less ethical.

If only drones are on the front lines for one side, the nature of war will change. There will be much less discontent in the aggressor’s country, which is a bad thing. Once these systems have been trained, what’s to stop the opposition wearing civilian outfits to undermine that training? How quickly can machines adapt to things like this?

Perhaps drones could just take out the leadership? “Wars end when the level of death/destruction becomes untenable”, and we worry about losing sight of the human cost. There is a worry about assassinations without accountability, as well as the obvious ethical implications of so-called targeted killing.

If nations without the latest technology end up at war, they are at a worrying disadvantage. It starts to feel like a nuclear arms race, and the nations with the upper hand are the least willing to ban these weapons. Hiroshima is often framed as compassionate (it reduced overall death/destruction), but is that the case? Many of us would argue no. Do more people die when the two sides are equal or unequal?

Autonomous weapons often seem to be justified as being defensive. Having no people involved in the system would make it difficult to integrate morals, although an alternative view is that there is always an operator somewhere in the system (e.g. the person launching the drones).

There has been research about drone operators in recent wars:

“Incidence rates for PTSD were 0.9 per 1000 persons for drone pilots (n=3, 95% CI 0.3-2.7) compared to 0.7 (n=20, 95% CI 0.4-1.0) for manned aircraft pilots. After adjustment, it was found that both groups had statistically equivalent rates of mental health outcomes despite self-reports of high levels of stress and fatigue reported among drone pilots.” (Otto and Webber)

People tell themselves moral stories to justify working in these industries. We wondered how living in a different country (e.g. one that experiences war or is closer to it) might lead to people having different opinions. Oftentimes the world’s arms dealers (such as the British) provide weapons to other countries but don’t experience the destruction that these weapons cause. People who work in weapons research companies very rarely talk about the casualties that result from that work. You can work in these areas without ever thinking about the outcomes.

If we are to have people in positions to bring such destruction and devastation into the world, we need those people to have a clear understanding of the moral implications of their decisions. Someone like Stanislav Petrov is often credited as having “saved the world” after correctly judging a warning of an enemy missile launch to be a false alarm, possibly preventing a nuclear war.

Who do you think would be held responsible for the implementation of a lethal autonomous weapon?#

Maybe we would end up with fewer and fewer software developers willing to work on these systems? We don’t want to encourage people to go into the defence technology profession (much as some feel about encouraging people to become soldiers). This is already happening: consider “Project Maven”, a US Department of Defense project. The remit of this project was to develop advanced technology for defence capabilities (e.g. developing computer vision software for identifying targets for drones). The company that initially won the contract was Google. Widespread employee protests led to Google dropping the project and publishing AI ethics principles; most developers didn’t want to do this work. The DOD then gave the contract to Palantir (a tech company that mostly does government contracts: tracking systems, military applications etc.).

One of the problems with the free market is that someone will always do this job, and there is money in it. How do you uninvent these things? There is evidence that the use of CBRN (chemical, biological, radiological and nuclear) weapons has been reduced, so maybe there is hope? Even if some people will still produce them, regulation will reduce the demand (we’re a good 5-20 years from the demand for AI warfare dropping, and hopefully for now it’s just such a cool gizmo to show off).

What change would you like to see on the basis of this piece? Who has the power to make that change?#

How necessary/unavoidable is war and in what sense?

It could be a question of when it is okay for a computer to make a decision versus a human: for accountability reasons it should be a human who makes decisions about e.g. employment or killing someone. Various pieces of media examine this idea in a more general setting. Some ask: If we cannot hold computers accountable, should we be allowing them to make management decisions? Others might ask, based on the way we make decisions: Are You a Robot?

We were disappointed that the legal aspect was treated by the speaker as if it’s essentially a solved problem: if there was criminal intent then it would be prosecuted that way, and if there was criminal negligence it would be settled as such. Plausible deniability/anonymity of drones is one possible attractive element to them, in a similar manner to how algorithms are sometimes used to deflect accountability when making decisions.

It’s as difficult to decide what warfare is as what autonomy is: does terrorism count? What’s the difference? Did Ukraine and Russia agree to enter into an official war, or did Russia just start entering/bombing Ukraine? War has rules/is quite legalistic. The rules would be partially decided by code, different in each country. Whether those rules are followed is another question entirely, and what sort of accountability structures are in place is a key issue which some of us felt the speaker glossed over - that being said, it wasn’t really the focus of the talk.


Attendees#

Name, Role, Affiliation, Where to find you, Emoji to describe your day