Data Ethics Club: The Tech We Won’t Build#

What’s this?

This is a summary of Wednesday 29th March’s Data Ethics Club discussion, where we spoke and wrote about the podcast episode The Tech We Won’t Build, in which “Laura Nolan shares the story behind her decision to leave Google in 2018 over their involvement in Project Maven, a Pentagon project which used AI by Google.” The summary was written by Jessica Woodgate and Huw Day, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara and Natalie Thurlby helped with the final edit.

Discussion#

What mechanisms or options are available to you in your context (work or otherwise) to resist the development of technologies you morally disagree with?#

We loved the idea, mentioned in the podcast, of individuals calling out poor or unethical research (where Google employees called out the US defence contract Project Maven for spying on people); employees should have the power to hold their employers accountable. To counter misinformation in academia, Elisabeth Bik calls out manipulated images and has had hundreds of papers retracted. Hotlines exist for reporting unethical behaviour, but this can be hard to do alone. Forming groups might help people to support one another and create grassroots power. Talking to the people you are working with, as was done in the Project Maven example, might be a more effective way of promoting change than simply leaving the project and allowing it to go ahead anyway – just without your input.

What issues arise with the ability to morally object in the workplace?#

Raising the alarm might be more difficult, however, if you are working in a country you haven’t worked in before. It can be less clear what the norms are, and how they should be appropriately challenged. A lot of researchers move internationally for their work, so this might be something that crops up a lot in these communities.

In addition to knowing the norms in place, the size of an organisation might affect your ability to challenge something you morally disagree with. In large organisations, it is much easier to get lost in the crowd. Even if you raise your head above the parapet, it could soon be forgotten, which increases the importance of having the support of colleagues.

Garnering the support of your colleagues has become more difficult with the growth of remote working. Often, it is important to feel out these sorts of conversations informally first, to gauge perspectives and general thoughts, and opportunities for informal interactions with colleagues are heavily reduced when working remotely.

Different levels of authority may have different motivations; how does this affect our freedom to choose ethical paths?#

On a personal level, PhDoers might have the autonomy to decide not to go down certain paths, but this could be limited by funding. Sometimes funding bodies have incentives that constrain the direction you can take, and it isn’t always possible to know what those incentives really are, or the reasoning behind them. The motivations of the bodies supporting, and to some extent constraining, your work might differ from yours, and they might be hidden from you.

This divergence between your motivations and those of the bodies in authority, and the constraints on your choices, continue in more focussed positions like postdoc projects and industry jobs. It can be pretty cut and dried in these domains: you either pursue the goals you’re given, or you lose your job! There is also the sunk cost fallacy in defence projects, which removes the excuse of “this doesn’t work”. For example, the SA-80, a British-made assault rifle, is famously rubbish and has been described by some as “the weaponized version of a civil servant, as it doesn’t work, and can’t be fired”, and yet the British military refused to abandon it.

Continued support for projects when they do not make financial or ethical sense brought us to the different forces that exist behind research: in defence, there are not just financial but also political motivations. For example, in the SA-80 case, the military don’t want to use an American rifle, so they carry on using ineffective technology.

What are our own motivations for our work?#

When choosing a career, it is important to think carefully about the risks and benefits of the kinds of positions you take and the companies you work for. For some, there might be the option of a very highly paid job; however, there might be ethical compromises that come with the paycheck. Knowing that the wage might make the job hard to leave, should it be avoided in the first place?

As a counter to having to choose between being economically comfortable and having an ethical career, perhaps a guaranteed minimum income could make people less likely to take on horrible or unethical jobs. Some of us had already encountered situations where we were offered relatively harmless roles at organisations whose motivations we were not comfortable with, and decided against taking the position. We must think about the kinds of compromises we are making: of our ethics, our job security, our financial security, or our creative freedom?

A lot of research lives in a ‘grey’ ethical area – but are there clear thresholds for the Tech We Won’t Build?#

How could our work be misapplied?#

We wondered if anything is really ethically neutral, especially in the domain of research. It is conceivable that something which seems unimportant now could have unintended side effects when it is put into different contexts. You can carry out your research, focussing on things you perceive as more “ethical” – whatever that means to you – but from an academic perspective, when you put something out there, it’s out. Technologies we develop could have unforeseen long-term effects. Take Huw’s PhD, which looks at mathematical patterns in DNA replication: its implications are unclear now, but what if, 50 years from now, someone uses it to make a bioweapon? Will Huw finally do something useful?

Network science is another example of a domain which might initially seem innocuous, yet a lot of defence funding goes into the area, with military applications such as analysing social networks to detect terror cells. Similarly, in the education sector the integration of technology could result in biased and harmful self-fulfilling prophecies for children. Teachers can have preconceptions of students from certain backgrounds, which may lead to models limiting those students’ access to resources by predicting what they might (not) achieve.

Unfortunately, in domains like maths and physics, it is common for abstract models to be developed and then, later down the line, applied in harmful ways. If someone wants to build something harmful, a lot of the time it is quite easy to do. At the very least, we thought, you can avoid being the final step (i.e. the harmful application).

When do we decide not to build something?#

Considering the risk of unforeseen harms, at what point do you decide not to build a technology? How far down the causal chain from the harm do you have to be to still hold responsibility for the consequences of your work? Is it when there are very clear implications that Huw’s work could be used for genocide, or something else awful, like a boring PhD thesis? How much harm will he cause his reviewers? Should he be stopped now?

It is important to distinguish the point at which you halt development from the point at which mitigation techniques should be brought in. Maybe someone could develop some sort of signposting to help with this… Something like Data Hazards labels? Just a thought.

To understand where these thresholds are, the podcast talks about projects that are high risk and high consequence. If there is a high probability that things could go wrong, and if when things go wrong they go really wrong, maybe you should stop. An article that came up in our discussions provided a simple but useful heuristic for machine learning: the paper suggests that ML should be used for well-defined, ‘learnable’, low-stakes problems with relatively small ranges of subjectivity. This might help when we are trying to think about what could be done with, or go wrong with, the models we build.

Removing responsibility through depersonalisation?#

There is often the reasoning in tech that “we’re only making a tool” and thus aren’t accountable for misapplications of our work. However, it is difficult to know where this line should be drawn. The scope of some tools might be quite generic, with wide-ranging application domains, some of which could be harmful. For example, license plate recognition could be based on generic text/image recognition. If you follow this reasoning too far, it can lead to unintuitive outcomes. Should we blame the inventors of the wheel for tanks? How many degrees of separation do you need before you can claim the technology that you developed has nothing to do with the harm that is caused? This highlights the ambiguity of ethical reasoning.

To define the tipping point of accountability, some of us had the intuition that if there’s a harm you could reasonably foresee, then you hold some sort of responsibility to make efforts towards mitigation. However, if it is completely outside the realm of what you could conceive, then you are off the hook. We also thought that it might depend on what you could personally do to mitigate the risk of bad things happening – what is in your personal control.

Going beyond personal responsibility, ethical frameworks can help to support ethical development at an institutional level – for example, the concept of research on people vs research with people, which is key in early years education. We should think reflexively, including the people for whom the technology might be used in the design and development process, so that their needs remain front and centre. If they can’t be directly involved, we should at least make efforts to consider their perspectives.

How do we build accountability into tech development so that we can challenge decisions and directions we disagree with?#

We really liked the quote “can we build this? Yes. Should we build it? Depends on who you ask.” The example of [robodebt](https://www.theguardian.com/australia-news/2023/mar/11/robodebt-five-years-of-lies-mistakes-and-failures-that-caused-a-18bn-scandal) was brought up – an Australian government scheme which led to hundreds of thousands of people being issued unlawful debts with the intention of “saving money”. It demonstrates how little accountability there is for leaders who make terrible decisions. However, perhaps we underestimate how little people understand about the workings of AI and data science; we can see how something might appear to be a great solution if you don’t really understand what it means.

We were quite keen to see further discussion of the military usage of tech. There is an old-timey view that it’s all bad, and whilst there are grounds for that, it is still worth talking about. Some of us have had conversations with people who work in defence and are interested in ethics. It is important to try not to get into an “us vs them” mentality when considering such topics; we are talking about real people and should try to understand their perspectives.

At the very least, you need a clear chain of accountability in defence. Pursuing the requirement for clear accountability in the military could lend a hand to other domains seeking better accountability practices. The domain of defence is interesting to consider as it has a very different focus: for example, whilst the private sector is heavily profit-oriented, the military has other motivations, such as politics. What emerges from this difference? Does it affect motivations for ethics, or accountability?


Attendees#

Name, Role, Affiliation, Where to find you, Emoji to describe your day

  • Natalie Zelenka, Data Scientist, University of Bristol, NatalieZelenka, @NatZelenka

  • Nina Di Cara, Research Associate, University of Bristol, ninadicara, @ninadicara

  • Huw Day, PhDoer, University of Bristol, @disco_huw

  • Ola Michalec, Social Scientist (Computer Science school), University of Bristol, @Ola_Michalec

  • Amy Joint, Content Acquisition Manager, F1000, [@AmyJointSci](https://twitter.com/amyjointsci)

  • Euan Bennet, Lecturer, University of Glasgow, @DrEuanBennet

  • Robin Dasler, Data Product Manager, daslerr

  • Jessica Woodgate, PhD Student, University of Bristol, jess-mw