Economies of Virtue: The Circulation of ‘Ethics’ in Big Tech

What’s this?

This is a summary of Wednesday 5th May’s Data Ethics Club discussion, where we discussed the paper Economies of Virtue: The Circulation of ‘Ethics’ in Big Tech by Jake Goldenfein, Monique Mann and Declan Kuch.

The summary was written by Huw Day, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara and Natalie Thurlby helped with the final edit.

UNO Meme, "Actually be Ethical Instead of just Pretending or Draw 25" with Tech Bros holding a handful of UNO cards.

Introduction

This week at Data Ethics Club we discussed Economies of Virtue: The Circulation of ‘Ethics’ in Big Tech, a paper by Jake Goldenfein, Monique Mann and Declan Kuch. (This paper is not open access, so it may be difficult to read without a university login.)

In this paper, the authors discuss some of the ethics problems in Big Tech and “how Big Tech has transformed ethics into a form of capital — a transactional object external to the organisation, one of the many ‘things’ contemporary capitalists must tame and procure”.

We talked about the difficulties of conducting ethics research that is funded by the body you are investigating, how this research can best bring about change, and whether interventions that serve a capitalist interest are always harmful.

What difficulties does an AI ethics researcher face?

One of the attendees works on training PhD students in responsible innovation for AI. We discussed the tensions of trying to teach responsible AI - it’s a mindset, not a checklist. There are inherent risks in framing it otherwise: what counts as ethical is influenced by political events, cultural beliefs and many other things, and it is not just a clipboard to run through at the end of a project.

So when people say that they have developed a checklist, how good can it be? And how can you create training that gives any guarantee of an ethical standard?

This discussion reminded some of us of carbon credits, in the sense that you can “buy” good ethics with money instead of being good at ethics. In both cases it raises the question: if you can afford to put money into offsetting a negative impact (environmental or ethical) of something you didn’t have to do (taking a flight somewhere, working on a project with military applications), why not avoid the negative action altogether and still contribute the money? It almost feels like a guilt tax, paid off to appease public opinion.

Often, a data scientist’s job is largely about representing information rather than evaluating it. Part of that is using the representation to best understand the data from your own point of view, but an equally important part is representing the data in a way that supports the message you (or the client) want. When you’re trying to get a product out and have a vested interest, it is very hard to stay objective. (This is another great reason to bring reflexivity practices into data science!)

It’s common for data scientists and mathematically minded people to see models as neutral and harmless by default, because they are simply a representation of objectively measured data. It also seems common for people to throw their data into a black box which they don’t (and can’t) understand. Data scientists need to realise that there are consequences to the work that they do. Unless people are economically punished for unethical practices, where is the incentive for them to stop?

Are interventions/research that serve capitalist interests always bad/harmful?

What are our assumptions about what is good or bad? Is working in a not-for-profit automatically good? And what are we aiming for: interventions (e.g. regulation) or values (e.g. transparency)?

Ethical harms are sometimes less tangible than other regulated harms. Is it possible to avoid huge scandals, or are they necessary? Even with more critical, tangible systems (planes, cars), things carry on until a scandal breaks, because it’s deniable that there’s a problem. What is the structure that will catch these problems earlier on? Negative feedback loops come to mind. Why do organisations bring in standards? Because employees asked for them.

The aviation industry provides an interesting case study: plane crashes cause mistrust and lost sales, so is there a feedback loop in the same way for tech? The FAA regulates the aviation industry very heavy-handedly, yet there are still scandals and failures. But maybe there’s another world where we just accept that planes fall out of the sky sometimes? Big Tech self-regulation… is that really a good idea?

There appears to be a presumption that 100% transparency is good. Even if you’re working in a not-for-profit, you’re still not going to be 100% transparent at all times. Why are we asking for transparency? Transparency is in some sense the opposite of trust, and a direct call for transparency is not the same as a call for truth. There is an excellent paper about this tension between transparency and trust on our Reading List, but be warned - it’s a long one!

Might we have good reasons for not being transparent? Corporate interests are the motivation in private companies, but what about the public sector? Competitive interests might be apparent, but there are also likely to be “good” reasons relating to privacy and national security. On a personal level, sometimes people are not confident in their own ability and so are not willing to share. What would we want to be transparent about? This might look different for different audiences (transparency to the end user: which variables are impacting you? Transparency to the regulator: which variables are causing this selection of loan applications to fail?); a toy sketch of that distinction follows below.
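As a purely illustrative sketch (not something taken from the paper or the discussion), the toy example below imagines a loan-approval model: the features, data and model choice are all invented, and a real audit would be far more involved. It simply contrasts the end-user view (what drove my decision?) with the regulator view (what drives rejections across all applicants?).

```python
# Purely illustrative: a hypothetical loan-approval model used to contrast
# two audiences for transparency. Features, data and labels are all synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "existing_debt", "years_at_address"]

# Synthetic applicants and a synthetic "approved" label driven mostly by income and debt.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Transparency to the end user: per-applicant contributions to *their* decision
# (for a linear model, coefficient multiplied by feature value).
applicant = X[0]
for name, contribution in zip(feature_names, model.coef_[0] * applicant):
    print(f"{name}: {contribution:+.2f}")

# Transparency to the regulator: which variables drive rejections overall
# (mean absolute contribution across all rejected applications).
rejected = X[model.predict(X) == 0]
overall = np.abs(rejected * model.coef_[0]).mean(axis=0)
print(dict(zip(feature_names, overall.round(2))))
```

The same model supports both views, but the explanation a single applicant needs is not the same artefact a regulator would ask for, which is part of why “be transparent” on its own is underspecified.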

What is the best route to users really knowing what is going on? External regulation is arguably better than internal (unscrupulous people could potentially cover things up). But if the internal team has multiple people and there is more of a culture of ethical regulation, then internal oversight has an important place as well.

When is critical research practice likely to bring about substantial change?

Critical research practice is difficult to do from “inside the house”. If research challenges the main purpose and main products of the company (e.g. Google and language models), then it is problematic for the company from a financial perspective, and it’s doubtful whether truly meaningful change is possible from that position. That said, it would be great if companies were made up entirely of ethically minded people!

One of our members, an academic in a university philosophy department, had some interesting (and very self-aware) insights into public perception of their work. Philosophers are not all-knowing and can’t outthink every ethical problem.

The authors pointed out a potential issue: as public funding for universities is cut, there is more pressure to seek industry funding. Some of the departments being closed have been the ones doing this more critical work.

However, it doesn’t always cost anything to do ethics research, so you don’t always need funding from the firms you are researching (which might affect the research). In the case of many philosophy departments, you don’t even always need government funding. You can simply be contracted by the university to do your teaching and research obligations, circumventing the “publish or perish” model so many of us experience in academia.

How big an issue is a lack of access to big data companies (even if you have funding)? The more access you have, the better. There is a difference between complaining about a problem and being able to fix it (despite what we would like to think here at DEC). If you perceive misrepresentation of data in, for example, the pharmaceutical industry, you can’t just write a critical piece about a clinical trial; you basically have to go to court (via a regulatory body).

It is quite common in academia to write a piece that isn’t actually intended for implementation, and is instead something conceptual. For example, we know that certain algorithms are racially biased, and we can write about which conceptions of racism are morally wrong. Then you hope someone picks this up and actually tries to apply it.

As our anonymous philosophy academic put it: “Philosophers have so much academic freedom because people don’t care so much about what they say. That’s probably why philosophy has been allowed to survive in the traditional sense.”

Everything else has been shoved towards publish-or-perish mode. Some of us were hesitant to grant that publish or perish isn’t important to many philosophers, though.

With universities and public bodies, a lot of ethical charters and initiatives are completely performative (e.g. the Athena Swan Initiative, even though there are massive pay imbalances). People put too much energy into performing and not enough into fixing.

Conclusions

Is it impossible to do this kind of critical research from within Big Tech? It was done in the case of the Stochastic Parrots research within Google. Maybe you’re allowed to make general criticisms of Big Tech, but not specifically of the cool new tech of the company you work for.

We would have liked to have heard more about alternative funding sources. The authors were arguing for ‘untainted’ money - but we don’t really know that that exists. It’s unrealistic to expect anyone to be a perfect moral agent. Maybe instead the answer is to be very clear about our positionality - how where we work and who funds us affects our work.

The uncomfortable truth is that many innovations will cause harm. So what’s the solution - to do some tokenistic type of harm mitigation or perhaps throw it all out entirely?

Even if we agree not to research something, someone else somewhere will likely pick it up and do it. Is it sufficient to make sure that each of us as individuals are ethical (or as ethical as we can be)? Or should we seek to regulate others?


Attendees

Name, Role, Affiliation, Where to find you

  • Natalie Zelenka, Data Scientist, University of Bristol, NatalieZelenka, @NatZelenka

  • Nina Di Cara, PhD Student, University of Bristol, ninadicara, @ninadicara

  • Huw Day, PhDoer, University of Bristol, @disco_huw

  • Euan Bennet, Senior Research Associate, University of Bristol, @DrEuanBennet

  • Robin Dasler, research data product manager, daslerr

  • Ismael Kherroubi Garcia, @hermeneuticist

  • Sergio Araujo-Estrada, Senior Research Associate, Aerospace Engineering, University of Bristol

14 people attended in total