Structural Injustice and Individual Responsibility#

What’s this?

This is a summary of Wednesday 6th October’s Data Ethics Club discussion, where we spoke about Structural Injustice and Individual Responsibility, an episode of the podcast The Philosopher’s Zone with David Rutledge, featuring guest Robin Zheng, Assistant Professor in Philosophy at Yale-NUS College, Singapore.

The summary was written by Huw Day and Natalie Thurlby, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara and Natalie Thurlby helped with a final edit.

Introduction#

This week’s discussion was on Structural Injustice and Individual Responsibility, an episode of the podcast The Philosopher’s Zone with David Rutledge, featuring guest Robin Zheng, Assistant Professor in Philosophy at Yale-NUS College, Singapore.

The episode asks: who is responsible for structural injustice? Whilst some might say the answer is “practically everybody”, in some cases that might just be another way of saying “effectively nobody”.

What are the structural injustices in data science?#

Robin Zheng mentioned that, as a university professor, she reinforces structural injustices by providing teaching only for the privileged few who can access the elite institution where she teaches. Similarly, for data scientists, structural injustices are the things that individual data scientists may be unable to control.

We spoke about too many to list, but a great example is the English-language bias in programming: in Python, R, and other data-science programming languages, we define code using English words (“for”, “in”, “while”) despite the fact that the computer has no conception of their English meaning. Meanwhile, the tools we use may have ecological consequences that will disproportionately affect developing countries, the contents of standard datasets used to benchmark ML/AI algorithms reflect society’s biases rather than address them, and (as we’ve previously discussed) as a society we’ve bought into using products/software that were built by exploiting underpaid workers.
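As a small illustration of that English-language bias (ours, not the podcast’s), Python will happily list its own reserved words, every one of which is English:

```python
import keyword

# Python's reserved words are all English ("for", "in", "while", ...),
# even though the interpreter treats them purely as syntax tokens with
# no English meaning attached.
print(keyword.kwlist)
```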

Who is attributable for harmful deployments of data science? (E.g. The original researchers? The company who sold it? The government who bought and deployed it?) And who is accountable?#

Zheng makes a distinction between being attributable, when others recognise our involvement in a process, and being accountable, when they can enforce consequences. In our discussions, there was disagreement on whether data scientists were even attributable for the outcomes of their work, due to the complex chain of inputs and responsibility. While we all felt that data scientists should take responsibility for the choices they make, many outcomes seemed unrelated to data scientists’ individual choices, due to the presence of structural injustices. And in many cases the blame was shared between many different types of roles, e.g. sales and marketing.

We discussed whether blame was the most constructive way to enable data scientists to make positive changes. An analogy was made to insurance, where blame is not necessary in order to share the responsibility of fixing problems.

Moving beyond individuals, big tech companies could also be required to take corporate responsibility for their data science outcomes. Car manufacturers are responsible for adhering to certain safety standards, even if at times this limits the capabilities of their technological developments. The difficulty is agreeing on what those standards should be, particularly in the context of quickly evolving technological capabilities that we are all struggling to keep up with.

‘Fair’ is the stupidest word humans ever invented, except for ‘staycation’. – Shawn (The Good Place), Human Data Science

Finally, another issue to be considered is who defines fairness and bias, and who continues to translate these concepts to new technologies.

How could individuals in the role of data scientist, publisher, funder, company, or government use their role to push boundaries?#

Facebook going down a few days before this discussion was a noteworthy example of how much their monopoly affects people. It provides an interesting case study when considering what individuals embedded in a similar company (outside of senior management) could do to make a difference. Leaving a company costs it in recruitment, but could potentially make it easier for the company to get away with harmful practices. Whistle-blowing can have serious financial ramifications for the company, due to legal action and falling share prices, but even this action, costly as it is for the individual, can’t break a company’s monopoly.

Throughout the discussion, we identified cultural and career incentives that prevent individual data scientists from grappling with individual responsibility. Much of science is framed as “problem-solving”, so we love to know what we’re doing and get excited about how to do it, but lots of people are happy to forget about the “why”. We are also trained to big up our results, and to focus on the success of our individual approach as well as of the general science/tech approach more widely. Both are driven by incentives for funding and career progression, meaning that those who write promotion/funding criteria and training mandates have a lot of power to change this culture. The same actors could encourage multidisciplinary design as another solution to this problem, encouraging people with a liberal arts background to work with data scientists to tackle these issues in their work. Companies could similarly invest in ethics practices, as well as making more accessible choices in their work practices (such as not offering unpaid internships).

At the same time, we did identify some individual actions that we could take in our roles, whether that be prioritising ethics as part of our role as data scientists through reading and discussing data science and ethics, using Codes of Conduct, inviting in colleagues from other disciplines, including affected communities in planning our tools and software, taking a critical look at input datasets, or choosing colour-blind-friendly palettes and adding alt-text to images (see the short sketch at the end of this section). These things can make individual differences, and also help to change the culture in organisations and fields.

This lines up in some ways with Zheng’s framing of “ideal roles” in the Role-Ideal Model, in which structural inequality can be considered to come from the ways in which individuals do not fulfil the ideal role of their position in society. Under this model, one way of improving injustices might look like expanding the role of data scientists to include considering whether certain pieces of data science work should be done at all and/or ensuring they are done fairly (either individually or in collaboration with others). We’ll sign off this week with a reminder to recognise the power we do have: “If you don’t think you can make a difference, spend a night in a room with a mosquito”.
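As promised above, here is a minimal sketch of one of those individual actions: plotting with a colour-blind-friendly palette in Python. The choice of matplotlib, the viridis colormap, and the file name are our illustrative assumptions, not anything specified in the discussion.

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)

# Draw colours from viridis, a perceptually uniform colormap designed to
# stay distinguishable under the most common forms of colour-blindness.
colours = plt.cm.viridis(np.linspace(0, 0.9, 3))

fig, ax = plt.subplots()
for i, colour in enumerate(colours):
    # Vary line style as well as colour so the series remain
    # distinguishable even in greyscale.
    ax.plot(x, np.sin(x + i), color=colour,
            linestyle=["-", "--", ":"][i], label=f"series {i}")
ax.legend()

# When embedding the saved image in a page or notebook, remember to add
# alt-text describing what the plot shows.
fig.savefig("sine_waves.png")
```

Varying line style (or markers) alongside colour is a simple design choice that keeps a plot readable even when printed in black and white.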


Attendees#

| Name | Role | Affiliation | Where to find you | Emoji to describe your day |
| --- | --- | --- | --- | --- |
| Natalie Thurlby | Data Scientist | University of Bristol | NatalieThurlby, @StatalieT | |
| Nina Di Cara | PhD Student | University of Bristol | ninadicara, @ninadicara | |
| Huw Day | PhDoer | University of Bristol | @disco_huw | |
| Aaron MacSween | applied cryptographer/project lead | XWiki SAS/CryptPad | ansuz | 🐶 |
| Euan Bennet | Senior Research Associate | University of Bristol | [@DrEuanBennet](https://twitter.com/DrEuanBennet) | |
| Kamilla ‘Milli’ Wells | Citizen Developer | | | |
| Mia Mace | | | mace-space | |