đŸ€— Join In#

Join our mailing list to get meeting reminders

Everyone is welcome to get involved in Data Ethics Club, as much or as little as you’d like to! We would love to hear your point of view at our discussion groups, to have your support in organising or running a meeting, or to add your contributions to our reading list.

You don’t need to be a data ethicist (we’re not!), or a data scientist - having a variety of different people is how we learn from each other. It’s a friendly and welcoming group and we often have new people drop by, so why not try it?

We meet every other week for one hour on Zoom (Wednesdays, 1pm, UK time) to talk about something from the reading list. Our upcoming meeting dates are available below. If you would like email reminders about the content and dates for the next meeting, click above to join our mailing list! You can also join the DEC Slack by clicking here.

Please read our Code of Conduct before attending.

Upcoming meetings#

These are the meetings for the next academic term.
We will update the material and questions based on the previous week's vote.

All meetings are held at 1pm UK time and last one hour. If you are in another timezone please use a time/date converter like this one to check your local time!

You can see the write ups of previous meetings here!

Summer Book Club#

For our summer 2025 book club, we will be reading AI Snake Oil: What artificial intelligence can do, what it can’t, and how to tell the difference by Arvind Narayanan and Sayash Kapoor.

On their website you can read the first chapter online for free, see what each chapter covers, and browse the authors' suggested exercises and discussion prompts to get an idea of the kinds of conversation we might have.

There are 8 chapters; the exact schedule is below. We’ll be meeting 4-5pm UK time on Microsoft Teams - click here to join.

If you’d like to be forwarded calendar invites to the session then you can let us know via email or by commenting on the Slack thread.

The discussion prompts below are taken from the authors' suggested exercises and discussion prompts. They are rough guidelines for discussion, not the only things you can talk about!

11th June: Chapter 1 - Introduction (34 pages)#

Suggested discussion prompts:

  1. What do “easy” and “hard” mean in the context of AI? Does it refer to computational requirements, or the human effort needed to build AI to perform a task, or something else? And what does easy/hard for people mean?

  2. Based on your definitions of these terms, pick a variety of tasks and try to place them on a 2-dimensional spectrum where the axes represent people’s and computers’ ease of performing the task. What sort of relationship do you see?

  3. The text gives many examples of AI that quietly work well, like spellcheck. Can you think of other examples? What do you think are examples of tasks that AI can’t yet perform reliably but one day will, without raising ethical concerns or leading to societal disruption?

18th June: Chapter 2 - How predictive AI goes wrong (24 pages)#

Suggested discussion prompts:

  1. Predictive models make “common sense” mistakes that people would catch, like predicting that patients with asthma have a lower risk of developing complications from pneumonia, as discussed in the chapter. What, if anything, can be done to integrate common-sense error checking into predictive AI?

  2. Think about a few ways people “game” decision-making systems in their day-to-day life. What are ways in which it is possible to game predictive AI systems but not human-led decision making systems? Would the types of gaming you identify work with automated decision-making systems that do not use AI?

  3. In which kinds of jobs are automated hiring tools predominantly used? How does adoption vary by sector, income level, and seniority? What explains these differences?

2nd July: Chapter 3 - Why can’t AI predict the future? (39 pages)#

Suggested discussion prompts:

Suppose a research group at a big tech company finds that it can build a model to predict which of its users will be arrested in the next year, based on all the private user data that it stores, such as their emails and financial documents. While far from perfectly accurate, it is more accurate than any model that uses public data alone.

  1. Does it seem plausible that a model like this might work in any meaningful sense? If so, what signals might it be picking up on?

  2. What laws, rules, or norms should govern companies’ ability to undertake research projects of this sort?

  3. Is there any ethical and responsible way in which technology like this can be put to use, or should we as a society reject such uses of prediction?

  4. What, if anything, prevents a company from partnering with police departments in your country to use such a predictive model for surveillance of individuals deemed high risk?

16th July: Chapter 4 - The Long Road to Generative AI (51 pages)#

Suggested discussion prompts:

  1. Spend at least an hour using a chatbot for learning. Reflect on your experience and discuss it with your peers. What worked well, and what didn’t? Do you plan to continue to use chatbots for learning?

  2. Generative AI is built using the creative output of journalists, writers, photographers, artists, and others — generally without consent, credit, or compensation. Discuss the ethics of this practice. How can those who want to change the system go about doing so? Can the market solve the problem, such as through licensing agreements between publishers and AI companies? What about copyright law — either interpreting existing law or by updating it? What other policy interventions might be helpful?

  3. Discuss the environmental impact of generative AI. What, if anything, is distinct about AI’s environmental impact compared to computing in general or other specific digital technologies with a large energy use such as cryptocurrency?

23rd July: Chapter 5 - Is Advanced AI an Existential Threat? (27 pages)#

Suggested discussion prompts:

  1. In AI safety policy, entrenched camps have developed, with vastly divergent views on the urgency and seriousness of catastrophic risks from AI. While research and debate are important, policymakers must make decisions in the absence of expert consensus. How should they go about this, taking into account differences in beliefs as well as values and stakeholders’ interests?

  2. Make predictions on the forecasting website Metaculus on a few AI- and AGI-related questions. Be sure to read the “resolution criteria” carefully. What data or information did you consider? What do you think of the community predictions? Discuss with your peers.

  3. As of 2024, there have been a few attempts to automate AI research. Read some of this work. What set of activities are researchers trying to automate? Assess how close they are to their goal. What are the implications of being able to automate AI research?

6th August: Chapter 6 - Why can’t AI fix social media? (48 pages)#

Suggested discussion prompts:

  1. What are the advantages and disadvantages of having a content moderation system with highly specified rules? What do you think about a content moderation system that gives more autonomy to content moderators?

  2. Of the many ills that have been blamed on recommendation algorithms, which ones could algorithmic choice conceivably combat? Which ones are structural and can’t be solved through the lens of individual empowerment?

  3. Try to form a high-level impression of how developers at Facebook think about the impacts of their platform and their responsibility (we suggest skimming the Facebook Papers). What motivates them? Which of the concerns listed above seem to be supported by these documents? What do you think about the quality of the ethical reasoning in the documents?

13th August: Chapter 7 - Why do myths about AI persist? (31 pages)#

Suggested discussion prompts:

  1. One difference between AI research and other kinds of research is that most AI research is purely computational, and doesn’t involve (for instance) experiments involving people or arduous measurements of physical systems. In what ways does this make it easier to have confidence in the claims of AI research? In what ways does it make it harder?

  2. What techniques do you personally use to stay grounded when you hear of seemingly amazing AI advances in the news? Discuss with your peers.

  3. What are some ways to improve accountability for companies making unsubstantiated claims? These could include legal remedies as well as non-legal approaches.

20th August: Chapter 8 - Where do we go from here? (27 pages)#

Suggested discussion prompts:

  1. The chapter makes the point that broken AI appeals to broken institutions. What are some examples of broken institutions enamored by other dubious technologies? Is there something about AI, as opposed to other technologies, that makes it liable to be misused this way?

  2. What impact do you think AI will have on your chosen or intended profession in the next five to ten years? What levers do we have to steer this impact in a way that is positive for society?

  3. Look up some examples of AI-related legislation or regulation recently enacted or being debated in your country. Discuss the pros and cons of specific actions and proposals, as well as the overall approach to AI policymaking.

Past Meetings#

You can see a record of what we have discussed previously here.

| Date | Discussion Material | Summary |
| --- | --- | --- |
| 30.04.2025, 1pm | UK announces AI funding for teachers: how this technology could change the profession | Read the write up |
| 16.04.2025, 1pm | Understanding and supporting the mental health and professional quality of life of academic mental health researchers: results from a cross-sectional survey | Read the write up |
| 02.04.2025, 1pm | The Political Economy of Death in the Age of Information: A Critical Approach to the Digital Afterlife Industry | Read the write up |
| 19.03.2025, 1pm | Alphafold: The Most Useful Thing AI Has Ever Done | Read the write up |
| 05.03.2025, 1pm | International Women’s Day Special | Read the write up |
| 22.02.2025, 1pm | OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us | Read the write up |
| 05.02.2025, 1pm | “It’s Not Exactly Meant to Be Realistic”: Student Perspectives on the Role of Ethics In Computing Group Projects | Read the write up |
| 22.01.2025, 1pm | Data Ethics Club: New Years Resolutions Special | Read the write up |
| 18.12.2024, 1pm | Ask Me Anything! How ChatGPT Got Hyped Into Being | Read the write up |
| 04.12.2024, 1pm | Visuals of AI in the Military Domain: Beyond ‘Killer Robots’ and towards Better Images? | Read the write up |
| 20.11.2024, 1pm | A giant biotechnology company might be about to go bust. What will happen to the millions of people’s DNA it holds? | Read the write up |
| 06.11.2024, 1pm | Data Ethics Club: Creating a collaborative space to discuss data ethics | Read the write up |
| 23.10.2024, 1pm | Transparent communication of evidence does not undermine public trust in evidence | Read the write up |
| 09.10.2024, 1pm | Time to reality check the promises of machine learning-powered precision medicine | Read the write up |
| 25.09.2024, 1pm | ChatGPT is Bullsh*t | Read the write up |
| Weekly in July/August 2024 | Data Feminism Book Club | Read the write up |
| 04.06.2024, 11am | How AI Could Save (Not Destroy) Education | Read the write up |
| 22.05.2024, 1pm | The Myers-Briggs Test Has Been Debunked Time and Again. Why Do Companies Still Use It? | Read the write up |
| 08.05.2024, 1pm | Amazon’s Just Walk Out technology relies on hundreds of workers in India watching you shop | Read the write up |
| 24.04.2024, 1pm | Artificial Intelligence Act: MEPs adopt landmark law | Read the write up |
| 27.03.2024, 1pm | Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence | Read the write up |
| 13.03.2024, 1pm | Values in Generative AI (discussing Gemini and DignifAI) | Read the write up |
| 28.02.2024, 1pm | Google Search Really Has Gotten Worse, Researchers Find | Read the write up |
| 14.02.2024, 1pm | Italian Supervisory Authority clamps down on Replika chatbot | Read the write up |
| 31.01.2024, 1pm | Duolingo cuts workers as it relies more on AI | Read the write up |
| 17.01.2024, 1pm | New Years Data Ethics Resolutions | Read the write up |
| 06.12.2023, 1pm | Anatomy of an AI-Powered Malicious Social Botnet | Read the write up |
| 22.11.2023, 1pm | Implementation of an ethics checklist at Seattle Children’s Hospital | Read the write up |
| 08.11.2023, 1pm | The Dimensions of Data Labor | Read the write up |
| 25.10.2023, 1pm | How influencer ‘mumpreneur’ bloggers and ‘everyday’ mums frame presenting their children online | Read the write up |
| 11.10.2023, 1pm | Privacy and Loyalty Card Data | Read the write up |
| 27.09.2023, 1pm | Cancelled in support of UCU strikes | N/A |
| 05.06.2023, 2pm | JGI Data Week Special! Find out more here. | Read the write up |
| 31.05.2023, 1pm | Classifying ‘toxic’ content online | Read the write up |
| 17.05.2023, 1pm | Designing Accountable Systems | Read the write up |
| 03.05.2023, 1pm | Queer in AI: A Case Study in Community-Led Participatory AI | Read the write up |
| 19.04.2023, 1pm | Social Biases in NLP Models as Barriers for Persons with Disabilities | Read the write up |
| 29.03.2023, 1pm | The Tech We Won’t Build | Read the write up |
| 08.03.2023, 3pm | Limits and Possibilities for “Ethical AI” in Open Source: A Study of Deepfakes, plus a related short talk from David Widder | Read the write up |
| 08.02.2023, 1pm | ChatGPT listed as author on research papers: many scientists disapprove | Read the write up |
| 25.01.2023, 1pm | Data Ethics New Years Resolutions discussion! | Read the write up |
| 14.12.2022, 1pm | Defective Altruism | Read the write up |
| 30.11.2022, 1pm | Cancelled in support of the UCU strikes | N/A |
| 16.11.2022, 1pm | The Ethics of AI Generated Art | Read the write up |
| 02.11.2022, 1pm | The data was there – so why did it take coronavirus to wake us up to racial health inequalities? | Read the write up |
| 19.10.2022, 1pm | Patient and public involvement to build trust in artificial intelligence: A framework, tools, and case studies | Read the write up |
| 05.10.2022, 1pm | The Failures of Algorithmic Fairness | Read the write up |
| 21.09.2022, 1pm | Hacking the cis-tem | Read the write up |
| 16.06.2022, 1pm | Data Week Special - We watched a video by Virginia Eubanks (author of Automating Inequality) | Read the write up |
| 01.06.2022, 1pm | Participatory data stewardship | Read the write up |
| 18.05.2022, 1pm | Why Data Is Never Raw | Read the write up |
| 04.05.2022, 1pm | Economies of Virtue: The Circulation of “Ethics” in Big Tech | Read the write up |
| 06.04.2022, 1pm | The Algorithmic Colonization of Africa | Read the write up |
| 23.03.2022, 1pm | The Tyranny of Structurelessness | Read the write up |
| 09.03.2022, 1pm | AI in Warfare | Read the write up |
| 23.02.2022, 1pm | N/A | Cancelled due to UCU strikes |
| 09.02.2022, 1pm | “You Social Scientists Love Mind Games”: Experimenting in the “divide” between data science and critical algorithm studies | Read the write up |
| 26.01.2022, 1pm | Which Programming Languages Use The Least Electricity? | Read the write up |
| 12.01.2022, 1pm | Data Ethics Club’s New Years Resolutions - read the meeting summary for an overview | Read the write up |
| 15.12.2021, 1pm | The Reith Lectures: Onora O’Neill - A Question of Trust | Read the write up |
| 01.12.2021, 1pm | Cancelled: UCU strike | No meeting, but feel free to have a read about the strikes here |
| 17.11.2021, 1pm | Statistics, Eugenics, and Me | Read the write up |
| 03.11.2021, 1pm | UK’s National AI Strategy - Pillar 3: Governing AI Effectively | Read the write up |
| 20.10.2021, 1pm | Towards decolonising computational sciences | Read the write up |
| 06.10.2021, 1pm | Structural Injustice and Individual Responsibility | Read the write up |
| 22.08 | No meeting | N/A |
| 08.09.2021, 1pm | ESR: Ethics and Society Review of Artificial Intelligence Research | Read the write up |
| 25.08.2021, 1pm | Participant’s Perceptions of Twitter Research Ethics | Read the write up |
| 11.08.2021, 1pm | What an ancient lake in Nevada reveals about the future of tech | Read the write up |
| 28.07.2021, 1pm | The Rise of Private Spies | Read the write up |
| 14.07.2021, 1pm | Numberphile: The Mathematics of Crime and Terrorism | Read the write up |
| 17.06.2021 | Inclusive and Ethical Data Science Seminar: Responsible Data and AI by Anjali Mazumder, Intro to The Turing Way by Malvika Sharan, and FAT Forensics ToolBox by Alex Hepburn | N/A |
| 16.06.2021 | Screening of Coded Bias | N/A |
| 26.05.2021, 1pm | ‘Living in the Hidden Realms of AI: The Workers Perspective’ | Read the write up |
| 12.05.2021, 1pm | Critical Perspectives on Computer Vision | Read the write up |
| 28.04.2021, 1pm | We created poverty. Algorithms won’t make that go away | Read the write up |
| 14.04.2021, 1pm | Identifying gaps, opportunities and priorities in the applied data ethics guidance landscape | Read the write up |
| 31.03.2021, 1pm | Dataism is Our New God | Read the write up |
| 17.03.2021, 1pm | #bropenscience is broken science | Read the write up |
| 03.03.2021, 1pm | Algorithmic injustice: a relational ethics approach (Birhane, 2021) | Nina’s Twitter Summary |
| 17.02.2021, 1pm | On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? | Natalie’s Twitter Summary |
| 03.02.2021, 1pm | Ethics can’t be a side hustle | Nina’s Twitter Summary |
| 20.01.2021, 1pm | Executive Summary of the Review into bias in algorithmic decision making | Brief summary on the meeting document |