Join In
Join our mailing list to get meeting reminders
Everyone is welcome to get involved in Data Ethics Club, as much or as little as you'd like to! We would love to hear your point of view at our discussion groups, to have your support in organising or running a meeting, or to add your contributions to our reading list.
You don't need to be a data ethicist (we're not!), or a data scientist - having a variety of different people is how we learn from each other. It's a friendly and welcoming group and we often have new people drop by, so why not try it?
In term time, we meet every other week for one hour on Teams (Wednesdays, 1pm, UK time) to talk about something from the reading list. Our upcoming meeting dates are available below. If you would like to get email reminders about the content and dates for the next meeting then click above to join our mailing list! You can also join the DEC Slack by clicking here.
Please read our Code of Conduct before attending.
Upcoming meetings
These are the meetings for the next academic term.
We will update the material and questions based on the previous weeks' vote.
Term time meetings are held at 1pm UK time and last one hour. Summer bookclub meetings are held at different times, see below! If you are in another timezone please use a time/date converter like this one to check your local time!
You can see the write-ups of previous meetings here!
Summer Book Club
For our summer 2025 bookclub, we will be reading AI Snake Oil: What artificial intelligence can do, what it can't and how to tell the difference by Arvind Narayanan and Sayash Kapoor.
On their website you can read the first chapter online for free, see what each chapter is about, and browse the authors' suggested exercises and discussion prompts to get an idea of the kinds of conversation we might be having.
There are 8 chapters and the exact schedule is below. We'll be meeting 4-5pm UK time on Microsoft Teams, meeting details here:
Meeting ID: 366 067 928 085 8
Passcode: 9hw7zS3e
If you'd like to be forwarded calendar invites to the session then you can let us know via email or by commenting on the Slack thread.
The discussion prompts below are taken from the authors' suggested exercises and discussion prompts, but they are rough guidelines for discussion, not the only things you can talk about!
So far we have read:
Chapter 1 - Introduction (34 pages)
Chapter 2 - How predictive AI goes wrong (24 pages)
Chapter 3 - Why can't AI predict the future? (39 pages)
Chapter 4 - The Long Road to Generative AI (51 pages)
Next meetings:
23rd July: Chapter 5 - Is Advanced AI an Existential Threat? (27 pages)
Suggested discussion prompts:
The authors defined AGI as "AI that can perform most or all economically relevant tasks as effectively as any human". They noted this was a pragmatic and less philosophical definition. What do you think of this choice? What are the consequences of this definition?
In AI safety policy, entrenched camps have developed, with vastly divergent views on the urgency and seriousness of catastrophic risks from AI. While research and debate are important, policymakers must make decisions in the absence of expert consensus. How should they go about this, taking into account differences in beliefs as well as values and stakeholdersâ interests?
How much of the risk do you think lies with the users of AI versus the AI itself? How does this inform how we should address specific issues related to AI, such as deepfakes, cybersecurity or the ability to synthesise new bioweapons?
13th August: Chapter 7 - Why do myths about AI persist? (31 pages)
Suggested discussion prompts:
One difference between AI research and other kinds of research is that most AI research is purely computational, and doesn't involve (for instance) experiments involving people or arduous measurements of physical systems. In what ways does this make it easier to have confidence in the claims of AI research? In what ways does it make it harder?
What techniques do you personally use to stay grounded when you hear of seemingly amazing AI advances in the news? Discuss with your peers.
What are some ways to improve accountability for companies making unsubstantiated claims? These could include legal remedies as well as non-legal approaches.
20th August: Chapter 8 - Where do we go from here? (27 pages)
Suggested discussion prompts:
The chapter makes the point that broken AI appeals to broken institutions. What are some examples of broken institutions enamored by other dubious technologies? Is there something about AI, as opposed to other technologies, that makes it liable to be misused this way?
What impact do you think AI will have on your chosen or intended profession in the next five to ten years? What levers do we have to steer this impact in a way that is positive for society?
Look up some examples of AI-related legislation or regulation recently enacted or being debated in your country. Discuss the pros and cons of specific actions and proposals, as well as the overall approach to AI policymaking.
Past Meetings
You can see a record of what we have discussed previously here.