Dataism Is Our New God
What's this?
This is a summary of the Wednesday 31st March Data Ethics Club discussion, where we spoke and wrote about the piece Dataism Is Our New God, an interview with Yuval Noah Harari.
The summary was written by Nina Di Cara, who tried to synthesise everyone's contributions to this document and the discussion. "We" = "someone at Data Ethics Club".
What is dataism, really?
Harari's dataism is based on trusting the judgement of algorithms in the same way that we might trust the decisions of gods. The volume of data being amassed in our digital world is giving humans power that we have never had before, in medical discovery, personalisation, surveillance and more. Harari argues that this data gives humans god-like powers, making algorithms the new higher authority in decision-making. Unlike other data ethics pieces, Harari isn't interrogating how data science fits into existing power structures, but framing data science as a separate system of authority.
In our group discussions we could certainly see some of the similarities between dataism and religion, and tended to chalk these up to themes of abdicating responsibility for outcomes, and a lack of control over the decisions made about us by systems we cannot see or understand. In the same way that people have historically attributed events they didn't understand to gods, the same language is now being transferred to algorithms: we can be "blessed" by them.
That said, we weren't wholly convinced that the "belief" went any further than this. Flatly comparing dataism to religion is abrasive: to many, religion is much more than an authority system. Arguably, the religion organised around an authority has its own societal power, beyond that of the "source" of the authority itself (thanks, Terry Pratchett). We felt the data-as-religion metaphor was missing a proviso like this.
Do we have the server space?
Even while recognising that there is great belief in the potential of data, we were skeptical about the ability of humans to recreate god-like power through algorithms. General intelligence is not exactly around the corner, and while there are useful things you can do with AI, they're not god-like. If Apple and Google know more about us than we know about ourselves, as Harari alluded to, then their targeted advertising doesn't show it! Besides, even if we had the data, do we really have enough server space to make a god?
We wondered whether this piece is intentionally hyperbolic about the abilities of AI, in order to scare people into considering AI's potential negative impacts. Or perhaps it doesn't matter whether AI has these abilities; much as religion is built on belief, just believing that algorithms can make decisions is enough for us to give them the authority to do so. The drive for a quantified self is strong: so many of us want to know how many steps we've done or monitor our heart rate, and we want to believe that the information we get is true.
Given our skepticism, the all-powerful algorithms that Harari describes felt more like the Wizard of Oz: a fallible man behind a curtain. As with stochastic parrots, AI seems impressive until you understand the nuts and bolts. But we are probably an audience who have the background knowledge to understand what is behind the curtain - not everyone does.
Knowing enough to know better
Since those attending Data Ethics Club tend to have some understanding of data, one theme that came up was data literacy. Those of us in the room were probably more likely than the general public to know how poor algorithmic decisions can be. We identified education around data and AI as a way to avoid being overly accepting of algorithmic decisions.
But what does good data education look like? It might mean better general information literacy, or a good understanding of your rights. For instance, under European law (the GDPR) you're entitled to some sort of explanation of automated decisions made about you - how many people know about this and use it to their advantage? This isn't common knowledge - it was news to some of us.
As well as education for the general public, we need good ethics education for the people building these systems. We could make use of codes of professional conduct for this kind of thing (e.g. the BCS has one), if we assume corporations aren't going to work to the standards we would hope for.
Rise of the curly fry haters
Harari's vision of personalised oppression was also a topic of discussion. We weren't sure about this, given that algorithms would be making decisions from underlying data that potentially describes broader groups anyway. That is, unless new sub-groups develop from the data that were never conceived by humans, and become the new basis for algorithmic discrimination. For example, what if liking curly fries turned out to be associated with high income? The curly fry haters are unlikely to rise up as a group, and it would be hard to pinpoint the reason for the discrimination in order to organise around it.
It might be possible for the algorithmically oppressed to become their own category for activism. However, this depends entirely on how transparent the algorithms are. If we don't know that we are being unfairly treated, or why, then how can we fight it?
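As a toy illustration of how a model can latch onto an incidental trait like this (a hedged sketch, not anything from the piece: the data, feature names and correlation below are all invented), the snippet trains a classifier that only ever sees "likes curly fries", yet learns to treat it as evidence of high income because the two happen to co-vary in the sample:

```python
# Hypothetical sketch of proxy discrimination: all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# An unobserved trait that genuinely drives the outcome.
latent_affluence = rng.normal(size=n)

# "Likes curly fries" has no causal link to income, but happens to
# correlate with the latent trait in this (invented) sample.
likes_curly_fries = (latent_affluence + rng.normal(scale=2.0, size=n) > 0).astype(float)

# The outcome the model predicts: "high income".
high_income = (latent_affluence + rng.normal(scale=0.5, size=n) > 0).astype(int)

# The model never sees the latent trait - only the proxy feature.
X = likes_curly_fries.reshape(-1, 1)
model = LogisticRegression().fit(X, high_income)

# A clearly positive coefficient: curly-fry preference now counts as
# evidence of income - a "group" that no human ever designed.
print("coefficient on likes_curly_fries:", model.coef_[0, 0])
```

Anyone scored down by a model like this has no obvious banner to organise under, which is exactly the transparency problem described above.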
Our science fiction future
Sci-fi recommendations and discussion featured heavily this week as we contemplated our possible dystopian AI futures! That said, Harari's outlook didn't seem to evoke fear about algorithms; it was framed more as an inevitable outcome, which makes a nice change from the usual panics that go around as new technologies are introduced.
Whilst we might not live out a sci-fi novel, we agreed that the narratives around AI created by literature and the media are hugely important in framing how people think about it. When AI is framed as infallible or even magical, it's no wonder it seems attractive!
Overall, we mostly took dataism with a pinch of skepticism ourselves, but it did make excellent food for thought.
Voting
91% (10/11 voters) felt that the content sparked interesting discussion.
55% (6/11 voters) would recommend the content itself to others.
Further recommendations based on this piece
This week's discussion prompted lots of recommended follow-on content!
Some papers:
Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law, Wachter, Mittelstadt, and Russell, 2021.
The Sisyphean Cycle of Technology Panics, Orben, 2020.
"Blessed by the algorithm": Theistic conceptions of artificial intelligence in online discourse, Singler, 2020.
And some books:
Any of Harari's other books!
Mindf*ck - Christopher Wylie
Weapons of Math Destruction - Cathy O'Neil (+2)
Brave New World - Aldous Huxley
Brave New World Revisited - Aldous Huxley
Your Computer is on Fire, particularly "Your AI is a Human" - Sarah Roberts
Small Gods - Terry Pratchett
Attendees
Note: this is not a full list of attendees, only those who felt comfortable sharing their names.
Natalie Thurlby, Data Scientist, University of Bristol, NatalieThurlby, @StatalieT, :sun_with_face:
Nina Di Cara, PhD-ing, University of Bristol
Huw Day, Maths PhDoer, University of Bristol
Tessa Darbyshire, Scientific Editor, Patterns @TessaDarbyshire tdarbyshire@cell.com
Vanessa Hanschke, PhD Interactive AI, University of Bristol
Matthew West, RSE, University of Exeter
Emma Tonkin, Digital Health, University of Bristol
Paul Lee, investment world
Ruth Drysdale, Jisc
Zoë Turner, Senior Information Analyst, Nottinghamshire Healthcare NHS Foundation Trust
Robin Dasler, data-related software product manager
Emma Kuwertz, Data Scientist, University of Bristol
Kamilla âMilliâ Wells, Citizen Developer, Australia :birthday: