Data Ethics Club meeting 27-03-24, 1pm UK time#
Meeting info#
Quick links#
Link to content: Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence
Meeting Description#
You’re welcome to join us for our next Data Ethics Club meeting on 27th March 2024 at 1pm UK time. You don’t need to register, just pop in!
This time we’re going to read Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence by Shakir Mohamed, Marie-Therese Png & William Isaac, which is an academic paper. We will be focussing on section 4 of the paper, but feel free to read the entire paper. Thank you to Jessica Woodgate for suggesting this week’s content, summarising it and coming up with the discussion questions. See Jessica’s summary below!
Reading Summary#
The paper explores how colonial and decolonial theories can be applied to critique AI and to identify ways of aligning research with ethical principles that centre vulnerable people. First, the paper examines why it is important to centre vulnerable people and power relations by looking at the role of values. Values shape scientific knowledge and technology, playing an important role in knowledge generation. We should examine the values and norms we aim to uphold when researching and deploying AI, to help mitigate harms and failures. Values thus intersect with power: whose values are represented, and which structural inequalities produce an unequal spread of benefits and risks across society. Historical hindsight and critical science (such as decolonial theory) are used to explain the underlying cultural assumptions and patterns of power which shape our intellectual, political, economic, and social world.
Second, the paper examines colonial and decolonial theory. Decolonisation refers to the restoration of land and life following historical colonial periods. Characteristics of historical colonialism include territorial appropriation, exploitation of the natural environment and of human labour, and control of social structures. Coloniality refers to colonial characteristics identified in present-day activities, representing a continuation of power dynamics between the advantaged and the disadvantaged. Decolonisation thus has two roles: territorial decolonisation (the dissolution of colonial relations) and structural decolonisation (the undoing of colonial mechanisms of power, economics, culture and thinking).
Third, colonial theory is applied to AI, viewing digital spaces as digital territories with the propensity to become sites of extraction and exploitation. Data/algorithmic colonialism can be seen in the context of algorithms which impact the allocation of resources, human socio-cultural and political behaviour, and discriminatory systems. Sites of coloniality are cases exhibiting structural inequalities that can be historically contextualised as colonial continuities. These sites help identify where empirical observation departs from current theoretical frameworks of power in AI, which are mostly ahistorical. Within sites of coloniality, colonial behaviour includes algorithmic oppression, algorithmic exploitation, and algorithmic dispossession. Algorithmic oppression involves the unjust subordination of one group and the privileging of another, for example through algorithms trained on data which entrenches social biases. Algorithmic exploitation describes the ways that institutional and industry actors take advantage of people through unfair or unethical means for their own asymmetric benefit. This can be seen in ghost work, where workers remotely generate and process data, often in previously colonised countries. It can also be seen in beta-testing, where early versions of software systems are tested and fine-tuned on vulnerable and marginalised populations. Algorithmic dispossession encompasses how certain regulatory policies result in the centralisation of power, assets, or rights in the hands of a minority, taken from a disempowered majority. An example of this is the projection of AI governance guidelines from countries providing software onto other countries that become economically dependent on that software.

Fourth, the previous sections are linked to identify tactics for decolonial AI. Tactics are contingent and collaborative constructions of other narratives, rather than a conclusive solution or method. These tactics are: supporting critical technical practice; establishing reciprocal engagements and reverse pedagogies; and renewing affective and political communities.
Critical technical practice emphasises the continuous generation of provocative questions and assessments of the politically situated nature of AI. This encourages context-aware technical development that considers how to appropriately reflect the values and needs of relevant stakeholders and impacted groups.
Reciprocal engagements and reverse tutelage centre the role of colonised peoples in changing the views of those with power. This can be aided by actively identifying those at the centre (of power) and those on the periphery in order to establish meaningful dialogue, including the explicit documentation of assumed knowledge in datasets and AI systems, and by developing meaningful community-engaged research.
Renewing affective and political communities involves moving critical practices around AI from an attitude of benevolence and paternalism towards one of solidarity. Strengthening political communities in this way will enable them to shape AI, and to reform systems of hierarchy in order to decolonise power. One tactic for this would be embedding tools of decolonial thought in AI design and research. Another would be supporting grassroots organisations in their ability to create new forms of affective community, elevate intercultural dialogue, demonstrate solidarity, and build alternative communities.
These topics all raise fundamental questions of what it is to be human: how we relate to and live with each other, how we navigate differences and transcultural ethics, how we reposition the roles of culture and power at work in daily life, and how the answers to these questions are reflected in the AI systems we build.
Discussion points#
There will be time to talk about whatever we like relating to the paper, but here are some specific questions to think about while you’re reading.
Can we see colonial patterns reflected in the data spaces that we interact with?
What are the strengths and weaknesses of assessing AI through the lens of colonial/decolonial theory?
Do we think the proposed tactics are effective? Could we implement them in our own lives?
Bonus question: what change would you like to see on the basis of this piece? Who has the power to make that change?