# Decolonial AI
## What's this?
This is a summary of Wednesday 27th March's Data Ethics Club discussion, where we spoke and wrote about the paper Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence by Shakir Mohamed, Marie-Therese Png and William Isaac. The summary was written by Jessica Woodgate, who tried to synthesise everyone's contributions to this document and the discussion. "We" = "someone at Data Ethics Club". Vanessa Hanschke, Huw Day, Amy Joint, Nina Di Cara and Natalie Thurlby helped with the final edit.
## Article Summary (for a longer summary, click here)
The paper examines how colonial and decolonial theory can be applied to critique AI, and identifies tactics for decolonial AI. First, the role of values is examined to establish why it is important to centre vulnerable people and power relations. Second, colonial and decolonial theory are defined. Coloniality refers to the persistence of colonial characteristics (appropriation, exploitation, control of social structures) in present-day activities. Decolonisation takes two forms: territorial decolonisation (the dissolution of colonial relations) and structural decolonisation (the undoing of colonial mechanisms of power, economics, culture, and thinking). Third, colonial theory is applied to AI to identify sites of coloniality: algorithmic oppression, algorithmic exploitation, and algorithmic dispossession. Algorithmic oppression is the unjust subordination of one group and privileging of another. Algorithmic exploitation denotes the ways that AI actors take advantage of people by unfair or unethical means. Algorithmic dispossession examines how certain regulatory policies centralise power, assets, or rights in the hands of a minority. Fourth, these ideas are tied together to identify tactics for decolonial AI. Tactics offer contingent and collaborative constructions of other narratives, rather than a conclusive solution or method.
The tactics proposed are:
- Supporting critical technical practice - emphasising asking provocative questions and assessing the political nature of AI, encouraging context-aware technical development
- Establishing reciprocal engagement and reverse pedagogies - creating meaningful dialogue between the empowered and disempowered, explicit documentation of assumed knowledge, and community-engaged research
- The renewal of affective and political community - moving critical practices from an attitude of benevolence and paternalism towards solidarity, strengthening communities to shape AI and reform systems of hierarchy to decolonise power
## Discussion
### Can we see colonial patterns reflected in the data spaces that we interact with?
We recognise colonial patterns in various data spaces representing sites of algorithmic exploitation, dispossession, and oppression. Algorithmic exploitation, which considers how actors utilise tools to take advantage of marginalised people, can be seen in platforms like Mechanical Turk. By providing low-paid remote work, these platforms facilitate outsourcing tasks to poorer regions and fuel the economy of "ghost workers" crowdsourced over the internet. As the paper highlights, ghost work can facilitate economic development and workplace flexibility, but it also represents a form of knowledge extraction with little consideration for workers' rights.
Algorithmic dispossession involves implementing technologies as solutions for complex developmental scenarios whilst failing to adequately consider the views of the communities affected by them. This can be seen in the growth of mobile payments in Africa, which rests on colonial legacies and recasts colonial relations through practices such as unsecured short-term credit products. Algorithmic dispossession is also reflected in social media data spaces through imbalances in their user bases and thereby in the voices being amplified. For example, the user base of the decentralised platform Mastodon is heavily skewed towards the US.
Algorithmic oppression encompasses the unjust subordination of one group and privileging of another through complex social restrictions. We see potential for algorithmic oppression in Australian hiring practices, which always include questions about which minority group the applicant is (willing to identify as being) a member of. The intention of this is to "close the gap" through positive discrimination, but we wondered whether it is effective in a wider context. In deployment, it may be the case that those employing these practices do not use the information in the way it is intended (i.e. they use it to negatively discriminate against people).
Colonial patterns also persist in "western"-centric scientific research through biases towards white-dominated training data. Classic examples of this are soap dispensers that don't recognise darker skin tones, training data for skin cancer detection lacking racial diversity, and racist facial recognition. These examples highlight how systems developed in majority-white countries fail to properly account for the fact that not everyone is white, and that more work is needed to improve inclusivity.
Coloniality in science is embedded in the way we use language. Because of historical colonialism, English is the most common second language and the most common language used in science. The gulf between English-language science and science in other languages elevates the work of scientists from English-speaking countries above that of other nations and overlooks important work. Poor grammar and spelling are often taken as signs of bad science, which can lead to the rejection of valuable work from researchers who primarily speak other languages. Despite the feasibility of translating programming languages, most coding is done in English, which excludes many people from the benefits of the skill. We were recently confronted with technology's language bias when working with collaborators from Nairobi, who highlighted that there are no LLMs in Swahili. This bias also seeps into AI translation, which demonstrates a preference for certain languages and under-represents the needs of marginalised communities. When used in high-stakes scenarios, translation errors have severe consequences, such as the rejection of refugee applications.
The centralisation of power that arises from biased research and advances in AI is conceptualised in the paper through "metropoles" and their peripheries. Metropoles are centres of power; their peripheries "hold relatively less power and contest the metropole's authority, participation and legitimacy in shaping everyday life". We thought this was a good way to articulate global power dynamics. We see epistemological gaps between metropoles and peripheries; those of us who live in metropoles lack a deep understanding of the rest of the world. We found it concerning that many colonial patterns aren't obvious, as they are deeply embedded in our practices and everyday lives. In the examples we discussed, we have probably seen the most "obvious" patterns, but miss others due to our own inherent biases.
### What are the strengths and weaknesses of assessing AI through the lens of colonial/decolonial theory?
Utilising decolonial theory relies on the ability to self-reflect through a wide lens. This may not always be effective, as individuals often have limited time and resources, and marginal sway over outcomes. In addition, decolonial theory is an expansive topic, which can make it challenging to narrow the focus to specific applications.
On the other hand, using a wider discourse can be a strength rather than a weakness, as weaving through multiple topics deepens understanding and opens up various opportunities for reflection. To properly evaluate AI, it is important to assess tools through a lens which looks at more issues than just precision and recall, and asks wider questions. Evaluation of tools should be driven by the people to whom the tools are applied. Just as interdisciplinarity needs productive discussion from both disciplines, fruitful evaluation of AI is fostered by listening to the voices of stakeholders from different contexts.
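The point about looking beyond aggregate precision and recall can be made concrete. The sketch below is our own illustration (not from the paper; the `recall_by_group` helper, group labels, and toy data are all invented): a classifier can post a respectable overall recall while failing almost entirely for a minority subgroup, which only disaggregated evaluation reveals.

```python
# A minimal sketch, assuming binary labels and a group tag per example:
# aggregate metrics can hide large disparities between subgroups.
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    """Compute recall separately for each subgroup."""
    hits, positives = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            positives[g] += 1
            if p == 1:
                hits[g] += 1
    return {g: hits[g] / positives[g] for g in positives}

# Toy data: the model recovers every positive for group "a", none for "b".
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["a", "a", "a", "b", "b", "b", "a", "b"]

print(recall_by_group(y_true, y_pred, groups))  # {'a': 1.0, 'b': 0.0}
```

Here the overall recall is 0.5, a number that looks merely mediocre while concealing that the model fails completely for group "b".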
Foregrounding unethical practices via a decolonial lens can better support the needs of those affected and enable more direct routes to activism by exposing and explaining real-world examples. Highlighting specific legacies of colonial structures helps connect the dots, explaining the wider context and unveiling how old systems reverberate today. This deepens awareness of the mechanics of the structures we are situated in, lending more force to arguments for enacting decolonisation. For example, we can use decolonial theory to counter industry actors' paternalistic attitudes, such as the justification that governments shouldn't regulate algorithms if they can't understand them. The resistance of powerful actors to external interrogation restricts the range of voices empowered to impose normative values in AI, demonstrating a metropole-periphery dynamic. This exemplifies how "AI can obscure asymmetrical power relations in ways that make it difficult for advocates and concerned developers to meaningfully address during development", as described in the paper. Transparency (e.g. CTOs answering questions about data usage) is essential to prevent companies from repeating the same patterns.
### Do we think the proposed tactics are effective? Could we implement them in our own lives?
Some of the language used to explain the tactics is quite dense, and we wondered how feasible it is to do the tactics justice in practice. In academia, we can attempt to make time to engage with and employ the ideas presented in the paper. Realistically, however, spending too much time on doing things right results in falling behind unscrupulous actors in the publish-or-perish race.
As long as "number go up" is the main driving force, it will be hard to effect real change. When this is the motivation leading an organisation, less attention is paid to providing resources for workers to engage in critical analysis beyond their primary role. Even product managers are often in charge of just one thing, sometimes literally one button. Pragmatically, some top-down regulation is needed to enforce changes. However, we feel we could be stuck reacting to the exacerbation of problems which have been around since long before the acceleration of AI.
Despite these issues, we do think that the tactics are effective to an extent, and we would like to implement them in our lives. Currently, efforts to counter colonial patterns in our workspaces seem superficial. We are reminded of statements of inclusivity supporting indigenous populations, yet we aren't granted the opportunity to challenge the lingering colonial structures which restrict access to resources. The proposed tactics could provide a framework to help us reassess these structures.
### What change would you like to see on the basis of this piece? Who has the power to make that change?
Is federated learning a solution for redistributing power away from the metropole?
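For readers unfamiliar with the idea, here is a minimal sketch of federated averaging, our own illustration rather than anything proposed in the paper (the one-parameter model, learning rate, and toy client datasets are invented): each client trains on data that never leaves its device, and only model parameters travel to the centre to be averaged.

```python
# A minimal sketch of federated averaging (FedAvg) with a toy one-parameter
# linear model y = w * x. Invented for illustration; real deployments use
# frameworks such as TensorFlow Federated or Flower.
def local_update(w, data, lr=0.1):
    """One gradient-descent step on a client's private data (squared error)."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """Clients train locally on data that never leaves them; the server
    only sees and averages the resulting model parameters."""
    local_ws = [local_update(global_w, data) for data in client_datasets]
    return sum(local_ws) / len(local_ws)

# Three clients, each holding private (x, y) pairs drawn near y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.1)], [(1.0, 1.9)], [(3.0, 6.2), (0.5, 1.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # ~2.04: the shared slope, learned without pooling data
```

Whether this genuinely redistributes power is contestable: the raw data stays local, but the model architecture, the aggregation server, and the training schedule are still controlled by the centre.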
## Attendees
Huw Day, Data Scientist, Jean Golding Institute, @disco_huw
Euan Bennet, Lecturer, University of Glasgow, @DrEuanBennet
Paul Matthews, Lecturer, UWE Bristol, @paulusm@scholar.social
Chris Jones, Data Scientist at Machine Learning Programs
Kamilla Wells, Citizen Developer, Australian Public Service, Brisbane
Michelle Wan, PhD Student, University of Cambridge, @MichelleWLWan