What's this?
This is a summary of Wednesday 6th April's Data Ethics Club discussion, where we spoke and wrote about the Real Life Mag article The Algorithmic Colonization of Africa by Abeba Birhane.
The summary was written by Huw Day, who tried to synthesise everyone's contributions to this document and the discussion. "We" = "someone at Data Ethics Club".
Nina Di Cara and Natalie Thurlby helped with the final edit.
Discussion
Introduction
This week at Data Ethics Club we read The Algorithmic Colonization of Africa, an article written by Abeba Birhane. The article is set against the backdrop of the second annual Conference on Technology, Innovation, and Society (CyFyAfrica2019), which aimed "to bring forth the continent's voices in the global discourse."
The author discusses the uphill nature of this endeavour and how western actors arriving with their own tech solutions can do more harm than good. We discussed ideas of utilitarianism, white saviourism and what we (as westerners ourselves) should be doing to learn more effectively from other parts of the world.
The author states that tech solutions are made on the basis of utilitarianism - "the greatest happiness for the greatest number of people" - which means that solutions that centre minorities are never sought. Do you agree, and what are the alternatives?
We do agree that this happens. You could imagine optimising proportionally by subgroup, but you would then need lots of demographic data, which is rarely available, and minority groups are less likely to trust your reasons for collecting such information. The Immortal Life of Henrietta Lacks by Rebecca Skloot provides an interesting insight into why this lack of trust is reasonable, and comes highly recommended by a few of our members.
However, some of us who were better trained in philosophy asked whether this really is utilitarianism: women, for example, have been oppressed even though they make up half of the population, which a genuinely utilitarian calculus could not justify.
Most of the time, either your boss tells you to do something or the motivator is a gap in the market. Often people are not solving a moral problem, but are more capital-focussed. Even if we were trying to optimise for happiness, that's so hard to measure. It's not practical, even if it is a nice idea.
You could strive for a minimum standard of outcome for everyone (everyone can see over the fence). But for some things it can be quite hard to imagine how that could work: e.g. does an automatic hand dryer have to work for literally everybody?
Different groups also have competing needs: for example, a flashing fire alarm light which accommodates deaf people might be sensory overload for autistic people. These accommodations aren't always complementary.
It can be difficult to design things just for minority groups and still have enough interest for organisations to sustain themselves. But it's not just about majority/minority - it's also about power and intersectionality. So utilitarianism is not necessarily what we are doing: often it is a powerful minority that benefits, not the overall global majority.
By not consulting with people in the minority, everyone loses out. The privileged majority miss out on the solutions that come from increased accessibility too.
One counterexample to a lot of this scepticism might be fintech in Africa (e.g. M-Pesa), which provides innovative payment solutions. How can people exchange money without literally handing cash to each other if they don't have a bank account or smartphone? Money gets attached to SIM cards. There was lots of pessimism (some of which was warranted), but there were lots of positives too! People started getting salaries. "Why would you need that?" was asked by lots of people who don't understand the real problems on the ground.
How does White Saviourism show up in the pursuit of tech solutions for social good?
One of our members did a dissertation which involved reading Google's AI ethics blog posts. One thing that came up (even before ideas of responsible AI) was a trend of "AI for Good", which in some ways has White Saviourism built in. The general approach was "if the country isn't in the West, let's go there and solve the problem with AI."
One of the problems is a desire to shoehorn AI (or, more generally, tech) into solutions. There are lots of problems where we need more money, more people to help and so on, and instead people in the tech sector say "AI will fix it" (e.g. predictive policing, hiring and firing algorithms, or worker chat apps designed to suppress discussions of workers' rights).
This made some of us think of events like Red Nose Day, where often people (usually in the West) don't really want to solve problems; they just see it as a PR opportunity. It is the only time the poorest people in the world are represented, and how they are represented is chosen not by themselves but by the charities who want donations.
One academia-specific problem is when money is ring-fenced for international aid/charity in general. We don't really understand how companies would go about doing the same thing, or why they would want to unless it is to extract data. Is it basically just a tax write-off for them? If they were genuinely trying to do good, would it not be better to optimise for impact rather than focussing on using technology?
What Western ideals influence the way we build technology in the Global North? What should we be doing to learn more effectively from other parts of the world?
We are all so busy trying to solve problems that we are not asking whether they are the right problems to solve. We want money and we want to look philanthropic. We also want to look cool solving problems (Elon Musk and his submarine come to mind).
Under what circumstances should we do business in Africa, or abroad in general? Providing goods and services? There are models of partnership with governments/community groups, and then there are the purely extractive neo-colonial models, which are too prevalent. There's a fine line there, because if there's a corrupt government/group you're working with, how do we know the money we're putting in is going to the right place? An example a little closer to home is UK PPE contracts, where there was "apparent systemic bias" in the award of lucrative PPE deals favouring firms connected to Tories - so this issue is prevalent everywhere.
What would we have liked to see more of in this piece?
Who gets to decide where is too corrupt? There were not enough case studies in this article. We would like to have known more about what kinds of models work and what models don't. This article felt like it took every common strand of data ethics issues and applied them in the African context. Whilst it was a good introduction if you don't know about data ethics, it was also quite broad and not very focussed. We were missing the information we needed to form a full opinion on all this.
We would have liked more detail about the nuanced dynamics within Africa: who wants what, and what the points of contention are between people about values and outcomes. Why are people willing to exploit fellow people? Is it perhaps seen as being beneficial for the local economy? Birhane alluded to the idea that more African values could be brought in - but we would like to know more about what those values are and to learn about them.