The Algorithmic Colonization of Africa#

What’s this?#

This is a summary of Wednesday 6th April’s Data Ethics Club discussion, where we spoke and wrote about the Real Life Mag article The Algorithmic Colonization of Africa by Abeba Birhane. The summary was written by Huw Day, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara and Natalie Thurlby helped with the final edit.

Discussion#

Introduction#

A meme with a stressed man (representing tech companies) trying to choose between pressing one of two buttons: "Extracted data and exploit profits" and "Big white saviour vibes". Meme made by Euan Bennet.

This week at Data Ethics Club we read The Algorithmic Colonization of Africa, an article written by Abeba Birhane. The article is set against the backdrop of the second annual Conference on Technology, Innovation, and Society (CyFyAfrica2019), which aimed “to bring forth the continent’s voices in the global discourse.”

The author discusses the uphill nature of this endeavour and how Western actors coming in with their own tech solutions often do more harm than good. We discussed ideas of utilitarianism and white saviourism, and what we (as Westerners ourselves) should be doing to learn more effectively from other parts of the world.

The author states that tech solutions are made on the basis of utilitarianism - “the greatest happiness for the greatest number of people,” - which means that solutions that center minorities are never sought. Do you agree, and what are the alternatives?#

We agreed that this does happen. You could imagine optimising proportionally by subgroup, but you would then need a lot of demographic data, which is harder to come by, and minority groups are less likely to trust the reasons for collecting such information. The Immortal Life of Henrietta Lacks by Rebecca Skloot provides an interesting insight into why this lack of trust is reasonable, and comes highly recommended by a few of our members.

However, some of us who were better trained in philosophy asked whether this really is utilitarianism: women, for example, have been oppressed even though they make up half of the population, so marginalisation is not simply a matter of optimising for the numerical majority.

Most of the time the motivation is either that your boss tells you to do something or that there is a gap in the market. Often people are not solving a moral problem but are more capital-focussed. Even if we were trying to optimise for happiness, happiness is very hard to measure; it is not practical even if it is a nice idea.

You could strive for a minimum standard of outcome for everyone (people can all see over the fence). But for some things it can be quite hard to imagine how that would work: does an automatic hand dryer have to work for literally everybody?
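To make the contrast concrete, here is a minimal, purely illustrative Python sketch (the designs, groups, and scores are all hypothetical) comparing a utilitarian objective, which maximises total welfare, with a maximin objective, which maximises the welfare of the worst-off group:

```python
# Hypothetical welfare scores for three candidate designs across four
# user groups. All numbers are invented purely for illustration.
designs = {
    "design_a": [9, 9, 9, 1],  # great for most groups, fails one
    "design_b": [6, 6, 6, 6],  # adequate for everyone
    "design_c": [8, 8, 5, 4],
}

# Utilitarian objective: pick the design with the greatest total welfare.
utilitarian_pick = max(designs, key=lambda d: sum(designs[d]))

# Maximin objective ("everyone can see over the fence"):
# pick the design whose worst-off group does best.
maximin_pick = max(designs, key=lambda d: min(designs[d]))

print(utilitarian_pick)  # design_a (total 28, but one group scores 1)
print(maximin_pick)      # design_b (the worst-off group still scores 6)
```

The utilitarian rule happily picks the design that fails one group outright, which is exactly the pattern the article criticises; the maximin rule sacrifices some total welfare to protect the worst-off.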

Different groups can also have conflicting needs: for example, a flashing fire alarm light which accommodates deaf people might be sensory overload for autistic people. These accommodations aren’t always complementary.

It can be difficult to design things just for minority groups and still attract enough interest for organisations to sustain themselves. But it’s not just about majority/minority - it’s also about power and intersectionality. So utilitarianism is not necessarily what we are doing: often it is a powerful minority that benefits, not the overall global majority.

By not consulting with people in the minority, everyone loses out. The privileged majority miss out on the solutions that come from increased accessibility too.

One counterexample to a lot of this skepticism might be fintechs in Africa (e.g. M-Pesa), which provide innovative payment solutions. How can people exchange money without literally handing cash to each other if they don’t have a bank account or smartphone? Money gets attached to SIM cards. There was lots of pessimism (some of which was warranted), but there were lots of positives too: people started getting salaries. “Why would you need that?” was asked by lots of people who don’t understand the real problems on the ground.

How does White Saviourism show up in the pursuit of tech solutions for social good?#

One of our members did a dissertation which involved reading Google’s AI ethics blog posts. One thing that came up (even before ideas of responsible AI) was a trend of “AI for Good”, which in some ways has White Saviourism built in. The general approach was “if the country isn’t in the West, let’s go there and solve the problem with AI.”

One of the problems is the desire to shoehorn AI (or more generally, tech) into solutions. There are lots of problems where we need more money, more people to help, and so on, and instead people in the tech sector say “AI will fix it” (e.g. predictive policing, hiring and firing algorithms, or worker chat apps designed to suppress discussions of workers’ rights).

This made some of us think of events like Red Nose Day, where often people (usually in the West) don’t really want to solve problems; they just see them as a PR opportunity. It is the only time the poorest people in the world are represented, and how they are represented is chosen not by themselves but by the charities who want donations.

One academia-specific problem arises when money is ring-fenced for international aid or charity in general. We didn’t fully understand how companies would go about doing the same thing, or why they would want to, unless it was to extract data. Is it basically just a tax write-off for them? If they were genuinely trying to do good, would it not make sense to optimise for that by not focussing on using technology?

What Western ideals influence the way we build technology in the Global North? What should we be doing to learn more effectively from other parts of the world?#

We are all so busy trying to solve problems that we are not asking whether they are the right problems to solve. We want money and we want to look philanthropic. We also want to look cool solving problems (Elon Musk and his submarine come to mind).

Under what circumstances should we do business in Africa, or abroad in general? Providing goods and services? There are models of partnership with governments and community groups, as opposed to the purely extractive neo-colonial models which are all too prevalent. There is a fine line to tread: if the government or group you’re working with is corrupt, how do we know the money we’re putting in is going to the right place? An example a little closer to home is UK PPE contracts, where there was ‘apparent systemic bias’ in the award of lucrative PPE deals favouring firms connected to Tories - so this issue is one that is prevalent everywhere.

What would we have liked to see more of in this piece?#

Who gets to decide which places are too corrupt? There were not enough case studies in this article: we would like to have known more about what kinds of models work and what models don’t. The article felt like it took every common strand of data ethics issues and applied them in the African context. Whilst it was a good introduction if you don’t know about data ethics, it was also quite broad and not very focussed. We were missing the information we needed to form a full opinion on all this.

We would have liked more detail about the nuanced dynamics within Africa: who wants what, and what the points of contention are between people about values and outcomes. Why are people willing to exploit fellow people? Is it perhaps seen as being beneficial for the local economy? Birhane alluded to the idea that more African values could be brought in - but we would like to know more about what those values are and to learn about them.


Attendees#

| Name | Role | Affiliation | Where to find you |
|------|------|-------------|-------------------|
| Natalie Thurlby | Data Scientist | University of Bristol | NatalieThurlby, @StatalieT |
| Nina Di Cara | PhD Student | University of Bristol | ninadicara, @ninadicara |
| Huw Day | PhDoer | University of Bristol | @disco_huw |
| Euan Bennet | Senior Research Associate | University of Bristol | @DrEuanBennet |
| ZoĂ« Turner | Data Scientist | Nottinghamshire Healthcare NHS Foundation Trust | |