Data Ethics Club meeting 13-03-24, 1pm UK time

Meeting info


You’re welcome to join us for our next Data Ethics Club meeting on 13th March 2024 at 1pm UK time. You don’t need to register, just pop in.

This week at Data Ethics Club we’re going to be considering two recent AI image generation outcries, and using them to discuss how values are embedded in the creation of new/alternative realities. To do this we’re focussing on two pieces of content - the first is an AI app that adds clothes to women’s bodies (so-called DignifAI) and the second is a thread from Margaret Mitchell about Google’s Gemini tool, and how it could have done better.

We will start the meeting with a short summary about DignifAI from journalist Catherine Shuttleworth who wrote the article we’re reading about it. Thank you Catherine!


We’ve included a summary of both pieces of content below. What both these systems illustrate is that the values embedded into AI systems have a huge impact on the outputs they create.


DignifAI is a relatively new AI tool (launched less than a couple of months ago - though is that recent enough to count as new at the current pace of AI development?) built on top of the Stable Diffusion model, which was released publicly in August 2022.

The tool claims to be ‘dignifying’ women by using AI to add clothing and coverage to women’s outfits. Screenshots also show the tool removing tattoos, making women’s hair longer, adding cleavage and making women’s waists smaller. Recent posts from the tool owners on X show men also having tattoos, piercings and alternative hairstyles removed, and in some cases being made to look slimmer.

It is clear that the tool is providing the opposite of dignity, and whilst the initial outcry focused on control of women’s bodies, the growing number of posts about men illustrates that misogyny eventually harms all of us by enforcing unrealistic gender norms.


Gemini (previously known as Bard) is an AI tool developed by Google that recently launched an image generation component, which was quickly taken down. Users noticed that it created scenes where historical figures like the American founding fathers were people of colour, or the pope was Black (as seen in Margaret Mitchell’s thread). This resulted in an outcry about the lack of “white representation” in images produced by the tool.

Dr Mitchell points out in her thread that AI systems can be developed to interpret user requests - some users may be looking for historically accurate depictions, while others might want to generate pictures of alternative versions of history, or seek to counter the whitewashing of history that has occurred. One of the benefits of using a system of AI models is that it can be built to tailor responses to user requests - Gemini made the mistake of not asking for user input and instead assuming all users wanted the same thing.

Whilst these aspirational values might be welcomed by some, they can also come across as tokenistic representation, making those excluded by diversity initiatives feel even further removed.

Discussion points

There will be time to talk about whatever we like, relating to the content, but here are some specific questions to think about.

  • Which values do you see as being most prominent in each of these tools - and what harms or benefits can you see from them being expressed through these tools?

  • The statement from Google’s CEO about Gemini said that their aim is to provide “helpful, accurate, and unbiased information in our products” and that this has to also be their approach for emerging AI products. Is this a realistic goal?

  • Margaret Mitchell’s thread shows a table for assessing unintended uses and users of tools - what do you think about this method for better understanding uses and how could it be used to improve AI image generation?