Data Ethics Club meeting 13-03-24, 1pm UK time
Meeting info
Quick links
Content 1: Misogynists are using DignifAI to cover up women in worrying new trend
Content 2: Thread from Margaret Mitchell about Gemini and what went wrong
Description
You're welcome to join us for our next Data Ethics Club meeting on 13th March 2024 at 1pm UK time. You don't need to register, just pop in.
This week at Data Ethics Club we're going to be considering two recent AI image generation outcries, and using them to discuss how values are embedded in the creation of new/alternative realities. To do this we're focussing on two pieces of content - the first is an AI app that adds clothes to women's bodies (so-called DignifAI) and the second is a thread from Margaret Mitchell about Google's Gemini tool, and how it could have done better.
We will start the meeting with a short summary of DignifAI from journalist Catherine Shuttleworth, who wrote the article we're reading about it. Thank you Catherine!
Summary
We've included a summary of both pieces of content here. What both these systems illustrate is that the values embedded in AI systems have a huge impact on the outputs they create.
DignifAI
DignifAI is a relatively new AI tool (less than a couple of months old - but is that young enough to still count as new at the current pace of AI development?) built on top of the Stable Diffusion model, which was released publicly in August 2022.
The tool claims to be "dignifying" women by using AI to add clothing and coverage to women's outfits. Screenshots also show the tool removing tattoos, making women's hair longer, adding cleavage and making women's waists smaller. Recent posts from the tool owners on X show men also having tattoos, piercings and alternative hairstyles removed, and in some cases being made to look slimmer.
It is clear that the tool is providing the opposite of dignity, and whilst the initial outcry was about control of women's bodies, the growing number of posts featuring men illustrates that misogyny eventually harms all of us by enforcing unrealistic gender norms.
Gemini
Gemini (previously known as Bard) is an AI tool developed by Google that recently launched an image generation component - which was quickly taken down. The image generation component was found to create scenes where historical figures like the American founding fathers were people of colour, or the pope was Black (as seen in Margaret Mitchell's thread). This resulted in an outcry about the lack of "white representation" in images produced by the tool.
Dr Mitchell points out in her thread that AI systems can be developed to interpret user requests - some users may be looking for historically accurate depictions, while others might be looking to generate pictures showing alternative versions of history, or seeking to undo the whitewashing of history that has occurred. One of the benefits of using a system of AI models is that it can be built to tailor its outputs to each user's request - Gemini made the mistake of not asking for user input and instead assuming all users wanted the same thing.
Whilst these aspirational values might be welcomed by some, they can also come across as tokenistic representation that makes those excluded by diversity initiatives feel even further removed.
Discussion points
There will be time to talk about whatever we like, relating to the content, but here are some specific questions to think about.
Which values do you see as being most prominent in each of these tools - and what harms or benefits can you see arising from those values being expressed through these tools?
The statement from Google's CEO about Gemini said that their aim is to provide "helpful, accurate, and unbiased information in our products" and that this has to also be their approach for emerging AI products. Is this a realistic goal?
Margaret Mitchell's thread shows a table for assessing unintended uses and users of tools - what do you think about this method for better understanding uses, and how could it be used to improve AI image generation?