The Ethics of AI generated art#
What's this?
This is a summary of Wednesday 16th November's Data Ethics Club discussion, where we spoke about The Ethics of AI generated art, an article written by Jamie Arpin-Ricci. The summary was written by Huw Day, who tried to synthesise everyone's contributions to this document and the discussion. "We" = "someone at Data Ethics Club". Nina Di Cara and Natalie Thurlby helped with the final edit.
Introduction#
We started this meeting off a little differently, by hearing from Dr Keir Williams about how he uses AI generated art in his research and teaching around indigenous and inclusive futures. If you are interested in this work you can follow Keir on Instagram here.
Then we went into a discussion about whether any of us were familiar with these sorts of AI generation tools. There was a combination of experiences, with some people having played with them or tested them out, but few of us having used them seriously for anything before.
Tools like DALL-E work because good images of each concept can be compiled. Combining images like this is an effect used in film to create cognitive links between different things.
First impressions had us asking: what does art mean? Who is it for? We were a bit conflicted trying to argue this. Whose labour is being used here? What's the training data being used? Is it directly derived from someone else's work?
We talked about the creation of physical artwork by both technicians and artists themselves! The artist works as a designer, but what tools do they use? Who made these tools?
Who owns AI generated art?#
In most countries, artworks can't be copyrighted unless they have a human creator. Under UK law, however, an AI-generated artwork is the copyright of the person who made the arrangements necessary for its creation. If an AI-generated artwork is produced with only trivial creative input from the human user, should the artwork be public domain, copyright of the user, or copyright of the programmer of the AI system?
The whole premise of AI-generated artwork is that the person who inputs the prompts is somehow the artist. There are parallels with Amy's job of talking to academics and commissioning people to produce work. We are commissioning AI to generate artwork. Amy would never claim authorship of the work she commissioned, so why should those prompting an AI do so?
Most art is fabricated by someone. A lot of painters had people who helped them. If you just type in text and generate something, that's just a demo. But does Photoshop just do the work for you? Or do you deserve more credit?
There's also a difference between legal ownership and ideas. People have ideas all the time, but if they don't execute them in time they don't get the first patent. It doesn't matter how you feel about it. Just because you put work into something doesn't mean you deserve credit for it, especially if someone beat you to it.
This law doesn't seem to apply well to today's issues, even though different countries' laws are aligned on issues such as photography. In music, if you use other people's lyrics or melodies you can be sued for copyright infringement. Does this happen in art? How does the law apply to people stealing each other's techniques or subjects? How is this different? The photography example is also similar: is choosing settings, timing and angles on a camera more or less trivial than choosing models, parameters and prompts for AI generation? There was a feeling that cameras put portrait artists out of business. Even in fashion there are cases of a specific kind of weave being copyrighted.
There is also a question of how original AI is capable of being: are bigger and bigger systems 'learning' more, or just getting better at remembering what they've seen? Humans are using their knowledge and experience; is this different to what AI does? Arguably, AI cannot capture and build in genuine emotion.
We suspect AI art generating tools will also be used to generate stock photos. Does that change where the copyright will go? That depends on the country: most countries require detailed human input, except for the UK. We had many discussions about different copyright laws in different countries. Discussing the law is instructive in showing what the incentives are in art creation. We discussed copyright issues both for the images going in and the images coming out.
What does it mean for things to be 'trivial'? If you're making a collage, you're choosing rather than creating. This is perhaps comparable to being the person who provides the prompts and then curates what makes a quality output. A parallel example is electronic/algorithmic music, or even the printing press (book recommendation on this topic: The Work of Art in the Age of Mechanical Reproduction).
Where does the value of art come from? Is it from the scarcity? From the authorship? Could the system own the art? What would AI generated art put out of business?
It's interesting to consider whether programmers should get the credit. Some of us thought that's too far: it's a bit like giving credit for a painting to the people who make paintbrushes and canvases. If the programmer is like the creator of a camera, for example, even though they may have innovated new ways of capturing images, what is their creative input?
How do we verify the authenticity and ownership of art? Should there be traceability? Can we embed the traceability into the artwork? It won't stop others from generating, but perhaps it will help correctly reference the originals. It would be cool if there were 'receipts' for how an image is made. However, whether this is technically possible (explainable AI) is a complex topic!
What are the rights of artists to opt out of their work being used? When some engine scrapes an image from your site (with your copyright on it), it then becomes available to others. This is much less complicated if you just make the image for personal use.
Is there a comparison to fan fiction? Is it only an ethics problem if there is commercial potential to the art?
Is this a rights issue for AI? Should an AI be able to file a patent? How high-level is the intelligence of this algorithm? There's already a startup selling nice artwork that is AI generated (which originated from open source research). When Keir started using it in his own work he had to filter out manga, pornographic images etc., which already requires more input than just typing in some prompts.
We often talk about how AI is both much more and much less impressive than people think it is. Fairly regularly we mock ML bros. Why is it a black box and not a pink one? At what point am I using a tool to do something vs. the tool doing it for me (does the AI deserve credit)?
You can see in papers when someone has run something through a black box and it spits out a bunch of answers. It's often apparent the users don't know what they're talking about. There's even a paper in Nature about spotting AI generated publications.
Perhaps there's a weird Venn diagram of the creator of the AI, whoever is using it, credit, creative ownership, accountability, ethics etc. There seems to be no single intersection point in the middle.
Who should decide whether art is high-quality?#
The article says that the success of an artist should be based on the quality of their art rather than on its popularity with consumers. But who gets to make that call? There is an interesting point in the article that commercial success is not necessarily related to quality. There's clearly a difference between someone who just typed in some prompts vs. someone who spends hours editing the output. There's this feeling of 'quantity being consumed into quality': we don't know what's going to happen, but it will change who makes art and what we call art. Everything has value to someone.
Would the art critics have rated this as highly if they were familiar with the style that this AI always generates? We think not. Art can cater to different people: is it impressive because of the process or because of the outcome? Does success in art mean financial gain? That's not necessarily a good measure for what art means.
Is it cheaper to generate AI art? The ones we've seen look pretty expensive considering they're made digitally. But are we undervaluing the art created by AI artists? There is some additional curation involved in creating AI art. Knowing how something was made makes us appreciate it more.
How do we value aesthetics? Is some of it what people believe about their own art? There is definitely capitalism and social bias in art. If (rich) people want a piece then it's considered more valuable.
Something not mentioned in the article, but which seems like a big issue, is the value of the artist themselves. Like with restaurants: you'll go to a famous chef's restaurant, but the chef won't cook your meal. It's their ideas that you are paying for.
Should skill be compensated? What if you train for years to hone the skill to create a certain type of art? With AI tools, you can now just type in a sentence and get out an image with little to no extra input. Does the quality of art somehow scale with the effort that went into producing it, or the skill level required? Our gut instinct may say yes, but is that actually true?
Will there be a 'seen one, seen them all' problem where AI generated art becomes too recognisable? Perhaps there is a comparison to fan fiction; it's usually non-commercial, so there are no ethical problems there. Most fan fiction isn't good; should the law be different if the art isn't good? Can we have a cultural shift around how we value things? Around how things sell?
At the end of the day, art is what you make of it! On a purely philosophical level, it has value if it speaks to you; that's it! The value of art is socially and culturally constructed. For example, Somali women who create structures by weaving may not view it as art, but in non-Somali cultures it may be more commonly seen as art. It's highly contextual. Does the desire define the value? Desire can be derived from various places: it may be due to aesthetic value, exclusivity, or authorship.
Can we use AI art tools to disrupt current and future narratives about the value of artworks?#
In inclusive futures work, when it comes to art, especially in museums, there is art that has been taken from other cultures. With all this art that's being generated, is it going to appropriate other cultures all over again? What's the skew of the datasets? What's the bias of the datasets? If you take samples from Tumblr, how will that be different to samples from Instagram, for example? Also, how are things labelled? Who labels them?
Keir has noted in his own research that AI generated pictures of black women appear to have more skin showing than for the same prompts with white women. Is this due to the training images, or the labelling?
AI generated art has already disrupted enough to make us discuss it here. AI mimics, but people do too. AI should be able to make 'new' ideas by combining things in different ways. People have different influences, but if everyone is using the same tool, then everything draws on the same influences. It could end up being trained on its own images, and become self-reinforcing.
The input images are impossible to curate! There are already discussions about how Midjourney creates images which reinforce orientalism.
Some areas of art will no longer be monetisable; there is potential for those areas of art to stagnate. If AI art floods the online repositories from which AI art draws, will we just create an endless mirror chamber of AI art?
Is there any reason why we couldn't have Infinite Monets? We could generate art in his style using AI, and people could appreciate it aesthetically, but an AI work lacks, for example, the same intentionality. Real physical art can actually change over time; is AI art as unique or thrilling? Inevitably it will not mean as much to some people.
We change alongside art! Would we feel the same way about how our perception of art changes over time if the art is AI generated?
Closing Thoughts#
The automation here comes from previous and ongoing work, not just from, say, embodied physical labour. We are used to using AI for image identification, and now the next step seems to be image generation.
There are issues around not giving permission for art to be used in these models. When we make art and share it, we give implicit permission for it to be viewed by others and, in some senses, for it to provoke feelings and often inspiration. AI art can be more literally derivative, without consciously taking inspiration. The scale and industrialisation of the data scraping is truly novel; a human artist being inspired acts as a 'manual break'.
Who is giving consent for this data to be used? Where is the right place to issue consent for this usage? Is publishing under Creative Commons a means of giving consent for this sort of data usage? Does AI require a different regulatory environment? Artist Greg Rutkowski thinks that living artists' works shouldn't be included in the inputs.
What does 'use' mean for AI training data? Is 'training data' a misleading term here? The previous paradigm doesn't work under the sheer scale and 'industrialisation' of AI; the contract can change because the tools are changing!
Can we have a cultural shift around how we value things? Around how things sell? How would we do that?
What does this mean for AI usage in public services? We want tools to be trained so they can be better, but can we meaningfully get consent from e.g. a child in care?
As usual, we came away from this session with a lot more questions than answers! However, it was an intriguing and very busy discussion which made us all think a lot about what it means for art to have value and what it means to involve AI in it.
Attendees#
Name, Role, Affiliation, Where to find you, Emoji to describe your day
Natalie Zelenka, Data Scientist, University of Bristol, NatalieZelenka, @NatZelenka
Nina Di Cara, Research Associate, University of Bristol, ninadicara, @ninadicara
Huw Day, Maths PhDoer, University of Bristol, @disco_huw
Euan Bennet, Lecturer, University of Glasgow, @DrEuanBennet
Lucy Bowles, Junior Data Scientist @ Brandwatch
Vanessa Hanschke, PhD Interactive AI, University of Bristol
Amy Joint, Commissioning Manager, F1000 [@amyjointsci]
Miranda Mowbray, I give lectures on AI ethics at the University of Bristol
Melanie Stefan, I am a computational neuroscientist at Medical School Berlin
Viv Kuh, I teach Responsible Innovation to engineering and science PhD students at University of Bristol
Kamilla Wells, Citizen Developer, Australian Public Service, Brisbane