Data Week 2023 Special: The Real Danger of ChatGPT#

What’s this?#

This is a summary of Monday 5th June’s Data Ethics Week Special discussion, where we spoke and wrote about the video The Real Danger of ChatGPT, which discusses the risks and benefits of ChatGPT. The summary was written by Jessica Woodgate, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Huw Day, Nina Di Cara and Natalie Thurlby helped with the final edit.

Q1 Have you played around with ChatGPT or other Large Language Models (LLMs)?#

There were four main camps for how we had interacted with ChatGPT: never used it, used it for editing, used it for programming, and used it for idea generation. Those of us who had never used it had heard of it and were interested to find out more, as we were aware that a lot of people were, or would be, using it.

One way that we had used ChatGPT, as talked about in the video, was for editing. We found that it was quite useful for handling letter and email templates, editing emails, and checking grammar. Some of us had used ChatGPT in different languages, finding that it worked better in Mandarin than English for editing. As an experiment, we have also used it to edit a Data Ethics Club write-up. Being mindful of how this may affect authorship and accountability, it felt important to be upfront and explicit about how we were using it. We found that it was very efficient for editing text, which could save a lot of time. However, we also missed out on properly reading the text we were editing, which felt like cutting corners. This could mean that editors have a shallower understanding of the content they are working with and may not catch things which ChatGPT gets wrong or leaves out.

Some of us had also used ChatGPT for programming, finding that it was useful for learning basic programming syntax and finding information on other languages. We found it was often quicker than googling basic examples; prompting it to “make me some code to plot this graph in Python” is a great time-saver, and the result is immediately verifiable. However, we found that it doesn’t always provide the best approach compared to more complex online examples. If you know the direction you should be going in, you can easily check and guide ChatGPT. If you are unclear about what the result should be, then things might go wrong. In these cases, we need something which is verifiable.
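To illustrate why this kind of output is “immediately verifiable”, here is a minimal sketch of the sort of plotting code such a prompt might return (the data, labels and title below are made up for illustration, not taken from the discussion); running it and looking at the figure is usually enough to check it does what was asked.

```python
import matplotlib.pyplot as plt

# Hypothetical example data standing in for "this graph"
years = [2019, 2020, 2021, 2022, 2023]
values = [12, 18, 25, 31, 40]

# Simple line plot with labelled axes -- the kind of boilerplate
# that is quick to generate and easy to check by eye
plt.plot(years, values, marker="o")
plt.xlabel("Year")
plt.ylabel("Value")
plt.title("Example plot")
plt.tight_layout()
plt.show()
```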

For idea generation, we found that ChatGPT could be a helpful tool to overcome writer’s block when constructing proposals. It might be interesting to look at the success rates of our proposals before and after introducing ChatGPT to our process. Some of us had also looked at how it could be used in PhDs; however, we found that it wasn’t very useful for generating truly novel ideas, so outlines and suggestions were less helpful for this application.

Appropriate applications of ChatGPT might be domain specific. For example, companies using it to assist customers asking for a refund might be very different to using it to assist people looking for health advice. Becoming reliant on automated systems feels uncomfortable, as they lack empathy. With humans, we can mostly trust that they care about our positive outcomes. Maybe it is made somewhat better if it is clear to users that they are interacting with a chatbot. This raises questions about how familiar one needs to be with how a technology works in order to use it.

Q2 Can you think of any situations or examples where ChatGPT/other LLMs could be a reliable tool with no/limited drawbacks?#

For neurodiverse people, ChatGPT/other LLMs could be an enhancing tool, for example as a writing assistant. Using LLMs to generate cover letters and mock interview questions may widen access for people who would otherwise have been excluded from certain spaces. ChatGPT could also be a good assistant for people who are overloaded in their jobs, for example a busy professor with a lot of students.

We also thought that it could be beneficial for large studies, such as those with hundreds of thousands of biological samples. However, we thought that AI is currently quite far from being implemented in clinical settings on a large scale. Some NHS trusts are experimenting with new technologies; however, uptake seems limited by legislation and a lack of personnel. In addition, whilst most clinical trials are randomised and blinded, there are privacy concerns, as it is still possible to link participants across different aspects of studies. Perhaps LLMs are permissible if there is proper human oversight of their application, and humans can still be held accountable.

Q3 Can you think of any situations or examples where use of ChatGPT/other LLMs could cause harm, if the user is not careful with how they apply it?#

A major issue with the deployment of these tools, many of us agreed, is the lack of transparency. The public is not properly informed about how their data will be collected or used by LLMs or social media. There is opaqueness surrounding the data sources used to train these models. Are we donating our work for free by “letting” ChatGPT use our inputs as training data? We should be careful about the information which we give ChatGPT. Inputting sensitive details, for example about patient healthcare, would be concerning, as we cannot be certain about what will happen to that information. It is also relatively easy to manipulate these tools and override their safeguards, e.g. to get recipes for methamphetamine. There is a wild west atmosphere, where increasingly powerful tools are released to the public without widespread understanding or education about their implications.

The free-for-all environment cannot go on forever, and there is a need for proper regulation to be brought into this space. Part of this may include opening the tool up for widespread testing. As a comparison with how other powerful technologies are regulated, cars and drugs have high bars to clear before being released. This might mean that sometimes we miss out on medications that could be beneficial; however, the potential harm we avoid often makes this worthwhile. As well as external regulation, companies using ChatGPT should implement ethical self-policing, and ensure that employees are not able to misuse it. Employees should be properly trained on how to use these tools; here’s one such approach.

As well as malicious use of ChatGPT, we were concerned about the side-effects of such technologies on human creativity. This raises some interesting questions about originality, the real meaning of creativity, and what would happen if, as a species, we stopped being creative. The advancement of many technologies has, arguably, made us increasingly lazy, and some of us wondered if we are making the most of the spare time that these tools have opened up in our lives. What will happen to the future of human intellect if we increasingly rely on technology for thought and creativity? In the sphere of work, if ChatGPT takes all “entry-level” jobs, how will we bridge the gap for early career workers?

Attendees#

  • Huw Day, Maths PhDoer + incoming JGI Data Scientist, University of Bristol, @disco_huw

  • Euan Bennet, Lecturer, University of Glasgow, @DrEuanBennet

  • Ismael Kherroubi Garcia, Founder & CEO, Kairoi, LinkedIn

  • Katharine Evans, Governance & Policy Manager, UK LLC, University of Bristol

  • Dan Whettam, AI / Computer Vision PhD student

  • Betsy Muriithi, Research Fellow, Strathmore University

  • Kaltun Duale, Leading Technician, CMM, University of Bristol

  • Noshin Mohamed, Quality Assurance in Children’s Social Care

  • Glen Roarke, MSc Bioinformatics, University of Bristol

  • Chiara Singh, Development Associate at JGI, University of Bristol [@singh_chiara](https://www.linkedin.com/in/chiara-singh2020/)