Data Ethics Club: How AI could save or destroy education

What’s this?

This is a summary of Tuesday 4th June’s Data Ethics Club discussion, where we spoke and wrote about the video How AI could save (not destroy) education, a TED Talk by Sal Khan, founder and CEO of Khan Academy. The discussion summary was written by Jessica Woodgate, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Nina Di Cara, Huw Day, Natalie Thurlby, Amy Joint, and Vanessa Hanschke helped with the final edit.

How could AI save education?

For this special edition of Data Ethics Club, taking place during Bristol Data Week 2024, we had the chance to reach a wider audience than just DEC regulars. Instead of having people read a piece beforehand, we played the first ~9 minutes of “How AI Could Save (Not Destroy) Education”, a TED Talk by Sal Khan, founder and CEO of Khan Academy.

In the talk, Khan points out how ChatGPT and other forms of AI are being used by students in ways that undermine education as we know it. Is there any way we can mitigate this? Khan argues yes, and further claims that we’re on the cusp of having AI personal tutors.

A chatbot called Khanmigo is being introduced to Khan Academy, his education platform, which states on its website: “We’re a nonprofit with the mission to provide a free, world-class education for anyone, anywhere.”

In the talk, Khan demos Khanmigo’s capabilities:

  • Khan first notes that the chat conversation is recorded and viewable by teachers for safeguarding, as well as being moderated by a second AI.

  • It also refuses to give students the answer directly, but can notice when a student makes a mistake, pick out the mistake they made, and ask a question about it.

  • Khanmigo can help with maths and coding problems.

  • Khanmigo can answer the question that haunts teachers, “when would I need to learn this?”, in a Socratic way, asking students what they care about and then reframing things.

  • Khanmigo can also act as a guidance counsellor and career coach, with an example given of Khanmigo answering student questions about what they need to get into college.

  • Khanmigo can pretend to be Jay Gatsby from the F. Scott Fitzgerald novel “The Great Gatsby” and answer students’ questions, e.g. “Why do you keep staring at the green light?”

  • Khanmigo can have debates with students about topics such as cancelling student debt, allowing students to fine-tune their arguments in a judgement-free environment.

  • Khanmigo can help students write a story, asking questions about settings or characters. Students can write two sentences and then the AI can write the next two.

Khan demos all of these features to enthusiasm from the crowd, but we wondered if there are problems with the way AI is entering education. We paused the video at the 8:40 mark and began our discussions.

Discussion Summary

How do you like to learn? Does AI play a role in your learning? Could it?

There are a few ways in which we have incorporated AI tools into our learning, including assisting with coding, writing tasks, and knowledge acquisition. For coding, we have used GitHub Copilot to help with debugging, code completion tools to fill in code, and ChatGPT to explain code snippets or concepts (e.g. “what is object-oriented programming”). For writing tasks, we’ve applied AI tools to overcome writer’s block, assist with copy editing, and summarise blocks of information into something more coherent. This has been useful for tasks such as job applications. For knowledge acquisition, some of us now use tools like ChatGPT instead of Google, albeit with some scepticism as to the quality of outputs.

When using AI for learning, we have found that the way we frame our prompts (e.g. asking specific questions) can improve the quality of outputs. Some of us have completed prompt engineering courses to further enhance our use of AI. Generally, we’ve found AI tools to be useful for “base level” learning, restricting questions to be as objective as possible. For example, in the case of learning languages we might ask targeted questions about basic grammar such as “what is the past tense of avoir”.
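
To make the idea of prompt framing concrete, below is a minimal sketch in Python using the OpenAI SDK (our choice for illustration; the discussion named no particular API). The model name and both prompts are hypothetical placeholders, contrasting a vague question with the kind of targeted, “base level” question described above.

```python
# Minimal sketch of prompt framing, assuming the OpenAI Python SDK.
# The model name and both prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

vague = "Tell me about French verbs."  # broad prompt, tends to get a broad answer
specific = "What is the past tense of 'avoir' in French?"  # targeted, objective question

for prompt in (vague, specific):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Q: {prompt}\nA: {response.choices[0].message.content}\n")
```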

Since incorporating AI, we have found our learning to be faster and more efficient. First, we can quickly find foundational information which might take longer to decipher without the assistance of AI. Second, AI provides a space where we can ask the basic questions that build fundamental understanding but feel like a waste of time when asked of other people. Third, AI is helpful for rephrasing segments of text which we don’t understand. AI can thus be used as an initial investigation of a topic, giving us the language to know what we are searching for when we dive deeper into documentation. We’ve found these approaches to be especially effective in the domains of mathematics, engineering, and programming.

However, outputs can be unreliable, meaning we should be cautious about our use of AI and analyse responses forensically. Some of us only use generative AI as a last resort, as we have not found it to be effective on certain tasks we have applied it to. For example, some of us have found that generative AI does not find coding errors. Some of us find Copilot annoying to use, identifying that a certain level of skill is required to use it effectively. Some of us do not feel confident using ChatGPT and have encountered issues when the results aren’t compatible with the software we are using. Outputs don’t always give the most obvious answer, and sometimes miss important information. If we don’t understand the output of an AI tool, we feel uncomfortable using it; it doesn’t give us confidence that we know the subject. For some topics, we find that ChatGPT is barely helpful at all unless you are an expert.

Whilst there are substantial issues with the accuracy of generative AI, we should accept that it is here to stay. We thus need to learn how to use it properly and actively engage with the content we receive. This involves thinking critically and testing outputs to calibrate how far they can be trusted. Having a core knowledge of software also makes it more intuitive to pick up different tools, including LLMs.

Which students do you think would benefit most from these tools? Which students do you think would have the most access to them?

Level of proficiency might play a role in how beneficial tools are for students. Some students are already very good at using these tools, suggesting they could benefit from introducing them into their learning process. However, AI tools might not be appropriate for total beginners, as beginners are limited in their knowledge of what prompts they could use, and do not know enough to spot mistakes. AI can speed up the process of learning, but ultimately it just answers questions and does not actually teach you. You need to have some awareness of the objectives of your learning; you can only learn what you know you don’t know. On the other hand, these tools can be very useful for showing students where they have gone wrong (e.g. with writing code).

Utilising AI tools could help tailor education to students with specific learning styles or needs. Tools could make education more engaging for students who learn by doing, as in the example (in the video) of Khanmigo repackaging sources previously written about The Great Gatsby into a “conversation”. Quieter, more anxious students who struggle to ask questions in front of others, worrying about saying something stupid, might benefit from a more private discourse with a chatbot. These kinds of anxieties can arise in students at any level, from complete beginners to PhD students, suggesting that there could be applications throughout the schooling system.

Those of us who work in Special Educational Needs (SEN) have found that students spend a lot of time considering these tools. Potential benefits arise from the ability to tailor AI to the specific needs of students. There is a tendency to move students who are overstimulated by large groups of people into smaller rooms with special provisions; tailoring AI to students could similarly enhance the learning experience of those with more specific needs which are difficult to meet in large classes. For students with dyslexia, AI could help to construct emails, check spelling and grammar, and rephrase things. If schools have limited resources, funding digital twins for students could widen access to learning spaces.

However, it is also important to consider how the interfaces of AI tools can restrict accessibility. Whilst neurodivergent students might derive some benefits from personalisation, they might also find it difficult to maintain engagement with tools like Khanmigo. If access to these tools is given to everyone, regardless of their abilities and needs, there might be adverse effects such as making some students lazier. The difficulties neurodivergent students face in engaging with AI tools risk broadening the gap between the most and least able students, distorting performance across the cohort.

Introducing AI tools into the educational system may therefore exacerbate various existing gaps; we suspect that the students who would benefit most and those who would have most access may not be the same group. This highlights sociological problems with AI. To take full advantage of these tools, students would need access to consistent power, reliable working computers, and good internet. Premium versions of applications work better and free versions have more pitfalls, further dividing the “haves” from the “have nots”. Careful thinking is required as to how paywalls impact education and who bears the cost, drawing parallels to the longstanding debate over paid access to journals. Rather than education allowing people to break through a poverty barrier, AI could instead strengthen that barrier.

These gaps should be considered on a global scale, including the impact that incorporating AI into education will have on developing countries. In 2022, the Kenyan government introduced an initiative to teach children to code; we wondered whether the initiative is using AI to boost teaching capabilities. On one hand, this could improve access to knowledge repositories, aided by the use of mobile phones. On the other hand, accessibility issues may mean that developing countries find it much harder to implement such tools in education, while countries where students can reliably access the tools leap ahead in their educational capacities.

What are the pros and cons of 1-1 learning (with AI) being adopted more widely in education: is there a trade-off between the benefits of personalised help vs. less emphasis on group learning?

Whilst there is potential for different types of students to benefit from personalisation in 1-1 AI learning, we do think there is a trade-off with the benefits of group learning, such as opportunities for socialising. Learning with peers is fun, and there is a lot of fulfilment that comes from discovering things with other people. Working in groups also teaches students social skills that help them to work with others, cultivating the ability to “read the room” and relate to one another. The move towards digitalisation of services fosters isolation among young people and is correlated with loneliness.

In addition to mitigating loneliness, learning from human teachers presents opportunities to teach children respect. We wondered how Khanmigo handles situations where students are rude, or provide intentionally wrong answers to test the boundaries of the system. There is a fine line that must be walked here between making systems closer to real teachers by being more human-like and interactive, and anthropomorphising AI. Saying “please” and “thank you” to AI may seem futile, but if we regularly interact with AI without manners, we might become desensitised to social etiquette.

As well as trade-offs with socialisation, there are trade-offs in the effectiveness of using AI in the learning process. Some of us feel that it is acceptable to use AI at work, but in an educational context it seems like cheating. A certain amount of trust is needed that students will use AI tools appropriately. AI can provide basic answers, but that isn’t the goal – the goal is to learn, and it is important that students recognise this. It is important to differentiate between when people are using it as a learning or guidance tool, and when people are using it as a replacement for effort. A lot of scientific progress is down to serendipity: discovering things you did not set out to discover. It is important that AI does not erase this; the Royal Society “Science in the age of AI” report explores how science is affected by AI.

Further trade-offs emerge from the technical limitations of AI, regarding how we can ensure the quality and correctness of AI 1-1 learning. We wondered what validation process outputs go through before they are put in front of students, and how the tool is moderated. If AI is being used to moderate AI, we wondered what the chance of failure is, where failure means misidentifying moments which need human intervention. If human teachers are expected to moderate AI tutors, this could change their role by introducing transcripts which they have to assess after class.

Issues of security arise regarding the use of data and the handling of vulnerable users (e.g. children). There is evidence of security vulnerabilities in LLMs, and inputting sensitive data could be risky.

Potential for bias exists in the use of language and the way language is handled in training data. Whilst languages like English and French are extensively represented in LLMs, endangered languages risk being pushed further towards extinction: students will be encouraged to learn only the languages supported by AI tools, turning the focus away from learning local languages. On the other hand, apps like Duolingo apply AI to accelerate language learning. This presents an opportunity to help people relearn endangered, or even extinct, languages by training AI on those languages.

It is also important to consider the ecological impact of using AI in education, including the amount of power and resources that these tools use.

Bonus Question: What change would you like to see on the basis of this piece? Who has the power to make that change?

As things currently stand, we do not have enough trust in AI tools to replace teachers or become central educational mechanisms. Instead, we think they could be used as a supplement to systems which are already working well. They are an addition, not an alternative.

At the same time, we should appreciate that the traditional model of education is flawed. Whilst AI isn’t necessarily the answer, we shouldn’t stick with what we already have just because it’s what we’re used to doing. Considerations like accounting for neurodivergence are arguments for why we need to adjust the current structure. For insight into how we could make adjustments, we can look to systems like those in various Nordic countries, which prioritise equality between students and run 1-1 device programmes.

With respect to competition, we wondered if Khan Academy will have a monopoly over the market, or if there are other players out there.

Attendees

  • Arwa Bokhari, University of Bristol

  • Nina Di Cara, Snr Research Associate, University of Bristol, ninadicara, @ninadicara

  • Huw Day, Data Scientist, Jean Golding Institute, @disco_huw

  • Amy Joint, Programme Manager, ISRCTN Clinical Trial Registry, @AmyJointSci

  • Noshin Mohamed - Service Manager for Quality Assurance in Children’s Services

  • Luke Shaw, Data Scientist in the NHS (based in Bristol) LinkedIn

  • Euan Bennet, Lecturer, University of Glasgow, @DrEuanBennet

  • Kamilla Wells, Citizen Developer, Australian Public Service, Brisbane

  • Dan Evans - Data Scientist: Dyson New Product Innovation linkedin

  • Dan Lawson - interim Director of JGI, Assoc Prof of Data Science website, linkedin

  • Paul Matthews, Lecturer @ UWE Bristol LinkedIn

  • Betsy Muriithi, Research Fellow, @iLabAfrica Strathmore University, @bmuriithi, ☺️

  • Laura Hille, Researcher, Leuphana University Lueneburg and University of Bristol, @simongoeshill

  • Manajit Chakraborty, Lecturer, The Dyson Institute of Engineering and Technology, https://www.dysoninstitute.ac.uk/about-us/who-we-are/meet-the-team/, 🥱☕

  • Michelle Wan, PhD student, University of Cambridge

  • Sydney Charitos, Digital health and care PhD Student, University of Bristol

  • Dhiren Modi, Engineering Capability Lead, National Composites Centre (Bristol), PhD Mechanical Engineering, Actively pursuing Data Science/AI in applications, governance, societal impact etc.

  • Gemma Marsden, Research Data Manager, Cranfield University

  • Chakaya Nyamvula, Business Intelligence Analyst, @iLabAfrica Strathmore University. Data Science intern @ JGI, Bristol University, currently pursuing MSc AI and Data Science, Keele University

  • Kamran Soomro, Associate Professor of AI @ UWE Bristol.

  • Aleksandra Pastuszak, Data Scientist @ Dyson LinkedIn

  • Catherine Deas, Associate Software Engineer, looking for a new role