Data Ethics Club: UK announces AI funding for teachers: how this technology could change the profession#
What’s this?#
This is a summary of Wednesday 30th April’s Data Ethics Club discussion, where we spoke and wrote about the article UK announces AI funding for teachers: how this technology could change the profession by Nicola Warren-Lee and Lyndsay Grant. The summary was written by Jessica Woodgate, who tried to synthesise everyone’s contributions to this document and the discussion. “We” = “someone at Data Ethics Club”. Huw Day helped with the final edit.
Article Summary#
Since large language models (LLMs) entered into the mainstream, an important discussion has ensued regarding their effect on education. Much of this discussion has centred around the implications for students, raising questions about whether students will continue to learn when they can easily use LLMs to write their essays or cheat in exams. Whilst the student perspective is important, there has been much less attention on what LLMs mean for teachers, and how teachers are adopting and using AI in their work.
In the UK, teaching is in a recruitment crisis: pupil numbers are increasing, yet not enough people are signing up to be teachers and many are leaving the profession. One of the biggest reasons teachers leave is an unsustainable workload, leading to burnout and stress. Teachers work within a context of increasing standardisation of teaching and high levels of performative accountability, as well as extra requirements on top of tasks directly tied to teaching, such as administrative and bureaucratic work.
This article was written in response to the government announcing a further £2 million investment in the Oak National Academy, a publicly funded classroom resource hub, to develop AI tools that help reduce teachers’ workloads. Government investment in AI for education began under the previous government and has been continued by the current one. One of the big claims around developing AI for teachers is that it will save them time.
The promise that AI makes here is not new: there is roughly a century of history of people trying to use technology to save time in teaching. It is clear that time has rarely been saved in the past; when time has been freed up in one area, new tasks are soon generated to fill the gap.
To consider the real impact of AI in education, it is crucial to ask what the implications are for teachers: what it means for them as intellectuals, not just deliverers and carers but people with knowledge backgrounds. It does not seem likely that teachers will be completely replaced by AI, but their work could change quite drastically. Teachers may take more of an “executive technician” role, rather than engaging with deeper questions about the moral concerns of education.
Teaching is widely recognised as an intellectual endeavour, but because generative AI just builds sequences of plausible content, the lesson plans it produces have no educational or disciplinary expertise of their own. The repercussions of this are that high-quality graduates may be put off, the needs of specific groups of students and their contexts may be ignored, and teaching standards may slip due to issues like hallucination, where generative AI makes up facts and sources.
If AI becomes routine, teachers may not develop the skills needed to critically evaluate and adapt AI-generated lessons and activities for students, and nobody will be held accountable for the quality, safety, and relevance of the materials being taught. Where the government has money to invest in education, the article argues, this should go directly to the most important resource there is in any classroom: the teacher.
Discussion Summary#
What are your experiences with using AI to teach (either yourself or others)?#
From our experience with teaching, there are some areas where we could see AI being helpful and other areas that should be left to humans. Some of us would be in favour of automating marking, using AI to review the work and then provide the teacher with a cliff-notes-style summary of what the students understand. Creative lesson planning, however, some of us thought should be left to teachers; doing the research is fun and is one of the main things we enjoy about the job. Others were unlikely to use AI to write a whole lesson plan but could imagine using it to write bits and pieces, refining or adding in suggestions. We could envision plugging in a lesson plan and then probing for areas that could be made better, such as improving the experience of students with ADHD. Some of us have used AI for curriculum development in situations where we’d already thought of 90% of the content and were using the tool as more of a memory jogger to get ideas.
AI holds the potential to generate more personalised material, or better quality material for tasks that a human teacher would have to rush through. We have encountered teachers who have compared reports and found that the AI-generated report was actually better than a quickly filled-out template.
Using tools to assist in lesson plan creation is not a new thing; AI follows on from a long history of collecting resources and applying technology to assist teaching. There are a lot of human-constructed resources already available, as teachers have always been resource magpies, using other people’s material to inform their teaching. Back in the day, this took the form of huge filing cabinets. More recently, we’ve used Google to find workshop plans, helping to decide the structure but not delegating the details.
Students themselves may be concerned with applying LLMs to help solve specific problems. We’ve had a student use LLMs to learn Python for a project, arguing that, having decided to mainly use R in their PhD, they did not find it as important to have in-depth knowledge of Python. If a student is able to solve a problem, it might be the solution that is important, not necessarily knowing all of the details of a language.
A common pushback from students is that “in the real world, it doesn’t matter”. Whilst what we consider to be the “real world” is an interesting aside, it’s important to appreciate that understanding the wider applicability of their education is key for students. If people can use a tool in the real world, e.g. if people can use an LLM to help write summaries in their jobs, it makes sense that students would question why they can’t use that tool when they are learning as well.
There is an interesting dynamic that arises if teachers use AI themselves, but do not allow their students to use it. Students may find their teachers hypocritical if they see teachers using LLMs to mark their work or generate lesson plans but are penalised for using it themselves. We’ve seen scenarios where both teachers and students are using AI, with an unspoken “I won’t say anything if you don’t” policy.
We can sympathise with defeatist students thinking that their work will be taken by AI; however, we should be thinking critically about whether AI could really do a human’s job. In many positions, the work is conceptual: outlining what we know, and what we can do with that knowledge. The learning environment is different to real-world application in important ways: in learning environments, we are developing foundational skills we can then take into application. If we don’t pick up the skills first, we will not later be able to identify what the problems are, how we can solve them, and what tools might be helpful to address them.
Substituting AI for the hard work that goes into skill development skirts around the issue of what is important in learning. The materials produced by AI give the appearance of great knowledge. However, AI misses the core question of what the purpose of education is. There is a difference between education, which involves learning transferrable knowledge, and training for job specific tasks. We want to be asking students if they can do a task, and if they can teach others how to do that task.
Overreliance on AI has implications for how well teachers learn to teach, as well as how well students learn. AI may make things too easy, leaving teachers hamstrung and over-reliant on yet another system. LLMs are an invitation to not think about what you’re doing; in the teaching and learning space, this is exactly the time when you shouldn’t be using tools like this and when you should be thinking critically. Teachers need to know how knowledge is developed to be able to manipulate the classroom to encourage the kind of discussion, thinking, and writing that engenders knowledge generation.
A key part of being a teacher is their role as an expert. AI may confidently present itself as having in-depth knowledge, but it is unreliable compared to a human teacher. When we are teaching outside of our expertise, we would always prefer to go to a colleague rather than AI, so that we can verify the source of the information. We saw a meme recently reading “your doctors and nurses are using AI to pass exams, might want to start eating healthy”; those of us who are nurses found it both scary and funny.
We don’t see AI taking the role of an expert, but we like the idea of using it to generate ideas or farm out simpler tasks. Some of us have used LLMs to teach ourselves technical skills, referring to them when we know what to look for but don’t know exactly what words to use. When we first started using LLMs, we referred to them a lot and perhaps didn’t learn much, just applying the outputs directly. Now, we try to limit our usage to things that are slow to do ourselves but quick to verify, such as code.
We try to limit our use of LLMs because of their negative environmental impact, but also because we have doubts about how good the technology actually is. We can see that LLMs can help with idea generation and could imagine them being useful as information retrieval devices in the future, but are yet to be convinced that they are sufficiently capable of this now. LLMs fail at in-depth analysis and run the risk of hallucination, generating confident but inaccurate outputs. Because of this, some of us are not convinced that the technology is in a place where it is actually useful and are emphatic non-users of it. The technology simply is not good enough to justify the resources going into it.
We don’t trust LLMs to do things more complex than something that we can easily check. We’ve found the responses generated to be generally shallow and superficial, lacking in actual content, which we have to put in ourselves if we do use the outputs. As an example of where LLMs fail, we applied one to generate a summary of an article and were disappointed with the results. The article was a critical discussion, but the LLM generated an extremely watered-down version, portraying the piece as “this-or-that”. This missed the core of the argument, which was actually very polemical.
On top of the limits of LLMs in forming value judgements, the technology raises issues because it has the propensity to propagate biases. LLMs learn from real-world data, which contains the power relations that we find in society. This means that the data will reflect the societal biases that exist, and those biases are at risk of being reproduced by LLMs. Biases already exist in the educational system; for example, some places are much better resourced than others. Some geographical contexts may be less represented than others, or certain groups of students whom the educational system already fails may be missed. If left unmitigated, the imbalance in available resources means that LLMs will have more or better quality data to train on for some groups than others, and will perform better for those groups, further exacerbating the gap.
AI is promised to help alleviate teachers’ unsustainable workload and thereby address a crisis in teacher recruitment and retention. What other responses to this problem might there be and why are they not being pursued?#
AI might save time spent on curriculum development, but we doubt it will cut down actual teaching time. There is a lot of work involved in teaching that we can’t envision AI being useful for, such as playground monitoring. Bad behaviour, which teachers are expected to control, is a major cause of stress and burnout. AI will not be able to help teachers manage behaviour. Boots on the ground, and getting people involved, is the only likely way we can see the workload issues being solved.
An obvious solution to attract more people to teaching, and to improve the well-being of teachers, is to pay them more. The teachers amongst us don’t do it for the money, but we find that we’re expected to spread ourselves over so many different areas. If we could get AI to cover playground duty for us, that would be helpful. We wondered if there is a difference in AI usage between schools and universities; there are certainly similarities regarding a lack of resources.
A question that the authors found people a bit reluctant to get stuck into is why people are turning to AI to solve these problems, rather than looking at other approaches. Other responses are possible, so why aren’t we paying attention to them? To answer this, we need to examine whose interests are served by pushing AI into schools.
The hype narrative, which we explored in a previous Data Ethics Club, seeds a lot of unnecessary usage of AI in order to propagate particular agendas. Leaders want to “get on the AI train”, seeing that everyone else is using AI and figuring they should be using it as well. There are a lot of AI businesses that aren’t making money, so if the government are using education budgets to drive the adoption of AI in education, they need to justify the motivation behind this. To support real value being generated in our society, we need to make sure that AI doesn’t go where it doesn’t belong.
The move towards AI is part of a broader move to incentivise schooling as a box-ticking exercise focused on grade achievement. Intangible benefits are slowly being scrubbed away until we can no longer see what it is that actually works. There is a lot that goes into a good team which you cannot measure. For Socrates, the purpose of education was to learn to live. Now, we think of it as offsetting investment by employers in graduates.
In our STEM degrees, we encountered the assumption that everything you learn is useful; this isn’t always true, and sometimes the topics are simply furthering human knowledge for its own sake. Having done joint honours, we have found many situations where our humanities degree, whose value is less quantifiable, was applicable to practical situations in life. We can see that funders would want to know the real-world impact of research, and have seen a reduction in certain disciplines regarded as “less useful”, such as number theory.
Who is accountable for the quality of education and resources generated through the use of AI? Who is accountable for the safe and ethical use of AI in schools and are they equipped to take on this role?#
As the actuator, the teacher will remain accountable. Those of us who will soon be teaching courses are having to think about how students will be using LLMs and how we will handle their use. Schools may not yet have policies to support teachers in new situations resulting from AI, such as where AI teaches incorrect information. Where tools using AI are brought into education, such as Khan Academy or Duolingo, the institution has a responsibility to vet them. Education standards in general are already well established, meaning existing procedures to vet lesson quality should be sufficient to check the quality of work.
Aside from quality concerns, there are also questions of moral responsibility that arise from the use of AI in education. We’ve seen instances of teachers using AI to write letters to parents about safety and pastoral issues. In particular, we’ve seen LLMs used to help write letters involving emotive issues like bullying. There are moral concerns with this insofar as teachers are disconnecting themselves from the child. A parent seeing a teacher use an LLM to write letters may think that the teacher is not putting enough effort into their child. There may also be data privacy concerns; at the worst we have seen bank statements being used.
An interesting effect of AI, and technology more generally, is how it makes processes invisible. Once you build a machine to do something, you stop thinking about everything that goes into the machine, which ends up masking the process. There are levels of invisibility: in computers, for example, various programming languages build on top of each other, which continues into software tools, and now even those software tools are being masked by AI.
What change would you like to see on the basis of this piece? Who has the power to make that change?#
Since writing the article, the authors have begun further research into how teachers are actually using AI. In preliminary studies, one teacher said, “it’s nothing life-changing”, finding that AI can automate a few particular tasks such as resource generation. Teachers aren’t saying, however, that their workload has been halved, or that their jobs are becoming redundant. Initial fears surrounded teachers completely outsourcing their work, but there didn’t seem to be any teachers doing this. Instead, it seems that teachers are more likely to have a back-and-forth process with the tools.
There was some variety between teachers who used AI tools regularly and could see lots of ways they could benefit from them, and other teachers who wouldn’t or hadn’t used AI. Those teachers perhaps had their own resources ready to go and tailored to the specifications they were teaching. For teachers not using the tools, there is a question of whether they are moving with the times, and with their students, enough.
When thinking about AI in education, it is important to adopt a narrative that filters out the tendency to overhype these tools. Teachers will not be able to wave a magic AI wand and become perfect at their jobs, nor will AI completely replace teachers and cause teaching to cease to exist as a profession. We should not just be thinking about the technology itself, but also the social and cultural conditions within which it emerges and is used.
Attendees#
Huw Day, Data Scientist, University of Bristol: LinkedIn, BlueSky
Jessica Woodgate, PhD Student, University of Bristol
Joe Carver, Data Scientist, Brandwatch
Tosan Okome, Nurse and Researcher, Nigeria
Kamilla Wells, Citizen Developer, Australian Public Service, Brisbane – currently in Cambridge!
Paul Matthews, Lecturer, UWE Bristol
Pippin Sadler, Data Analyst, Fowlers Motorcycles
Chris Jones, goodness knows what I’m up to, Amsterdam
Vanessa Hanschke, Associate Lecturer, UCL