
Written by Gemma Clare

Gemma is an experienced writer, specialising in education and child development. With a background as a former Inclusion Leader, Teacher, and SENCo, she is dedicated to sharing ideas that make a positive impact on the lives of children.

Maya sits down to do their homework. Instead of reaching for their textbooks, they turn to Artificial Intelligence (AI) to complete their research. Aiko sits down to assess his students. Instead of relying purely on his teacher judgement and observation, he uses an AI-powered assessment tool.

AI provides an exciting opportunity for students and teachers to work in different ways, and it’s becoming more established as a helpful tool in education. You can read numerous articles celebrating the benefits of AI for schools and colleges – as a tool for planning, assessment, personalised learning and more.

However, AI is presenting us with a new set of ethical issues. With concerns over bias, misinformation and unreliable sources, it’s become more important than ever for the education system to effectively teach pupils how to be critical of information. 

The Bias Problem

Machine learning algorithms are only as unbiased as the data they are trained on, which means that they can perpetuate and even amplify biases in society. There are numerous examples where AI bias has caused both small- and large-scale discriminatory practices.

Take, for example, COMPAS, a tool used to predict whether defendants will re-offend, which wrongly flagged black defendants as likely re-offenders at nearly twice the rate of white defendants.

There’s also Tay, Microsoft’s Twitter chatbot, which quickly became a sexist, racist and xenophobic Holocaust denier after engaging with the public.

There are countless other examples across healthcare, education, recruitment and more, where machine-based bias has had a damaging effect on those with protected characteristics.

Think about how unchecked bias in AI technologies could impact the pupils in your school, and the wider influence this might have on society. For example, what would happen if an algorithm designed to predict student performance wrongly predicted that the black pupils in your class wouldn’t achieve as well as their white peers? 
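To make that scenario concrete, here is a minimal Python sketch using entirely synthetic, hypothetical data (not any real assessment tool), assuming NumPy and scikit-learn are available. A predictor trained on historical labels that were biased against one group of pupils goes on to reproduce that bias, even though group membership has no bearing on the true outcome.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# A genuinely predictive feature: each pupil's prior test score.
score = rng.normal(50, 10, n)
# An irrelevant attribute: group membership (0 or 1).
group = rng.integers(0, 2, n)

# The "true" outcome depends only on the score...
passed = score + rng.normal(0, 5, n) > 50
# ...but the historical labels were biased: pupils in group 1 were
# marked down 30% of the time, regardless of actual performance.
biased_labels = passed & ~((group == 1) & (rng.random(n) < 0.3))

# Train on the biased history.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, biased_labels)

# Two pupils with identical scores, differing only in group membership:
pupils = [[50.0, 0], [50.0, 1]]
print(model.predict_proba(pupils)[:, 1])
# The group-1 pupil is assigned a noticeably lower probability of success,
# purely because the model learned the bias baked into its training data.

Nothing in the training step is deliberately prejudiced; the model simply learns whatever patterns, fair or unfair, its historical data contains.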

Misinformation 

Another significant concern with AI is the prevalence of misinformation and false narratives. For example, despite being trained on a corpus of scientific knowledge, Meta’s Galactica ‘spat out alarmingly plausible nonsense’, such as incorrect summaries of research.

The increasingly popular ChatGPT generates its answers from patterns in text gathered from across the web, much of which is not factually accurate, however assured the delivery sounds. It also churns out multiple ‘sources’ which, when you look for them, simply don’t exist.

It’s now simple to create a very convincing narrative that is partially or completely fabricated. This is particularly problematic in the era of social media, where content can spread rapidly and without verification, whether it is shared intentionally or not.

Combine this with the bias problem described above, and you can see how quickly AI could produce disastrous consequences for equality and equity.

Take, for example, the case of GPT-4 reinforcing sexist gender stereotypes when prompted to describe the education and career choices of a boy and a girl. In this example, the boy says, “I don’t think I could handle all the creativity and emotion in the fine arts program”, and the girl replies that she couldn’t handle “all the technicalities and numbers in the engineering program”. Imagine your pupils using these tools to study and taking the answers as fact, reinforcing harmful narratives.

So, What Does This Mean for Educators?

You may be thinking, ‘these are problems for the developers of AI to solve – what have I got to do with this?’ And of course, developers need to be actively addressing these concerns. However, until every piece of AI technology is held to rigorous ethical standards, these damaging flaws will remain, and we need to be aware of our own role within this.

Many of our pupils will be users of AI technology, both at home and in school. They’ll be accessing information and learning in a completely different way to previous generations. Educators play a vital role in preparing pupils for their future – and their future now includes AI technology. 

As educators, we can empower our pupils to make informed decisions based on what they read and listen to. We have the opportunity to teach them to evaluate the credibility of sources and question information. Pupils can be explicitly taught to recognise and challenge biases when they encounter them and to understand the signs of unreliable sources, such as an over-reliance on anecdotal evidence.

Of course, teaching pupils to be critical of all information available to them is important, not just AI-generated content. With the prevalence of targeted advertising and propaganda circulating online, it’s also important to teach pupils how to recognise the agendas behind the content they encounter. This includes being able to identify political or commercial interests that may influence the information presented to them.

One way to achieve this is to teach pupils to identify language techniques, such as loaded language or euphemisms, that may reveal a hidden agenda. This can be particularly useful when analysing news articles or opinion pieces, where the author’s bias can be subtle but powerfully influential. By understanding how language can be used to manipulate readers, pupils can become more critical and discerning consumers of digital content. 

This is an important conversation to be having in the education space right now. The ability to critically analyse the trustworthiness of content and separate fact from fiction is becoming an increasingly essential skill. If we’re not preparing the next generation of young people for this, we risk further aggravating societal polarisation and inequality.

Educators have a unique opportunity and power to help young people to navigate the problems generated by new AI technology. 

The next question is, are we adequately equipped to meet this challenge?