7 questions teachers and trainers should ask before integrating ChatGPT or other generative AI tools in their courses

Just like edtech companies have been quick to integrate ChatGPT and other generative AI tools into their products, tech-savvy teachers and teacher trainers are already developing courses that teach the skills to use generative AI for various teaching- and learning-related tasks. But…

Learning the skills to use generative AI is the easy part.
Current and future IMPLICATIONS of doing so are what need to be included in any teacher or learner training❗️

Nergiz Kern

Everybody wants to be the first to offer a solution, a course, a helping hand to desperate teachers who are worried and anxious about how they will deal with this new technology suddenly unleashed on them and their students.

This is understandable. And I agree that… 

a) there is great potential in these tools and 

b) a need for learner and teacher training.

But there are also risks, many unknowns and great potential for causing harm in the short and long term.

This is why we need to step back a bit and ask a few important questions❓first.

Ask these 7 questions – not ChatGPT but yourself!

1 Have you included information and discussions on the downsides, risks and limitations? 

If you are not aware of any risks and limitations yourself, stop and find out now.

As many teachers will know by now, the biggest danger of using ChatGPT is not plagiarism. We can address this by, for example, changing what and how we assess learning.

Every course that introduces AI needs to include information about its risks. Many of the large language models, image databases and algorithms behind these tools, for example, come with well-documented problems such as bias.

2 Are you and your students or trainees aware of what is happening with the information you provide when interacting with ChatGPT?

Several European countries have banned ChatGPT because OpenAI might have breached data protection and privacy laws in the EU. There have also been cases in which people shared confidential information with ChatGPT, not aware that OpenAI saves user input and that data can leak due to bugs.

OpenAI states on their help page that:

‘Your conversations may be reviewed by our AI trainers to improve our systems.’

‘…we are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.’

3 Are you going to have your students work with ChatGPT directly or through an API? It makes a difference.

The terms of service for the API are different from those for using ChatGPT directly. The API ToS states: ‘OpenAI will not use data submitted by customers via our API to train or improve our models’. Does this make you wonder whether the same holds when you use ChatGPT directly?

Product developers use the API to integrate ChatGPT into their products and program it to suit their product’s purposes and enhance the user experience.

For example, if you ask ChatGPT directly for academic references, it will make them up. But if you use a specific AI research assistant, such as Elicit, it will provide you with real papers.
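To make the difference concrete: when a product uses the API, your prompt travels through the developer’s own code rather than the ChatGPT website. A minimal sketch of what such a request looks like — the endpoint and model name follow OpenAI’s public API documentation at the time of writing, and the prompt is purely illustrative:

```python
import json

# Illustrative sketch: the JSON payload a product would send to OpenAI's
# chat completions endpoint (https://api.openai.com/v1/chat/completions).
# Under the API terms of service, OpenAI states it will not use data
# submitted this way to train its models -- unlike ChatGPT used directly.
def build_request(user_prompt: str) -> dict:
    return {
        "model": "gpt-3.5-turbo",  # model name as documented in early 2023
        "messages": [
            {"role": "system", "content": "You are a research assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request("Suggest three papers on AI in language teaching.")
print(json.dumps(payload, indent=2))
```

An actual call would send this payload with an `Authorization` header containing the developer’s API key; no request is made here, so the snippet only shows what leaves your application when you type into a third-party tool.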

4 What existing problem are you trying to solve with generative AI that you couldn’t solve before?

As educators, we need to recognize when AI (and other technologies) and their benefits for education are hyped and oversold. We need to ask: 

  • What is possible with this technology now?
  • What problems is this a solution for?
  • Are these genuine problems or invented by the developers of the technology to sell their solution?
  • Can we solve these problems without AI?

Developers see the world and its problems from a technological perspective, such as trying to solve the AI bias problem by correcting datasets. Neil Selwyn calls this a ‘technicist perspective’. Their values and solutions are not necessarily the same as those of educators, who have a ‘pedagogical perspective’. 

Selwyn also reminds us that technology, and how it has been used historically, is neither politically neutral nor always benign. Marco Kalz, Professor of Digital Education, agrees that the current buzz around ChatGPT has to do with ‘agenda setting’, and that ‘AI-based tools are never neutral.’

5 Is there anything (skill, control, creativity, empathy, wisdom…) we might lose by jumping on the efficiency bandwagon and handing over tasks to AI tools because they are faster?

If we argue that teaching is an art, do we really believe we can replace teacher decision-making with AI, which is based on statistical datasets, just because it is more efficient? Who decided that efficiency is our biggest problem? Will efficiency make our students more creative, more innovative, better critical thinkers, more humane and empathetic, better collaborators and teammates?

Planning lessons, teaching, interacting with learners, providing feedback, and all the other tasks and activities that your job as a teacher entails, are not mere mechanical tasks that we can automate through technology. Technology, even if we call it artificial intelligence, cannot replace the human teacher. It might look as if it does, but only to the detriment of our learners.

We, as in society, will lose more than we gain if we don’t use AI carefully as a tool and make sure we prioritise human-human interaction where it matters. For this, we need to understand more than ever what makes us human and what we value about our humanness.

6 Who is going to benefit from the time saved through the efficiency of AI tools?

If you can create a lesson plan in 5 minutes instead of an hour…

  • Will you get to go home earlier, or get to use the saved time for meaningful interactions with your learners? Or will you have to fill the time with more hours of teaching more students?
  • Will you work fewer hours but also get paid less?
  • Will you earn the same, if you are paid by the hour, for doing a task in 5 minutes instead of the time it took you previously?
  • Will your school employ the same number of teachers as before, even if the work can now be done by fewer teachers?

7 Will there be a discussion about what generative AI can mean for the future of teaching and learning?

Let’s forget for a moment the current issues with the use of AI in education. Have you thought about what the widespread use of AI might mean for teaching and learning in the long run, beyond what we mentioned in questions 5 and 6 above?

With each new hyped technology (online learning, the metaverse, NFTs, AI…) or situation (COVID lockdowns, natural disasters…), teachers are pushed into quickly adopting it. If we don’t, we are warned, we will perish. We are told THIS is the FUTURE and we cannot prevent it.

However, this approach is neither sustainable nor desirable. It does not lead to a positive transformation of education but to a repeated cycle of expensive, failed, technology-driven attempts at educational reform.

If it is true that we are in an exponential age and that technology will evolve ever faster, we cannot keep up by rushing to adopt every new technology. It is all the more important to stop and think through the long-term implications.

If we cannot keep up with the speed of technology development, we need to do the opposite: slow down!

Take time, stop and reflect, think about any potential implications – good and bad – and then consciously decide what steps you will take towards a future that has more of the good and less of the bad. 
