HOP articles, stories and insights
AI in education
Artificial Intelligence (AI) in education
Author: Carmine Rodi Falanga.
Illustrations: Wikimedia Commons, Freepik (premium), Recraft.ai.
Licence to (re)use the article: Creative Commons BY-NC-SA 4.0
Since OpenAI released its first version of ChatGPT in 2022 (can you believe it? Just two years ago), Artificial Intelligence has changed from a topic reserved for science fiction enthusiasts to something that makes regular headlines.
In a way similar to social media, but with possible consequences much larger and harder to predict, it is now revolutionizing every aspect of life, stimulating discussion and polarizing opinions. ChatGPT - don’t worry about the acronym, we will get to it later - gained its first million users within 5 days of launch, reached 100 million users within two months, and as of now counts more than 180 million users (40% of them located in the US, India, Indonesia and Brazil).
This technology may impact our lives in the deepest possible way, and we are feeling the first ripples already. Professions will be transformed, jobs lost and created, and industries changed from the inside out. Education is no exception. This big tidal wave is coming, and while we can’t stop the waves, we can certainly learn how to swim.
In this article, I will share some thoughts on why I think AI is good news, why it is bad news, and look at the recent developments in the legislative and policy-making areas. Then I will focus on its use in education, and non-formal education in particular. I will share a few creative ways I am using it, having a lot of fun with my groups of learners - and, I admit, by myself. And I will include an essential list of recommended apps that I use - all with one big caveat: things change so fast in this field that it’s hard to stay up to date.
And before we start, one essential disclaimer: we did not receive any endorsement or sponsorship from any of the apps and companies mentioned in the article, nor from the authors of the illustrations. We are aware of the controversial nature of this tech, and of digital tools in general: ethical, political, social, and environmental. We don’t recommend reckless and unbridled use of it. Rather the opposite: our intention is to inform, raise awareness, and spark a conversation around its controversies.
What exactly do we mean by AI? A brief glossary.
The IBM homepage states that “Artificial Intelligence is technology that enables computers and machines to simulate human learning, comprehension, problem-solving, decision making, creativity and autonomy”. Emphasis on “simulate” (not “create”): that’s why we can ask a ChatBot to write a Shakespearean sonnet, but we are still far from a truly self-driving car.
The discussion on intelligence is a very complicated one. For now, I will embrace a very empirical approach and go with the Dutch computer scientist Edsger W. Dijkstra, who said: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” It gets the job done, and I will focus on that for now.
At the moment the buzz about AI tools is really about Generative AI (programs capable of generating text, images, videos or other data in response to prompts). LLMs (Large Language Models) are one type of Generative AI focused on text: in other words, massive programs trained on language that can understand, summarize, generate and predict text-based content. Lastly, a ChatBot is a program designed to simulate a conversation with human users. When we put an LLM inside a ChatBot, we get something like ChatGPT.
Do you know what GPT stands for? Generative Pre-trained Transformer. Generative means it’s essentially a Generative AI. Pre-trained because the model is trained in advance on a massive body of text: the default version - the one everybody uses when they first log in to the platform - is the same for every user, although it can then be adapted to their context, needs, style and so on. And finally Transformer refers to the architecture that allows it to focus on different parts of the input and interpret them, often capturing more context than the user initially meant. Well, sadly we are not chatting with Optimus Prime... unless you train it to do so.
Currently, the most popular ones are GPT by OpenAI, Gemini by Google and Llama by Meta. There are more, and the number is growing every week. The programs are growing in power and functionality at an exponential rate. OpenAI released its revolutionary GPT-4 in March 2023 - it can elaborate and produce text, sounds and images, search websites, read and create documents, and more - and the 4o version (the “o” stands for “omni”, because it’s better at… everything) in May 2024. It is hard to stay up to date with the cutting edge because things change all the time. In fact, the newest o1 version is already in its preview phase. If the estimates are correct, it will be a bit slower than its predecessors, but definitely better at solving advanced science and coding problems.
Why is AI good news for education?
Salman Khan (founder of the Khan Academy, which facilitated access to education for 155 million people in 2023 alone) has a very positive outlook on things. “Since [AI] amplifies human intent, what matters is our intent. We know there are bad actors with negative intent [...]. It’s critical that all of us put as much energy as possible into the positive intent,” Khan says.
Students and teachers will soon have access to a personal assistant in their pocket that can give valuable - and personalized - feedback and assist with tasks and goals tailored to their needs. And while AI tools are going to help with grading papers, writing reports and designing classes, all this can free up precious time for teachers to use their creativity and develop human connections with students. (Watch also Sal Khan’s How AI Could Save (Not Destroy) Education TED talk.) The benefits when designing classes or activities will be huge, also in areas such as inclusion and learners with special needs. Ask an advanced AI tool to rewrite a text for a person on the autism spectrum, for a person with an immigrant background who is not fluent in the language, or for a 6-, 10- or 15-year-old. It’s not always perfect, but the results will amaze you!
AI in itself is not “creative” (not yet, at least). It can recombine existing knowledge and information in patterns that look original, but it’s not inventing anything. But powered by this super assistant, human creativity can flourish. And so can yours. Are you looking for an idea for a team-building activity? Ask your chatbot to generate three and choose your favourite - so you can focus on preparation and delivery. Are you scratching your head for five provocative questions for a “where do you stand” debate? Use your AI assistant, and your only problem will be choosing the best ones and fine-tuning the direction you want the discussion to take. Need a fresh role-play scenario to experience a situation outside your group’s everyday life? Ask the AI and you will be surprised: scenario and dialogue lines generated in the blink of an eye.
But nothing will take the place of the real-life, human interaction between you and the group (or the teacher and their students). “AI will set education back 2500 years… and that’s a good thing,” says Robert Clapperton from Ryerson University. Meaning that we will leave behind the standardized input-output processes typical of the Industrial Revolution era and go back to personalized, process-based education. Every school will have the potential to become a classical Academy.
Why is AI bad news for education?
But every coin has its flip side. And in this case, the coin is massive and so is its shadow. If we don’t learn how to integrate AI tools into education effectively, their downsides might overwhelm us. Just to list a few adverse effects: digital tools can lead to alienation, lacking the empathy and emotional intelligence that people naturally have and need. We can be seduced by over-reliance on technology (“there is an AI for that”, so why bother doing any work ourselves) and become collectively lazy. AI tools raise issues about bias - they can only reproduce the knowledge they were trained on, and that’s often flawed - as well as about geographical, economic and technological accessibility and equity, and huge concerns about privacy and data use. They can also make mistakes, and literally make up information and citations (these are called “hallucinations”, and nobody fully understands why they happen).
They raise concerns about their ethical use. At the moment, all the major commercial models have been tightly trained to recognize and avoid racist or abusive language, and don’t encourage any interaction that could be harmful, toxic or dangerous. But some people and organizations disagree with this and release unfiltered AI tools for profit, to further an agenda, or simply because they can (and if this sounds a lot like Elon Musk, we will get to him later). Again, a discussion on intent is necessary.
Finally, if overused, AI tools can stifle critical thinking. We already see it in previous iterations, with so many people taking social media and search engines as their main news source without questioning the sources. What will happen when this content is delivered with the friendly, well-known voice of our best digital friend?
Education in particular will feel the impact like an earthquake. Rote, clerical assignments (“write an essay on…”, or “research five facts about…”) can now be solved with one single click of the mouse, as well as basically every standard language, math or science test. The latest version of Google Lens has a “solve homework” button. Teachers and curriculum specialists everywhere are worried this will be the apocalypse. But is it really that bad?
If a task can be solved with one single click, it will be - and it probably should be. Not because students are inherently lazy or evil, but because they don’t like wasting time. And who can blame them? I still meet educators in my work who insist on the familiar mantra: “But we need to teach young people that life is sometimes boring, and so is school”. Well, try telling that to your average teenager! And maybe it’s about time we challenge that status quo. Life can be boring at times, but learning doesn’t have to be.
I am not saying “old ways are bad”. Old ways are actually awesome (remember the Academy one paragraph ago?), but we need to preserve their spirit and purpose while embracing the technological changes. If the only purpose of some homework was to keep students busy, then frankly we are all better off without it.
This is something non-formal educators and youth workers know very well. As I wrote in the previous chapter, I personally believe the field of NFE has much to benefit from a healthy application of AI tools. Methods aimed at developing creativity, problem-solving, communication and critical thinking skills can receive a powerful boost from AI tools, and they are not at risk, simply because their powerful dynamics happen live, are centred on participants, and require active participation.
As I write this article, 6-year-old kids are starting primary school in Prague, where I live. They are excited and nervous, as every kid ever was on their first school day. They know they will meet new friends, learn new things, begin a new chapter of their life. What they don’t know is that they will enter the “labour market” in about 2042. Do we have any idea what the job market (if there still is one) will look like in the forties? Or the geopolitical landscape? Or even life on Earth? If you do, please reach out, because I don’t have a clue.
We have no idea what skills and knowledge these kids will need in their future lives. But we do know they are going to need creativity, communication skills, curiosity, optimism, media and digital literacy, resource management, conflict resolution, and a huge dose of critical thinking. And education at large needs to change to equip learners with those skills. AI can effectively help with that, or put the final nail in the coffin. Once again, it’s a tool in our hands.
And more. These applications require enormous financial investments to operate, so they have strong ties with the commercial world. They have a frankly worrying environmental aspect, as well as big repercussions on arts, culture and the news industry (the volume of disinformation websites and articles has been skyrocketing recently). We will not focus on these aspects in this article, but they are a crucial part of the conversation and worth keeping in mind.
Development in the Legislative and Educational field
The European Union is showing initiative in regulating AI, and its “AI Act” represents the first comprehensive piece of legislation on the matter. California is also working on its own version, currently in preparation. The EU AI Act was published in July 2024 and will enter into full force in two years’ time (with some emergency provisions starting within 6 months).
The law basically defines four levels of risk for AI systems, from “Unacceptable” to “Low or minimal”, and introduces different requirements and red tape for developers. The Act also sets transparency requirements and is aligned with the GDPR, so data protection and privacy will also be guaranteed.
This is a very welcome step; on the other hand, it is chilling to think that AI programs posing an “Unacceptable risk” are legal right now, and will only become forbidden in early 2025 - and then only in the EU. But the hope is that after the European Union takes the lead, other countries will follow suit.
On the front of educational policy, the Council of Europe has started a consultation process and you can find the conference recordings and the report online. It’s a starting point as it focuses on the ethical side, inclusion and the importance of digital literacy.
But there is more, and I don’t see it addressed (yet). Companies are using AI tools to cut costs and increase productivity. This usually comes at the expense of human labour. Many professions will be transformed by the AI revolution, and many people will lose their jobs. It’s only a matter of time before governments too start thinking about how to cut costs by using AI tools. But this could be a critical mistake. By allowing algorithms to take over our bureaucracies, we delegate our collective well-being to mindless strings of code. That could be an existential danger. And in the field of education we must tread even more carefully. Yes, it’s true: every student can have a personalized, always-available learning assistant and tutor in their pocket. But should it replace actual, human teachers and educators? That could be potentially disastrous, and it’s a political decision.
A brief curated selection of AI apps (all these include a free but limited version)
For general purposes, I recommend ChatGPT (free profile, registration necessary). The free account gives access to GPT-4 and has some limitations (reduced speed and a limited number of prompts and data usage), but it will probably be more than enough for an average user. GPT-4 can process files, create images, and on mobile it can also process voice-to-text, so it feels like a conversation (with a voice a little too close to Scarlett Johansson’s, but that’s another story). A paid account gives access to the 4o version, which is harder, better, faster, stronger, and for 20 USD/month is definitely recommended for professional use.
I also switch to Gemini sometimes and compare the results; at least for now, I don’t feel the need for another premium account. Gemini seems to have better control of tone (it’s more friendly and conversational), recently gained the ability to create images, and of course its ability to search the web is unparalleled, being powered by Google. But ChatGPT was released first, and the consensus seems to be that it still has a competitive edge.
A popular third option is Llama 3 (powered by Meta), which also boasts fantastic performance, but I haven’t tried it. Not one to stay behind in a technological arms race (and one with a lot of money to spend), Elon Musk’s startup xAI is also working to release Grok 3, “the most powerful AI in the world”, by December 2024. Defined by Musk as “an anti-woke competitor to ChatGPT” (well, it seems Musk needs an AI that can tell violent jokes and produce extreme right-wing propaganda), Grok 3 will require “the largest data center on the planet” near Memphis, Tennessee - another pollution nightmare. Musk must love superlatives more than the environment.
By the way, if you are concerned about a more ethical use, you can steer towards DuckDuckGo AI chat, which offers access to the free or open-source versions of the most popular AI chatbots and promises full privacy, never collecting your data.
But having the right tools is only the first step. You need to use them well. To learn how to write effective prompts and get the best results from your ChatBot, this video is a great start.
For image generation, I use ChatGPT (which incorporates the latest version of DALL-E) or the superlative Magic Media tools of Canva (free account with limited features). Try also Open Art, Gencraft or Recraft for alternatives (free limited profiles possible on each), and if you plan professional use, choose your favourite and buy a premium account.
Kahoot is also integrating an AI tool that can generate quizzes automatically (only for premium users), but as a simple workaround, it’s possible to create a quiz with ChatGPT, export it as a spreadsheet file and upload it to Kahoot. My groups learn how to do it effectively in half an hour.
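As an illustration of that workaround, here is a minimal Python sketch that writes quiz questions into a spreadsheet-ready file. The example questions are hypothetical, and the column layout is my assumption based on Kahoot’s quiz import template, so double-check it against the official template before uploading:

```python
import csv

# Column layout assumed from Kahoot's quiz import template:
# the question, four answers, a time limit in seconds,
# and the number of the correct answer.
HEADER = ["Question", "Answer 1", "Answer 2", "Answer 3", "Answer 4",
          "Time limit (sec)", "Correct answer(s)"]

# Hypothetical example rows - in practice, paste in the quiz ChatGPT generated.
questions = [
    ["What does GPT stand for?",
     "Generative Pre-trained Transformer", "General Purpose Text",
     "Guided Prompt Tool", "Global Processing Technology", 20, 1],
    ["Which company develops Gemini?",
     "OpenAI", "Google", "Meta", "xAI", 20, 2],
]

# Write the rows to a CSV file that can be opened in any spreadsheet app.
with open("quiz.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(HEADER)
    writer.writerows(questions)
```

Since Kahoot imports Excel files, paste the resulting rows into the official xlsx template (or simply ask ChatGPT to output the table directly and copy it over).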
To assist in creating presentations (content and slides), the app that comes out on top seems to be SlidesAI. To generate speech (text to voice), ElevenLabs does a good job. For video trailers complete with the talking heads of people who were never real, try Synthesia. And for (short) AI videos, try experimenting with PixVerse, Animoto or the impressive Haiper AI.
To create music, Suno is a little gem. You can input a few keywords, choose the genre, and voilà: it will write, compose and perform a 2-4 minute song for you, complete with lyrics (unless you choose to make it instrumental). It’s phenomenal, and fun to use (and a nightmare for copyright).
Are you curious how to detect AI text? Maybe your students presented an essay that feels a bit… artificial? Quillbot has a good AI content detector, or I use Copyleaks as an alternative, but be aware that none of these tools is 100% accurate, and LLMs are constantly evolving to appear more “natural”. I promise, this article was completely written by me without any AI tools. And if you don’t believe me, test it!
The HOP online learning platform is based on Moodle, and its newest version (4.5), published on 5th October 2024, also features an AI integration, allowing course creators to generate text or images and learners to automatically summarise the contents. HOP will move to Moodle 4.5 when it gets more stable, not sooner than 2025.
A few ideas on how to use AI tools creatively with groups and in experiential learning.
Here are some of my favourite AI-powered activities. I already use them with my groups; they are constantly evolving, and we are learning a lot as the tools change and new opportunities arise.
- Create a Quiz on any given topic and use Kahoot to play it.
- Play Human Bingo, but use an AI tool to write the questions. You can ask it to focus on “provocative” questions, or professional, creative ones, and so on. Remember to choose the questions yourself; don’t leave the initiative to the AI.
- Ask the AI assistant to write a short story combining the styles of 5 favourite authors. Then read the story in small groups, and use critical thinking and detective skills to identify the influences of each author. This is particularly fun if the authors belong to different epochs or artistic styles.
- Generate an image representing how you feel now. Then share it in a small group.
- Generate 6 images representing an experience you had (for example, a training course). Then create a storyboard with Canva.
- Celebrity Interview: working in small groups, choose a celebrity - living or dead, real or imaginary. Then agree with your group on five provocative questions to ask that person. Ask a ChatBot to role-play the celebrity (“Can you role-play as Napoleon Bonaparte?” “Oui, je peux”), ask the questions and record the interview. Be aware that specific characters may be restricted for political reasons. Donald Trump is now off limits on ChatGPT. GPT will also refuse to impersonate 20th-century dictators, while strangely enough Gemini can do a Mussolini impression. And it’s quite creepy.
- AI-assisted city adventure. Working in small groups, use a ChatBot to create a scavenger hunt for the area you are in. Then, go out and play it! Better yet if different groups create different itineraries for the same area.
- In small groups, identify a problem that afflicts your local area/country/Europe/the world. Ask the AI to write a story about a super-hero solving that problem. Generate an image of the superhero. Discuss the results. (Credits to Lukas Kosowski for this one)
- Critical media analysis. Individually or in small groups, watch a movie or an episode from a series and discuss it together. Identify its themes, meaning, characters’ motivations etc. Identify possible bias present in the medium. Then, ask the AI “what do you think of my analysis? What feedback would you give me to see this piece more critically?”.
- As a group, everybody says one word that represents the experience, the course, the week spent together. Then ask an AI tool to create a poem (or a song with Suno) using those words. That will be the group’s anthem, and if you don’t like it, you can create another one... or ten.
This is not a comprehensive list, of course, and the methods are not described step by step (we would need an entire manual for that); they are intended as inspiration. If you need to develop them further, get in touch with me, or… ask a ChatBot. By now, you should know how.
Photo credits:
- Copyright, Freepik
- Copy-free, Hamilton Richards
- Creative Commons BY 2.0, Steve Jurvetson
- Copyright, Freepik
- Copyright, Freepik
- Copyright, Freepik
- Copy-free, created with www.recraft.ai