How AI and Digital Tools Are Changing Education
This blog features Katerina Karaivanova, Teaching and Learning Project Lead (AI) at Griffith College, as she shares her insights on the future of AI in education. We explore how AI can support learning, the importance of using these tools ethically, and the challenges and opportunities they bring to the classroom.
Can you tell us about your role as Teaching and Learning Project Lead (AI) at Griffith College, and how it connects to the future of education?
I’ve been in this role since March, and it has been a great experience so far because it perfectly combines my passions for both technology and education. My primary focus is on helping the college navigate the rapidly evolving AI landscape in higher education, including both the opportunities and challenges it presents. I work with faculty and lecturers across all aspects of AI-related topics, including guidance on assessment strategies, our AI policies, and academic-integrity feedback on AI use in student submissions. I’m looking forward to the next academic year, where I hope to continue working not only with lecturers but also with students, and to focus on the productive ways in which generative AI can be used in education.
How do you see AI and digital tools transforming the traditional learning experience for students today?
Generative AI is completely changing the way some students engage with their learning. When used appropriately, these tools can support learning by providing examples or differently phrased explanations. Students can use them to create practice questions and to break down large tasks into manageable parts. The issue is that this same technology can also significantly undermine learning if students misuse it. Current generative AI models can produce written text extremely quickly, and some students do take that shortcut, undermining both their knowledge and the value of their education. Ultimately, the key is to teach students how to use AI as a learning tool, not a replacement for their own thinking.
What are the biggest misconceptions about AI in education that you encounter, and how do you address them?
The biggest misconception I encounter is the automatic assumption that students will only use AI for cheating. While that is definitely true for some students, there are many other, more appropriate ways that students use AI: as a tutor, to create practice questions, to organise their notes, to brainstorm ideas, and to challenge their thinking. The second major misconception is that AI will inherently destroy students’ ability to think. I actually believe the opposite can be true, if we teach students how to use AI properly. If a student generates content and simply accepts it as fact, then yes – that’s a poor use of the technology that diminishes their ability to think critically. However, if we teach students to evaluate AI outputs, fact-check information, identify potential bias, and ask questions more effectively to obtain more accurate responses, then we are actually developing their critical thinking skills. I believe the best way to address both of these misconceptions is through education, which is what I plan to focus on in the next semester. By learning how to use AI responsibly, both lecturers and students can be better equipped to handle the challenges that come with this new technology.
How can educators balance the benefits of AI with critical thinking and creativity in the classroom?
This is genuinely challenging and typically requires a comprehensive reevaluation of how you teach and assess students. On one hand, you want to make sure that students are learning and are earning their degrees on their own merit. On the other hand, AI is here to stay, and we know that more and more businesses and industries are using it, so we have to make sure we prepare students for that as well.
One way you can balance that is by incorporating some AI literacy into your modules – it doesn’t have to be in graded assessments, but can be in class activities. Within the scope of assessment, Griffith College uses the AI in the Learner Assessment framework, which allows for different levels of AI use in assessment. This allows our lecturers to use AI when appropriate and where it aligns with the learning outcomes, and to better instruct students on what is expected of them and how AI tools can be utilised. In class activities, you can allow students to use AI in various ways, including generating and then critiquing content, using AI to brainstorm initial ideas that students then develop, or even having students use AI to challenge the ideas they came up with and refine them further.
Looking ahead, what are the most significant opportunities and challenges you foresee with AI shaping the future of higher education?
The opportunities are really exciting. AI can provide learning support that adapts to students’ individual needs, making education more accessible for everyone, regardless of prior knowledge and abilities. For students with learning difficulties, AI tools can break down overwhelming tasks into manageable steps and present information in various ways, such as generating a transcript of an audio or video recording or converting text to speech. For lecturers, AI can be a valuable tool for reducing the workload of routine tasks, allowing them more time to work directly with students – something AI can never replace.
The challenges are equally significant, though. Academic integrity remains a major concern, which leads to the greater challenge of navigating assessments as these tools continue to improve. Beyond the practical challenges, we also need to evaluate the ethical and sustainability implications of widespread AI use in education. This includes ensuring equitable access to AI tools, addressing data privacy concerns and being mindful of the environmental impact of AI model training and use. It is our responsibility as educators to help students not only learn to use these tools but also understand the ethical implications of their use.