The Future of Education in the Age of AI
Written by Greg O'Brien, Learning Technologist, Griffith College.
By the time you read this, AI will certainly have mutated and grown further into its "teenage years". The pace of development in AI is astonishingly fast; a day is an eon in AI time. This means that what I'm writing will date very quickly. Thousands of pieces have been written in the past few months alone, so I'm going to attempt to bring some fresh thoughts.
AI has been in development, in various branches, since the 1950s. But it hasn't been a steady, smooth progression along an upward slope to where we are today. Progress has come in bursts, separated by long lulls, before accelerating into what we see in AI today.
I'm going to make a stand here and say that AI doesn't mean sentient artificial consciousness, with individual desires, insecurities, and other human attributes and qualities, never mind rights. This science-fiction notion of a human-like machine is known as Artificial General Intelligence, and we still seem to be decades away from it. True AI sentience looks like a far more elusive goal, and AI computer science is, in any case, divided along different lines of research and development.
For brevity, I'm going to use the acronym AI. Today it is most widely represented by a range of machine learning tools, sometimes based on a model of the human brain, that can learn in specific ways and be trained by humans on vast amounts of data. These tools range from narrow, task-specific AI (examining multitudes of molecular qualities, for example) to LLMs (Large Language Models) that generate sentences and respond to discursive input by predicting the probability of the next word in a sequence.
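The "probability of the next word" idea can be illustrated with a deliberately tiny sketch. This is not how a real LLM works (those use neural networks over billions of parameters); it is only a toy bigram model, with an invented corpus, that counts word pairs and predicts the most frequent follower:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which words follow
# which in a tiny, made-up corpus, then predict the most likely next
# word. Real LLMs use neural networks over tokens, not raw counts.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# For each word, count every word that directly follows it (bigrams).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" most often)
```

Scaling this idea up, from counting word pairs to learning statistical patterns across trillions of words, is a fair first intuition for what the large models are doing.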
One LLM, ChatGPT, arrived in test form in November 2022. It was the latest child of the OpenAI company, which had been developing interactive AI that could respond, more and more convincingly, like a human personal assistant. It is very fast. And very powerful.
AI in Education
Education is just one domain affected by this almost lightning-fast technology. There will be many books written on the transformation of our civilization (and I don't use that word often) during this period: in respect of an individual's experience of the world and the deep ripples felt through society as a result; the meaning of work, and what the role of a human employee or entrepreneur is, for example.
While working on a new book in 2021, The Age of AI and Our Human Future [Kissinger, Schmidt, Huttenlocher, 2021], the authors spent lockdown collaboratively writing and pondering the full spectrum of societal, social, geopolitical, medical, and existential change that our societies now face. Although the work is illuminating and thought-provoking, it is interesting to note that it doesn't have the flavour of consternation and panic we taste in some of today's writing on the subject; it is generally a calmer read. What it does present are some transformative projections of the very real revelations produced by AI, in realms of knowledge beyond the reach of human perception and thought. The authors suggest that a new relationship is required between AI and us, one that will change our relationships with one another in society and transform our individual identity, as we refocus on the unique, creative, and quintessential qualities of what human minds produce and materialise. An interdependence between machine and human, in partnership with a new, staggeringly powerful ally, seems to be the suggested path forward; after all, we already rely on forms of AI for entertainment filtering, social media, and unwieldy systems that humans alone fail to manage as efficiently.
Quite often, when an AI tool arrives at a startling discovery that works, it takes perplexed researchers considerable time to trace why the AI reached this great and positive new conclusion. We are playing catch-up. We've created powerful tools, and by the nature of their design, it's hard to work out how a machine arrived so quickly at its discoveries: the unique, novel qualities shared between molecules that give rise to new antibiotics, for example; or why, in a less complex field like chess, the machine's wins had experts fascinated by new strategies that no one had expressed in the game's 600-year history ("Peter Heine Nielsen likened watching AlphaZero's games to seeing a superior species landing on earth and showing us how to play chess" in 2017).
What is very apparent in the education world now, is the contrast between the rate of growth in AI and the pace of academic change. How do you catch up, understand, and adapt to what is happening daily, and consider, contemplate, and discuss the meaning and consequences of these changes, while appreciating the brilliance and power of this technology?
It's very, very difficult for one individual to make sense of the magnitude of impact of the latest tectonic shift in AI, let alone for national or international groupings. Differences in approach across territories are multi-polar. Some restrictive approaches are perceived as knee-jerk and reactive; outright bans on generative AI have already happened on some campuses. Other institutions could be said to be holding their nerve and approaching this change as a long-distance run rather than a hundred-yard dash. These institutions are bringing students and staff together to engage with the new tools, changing up the curricula, and clocking up AI "air miles". They are investing time in actually getting results from the tools.
In conversations happening across the sector, heavy questions are being pondered by students and staff. Our assumptions about the right structures and goals of education are being seriously considered and debated. At colleges and universities, the foundations of what constitutes most learning programmes are being questioned. Is it time to rethink the roots of what learning even is? If the system is based only on holding multiple concepts and facts in mind, "xeroxed" in our brains and then committed to an assignment or exam document at speed, is this the best way to witness, endorse, and confer achievement of learning?
Academic experts in AI have become the new rockstars in recent months. Dr. Sarah Elaine Eaton, professor, ethicist, writer, and speaker, posits a new writing framework for a post-AI world. Her Six Tenets of Postplagiarism: Writing in the Age of Artificial Intelligence illustrate a hybrid human-AI normalcy, where humans remain responsible but can judiciously relinquish writing control to the AI. Attribution remains vital for recognizing one another in the learning community, but language barriers are rendered insignificant, as an author's reach becomes even more expansive.
Anna Mills spoke to Irish academics in February 2023. She is a San Francisco-based advocate for critical AI literacy, a writing teacher, and a creator of excellent open educational resources, among many other things. Her talk touched on how we support students, mitigating threats posed by problems we hadn't encountered before, as well as the opportunities in this new AI technology. What was apparent from Mills's talk was the value of embedding AI teaching in existing digital literacy programmes, as well as sustaining training in writing skills and negotiating the inherent biases in the technology's training materials.
"Because such AIs are training without specification to "proper" outcomes, they can - not unlike the human autodidact - produce surprisingly innovative insights. However, both the human autodidact and these AIs can produce eccentric, nonsensical results in both supervised and unsupervised learning."
- The Age of AI and Our Human Future [Kissinger, Schmidt, Huttenlocher, 2021]
While the opportunities in AI are flowering daily with so much promise, cases abound of experimentally untethered AI replying to human input with unsavoury responses. Recently a journalist, Kevin Roose of the New York Times, infamously spent two hours with the new Microsoft Bing and soon entered a conversation marked by what might be considered unhinged and manipulative responses from the bot. Microsoft very publicly and quickly placed guard rails around Bing, limiting single conversations to fewer than six responses. Similarly, an Ars Technica article about users attempting to hack Bing describes stories of Bing producing threatening and shocking responses, as well as not taking criticism very well. This was bad enough for those involved, but responses like these may be dangerous for the young or vulnerable, who may not be able to distinguish between the shiny machine and a real person. The internet can be a very challenging highway for the neurodiverse anyway.
Original thought and intellectual property
In a recent webinar for the EUIPO (the European Union Intellectual Property Office), Delia Belciu and Rahul Bharita explored how their office is managing operational growth and demand with assistive AI, but very much with humans at the centre of decision-making. One thread wound through their talk as a persistent theme: the need for humans always to evaluate AI procedures and results, and for humans to make decisions alongside the power of assistive AI. This position aligns with EU strategies around trustworthy AI, and against the risks facing us in terms of misinformation, emotion-recognition systems, or unwanted racial-bias profiling, to name a few.
They calmly outlined the complex issues we can anticipate in courts across Europe about what amounts to original subject matter and human intellectual creation that is "identifiable with sufficient precision and objectivity". If I create using AI, do I own the IP? What are the rights of those who created and designed the AI? The current view offered is that AI-generated work could be protected by copyright if there is substantive human selection of input into the AI, and if the individual who selected the data for input is considered the author of an original work.
Engaging collaboratively in AI use, with mindful consideration
At a recent talk, Robert Ross (Senior Lecturer at Technological University Dublin and investigator at the ADAPT Centre, among other roles) suggested that the human input of instructions is really very minimal [in relation to humans "driving" ChatGPT's generative AI]. This was the first time I had heard anyone make this point (and I have seen and read far too much on this subject!). It occurred to me that this may strongly indicate that the skills of prompt literacy are achievable, within reach of anyone with the right training [prompts are specifically designed text inputs for popular generative AI models]. If we can confidently understand how the tools work, we can become prompt-competent, and start on the real work of going deeper in our research and the exploration of our fields.
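What prompt literacy looks like in practice can be sketched very simply: a well-formed prompt names a role, a task, and explicit constraints. The helper below, and all its field names, are purely illustrative inventions, not any vendor's API; they only show the structure of a deliberate prompt versus an off-the-cuff question.

```python
# Hypothetical sketch of prompt structure: role + task + constraints.
# build_prompt and its parameters are invented for illustration only.
def build_prompt(role, task, constraints):
    """Assemble a structured prompt string from its parts."""
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="a patient tutor for first-year economics students",
    task="explain supply and demand using one everyday example",
    constraints=[
        "keep it under 150 words",
        "avoid jargon",
        "end with a question for the student",
    ],
)
print(prompt)
```

The point is not the code but the discipline: stating who the model should be, what it should do, and within what limits is a teachable, repeatable skill rather than a dark art.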
The power and speed of technology challenge us to become smarter
The intellectual qualities of critical capacity, focused mental stamina, and robust ways of thinking are required here. If the power of AI has shown us a kind of logical, vast mind operating at superhuman speed, we cannot match that, but we can strategise around it: investigating and interrogating the outputs of AI, and heuristically comparing multiple results from the same input. We can learn to recognise what appear to be truths and establish whether they are, indeed, truths. These days, our AIs can hallucinate (sound confident but ultimately be wrong). They may not always be subject to this delirium, however. In the months and years ahead, we may need to develop a kind of agile resilience in testing and intuiting the truth, with new methodologies, as AI becomes more difficult to evaluate.
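One of the heuristics above, comparing multiple results from the same input, can be sketched in a few lines. The scoring here (raw word overlap) is deliberately crude and purely illustrative; the idea is only that low agreement between repeated answers is a warning sign worth investigating, not a proof of hallucination.

```python
# Illustrative heuristic: ask for several answers to the same question,
# then measure how much they agree. The word-overlap (Jaccard) score
# is a crude stand-in for any real consistency check.
def jaccard(a, b):
    """Word-overlap similarity between two answers (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def agreement(answers):
    """Average pairwise similarity across a batch of answers."""
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

consistent = ["Dublin is the capital of Ireland."] * 3
print(agreement(consistent))  # 1.0 -- identical answers fully agree
```

A batch of wildly divergent answers would score near zero, prompting the reader to check sources rather than trust any single confident-sounding response.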
By contrast, generating an environment of open and engaged energy in our classes has now become, in my opinion, a key area to refocus on. This is a vital health-sign of student involvement and participation, now more than ever. Yes, we must rationally teach circumspection: when we interact with AI it is learning from us, just as we are from it, and personal data protection habits must still be followed. But a sense of being on an adventure is surely more compelling than a purely fearful, rule-filled approach to a new era. Are we not ethically obliged to train our learners in these new skills anyway? Are we failing to serve our students if we don't turn to face the change in front of us, with them? Would we be deemed negligent in not doing so, if this is what awaits them in employment after graduation?
New areas of discovery: for the good, by the good
With a foundation of proper training in the use of AI, we can start probing new areas of knowledge and innovation as those opportunities for discovery open up. Whatever happens in the wider world in terms of the regulation AI requires, academic institutions must uphold the precepts of academic integrity. We can always extend our conversations and tuition in honesty and truth. AI, wielded negatively, threatens the fabric of our society in the most disturbing ways, so we must ground our use in the good. Tendencies of bias in systems, and issues of representation and diversity, must be unpacked, inspected, and held up against the standards we agree on and expect to underpin these and future technologies.
Imagine if we were not on the defensive, threatened by the acceleration of AI. If there were no downsides or threats, and we could put aside all of our concerns for a few moments, what remains? Are there opportunities for discoveries in new horizons of knowledge, outside the limits of human investigative thought? Isn't that an exciting prospect? If we could master AI prompts competently, students and lecturers could illuminate previously undiscovered insights, in realms of thought we could not have entertained before.