How ChatGPT is Transforming Computing Science
Insights from Ruairi Murphy
Ruairi is a lecturer in the faculty of Computing Science at Griffith College.
How has ChatGPT impacted your industry-related faculty at Griffith College?
ChatGPT is just one, and the latest, example of what is known as Artificial Intelligence, or more accurately, Large Language Models, so I think the effect of such technologies has been known and felt within computing for a long time now. Indeed, tools like ChatGPT are the result of computing science, so whilst it has been met with alarm by many disciplines and industries, rightly or wrongly, for computing it's an exciting development in an already expansive and well-established field. I think it's too soon to say it is "transforming" anything.
Certainly, it can be said that AI/LLMs are already and will continue to change how computing is done, something which has been in motion for some time now. Obviously, a lot of attention has been focused on its ability to write seemingly well-informed and well-structured text, and computing is not immune to this. ChatGPT can output quick and often well-written and correct code, but it is not perfect. Personally, I believe this will be integrated into future coding tools as a way to speed up and refine the work of coders.
I don't see it as something to panic about or condemn, but to embrace its positives - the ability to enhance what we already do. It won't replace the human designing, developing and shaping the software, but aid them. Of course, as academics, there are issues around the originality of work and plagiarism, but for computing as a whole, it is an exciting time.
In what ways has ChatGPT changed the way you approach teaching and research?
An immediate reaction to its widespread adoption has been to ensure that the challenges we set for our learners cannot be easily recreated or solved by a tool like ChatGPT alone. In a way, I think this is a positive development, making us hone our assessments in a way that encourages the creativity of the learner. This is something I've always done, but I think we will be even more mindful of it going forward. I want to help my learners become creative problem-solvers, not machines. Leave the machine work to the machines.
How do you think ChatGPT has impacted ethical considerations in the field of AI?
I don't think ChatGPT has particularly changed the ethical considerations of Artificial Intelligence. Computer scientists have known for some time that the development of this technology raises ethical issues for humanity to resolve. Large Language Models are just one expression or branch of AI. I think ethics has a vital part to play in how we use and integrate AI and LLMs into our lives, just as it has with any technology we use. The history of technology, going back hundreds to thousands of years, is about how we use these new tools and innovations. Moral panics over the latest developments are as old as technology itself, from the printing press and the steam engine to the modern era with nuclear energy, the television, the internet, etc. I don't think AI/LLMs are any different in this regard. AI/LLMs are here to stay and will only get more powerful, so the most pressing question should be how we integrate them, not whether we should.
What are some of the ethical challenges that arise with the use of ChatGPT in your industry-related faculty?
I think the two most pressing ethical challenges, to my mind, relate to the integrity of academic work and the implications this technology has for creativity and human labour. The academic challenges are obvious and well-documented - the ability to produce quick, often well-written, and seemingly accurate answers to questions presents a challenge to institutions looking to assess learner work. This will only get more complicated as ChatGPT and similar tools are integrated into workflows. For instance, I expect the production of small bits of code, sections of software, etc. to be farmed out to LLMs by software producers/coders as standard soon. We want to prepare our learners for "the real world", and this may involve integrating ChatGPT into their work. How we assess this hybrid work, and how we communicate what is considered plagiarism and what is not, will be the challenge.
More broadly, I think as a society we need to consider the implications of automation for creativity and work. The temptation for profit-minded companies might be to embrace as much machine automation as possible to minimise costs (on wages), but at what price? On top of the obvious human cost of people losing their jobs, we run the risk of handing over huge swathes of creative work to machines - machines, we should remember, that mostly just absorb previously produced work and reinterpret it. Whilst we talk a lot about the implications for originality in academia, we should not forget that these LLMs are often built on the work of others, uncredited. We also run the risk of falling into a feedback loop as we hand more and more creative work over to machines that start to reference other AI-produced work.
If, at the behest of profit, we embrace this too much, we run the risk of bleeding the human, creative element from much of our lives.