In any set of forecasts of what technological advances we may experience in 2023, one subject remains consistently in the headlines — the inexorable rise of robot brainpower, also known as artificial intelligence.
MIT Technology Review, run by exactly the sort of people who like to ponder such things, declares: “This is the year of the AI artists. Software models developed by Google, OpenAI, and others can now generate stunning artworks based on just a few text prompts.
“Type in a short description of pretty much anything, and you get a picture of what you asked for in seconds. Nothing will be the same again.”
Declarations that “nothing will be the same again” can be a source of delight for certain types of people — disruptors, Silicon Valley venture capitalists, computer obsessives, for example. For the rest of us, they are not always good news and can deliver unfortunate and unintended consequences.
Take ChatGPT (the initials stand for generative pre-trained transformer), a chatbot which can pop up on your computer screen to give you answers to questions you never knew you wanted to ask.
It has only been around since the end of November, but because it is such a smart talker (it is backed by, among others, Elon Musk, who thinks it is “scary good”) it is garnering huge interest. The company which launched it, San Francisco-based OpenAI, is now valued at over €30bn.
ChatGPT is also a quick learner, so much so that it can turn its computing power to writing stories, job application letters, poetry, and a whole range of creative and knowledge-based tasks. It can also be recruited into the writing of university and college essays. And that is where a problem arises because, in higher education throughout most of the world, the production of an essay is a prime means of testing the research, retention, and understanding of a subject.
In the UK, lecturers have been urged to review forms of course assessment in the context of this new tool, which has the potential to produce credible and high-quality content with minimal human input.
If AI can produce the answers, there is less incentive for students to acquire knowledge, skills, and independent learning. Schools in New York have forbidden the use of ChatGPT over worries it will encourage plagiarism. Australian universities have signalled a return to in-person exams to protect the integrity of the assessment system.
OpenAI itself notes the bot “sometimes writes plausible-sounding but incorrect or nonsensical answers” and “will sometimes respond to harmful instructions or exhibit biased behaviour”.
Nevertheless, this is a challenge which is not going to recede. Teachers must decide whether they will harness this technology and find different forms of assessment, or spend their time trying to identify transgressors.
No one pursued a career in education to do that.