One of the seminars I taught last term was on the theme of ‘Mindless Modernism’. We read Gertrude Stein’s Tender Buttons, along with B.F. Skinner’s essay ‘Has Gertrude Stein a Secret?’, in which he suggests that Stein didn’t write her poems at all but let them emerge mechanically from her pen, like an automaton, and Alan Turing’s ‘Computing Machinery and Intelligence’, in which he sets out the rules of the Imitation Game.
I had the feeling, sitting down to my marking after the Christmas break, that I was an unwilling participant in a version of Turing’s game. Was the work I was reading really written by students? Or had a machine had a hand in its construction?
There’s nothing new about suspicions of students cheating. Though straightforward plagiarism has been easy to detect for at least the last decade, it’s harder to catch students who submit work written or heavily edited by others – parents, peers or, at least for wealthier students, essay mills.
In 2022 the government passed a law making the use of ‘contract cheating’ services illegal, but that move now feels laughably redundant. A recent survey of students in the UK found that nearly 60 per cent have used ChatGPT to ‘help with their assignments’ – brainstorming ideas, correcting grammar or ‘assisting with essay structure’ – and 5 per cent of those surveyed admitted to submitting papers containing material generated by AI (this seems low to me).
Detecting the use of generative AI in students’ essays isn’t so much a question of identifying particular turns of phrase as of distinguishing a style. Usually, when students don’t do the reading or the thinking, there’s an associated sloppiness at the sentence level. Now I occasionally come across writing that is superficially slick and grammatically watertight, but has a weird, glassy absence behind it. There’s not much that can be done about the suspicions that such sentences provoke: despite the claims of some tech start-ups targeting universities, there’s no reliable way of proving that text is AI-generated, and it seems unlikely that there ever will be.
This uncertainty has become an ongoing discussion in department meetings. Some of my colleagues say we should embrace the bots: what if we were to ask students to write prompts to generate responses to questions using AI, and then get them to ‘fact check’ the answers to demonstrate just how readily the technology ‘hallucinates’ fake facts and faulty citations? Others call for a return to older tech: in-person exams, written by hand, or the use of vivas for undergraduates, or requiring students to submit drafts and plans of their work along the way.
For now, I take comfort in the thought that, while AI can be used to generate plausibly fluent boilerplate, nothing I’ve read written by a bot would achieve a mark much higher than a low 2:1. But ChatGPT is good at the inert language found in university mission statements, rushed-off references, funding proposals and, yes, marking feedback. With many universities keen to embrace AI to ‘automate operations and improve efficiency’ (save on teaching costs?) and ‘enhance student learning outcomes’, it isn’t hard to imagine a perfect closed loop in which AI-generated syllabuses are assigned to students who submit AI-generated work that is given AI-generated feedback.
Mark Papers is a pseudonym.