Vol. 44 No. 4 · 24 February 2022

How many words does it take to make a mistake?

William Davies on the mechanisation of learning


‘Perhaps we could start with you telling us what you understand by the term “plagiarism”.’ I don’t know how many times I’ve spoken these words over the last two years. Twenty or thirty perhaps. As chair of the exam board in my department, it falls to me to preside over plagiarism hearings, and this awkward icebreaker serves as a way of establishing whether or not the student concerned understands the allegation being made. There has been a significant increase in the number of cases brought during the campus closures of the Covid period.

Our students submit their coursework essays online as Word documents or PDFs. A few years ago, my department took the decision to run them automatically through the plagiarism-checking software Turnitin, which generates a percentage ‘score’ gauging an essay’s similarity to other digitally available documents, including journal articles, websites and all the other student essays from around the world that have been scanned by Turnitin. A score of 20 per cent or even higher doesn’t necessarily indicate wrongdoing: the conventional use of quotations or well-worn phrases must be taken into account. But scores of 30 per cent and above usually indicate that something is wrong; in a large number of these cases the essay will contain text matching internet sources or essays submitted elsewhere – evidence that this student may have used an essay mill or essay bank. A study by the University of Swansea in 2018 found that one in seven students in the UK admitted to using an essay mill. The government has recently pledged to make essay-writing services illegal, but that won’t stop operations based overseas.
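To make those percentages concrete, here is a toy sketch of how a similarity score of this general kind might be computed – emphatically not Turnitin’s own algorithm, which is proprietary, but a minimal measure that counts how many of an essay’s five-word phrases also appear in a comparison source:

```python
# A toy similarity score: the share of an essay's five-word phrases that
# also appear in a comparison source. Turnitin's actual method is
# proprietary; this sketch exists only to make the percentages concrete.

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(essay: str, source: str, n: int = 5) -> float:
    """Percentage of the essay's n-grams also found in the source."""
    essay_grams = ngrams(essay, n)
    if not essay_grams:
        return 0.0
    matched = essay_grams & ngrams(source, n)
    return 100 * len(matched) / len(essay_grams)

essay = "the curious incident of the dog in the night time is narrated by christopher"
source = "the curious incident of the dog in the night time was published in 2003"
print(f"similarity: {similarity_score(essay, source):.0f}%")  # prints: similarity: 60%
```

On a measure like this, a properly attributed quotation and a lifted paragraph are indistinguishable – which is exactly why a score is evidence to be interpreted at a hearing, not a verdict.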

The use of Turnitin for these purposes isn’t uncontroversial. For one thing, it is a form of digital surveillance, which further mechanises the process of evaluation and risks weakening the trust between students and teachers. It can also mislead students into thinking that plagiarism is defined in quantitative terms: so long as you’re below a particular threshold, anything goes. Who knows how many essays slip through the net, including those written from scratch for a fee? The problem waiting round the corner for universities is essays generated by AI, which will leave a textual pattern-spotter like Turnitin in the dust. (Earlier this year, I came across one essay that felt deeply odd in some not quite human way, but I had no tangible evidence that anything untoward had occurred, so that was that.)

It’s because ‘plagiarism’ is taken so seriously in the academy that we hold formal hearings at which the student being accused can challenge the claim or at least outline mitigating factors. One thing I always stress is that this isn’t simply a matter of incorrect citation or referencing style – that’s a ‘study skills’ problem. To accuse someone of plagiarism is to make a moral charge regarding intentions. But establishing intent isn’t straightforward. More often than not, the hearings bleed into discussions of issues that could be gathered under the heading of student ‘wellbeing’, which all universities have been struggling to come to terms with in recent years.

It isn’t difficult to understand why plagiarism happens, given the omnipresence of the internet. The students I see at the hearings are often struggling to keep up with their studies for reasons that may have to do with family, paid work or mental health. Their attendance has been dropping, they’re behind on their reading, when suddenly an essay deadline looms into view and panic sets in. The suspension of face-to-face teaching for more than a year, combined with limited social interaction with other students and worries about loved ones, appears to have caused many more students than usual to struggle. Loneliness and anxiety combine to produce a form of helplessness.

The experience of lockdown has also made me acutely aware of another not unrelated issue: the difficulty of sustaining traditional humanistic notions of authorship in an online-only environment. It is hardly news that the internet has put a huge strain on analogue conventions of copyright and intellectual property: paranoia about digital piracy and plagiarism is as old as the world wide web. The academic insistence on using bibliographic citation techniques developed for the printing press feels increasingly eccentric now that reading materials and essays exist in a digital (and therefore interconnected) form. The norms concerning what counts as a credible source, or a legitimate quotation or paraphrase, have been under pressure for some time – and never more so than during the last two years, when non-digital avenues for teaching, discussion and reading have been closed.

I have heard plenty of dubious excuses for acts of plagiarism during these hearings. But there is one recurring explanation which, it seems to me, deserves more thoughtful consideration: ‘I took too many notes.’ It isn’t just students who are familiar with information overload, one of whose effects is to morph authorship into a desperate form of curatorial management, organising chunks of text on a screen. The discerning scholarly self on which the humanities depend was conceived as the product of transitions between spaces – library, lecture hall, seminar room, study – linked together by work with pen and paper. When all this is replaced by the interface with screen and keyboard, and everything dissolves into a unitary flow of ‘content’, the identity of the author – as distinct from the texts they have read – becomes harder to delineate.

In their sympathetic study of the ‘post-millennial’ generation (born since 1995), Gen Z Explained, Roberta Katz, Sarah Ogilvie, Jane Shaw and Linda Woodhead find students sifting through online materials and module choices in search of whatever seems most ‘relevant’ to them personally, or to the task they happen to be engaged with at that moment. This behaviour is both instinct and coping mechanism in the face of an otherwise overwhelming array of information. This generation, the first not to have known life before the internet, has acquired a battery of skills in navigating digital environments, but it isn’t clear how well those skills line up with the ones traditionally accredited by universities.

Of course the digitisation of reading materials and essays long predated Covid-19. The disruption caused by the move online during the campus closures was more acute when it came to lectures, which we were suddenly required to give from home to the cameras on our laptops. Typically, online lectures are not broadcast live. Instead, to maximise accessibility, a common protocol is to record them and then upload the video files using software such as the fearsomely named Panopto. This has greatly accelerated the drift of universities towards a platform model, which makes it possible for students to pick up learning materials as and when it suits them. Until now, academics have resisted the push for ‘lecture capture’. It causes in-person attendance at lectures to fall dramatically, and it makes many lecturers feel like mediocre television presenters. Unions fear that extracting and storing teaching for posterity threatens lecturers’ job security and weakens the power of strikes. Thanks to Covid, this may already have happened. As Alistair Fitt, vice-chancellor of Oxford Brookes, said in response to the announcement in November 2021 of academics’ strike action, ‘Over the last eighteen months, we’ve been able to cope with zero per cent of staff being able to come onto campus, so I’m sure we can cope with this.’ Bear in mind that those lecture videos are under the control of the university.

Many students may like the flexibility recorded lectures give them, but the conversion of lectures into yet more digital ‘content’ further destabilises traditional conceptions of learning and writing. Gen Z Explained reports cases of students watching lectures at triple speed, with captions switched on to aid their concentration and help them glean the ‘relevant’ information as swiftly as possible. This indicates an uncertainty about what a lecture is actually for. Most lecturers would argue that there is some intrinsic value in convening at a particular time and place in order to ‘think together’. Perhaps we flatter ourselves. Yet the evaluation forms which are now such a standard feature of campus life suggest that many students set a lot of store by the enthusiasm and care that are features of a good live lecture. When a lecture is reduced to a simple means of transferring information from producer to consumer, though, it makes sense for it to happen as efficiently as possible. One of the strangest plagiarism hearings I have had to deal with since the pandemic began involved a student who had apparently transcribed a lecturer’s spoken words (perhaps with the aid of automatic captioning) and copied large chunks straight into their essay.

From the perspective of students raised in a digital culture, the anti-plagiarism taboo no doubt seems to be just one more academic hang-up, a weird injunction to take perfectly adequate information, break it into pieces and refashion it. Students who pay for essays know what they are doing; others seem conscientious yet intimidated by secondary texts: presumably they won’t be able to improve on them, so why bother trying? For some years now, it’s been noticeable how many students arrive at university feeling that every interaction is a test they might fail. They are anxious. Writing seems fraught with risk, a highly complicated task that can be executed correctly or not.

Millions of parents have been thrust into the role of home teacher over the past two years. Where once they might have been given a termly update on their children’s progress, suddenly they were made to join the daily grind of 21st-century pedagogy. As the parent of young children, I wasn’t alone in feeling perturbed by the relentlessness of the daily maths and literacy classes, and by the rote manner in which English – or literacy, rather – is taught, with sentences treated like exotic gadgets to be operated with the help of instruction manuals. The shadow of Michael Gove, who as secretary of state for education between 2010 and 2014 insisted that children have traditional grammar drummed into them, still falls heavily on English schools. ‘Dear Gavin Williamson, could you tell parents what a fronted adverbial is?’ an article by the children’s writer Michael Rosen demanded of one of Gove’s successors in the job.

Before March 2020, I was unfamiliar with the phenomenon of ‘guided reading’. My daughter (aged eight during the school closures that year) was sometimes required to read the same short passage five days in a row and to perform different tasks in relation to it. Presumably the idea was for her to learn how specific sentence constructions work, in the hope that she would be able to apply that knowledge elsewhere – but the invitation to write autonomously, beyond a sentence or two, never arrived. It wasn’t merely the emphasis on obscure grammatical concepts that worried me, but the treatment of language in wholly syntactical terms, with the aim of distinguishing correct from incorrect usage. This is the way a computer treats language, as a set of symbols that generates commands to be executed, and which either succeeds or fails in that task.

This vision of language as code may already have been a significant feature of the curriculum, but it appears to have been exacerbated by the switch to online teaching. In a journal article from August 2020, ‘Learning under Lockdown: English Teaching in the Time of Covid-19’, John Yandell notes that online classes create wholly closed worlds, where context and intertextuality disappear in favour of constant instruction. In these online environments, reading

is informed not by prior reading experiences but by the toolkit that the teacher has provided, and … is presented as occurring along a tramline of linear development. Different readings are reducible to better or worse readings: the more closely the student’s reading approximates to the already finalised teacher’s reading, the better it is. That, it would appear, is what reading with precision looks like.

Defenders of this style of pedagogy may argue that its benefits aren’t measured according to how well it serves the children of academics, but the degree to which it mitigates the cultural disadvantages of class. It delivers proficiency for all, in the manner of accredited driving lessons. But that implies a minimisation of risk-taking and the establishment of acceptable margins for error. This is an injunction against creative interpretation and writing, a deprivation that working-class children will feel at least as deeply as anyone else.

A first principle of any online teaching (arguably of any online interaction) is that it shouldn’t attempt some perfect simulacrum of a face-to-face encounter. The mediation of teaching by screens, cameras and keyboards changes things, and these differences have to be accommodated. Some schools – notably those with more privileged, technologically better-equipped students – tried to provide as many ‘live’ online lessons as possible during lockdown, a way of reassuring pushy parents and a hostile media that teachers were still ‘working’. But an Ofsted guidance note from January 2021 cautioned against the assumption that live lessons are the ‘gold standard’ of remote education. It is difficult for pupils to sustain concentration for the duration of a class, and for the teacher to get a meaningful dialogue going or give proper feedback in a Zoom class of thirty children. Instead, Ofsted recommended using ‘recorded lesson segments followed by interactive chats, or tasks and feedback’. The tutorials in online teaching technique that my academic colleagues and I were required to take in summer 2020 also stressed the importance of breaking material down into ‘segments’ and interspersing them with something interactive, so that a student could potentially complete a ‘lecture’ in fifteen-minute chunks spread over a few days, if it suited them. The interactive element might involve automated feedback of some kind: a quiz, for example, with the student given their score on completion. The opportunity for constant assessment is there, but the primary purpose of interrupting teaching videos with regular tasks is to sustain student engagement.

There may be very good reasons for delivering online teaching in segments, punctuated by tasks and feedback, but as Yandell observes, other ways of reading and writing are marginalised in the process. Without wishing to romanticise the lonely reader (or, for that matter, the lonely writer), something is lost when alternating periods of passivity and activity are compressed into interactivity, until eventually education becomes a continuous cybernetic loop of information and feedback. How many keystrokes or mouse-clicks before a student is told they’ve gone wrong? How many words does it take to make a mistake?

In the utopia sold by the EdTech industry (the companies that provide platforms and software for online learning), pupils are guided and assessed continuously. When one task is completed correctly, the next begins, as in a computer game; meanwhile the platform providers are scraping and analysing data from the actions of millions of children. In this behaviourist set-up, teachers become more like coaches: they assist and motivate individual ‘learners’, but are no longer so important to the provision of education. And since it is no longer the sole responsibility of teachers or schools to deliver the curriculum, it becomes more centralised – the latest front in a forty-year battle to wrest control from the hands of teachers and local authorities.

This fully automated vision may be unrealistic – it isn’t with us quite yet – but EdTech has done extremely well out of the pandemic: active users of Google Classroom doubled globally to one hundred million over the course of April 2020. Google has been especially prolific in this area; by 2017, more than half of American school pupils were using its products. A large proportion of the 1.3 million laptops provided to schools by the UK government during the first year of the pandemic were Google Chromebooks, which run Google software by default. (Contracts for these laptops worth £240 million issued during the pandemic went to Computacenter, founded by the prominent Tory donor Sir Philip Hulme.) But EdTech platforms had been creeping into English schools for some years. Spending per pupil in England has been falling steadily since 2009; over roughly the same period, the gap between private and state school spending has more than doubled. As budgets are squeezed, schools are understandably tempted by products and services that promise to improve ‘learning outcomes’ without additional staff.

Even before the pandemic, my daughter was required as part of her homework to use programs such as ActiveLearn (owned by Pearson), Times Tables Rockstars and Mathletics. These enable teachers to set tasks, some based on games, whose successful completion is rewarded with points. Mathletics, which includes speed-arithmetic games, has a reward scheme with the chilling name ‘Meritopia’. ClassDojo features a scoring system in which teachers award points to pupils for anything from giving correct answers to punctuality or general bonhomie. The ostensible idea is to motivate children and create transparency around school activities (parents can log in to see how their child is doing), but software of this sort also fuels competitiveness and can make children anxious.

Constant interaction across an interface may be a good basis for forms of learning that involve information-processing and problem-solving, where there is a right and a wrong answer. The cognitive skills that can be trained in this way are the ones computers themselves excel at: pattern recognition and computation. The worry, for anyone who cares about the humanities in particular, is about the oversimplifications required to conduct other forms of education in these ways. Questions of interpretation and understanding don’t yield responses that are either correct or incorrect. What becomes of language when it is treated as a tool for the execution of tasks, rather than for the conveyance of meaning? One effect of the school closures has been to raise awareness of such matters. But the problem can’t entirely be blamed on technology or on the unscrupulous companies pushing it. There is a much broader ideology of literacy at work here, which derives not just from the fetishisation of technology, but the idolatry of markets.

‘The most important man in English education doesn’t teach a single English child, wasn’t elected by a single English voter and won’t spend more than a single week in England this year,’ Michael Gove wrote in a column for the Independent in February 2013. ‘But Andreas Schleicher deserves the thanks of everyone in England who wants to see our children fulfil the limit of their potential.’ Schleicher is a German mathematician and statistician who has overseen the Programme for International Student Assessment (PISA) since its inception in the mid 1990s. Its league table, which ranks participating countries according to the performance of 15 and 16-year-olds in standardised tests in maths, science and reading, was first published by the OECD in 2000, and has appeared every three years since then. Schleicher now leads the OECD Directorate of Education and Skills. He greeted the school closures in 2020 as ‘a great moment’ for learning, because ‘all the red tape that keeps things away is gone and people are looking for solutions that in the past they did not want to see.’ In a piece published in January 2021 by the World Economic Forum, he imagined the

full individual personalisation of content and pedagogy enabled by cutting-edge technology, using body information, facial expressions or neural signals … Built on rapid advancements in artificial intelligence, virtual and augmented reality and the Internet of Things, in this future it is possible to assess and certify knowledge, skills and attitudes instantaneously.

In this vision, the laborious and expensive work of extracting data on literacy via surveys and tests, which produce scores only every few years, might eventually become unnecessary: the data gleaned by EdTech platforms could be combined and crunched instead. Blanket surveillance replaces the need for formal assessment.

The ambition to produce a universal measure of literacy didn’t start with Schleicher. Unesco set out plans for such a scheme in the 1950s, though it wasn’t until the 1980s, following innovations by American and Canadian statisticians, that a framework for the measurement of literacy was fully developed and applied. In place of the Victorian distinction between the ‘literate’ and the ‘illiterate’, which mapped onto the basic class structure of industrial capitalism, the idea was to place children and adults on a sliding scale of literacy. This coincided with the growing awareness among economists of the value of education in increasing productivity and GDP: workers were reconceived as ‘human capital’, investment in which would generate future financial returns.

The OECD’s first innovation in this area was the International Adult Literacy Survey (IALS), which ran from 1994 to 1998. Five ‘levels’ of literacy were identified, with level three considered a ‘suitable minimum’ for everyday adult life. Britain was informed that 52 per cent of its adult population fell below this minimum, a shock that prompted the 1999 Moser Report (‘A Fresh Start: Improving Literacy and Numeracy’), overseen by the statistician Claus Moser. This was the catalyst for the Blair government’s Skills for Life strategy, which aimed to upgrade the literacy and numeracy of more than two million adults by 2010. PISA followed soon after, along with a series of other OECD and Unesco survey instruments, some aimed at adults and others at school pupils.

IALS, which effectively continued the work underway in the US and Canada, adopted an ‘information processing’ view of literacy as the capacity to read a text, extract information from it, and then use that information to achieve a specific goal. Respondents were required to read various types of text, from a bus timetable to an advertisement or a newspaper story, and then answer questions about them. Because the aim was to measure literacy as a general cognitive capacity according to an international standard, the questions were carefully designed to avoid testing a respondent’s knowledge about the world or what the texts were referring to. In a similar way to English lessons during lockdown, the meaning of written words was separated out from the culture in which the reader was situated. What’s more, the difficulty of a test was deemed to be a function of the number of people who passed it: the rarer the type of literacy, the more advanced it must be. One of the assumptions here was that people with ‘high’ levels of literacy must already have acquired the levels below, so that someone who can, say, extract useful information from the markets pages of the Financial Times will definitely be able to follow a recipe, whereas the reverse may not be true. Where all this leaves the ability to get a niche joke or adapt an internet meme is unclear.
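A minimal sketch of that scaling logic might look like the following – the items and pass rates are invented for illustration, and the real surveys use item-response-theory models rather than raw pass rates, but the underlying inference is the one described above: the rarer the skill, the higher the level it is assigned.

```python
# Hedged sketch of IALS-style scaling: an item's difficulty is inferred
# from how many respondents answer it correctly, so rarer skills land at
# higher 'levels'. Items and pass rates below are hypothetical; actual
# surveys use item-response-theory models, not raw pass rates.

pass_rates = {
    "follow a recipe": 0.90,
    "read a bus timetable": 0.85,
    "summarise a newspaper story": 0.55,
    "extract figures from the FT markets pages": 0.15,
}

# Lower pass rate -> higher inferred difficulty -> higher level.
for level, (item, rate) in enumerate(
    sorted(pass_rates.items(), key=lambda kv: kv[1], reverse=True), start=1
):
    print(f"Level {level}: {item} (pass rate {rate:.0%})")
```

The assumption the essay flags – that anyone at a higher level can necessarily do everything at the levels below – is built into the ordering itself: the scale is unidimensional by construction.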

By the time of PISA, the concept of ‘literacy’ had colonised maths and science, both of which were subject to their own forms of assessment. A third category, ‘reading’, was defined as ‘understanding, using and reflecting on written texts, in order to achieve one’s goals, to develop one’s knowledge and potential, and to participate in society’. Any relationship to literature, beyond the ability to mine it for information in the service of some task, is made invisible. ‘Reading’, according to the OECD, should involve ‘understanding’, but only in the minimal sense of responding correctly to a question. Confirming Adorno’s worst fears of the ‘primacy of practical reason’, reading is no longer dissociable from the execution of tasks. And, crucially, the ‘goals’ to be achieved through the ability to read, the ‘potential’ and ‘participation’ to be realised, are economic in nature.

The ideological assumption, taken as read by the OECD or the World Economic Forum, that the benefits of literacy – indeed of education generally – are ultimately to be realised in the marketplace has provoked huge political disruption and psychological pain in the UK over the last ten years. The trebling of university tuition fees in 2012 was justified on the basis that the quality of higher education is commensurate with graduate earnings. The hope, when the new fee regime was introduced, was that a competitive market would spontaneously emerge as different ‘providers’ of higher education charged different rates according to the ‘quality’ of what they had to offer. When the market failed to come about, because so many universities opted to charge the maximum fees allowed, the government unleashed further policies in a bid to create more competition. In 2013, the cap on the number of students a university could accept was unexpectedly removed, leading to vast over-recruitment by ambitious universities (many of which didn’t have enough teaching space to cope with the increased numbers), and dire financial consequences for others. In 2016, the Teaching Excellence Framework, overseen by a new Office for Students, was introduced; it measured, among other things, universities’ graduate employment rates. And since 2019, with the Treasury increasingly unhappy about the amount of student debt still sitting on the government’s balance sheet and the government resorting to ‘culture war’ at every opportunity, there has been an effort to single out degree programmes that represent ‘poor value for money’, measured in terms of graduate earnings. (For reasons best known to itself, the usually independent Institute for Fiscal Studies has been leading the way in finding correlations between degree programmes and future earnings.) Many of these programmes are in the arts and humanities, and are now habitually referred to by Tory politicians and their supporters in the media as ‘low-value degrees’.

But if the agenda is to reduce the number of young people studying humanities subjects, and to steer them instead towards STEM subjects, finance and business studies, the rhetoric is superfluous: the tuition fee hike, combined with the growing profile of league tables, had already worked its magic. The number of students studying English and modern languages at university fell by a third over the decade starting in 2011, and the number studying history by a fifth. In 2012, English was the most popular A level subject, with 90,000 students taking the exam that summer; by 2021, that figure had fallen to 57,000. The number studying French and German at A level fell by around half over a similar period. Academic jobs – even whole departments – in these areas have been threatened by a combination of brute market forces and university managers beholden to ‘enterprise’ and ‘impact’.

One consequence of all this is that studying the humanities may become a luxury reserved for those who can fall back on the cultural and financial advantages of their class position. (This effect has already been noticed among young people going into acting, where the results are more visible to the public than they are in academia or heritage organisations.) Yet, given the changing class composition of the UK over the past thirty years, it’s not clear that contemporary elites have any more sympathy for the humanities than the Conservative Party does. A friend of mine recently attended an open day at a well-known London private school, and noticed that while there was a long queue to speak to the maths and science teachers, nobody was waiting to speak to the English teacher. When she asked what was going on, she was told: ‘I’m afraid parents here are very ambitious.’ Parents at such schools, where fees have tripled in real terms since the early 1980s, tend to work in financial and business services themselves, and spend their own days profitably manipulating and analysing numbers on screens. When it comes to the transmission of elite status from one generation to the next, Shakespeare or Plato no longer has the same cachet as economics or physics.

Moral panics about ‘political correctness’ date back to the 1970s and early 1980s, when there was no mandated national curriculum, and conservatives could entertain paranoid fantasies of ‘loony left’ councils busying themselves with the policing of language. It’s true that teachers then exercised considerable discretion over what children learned, and central government had little power or influence in the matter. The current panic over ‘wokeism’ on the part of Tory politicians and some newspapers has an entirely different pedagogical and political context. As the academies programme expands, more and more schools have been brought under the direct supervision of the Department for Education, while all live under the gaze of Ofsted. Local authorities have been stripped of their powers or – in the case of the GLC – abolished altogether. Universities are forced to compete with one another, their performance assessed by means of financial metrics, league tables and burdensome audits. Teachers, in both schools and universities, are alienated and exhausted by the routinisation and sheer volume of work, and a worrying number want to leave the profession altogether. Despite all this, humanities teachers remain the focus in the culture war waged by the right.

Leaving aside the strategic political use of terms such as ‘woke’ and ‘cancel culture’, it would be hard to deny that we live in an age of heightened anxiety over the words we use, in particular the labels we apply to people. This has benefits: it can help to bring discriminatory practices to light, potentially leading to institutional reform. It can also lead to fruitless, distracting public arguments, such as the one that rumbled on for weeks over Angela Rayner’s description of Conservatives as ‘scum’. More and more, words are dredged up, edited or rearranged for the purpose of harming someone. Isolated words have acquired a weightiness in contemporary politics and public argument, while on digital media snippets of text circulate without context, as if the meaning of a single sentence were perfectly contained within it, walled off from the surrounding text. The exemplary textual form in this regard is the newspaper headline or corporate slogan: a carefully curated series of words, designed to cut through the blizzard of competing information.

If people today worry about using the ‘wrong’ words, it is not because there has been a sudden revival of 1970s pedagogy or radical local government, or, given the political and economic trends of the past dozen years, because the humanities are flexing their muscles. Visit any actual school or university today (as opposed to the imaginary ones described in the Daily Mail or the speeches of Conservative ministers) and you will find highly disciplined, hierarchical institutions, focused on metrics, performance evaluations, ‘behaviour’ and quantifiable ‘learning outcomes’. Andreas Schleicher is winning, not Michel Foucault. If young people today worry about using the ‘wrong’ words, it isn’t because of the persistence of the leftist cultural power of forty years ago, but – on the contrary – because of the barrage of initiatives and technologies dedicated to reversing that power. The ideology of measurable literacy, combined with a digital net that has captured social and educational life, leaves young people ill at ease with the language they use and fearful of what might happen should they trip up.

There’s no question that literacy and pedagogy must evolve alongside technology. It’s possible to recognise this while also defending an educational humanism – with a small ‘h’ – that values the time and space given to a young person to mess around, try things out, make mistakes, have a say, and not immediately find out what score they’ve got as a result. It has become clear, as we witness the advance of Panopto, ClassDojo and the rest of the EdTech industry, that one of the great things about an old-fashioned classroom is the facilitation of unrecorded, unaudited speech, and of uninterrupted reading and writing.


Letters

Vol. 44 No. 5 · 10 March 2022

William Davies captures the isolating, demoralising, often weird experience of post-pandemic university life (LRB, 24 February). At the University of Washington, the main forum for students to commiserate with one another about the anxieties of remote learning has been an Instagram page called ‘UW Confessions’. Students submit anonymous messages, which are then posted against a colourful background for thousands of followers to see. As if to confirm Davies’s argument regarding the difficulty the first internet generation has with writing, at around the time midterm papers were due, one exasperated student submitted the following confession:

I fucking hate essays. I don’t have a single original thing to contribute to this. Why do I have to establish an original perspective … Just teach me what we already know and let’s leave it at that. Why the fuck would I have anything of value to add on top of these ancient philosophers, these scientists, experts and authors? What the fuck am I supposed to say?

Townson Cocke
Seattle, Washington

Vol. 44 No. 6 · 24 March 2022

William Davies writes about student plagiarism and what it tells us about the way children learn in the internet age (LRB, 24 February). My own pre-pandemic experience as an exam board chair in the UK was that plagiarisers were most often first-year undergraduates, white, male and privately educated, not students from what universities call ‘disadvantaged backgrounds’. The former thought it normal to work together in small groups with students from backgrounds similar to their own and to ‘get help’; they saw nothing wrong in appropriating the work of other people without acknowledgment; and they regarded university as a finishing school rather than a place of intellectual inquiry.

However, as Davies suggests, the plagiarism problem isn’t information overload as such, but the failure to help students in schools and universities handle these challenges. Although most UK degree programmes include a single module in ‘study skills’, there is no space for intensive small-group tuition in how to write. Indeed, many lecturers write poorly themselves, their own education having suffered from the same defect.

Geoff Payne
Newcastle upon Tyne

William Davies worries about the fate of the humanities in a digitised environment where ‘snippets of text circulate without context,’ ostensibly self-complete. Confirmation is to be found at my own university, where assorted aspirational nouns – ‘culture’, ‘integrity’, ‘scholarship’ and so on – have been inscribed in the concrete of the main square. If you follow this chain of verbal icons as if they were breadcrumbs, they lead you to the site of the campus bookshop, now permanently closed.

Peter Womack
Norwich

