Paul Taylor

Paul Taylor is professor of health informatics at UCL.

The argument between mitigation and suppression now seems to have been settled in favour of the latter. But when the lockdown ends, a calculation will still have to be made about the relative merits of unappealing alternatives. The poor public understanding of mortality rates won’t make this any easier. The situation isn’t helped by the fact that two very different numbers are reported: daily totals of confirmed cases who died in hospital and weekly totals of later registrations, including many, perhaps 18 per cent of the total, who died outside hospital. The reporting of the epidemic also fails to place deaths from the virus in the context of normal mortality rates. When you read the daily updates of the number of hospital deaths, you aren’t reminded that last year, in England, an average of 1360 people died every day, a total of 496,354 for the year. In London right now, the death rate is way above normal, but for the UK as a whole the number of deaths in March 2020 was lower than in the same month last year. 

From The Blog
29 April 2020

The hope is that almost all of us will download the app, that we will be diligent about using it if we develop symptoms, that the detection of identifiers will be reliable, that the subsequent risk calculation will be more or less accurate, and that we will, by and large, self-isolate if the app tells us to. Crucially, the strategy also requires easy access to tests so people can be rapidly alerted if a contact who had symptoms turns out not to have had the disease.

Letter

Superhuman

6 June 2019

Ben Jackson wonders what impact AlphaZero, a computer program capable of teaching itself to play games at a superhuman level, will have beyond chess (LRB, 6 June). How many ‘real-world situations’, Jackson asks, ‘can be productively reduced to a process of optimisation, with a unitary goal and a predefined set of rules’? DeepMind, the artificial intelligence research company that developed...
From The Blog
7 May 2018

Jeremy Hunt announced last Wednesday that as many as 270 women may have died because an error in a computer algorithm prevented 450,000 women from being invited for routine breast cancer screening appointments. Stories about IT glitches will become increasingly common as artificial intelligence enables more and more healthcare to be automated. As things stand, people are still better than computers at detecting early signs of cancer on mammograms, and the neural networks currently being designed to analyse the images are intended as an aid to, rather than a replacement for, human decision-making. The hope is to engineer systems that combine the different strengths of humans and computers, with outcomes that neither is capable of independently. The sad reality is that we seem to end up with systems that combine an all-too-human capacity for error with a computer’s blunt force, and so wreak havoc at an inhuman scale.

Whose Property? Big Medical Data

Paul Taylor, 8 February 2018

Patients​ often complain that their GP spends more time typing and looking at a computer screen than listening to them. This isn’t really new: doctors have kept records of their encounters with patients since the time of Hippocrates. But changes in record-keeping practices have both reflected and enabled the development of modern scientific medicine, which is less concerned with...
