Paul Taylor

Paul Taylor is professor of health informatics at UCL.

From The Blog
17 May 2022

The results of REF2021, the latest iteration of the Research Excellence Framework assessing the quality of research at UK universities, were published last week. My institution, UCL, is boasting that it came second, above Cambridge and beaten only by Oxford. Cambridge is boasting that it came third, but behind Imperial and the Institute of Cancer Research; institutions that shouldn’t quite count, it implies, since neither covers the full range of academic endeavour. Imperial, however, is clear that it has been shown to be the UK’s top university. The same claim is made by Oxford.

From The Blog
26 August 2021

Researchers led by a team from Emory University recently announced that they had used artificial intelligence to predict patients’ self-reported racial identity from medical images. It is an unexpected, unsettling result.

From The Blog
12 February 2021

As chest X-rays of Covid-19 patients began to be published in radiology journals, AI researchers put together an online database of the images and started experimenting with algorithms that could distinguish between them and other X-rays. Early results were astonishingly successful, but disappointment soon followed. The algorithms were responding not to signs of the disease, but to minor technical differences between the two sets of images, which were sourced from different hospitals: such things as the way the images were labelled, or how the patient was positioned in the scanner. It’s a common problem in AI. We often refer to ‘deep’ machine learning because we think of the calculations as being organised in layers and we now use many more layers than we used to, but what is learned is nevertheless superficial.
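
To see how superficial such learning can be, here is a minimal sketch, not from the blog post and with purely illustrative numbers: a toy classifier is trained on synthetic 'images' in which a bright hospital-specific marker happens to correlate with the diagnosis. It scores well while the correlation holds, and collapses to near chance once it is broken.

```python
# A minimal sketch of the 'shortcut learning' failure described above:
# a classifier trained on images where a spurious marker (e.g. a
# hospital-specific label in one corner) correlates with the diagnosis
# learns the marker, not the disease. All numbers here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, marker_matches_label):
    """Synthetic 'X-rays': 64 pixels of noise, a faint disease signal in
    pixel 0, and a bright hospital marker in pixel 63."""
    y = rng.integers(0, 2, n)                      # 1 = Covid, 0 = other
    X = rng.normal(0, 1, (n, 64))
    X[:, 0] += 0.3 * y                             # weak genuine signal
    marker = y if marker_matches_label else rng.integers(0, 2, n)
    X[:, 63] += 3.0 * marker                       # strong spurious cue
    return X, y

# Training set: the Covid images all come from the hospital whose
# images carry the marker, so marker and diagnosis coincide.
X_train, y_train = make_images(2000, marker_matches_label=True)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test images from the same skewed source look astonishingly good...
X_easy, y_easy = make_images(1000, marker_matches_label=True)
print("accuracy, marker correlated:", clf.score(X_easy, y_easy))

# ...but once the marker no longer tracks the label, performance collapses.
X_hard, y_hard = make_images(1000, marker_matches_label=False)
print("accuracy, marker broken:", clf.score(X_hard, y_hard))
```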

From The Blog
29 April 2020

The hope is that almost all of us will download the app, that we will be diligent about using it if we develop symptoms, that the detection of identifiers will be reliable, that the subsequent risk calculation will be more or less accurate, and that we will, by and large, self-isolate if the app tells us to. Crucially, the strategy also requires easy access to tests so people can be rapidly alerted if a contact who had symptoms turns out not to have had the disease.
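
What such a risk calculation might look like is sketched below. The contact weights, infectiousness curve and isolation threshold are illustrative assumptions, not the NHS app's actual parameters: the point is only that the score combines how long a contact lasted, how close it was, and when it happened relative to symptom onset.

```python
# A hedged sketch of the kind of exposure-risk calculation a contact
# tracing app might perform. The weights, threshold and structure are
# assumptions for illustration, not the NHS app's actual algorithm.
from dataclasses import dataclass

@dataclass
class Contact:
    duration_min: float      # how long the two phones were near each other
    mean_distance_m: float   # estimated from Bluetooth signal strength
    days_from_onset: int     # contact time relative to symptom onset

def proximity_weight(distance_m: float) -> float:
    # Closer contacts carry more risk; beyond a few metres, effectively none.
    if distance_m < 1.0:
        return 1.0
    if distance_m < 2.0:
        return 0.7
    if distance_m < 4.0:
        return 0.3
    return 0.0

def infectiousness_weight(days_from_onset: int) -> float:
    # Risk assumed highest around symptom onset, tailing off either side.
    return max(0.0, 1.0 - 0.2 * abs(days_from_onset))

def risk_score(contacts: list[Contact]) -> float:
    return sum(
        c.duration_min
        * proximity_weight(c.mean_distance_m)
        * infectiousness_weight(c.days_from_onset)
        for c in contacts
    )

ISOLATION_THRESHOLD = 15.0   # arbitrary illustrative cut-off

contacts = [Contact(30, 1.5, 0), Contact(5, 3.5, -2)]
score = risk_score(contacts)
print(f"risk score {score:.1f}:",
      "advise self-isolation" if score >= ISOLATION_THRESHOLD else "no action")
```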

From The Blog
7 May 2018

Jeremy Hunt announced last Wednesday that as many as 270 women may have died because an error in a computer algorithm prevented 450,000 women from being invited for routine breast cancer screening appointments. Stories about IT glitches will be increasingly common as artificial intelligence enables more and more healthcare to be automated. As things stand, people are still better than computers at detecting early signs of cancer on mammograms, and the neural networks currently being designed to analyse the images are intended as an aid to, rather than a replacement for, human decision-making. The hope is to engineer systems that combine the different strengths of humans and computers, achieving outcomes that neither is capable of independently. The sad reality is that we seem to end up with systems that combine an all-too-human capacity for error with a computer’s blunt force, and so wreak havoc at an inhuman scale.
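
As a purely hypothetical illustration of how an algorithmic error of this kind can silently exclude people at scale: the accounts of the actual screening fault were disputed, and none of the code or age cut-offs below comes from the programme itself, but a single boundary condition in an eligibility check is all it takes.

```python
# Hypothetical sketch only: not the screening programme's code. It shows
# how an off-by-one boundary in an eligibility check can quietly drop an
# entire cohort, with no error message and no one noticing for years.
def invited_buggy(age: int) -> bool:
    # Screening intended to run from 50 up to the 71st birthday, but the
    # strict '< 70' drops women due a final invitation at 70.
    return 50 <= age < 70

def invited_fixed(age: int) -> bool:
    return 50 <= age < 71

missed = [age for age in range(45, 75)
          if invited_fixed(age) and not invited_buggy(age)]
print("ages silently excluded by the buggy check:", missed)  # [70]
```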
