The results of REF 2021, the latest iteration of the Research Excellence Framework assessing the quality of research at UK universities, were published last week. My institution, UCL, is boasting that it came second, above Cambridge and beaten only by Oxford. Cambridge is boasting that it came third, but behind Imperial and the Institute of Cancer Research: institutions that shouldn’t quite count, it implies, since neither covers the full range of academic endeavour. Imperial, however, is clear that it has been shown to be the UK’s top university. The same claim is made by Oxford.
Researchers led by a team from Emory University recently announced that they had used artificial intelligence to predict patients’ self-reported racial identity from medical images. It is an unexpected, unsettling result.
As chest X-rays of Covid-19 patients began to be published in radiology journals, AI researchers put together an online database of the images and started experimenting with algorithms that could distinguish them from other chest X-rays. Early results were astonishingly successful, but disappointment soon followed. The algorithms were responding not to signs of the disease, but to minor technical differences between the two sets of images, which had been sourced from different hospitals: such things as the way the images were labelled, or how the patient was positioned in the scanner. It’s a common problem in AI. We talk about ‘deep’ machine learning because we think of the calculations as being organised in layers, and we now use many more layers than we used to, but what is learned is nevertheless superficial.
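The phenomenon is easy to reproduce. In the minimal sketch below (my own toy example, not the researchers’ code), the ‘X-rays’ are pure noise, and one hospital stamps a bright pixel in the corner of its images, standing in for the labelling and positioning differences described above. A standard classifier scores almost perfectly until the artefact is removed, at which point it turns out to have learned nothing about the images at all.

```python
# Toy demonstration of 'shortcut learning': the classifier succeeds by
# finding a hospital-specific artefact, not any sign of disease.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, side = 400, 16                        # 400 toy 'X-rays', 16x16 pixels
X = rng.normal(size=(n, side, side))     # pure noise: no disease signal at all
y = rng.integers(0, 2, size=n)           # 0 = hospital A, 1 = hospital B

X[y == 1, 0, 0] += 6.0                   # hospital B stamps a marker in one corner

flat = X.reshape(n, -1)
clf = LogisticRegression(max_iter=1000).fit(flat[:300], y[:300])

print(clf.score(flat[300:], y[300:]))    # ~0.99: 'astonishingly successful'

X_clean = X.copy()
X_clean[:, 0, 0] = 0.0                   # remove the labelling artefact
print(clf.score(X_clean.reshape(n, -1)[300:], y[300:]))  # ~0.5: chance
```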
The hope is that almost all of us will download the app, that we will be diligent about using it if we develop symptoms, that the detection of identifiers will be reliable, that the subsequent risk calculation will be more or less accurate, and that we will, by and large, self-isolate if the app tells us to. Crucially, the strategy also requires easy access to tests, so that people can quickly be told to stop isolating if a contact who had symptoms turns out not to have had the disease.
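What the ‘risk calculation’ amounts to, roughly, is adding up how long your phone was near an infected person’s, weighted by how close the two probably were. The sketch below is a toy illustration with invented thresholds and weights; the app’s actual scoring, which infers proximity from Bluetooth signal attenuation, is more elaborate and is not reproduced here.

```python
# Toy exposure-risk calculation. All numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Encounter:
    minutes: float          # how long the two phones were in contact
    attenuation_db: float   # Bluetooth signal loss, a rough proxy for distance

def should_isolate(encounters: list[Encounter], threshold: float = 15.0) -> bool:
    """Return True if cumulative weighted exposure crosses the threshold."""
    total = 0.0
    for e in encounters:
        # Stronger signal (lower attenuation) is treated as closer contact.
        proximity = 1.0 if e.attenuation_db < 55 else 0.5 if e.attenuation_db < 65 else 0.0
        total += e.minutes * proximity
    return total >= threshold

print(should_isolate([Encounter(12, 50), Encounter(8, 60)]))  # True: 12*1.0 + 8*0.5 = 16
```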
Jeremy Hunt announced last Wednesday that as many as 270 women may have died because an error in a computer algorithm prevented 450,000 women from being invited for routine breast cancer screening appointments. Stories about IT glitches will become increasingly common as artificial intelligence enables more and more healthcare to be automated. As things stand, people are still better than computers at detecting early signs of cancer on mammograms, and the neural networks currently being designed to analyse the images are intended as an aid to, rather than a replacement for, human decision-making. The hope is to engineer systems that combine the different strengths of humans and computers, with outcomes that neither is capable of independently. The sad reality is that we seem to end up with systems that combine an all-too-human capacity for error with a computer’s blunt force, and so wreak havoc at an inhuman scale.
There are no good news stories about computers and the NHS. The reporting of Friday’s malware attack may, however, be usefully different from the typical bad news story, in which hubristic politicians or nameless bureaucrats waste millions, if not billions, of pounds of public money on projects which go over budget, fail to deliver, prove to be unusable or collapse under pressure. In this instance it seems that, for once, inaction and underinvestment have led to a failure sufficiently focused to be newsworthy, showing that there can be a political as well as a human cost to doing nothing.
‘Around 6000 people lose their lives every year because we do not have a proper seven-day service in hospitals,’ Jeremy Hunt said on 16 July 2015. ‘You are 15 per cent more likely to die if you are admitted on a Sunday compared to being admitted on a Wednesday.’ A Department of Health statement later clarified that the figures came from an analysis ‘soon to be published in the BMJ’. Nick Freemantle, a professor of epidemiology at UCL, had been invited by Bruce Keogh, the medical director of NHS England, to update a 2012 analysis of hospital data, apparently at the suggestion of Simon Stevens, the new chief executive of NHS England. The resulting paper wasn’t accepted by the BMJ until 29 July, after Hunt’s speech. When it appeared in September, it contained no reference to the 6000 figure.
London’s two velodromes were built in the 19th and 21st centuries. The indoor track at the Lee Valley Velodrome, one of the fastest in the world, is housed in a beautiful stadium built at a cost of £94 million. Its distinctive roof, a hyperbolic paraboloid clad in 5000 m² of custom-cut western red cedar, is a prominent landmark at the edge of the 2012 Olympic park. The open-air track at Herne Hill is completely hidden in a south London suburb.
At the centre of Monday night’s Panorama programme on fraud in the NHS was an interview with Jim Gee, an expert on the financial cost of healthcare fraud. Gee showed the presenter a newly published report, of which he was the first author, and talked about its findings. He turned to a key page and the camera picked out a bar chart as the two discussed some of the figures it contained. The report was also given wide coverage in the print media this week: stories ran in all the broadsheets and across the tabloids, and many local papers picked it up and gave it a local spin. The figure, highlighted in Panorama, that most journalists seized on was the estimate that fraud was costing the NHS around £7 billion a year, enough – the Express pointed out – to pay for 250,000 nurses.
According to the front page of yesterday’s Guardian, the NHS is to start selling our confidential medical records. Every doctor has a duty to keep patient-identifiable data secure and to share it only so far as is in the patient’s immediate best interests. At the same time, in order to run healthcare organisations or to carry out medical research, it is necessary to compile statistics about diseases and treatments. It therefore makes sense for some information collected in the course of caring for patients to be made more widely available – shared with managers, bureaucrats and researchers – but only if it is anonymised.
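A minimal sketch of what that can mean in practice, under my own assumptions rather than any actual NHS pipeline: the patient’s identifier is replaced with a keyed hash before the record is shared, so that records can still be linked across datasets while the key needed to reverse the mapping stays with the data controller. Strictly, this is pseudonymisation rather than anonymisation, since whoever holds the key can re-identify the patient.

```python
# Illustrative de-identification step: replace a patient identifier with a
# keyed hash before the record leaves the care setting. Not the NHS's actual
# method; the key and record contents here are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"held-by-the-data-controller"  # kept out of the shared dataset

def pseudonymise(nhs_number: str) -> str:
    """Deterministically map an identifier to an opaque, linkable token."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

record = {"nhs_number": "943 476 5919", "diagnosis": "C50.9", "age_band": "60-69"}
shared = {**record, "nhs_number": pseudonymise(record["nhs_number"])}
print(shared)  # same diagnosis and age band; identity replaced by a token
```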