Paul Taylor


26 June 2024

Spear Phishing

Qilin and Synnovis, the two entities involved in the recent ransomware attack that has disabled laboratory services at London hospitals, are very different in many ways but nevertheless have a common purpose: using tech to extract money from healthcare organisations.



22 March 2024

Who read it?

Altmetric is a website that tracks mentions of academic research on social media. Last week, a paper published in Radiology Case Reports leaped to near the top of the charts. The explosion of interest in ‘Successful management of an iatrogenic portal vein and hepatic artery injury in a four-month-old female patient’ was due not to admiration but to schadenfreude, as people shared their astonishment that the authors had managed to commit the following paragraph to print:

In summary, the management of bilateral iatrogenic I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model.



21 November 2023

Alignment Problems

Since AI programs, however intelligent they may be, are still only programs, we ought to be able to rely on them to do as they are told. The difficulty is being sure that we have in fact told them to do what we want them to do – otherwise known as the alignment problem.
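What that means in practice is easiest to see in miniature. The sketch below is a hypothetical toy, not a description of any real system: an agent asked to minimise the number of open support tickets finds that the objective we wrote down is satisfied just as well by closing tickets unread as by answering them.

```python
# A toy illustration of a misspecified objective (entirely hypothetical).
def tickets_remaining(queue, action):
    """The objective we wrote down: fewer open tickets is 'better'."""
    if action == "answer_properly":
        return max(0, len(queue) - 1)   # slow: one ticket at a time
    if action == "close_all_unread":
        return 0                        # instantly 'optimal'
    return len(queue)

queue = ["ticket"] * 100
best = min(["answer_properly", "close_all_unread"],
           key=lambda a: tickets_remaining(queue, a))
print(best)  # close_all_unread: exactly what we asked for, not what we wanted
```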



25 May 2023

Weighed in the Balances

In the summer of 2018 I had a chance encounter with an acquaintance who knew I worked in health tech. He told me that he knew Ali Parsa, the CEO of Babylon Health, the company behind the app GP at Hand. We talked about Parsa’s extraordinary career and it was clear that my acquaintance was impressed not so much by Parsa’s achievements as by his daring. He recalled Parsa reflecting that while Babylon could become a unicorn – a billion-dollar tech startup – it might also end in complete failure. Perhaps both will be true.



5 May 2023

T for Transformer

Over the last ten years the proportion of advances in AI that have come from research teams in the big tech companies has been gradually increasing, and they are now utterly dominant. Google Brain has been one of the most important of these teams. The T in GPT stands for transformer, an algorithm developed at Google Brain that has proved uncannily successful in identifying patterns, to the extent that models built on transformers can generate realistic images and video, meaningful text, and apparently intelligent answers to queries or solutions to problems.
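The core of the transformer is an operation called scaled dot-product self-attention, in which each position in a sequence builds its representation as a weighted mixture of every position’s contribution. Below is a minimal sketch in Python; the toy dimensions and random weights are illustrative only and not taken from any real model.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X.
    Each position mixes every position's 'value', weighted by how well its
    'query' matches their 'keys'."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ V                                # context-aware representations

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                       # toy sizes
X = rng.normal(size=(seq_len, d_model))       # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```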



5 October 2022

Ministers against Science

Senior staff at universities are increasingly concerned that the faction of the Tory Party now in government is significantly less interested in science than the one that wrote the manifesto on which it was elected.



17 May 2022

Who’s best?

The results of REF2021, the latest iteration of the Research Excellence Framework assessing the quality of research at UK universities, were published last week. My institution, UCL, is boasting that it came second, above Cambridge and beaten only by Oxford. Cambridge is boasting that it came third, but behind Imperial and the Institute of Cancer Research – institutions that shouldn’t quite count, it implies, since neither covers the full range of academic endeavour. Imperial, however, is clear that it has been shown to be the UK’s top university. The same claim is made by Oxford.
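The rival claims are possible because ‘best’ depends on which summary statistic you choose. The sketch below uses made-up figures, not the actual REF results, to show how a ranking by grade-point average and a ranking by ‘research power’ (GPA multiplied by the number of staff submitted) can crown different institutions.

```python
# Illustrative figures only, not the real REF2021 data.
institutions = {
    "Small Specialist Institute": {"gpa": 3.65, "staff": 300},
    "Large University A":         {"gpa": 3.50, "staff": 3000},
    "Large University B":         {"gpa": 3.55, "staff": 2500},
}

top_by_gpa = max(institutions, key=lambda i: institutions[i]["gpa"])
top_by_power = max(institutions,
                   key=lambda i: institutions[i]["gpa"] * institutions[i]["staff"])

print("Top by GPA:           ", top_by_gpa)    # the small specialist institution
print("Top by research power:", top_by_power)  # the biggest submission wins
```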



26 August 2021

‘Reading Race’

Researchers led by a team from Emory University recently announced that they had used artificial intelligence to predict patients’ self-reported racial identity from medical images. It is an unexpected, unsettling result.



12 February 2021

Stochastic Parrots

As chest X-rays of Covid-19 patients began to be published in radiology journals, AI researchers put together an online database of the images and started experimenting with algorithms that could distinguish between them and other X-rays. Early results were astonishingly successful, but disappointment soon followed. The algorithms were responding not to signs of the disease, but to minor technical differences between the two sets of images, which were sourced from different hospitals: such things as the way the images were labelled, or how the patient was positioned in the scanner. It’s a common problem in AI. We often refer to ‘deep’ machine learning because we think of the calculations as being organised in layers and we now use many more layers than we used to, but what is learned is nevertheless superficial.
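The Covid X-ray failure is easy to reproduce with synthetic data. In the hypothetical sketch below (no real images involved), every ‘Covid’ image comes from a hospital that stamps a bright marker in one corner and every control comes from one that doesn’t, so a classifier that looks only at the corner scores almost perfectly while learning nothing about the disease.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_xray(covid):
    """Synthetic stand-in for an X-ray: random noise plus, for 'Covid' cases,
    a bright corner marker standing in for one hospital's label burn-in."""
    img = rng.normal(size=(32, 32))
    if covid:
        img[0:4, 0:4] += 5.0
    return img

images = [fake_xray(covid=c) for c in [True, False] * 200]
labels = [True, False] * 200

def corner_classifier(img):
    """'Diagnoses' Covid purely from the corner marker."""
    return img[0:4, 0:4].mean() > 2.0

accuracy = np.mean([corner_classifier(i) == l for i, l in zip(images, labels)])
print(f"accuracy: {accuracy:.0%}")  # near 100 per cent, for the wrong reasons
```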



29 April 2020

Is there an app for that?

The hope is that almost all of us will download the app, that we will be diligent about using it if we develop symptoms, that the detection of identifiers will be reliable, that the subsequent risk calculation will be more or less accurate, and that we will, by and large, self-isolate if the app tells us to. Crucially, the strategy also requires easy access to tests so people can be rapidly alerted if a contact who had symptoms turns out not to have had the disease.
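The risk calculation in question typically weighs how close a contact was, inferred from Bluetooth signal attenuation, against how long it lasted. The sketch below is a deliberately simplified, hypothetical score, not the NHS app’s actual algorithm; the thresholds are invented for illustration.

```python
def exposure_risk(attenuation_db: float, duration_minutes: float) -> float:
    """Hypothetical risk score: closeness (inferred from Bluetooth attenuation,
    where higher attenuation roughly means further away) weighted by duration."""
    if attenuation_db < 55:        # roughly 'close contact'
        proximity_weight = 1.0
    elif attenuation_db < 70:      # roughly 'nearby'
        proximity_weight = 0.5
    else:                          # probably too far away to matter
        proximity_weight = 0.0
    return proximity_weight * duration_minutes

THRESHOLD = 15.0  # say, the equivalent of 15 minutes at close range

for att, mins in [(50, 20), (65, 20), (80, 120)]:
    risk = exposure_risk(att, mins)
    advice = "advise self-isolation" if risk >= THRESHOLD else "no action"
    print(f"attenuation {att} dB for {mins} min -> risk {risk:.0f}: {advice}")
```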
