Paul Taylor


21 November 2023

Alignment Problems

Since AI programs, however intelligent they may be, are still only programs, we ought to be able to rely on them to do as they are told. The difficulty is being sure that we have in fact told them to do what we want them to do – otherwise known as the alignment problem.
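
One toy way to make the gap concrete – an invented objective, not anything from a real system – is to ask a program to minimise the errors that appear in a log. The cheapest policy it finds is not to fix anything but to stop writing the log:

```python
def cost(fix_bugs: bool, suppress_logging: bool) -> int:
    """The objective we wrote down: errors visible in the log, plus effort.
    What we actually wanted was fewer real errors."""
    true_errors = 3 if fix_bugs else 10       # what we care about
    reported = 0 if suppress_logging else true_errors
    effort = 5 if fix_bugs else 0
    return reported + effort                  # what we told it to minimise

# Exhaustive search over the four available 'policies'.
policies = [(f, s) for f in (True, False) for s in (True, False)]
best = min(policies, key=lambda p: cost(*p))
print(best)  # (False, True): fix nothing and suppress the log -- the stated
             # objective is satisfied perfectly, the intended one not at all.
```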



25 May 2023

Weighed in the Balances

In the summer of 2018 I had a chance encounter with an acquaintance who knew I worked in health tech. He told me that he knew Ali Parsa, the CEO of Babylon Health, the company behind the app GP at Hand. We talked about Parsa’s extraordinary career and it was clear that my friend was impressed not so much by Parsa’s achievements as by his daring. He recalled Parsa reflecting that while Babylon could become a unicorn – a billion-dollar tech startup – it might also end in complete failure. Perhaps both will be true.



5 May 2023

T for Transformer

Over the last ten years the proportion of advances in AI that have come from research teams in the big tech companies has been gradually increasing, and those teams are now utterly dominant. Google Brain has been one of the most important. The T in GPT stands for transformer, a neural network architecture developed at Google Brain that has proved uncannily successful in identifying patterns, to the extent that models built on transformers can generate realistic images and video, meaningful text, and apparently intelligent answers to queries or solutions to problems.
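
At the heart of the architecture is a single operation, scaled dot-product attention, in which every position in a sequence works out how much to attend to every other position and takes a weighted average accordingly. A bare-bones sketch in NumPy – toy dimensions, and leaving out the learned projections a real model would apply – looks like this:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Each position scores its similarity to every other position,
    turns the scores into weights, and averages the values."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v

# A toy 'sentence' of four tokens, each an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))

# In a real model q, k and v are learned linear projections of the tokens;
# reusing the raw vectors keeps the sketch self-contained.
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): one updated representation per token
```

A transformer is, roughly, many such layers stacked on top of one another, interleaved with ordinary feed-forward networks.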



5 October 2022

Ministers against Science

Senior staff at universities are increasingly concerned that the faction of the Tory Party now in government is significantly less interested in science than the one that wrote the manifesto on which it was elected.



17 May 2022

Who’s best?

The results of REF2021, the latest iteration of the Research Excellence Framework assessing the quality of research at UK universities, were published last week. My institution, UCL, is boasting that it came second, above Cambridge and beaten only by Oxford. Cambridge is boasting that it came third, behind only Imperial and the Institute of Cancer Research – institutions that shouldn’t quite count, it implies, since neither covers the full range of academic endeavour. Imperial, however, is clear that it has been shown to be the UK’s top university. The same claim is made by Oxford.
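
The trick is that there is more than one defensible way to turn the REF’s quality profiles into a league table: rank by average quality alone and a small, uniformly excellent institution can top the table; weight quality by the volume of research submitted – so-called research power – and a large one does. A sketch with invented figures shows how the claims can all be true at once:

```python
# Invented numbers, purely to illustrate how the ranking depends on the metric.
universities = {
    # name: (grade point average, full-time staff submitted) -- made up
    'Specialist Institute': (3.60, 300),
    'Large University A':   (3.50, 3000),
    'Large University B':   (3.48, 3500),
}

by_quality = sorted(universities, key=lambda u: universities[u][0], reverse=True)
by_power = sorted(universities, key=lambda u: universities[u][0] * universities[u][1],
                  reverse=True)

print('Top by grade point average:', by_quality[0])  # Specialist Institute
print('Top by research power:     ', by_power[0])    # Large University B
```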



26 August 2021

‘Reading Race’

Researchers led by a team from Emory University recently announced that they had used artificial intelligence to predict patients’ self-reported racial identity from medical images. It is an unexpected, unsettling result.



12 February 2021

Stochastic Parrots

As chest X-rays of Covid-19 patients began to be published in radiology journals, AI researchers put together an online database of the images and started experimenting with algorithms that could distinguish between them and other X-rays. Early results were astonishingly successful, but disappointment soon followed. The algorithms were responding not to signs of the disease, but to minor technical differences between the two sets of images, which were sourced from different hospitals: such things as the way the images were labelled, or how the patient was positioned in the scanner. It’s a common problem in AI. We often refer to ‘deep’ machine learning because we think of the calculations as being organised in layers and we now use many more layers than we used to, but what is learned is nevertheless superficial.
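
It is easy to reproduce the failure in miniature. In the simulation below – entirely invented data, nothing to do with the real studies – the ‘disease’ leaves only a faint trace in the images, while the hospital supplying all the positive cases stamps a bright marker in one corner. A classifier trained on these images looks impressive until the confound is broken:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, pixels = 1000, 64   # toy 'X-rays': 64 pixels each

def make_images(count, covid, marker):
    """Faint 'disease' signal in the middle pixels; a bright corner tag
    whenever the image comes from the hospital that labels its films."""
    x = rng.normal(0, 1, size=(count, pixels))
    if covid:
        x[:, 30:34] += 0.3   # the genuine, but weak, signal
    if marker:
        x[:, 0] += 3.0       # the label in the corner of the film
    return x

# Training data reproduces the confound: every Covid image comes from the
# hospital that adds the marker, every non-Covid image from one that doesn't.
X_train = np.vstack([make_images(n, covid=True, marker=True),
                     make_images(n, covid=False, marker=False)])
y_train = np.array([1] * n + [0] * n)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test data breaks the confound: the marker no longer tracks the disease.
X_test = np.vstack([make_images(n, covid=True, marker=False),
                    make_images(n, covid=False, marker=True)])
y_test = np.array([1] * n + [0] * n)

print('accuracy with the confound   :', clf.score(X_train, y_train))  # high
print('accuracy without the confound:', clf.score(X_test, y_test))    # far below chance
```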



29 April 2020

Is there an app for that?

The hope is that almost all of us will download the app, that we will be diligent about using it if we develop symptoms, that the detection of identifiers will be reliable, that the subsequent risk calculation will be more or less accurate, and that we will, by and large, self-isolate if the app tells us to. Crucially, the strategy also requires easy access to tests so people can be rapidly alerted if a contact who had symptoms turns out not to have had the disease.
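
The risk calculation at the heart of this is, in outline, straightforward: weight the duration of each contact by how close the Bluetooth signal suggests it was, sum the results and compare the total with a threshold. The weights, cut-offs and threshold below are invented for illustration, not the app’s own, but the shape is roughly this:

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    minutes: float         # how long the two phones were near each other
    attenuation_db: float  # Bluetooth signal loss, a rough proxy for distance

def risk_score(exposures, threshold=15.0):
    """Weight each contact's duration by apparent closeness, add them up,
    and advise self-isolation if the total crosses the threshold."""
    def weight(att):
        if att < 55: return 1.0   # probably close
        if att < 70: return 0.5   # middling
        return 0.0                # probably too far away to matter
    total = sum(e.minutes * weight(e.attenuation_db) for e in exposures)
    return total, total >= threshold

# Two brief close contacts and one long but distant one.
contacts = [Exposure(10, 50), Exposure(8, 60), Exposure(40, 80)]
score, should_isolate = risk_score(contacts)
print(score, should_isolate)  # 14.0 False -- just under the invented threshold
```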



7 May 2018

Jeremy Hunt plays the numbers

Jeremy Hunt announced last Wednesday that as many as 270 women may have died because an error in a computer algorithm prevented 450,000 women from being invited for routine breast cancer screening appointments. Stories about IT glitches will be increasingly common as artificial intelligence enables more and more healthcare to be automated. As things stand, people are still better than computers at detecting early signs of cancer on mammograms, and the neural networks currently being designed to analyse the images are intended as an aid to, rather than a replacement for, human decision making. The hope is to engineer systems that combine the different strengths of humans and computers, with outcomes that neither is capable of achieving independently. The sad reality is that we seem to end up with systems that combine an all-too-human capacity for error with a computer’s blunt force, and so wreak havoc at an inhuman scale.
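
One way a failure of this kind happens is a boundary mistake in an eligibility rule, which nobody notices because the excluded cohort simply never appears. A deliberately simplified sketch – invented, not the actual screening code:

```python
# Routine screening in England covers ages 50 to 70: women are meant to
# receive a final invitation before they turn 71.

def should_invite_correct(age: int) -> bool:
    return 50 <= age <= 70   # a final invitation can still go out at 70

def should_invite_buggy(age: int) -> bool:
    return 50 <= age < 70    # off by one: 70-year-olds are silently dropped

cohort = range(47, 75)
missed = [age for age in cohort
          if should_invite_correct(age) and not should_invite_buggy(age)]
print(missed)  # [70] -- a single age band, which at national scale is a great
               # many women, and no one ever sees the invitations that were
               # never generated
```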



15 May 2017

The Cost of Doing Nothing

There are no good news stories about computers and the NHS. The reporting of Friday’s malware attack may, however, be usefully different from the typical bad news story, in which hubristic politicians or nameless bureaucrats waste millions, if not billions, of public money on projects that go over budget, fail to deliver, prove to be unusable or collapse under pressure. In this instance it seems that, for once, inaction and underinvestment have led to something sufficiently focused to be newsworthy, showing that there can be a political as well as a human cost to doing nothing.
