Turing’s Cathedral: The Origins of the Digital Universe 
by George Dyson.
Allen Lane, 401 pp., £25, March 2012, 978 0 7139 9750 7

A decade ago, digging through a physicist’s archive, I stumbled on a document that has haunted me ever since: a hand-typed table of integrals seemingly little different from the ones I’d kept by me as a student. The familiarity of the contents jarred with the table’s front page. Only 31 copies of the table had been printed, the recipients listed on the cover. The table, dated 24 June 1947, had been prepared to accompany a classified report. The distribution lists for the two documents were a close match; nearly everyone who was issued with the table had security clearance to handle secret defence-related materials.

How to reconcile the table’s banal contents with its restricted circulation? What disaster would have befallen the US government if enemies of the state had learned that the integral of x/(1 + x)² between x = 0 and x = 1 equalled 0.1931? How could the authorities have hoped to limit access to basic mathematical results? Wouldn’t anyone schooled in the routines of calculus arrive at the same answers, whether or not they appeared on the table’s special distribution list?
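(For anyone inclined to check: rewrite x/(1 + x)² as 1/(1 + x) − 1/(1 + x)², integrate each term from 0 to 1, and the answer falls out as ln 2 − 1/2 ≈ 0.1931 – a few lines of undergraduate calculus, and only one of several routes a competent calculator might have taken.)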

The classified report was written by the physicist and Nobel laureate Hans Bethe. (I found both the report and the table among Bethe’s papers at Cornell University in upstate New York.) Bethe had become one of the world’s leading experts on nuclear physics in the 1930s; by 1938 he had pieced together the complicated nuclear reactions that make stars shine. He served as the director of the Theoretical Division at wartime Los Alamos, reporting directly to Robert Oppenheimer. After the war, he returned to Cornell, but he remained active as a consultant to the nuclear weapons programme, as well as to the budding nuclear power industry.

In 1947, Bethe was asked to tackle the problem of shielding for nuclear reactors. In trying to work out how best to block or absorb the radiation released when heavy nuclei like uranium or plutonium are blasted apart by neutrons, Bethe kept finding that he needed to evaluate integrals of a particular form. A colleague – another Manhattan Project veteran and by then a senior researcher at a nuclear-reactor facility – prepared the table of integrals so that selected co-workers would be able to perform calculations like Bethe’s.

Similar mathematical handbooks and tables had been produced for centuries. Around the time of the French Revolution, as Lorraine Daston has written, leading civil servants produced mammoth tables of logarithms and trigonometric functions calculated to 14 or more decimal places – far greater precision than any practical application would then have required. Gaspard Riche de Prony’s tables were a deliberate demonstration of Enlightenment mastery, one more testament to the triumph of Reason, to be admired more than used.

Though the 1947 table of integrals was not prepared in anticipation of public fanfare (just the opposite, as its distribution list made clear), the table stood closer to Prony’s time than to ours. Indeed, the introduction explained that most of the integrals had been evaluated by making clever changes of variables so that the functions of interest matched forms that had been reported in the venerable Nouvelles tables d’intégrales définies, published in Leyden in 1867 by the wealthy Dutch mathematician David Bierens de Haan.

Two years into the atomic age, access to a well-stocked library filled with old, foreign-language books was still required. Hence the need for the 1947 table: although in principle anyone should have been able to compute the integrals, in practice completing such calculations required substantial resources of time and skill. It wasn’t long after Bethe prepared his report, however, that the nature of calculation would change for ever.

The unlikely setting for the transformation – as detailed in George Dyson’s engaging history – was the Institute for Advanced Study in Princeton. The institute had been founded in 1930 with funding from the department-store magnate Louis Bamberger and his sister, Caroline Bamberger Fuld. Advised by the education reformer Abraham Flexner, the founders aimed to establish a place for young intellectuals to pursue their scholarship before the routines of university life – teaching, committee work and all that – could dampen their creativity. Flexner sought some quiet place where scholars could sit and think. ‘Well, I can see how you could tell whether they were sitting,’ the government science adviser Vannevar Bush joked.

In 1933, Albert Einstein became the first permanent faculty member; he was soon joined by eccentric, lonely geniuses like the logician Kurt Gödel, who eventually starved himself to death out of fear that people were trying to poison his food. Even after Oppenheimer became director in 1947, fresh from Los Alamos, the institute remained closer in spirit to a monastery than a laboratory – a place far more likely to stack Bierens de Haan’s Nouvelles tables d’intégrales on its shelves than to host the whirr of lathes and drills. A New Yorker reporter observed in 1949 that the institute had a ‘little-vine-covered-cottage atmosphere’. Around that time, Bethe advised a young physicist who was about to relocate to the institute for a year ‘not to expect to find too much going on’ there.

The calm was disturbed by members of a new team assembled by John von Neumann, the legendary mathematician. Von Neumann too had spent much of the war at Los Alamos. There, he was gripped by a vision as remarkable as Charles Babbage’s a century before: perhaps one could build a machine to calculate. Von Neumann was motivated not only by curiosity about the workings of the brain and the essence of cognition: he needed to know whether various designs for nuclear bombs would go boom or bust.

The wartime weapons project gave von Neumann a taste for semi-automated computation. Among the challenges he and colleagues faced was tracking, in some quantitative way, the likely outcomes when neutrons were introduced into a mass of fissionable material. Would they get absorbed, scatter off the heavy nuclei or split the nuclei apart? Equally important: how would a shock wave from exploding charges propagate through the bomb’s core? During the war, calculations like these were largely carried out by chains of human operators armed with mechanical Marchant calculators, a process recounted in David Alan Grier’s When Computers Were Human. Young physicists like Richard Feynman carved up the calculations into discrete steps, and then assistants – often the young wives of the laboratory’s technical staff – would crunch the numbers, each one performing the same mathematical operation over and over again. One of them would square any number given to her; another would add two numbers and pass the result to the next down the line.
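A rough sketch in modern Python may make the division of labour concrete – my own illustration of the principle, not a reconstruction of the laboratory’s actual worksheets:

# Each 'station' performs one fixed operation and hands its result down the
# line, as the human computers at Los Alamos did. The functions and numbers
# here are invented purely for illustration.

def station_square(x):
    return x * x            # one operator squared whatever number she was given

def station_add(x, y):
    return x + y            # another added two numbers and passed the sum along

for value in [1.0, 2.5, 3.0]:
    squared = station_square(value)
    total = station_add(squared, 10.0)   # constant chosen only for the example
    print(value, squared, total)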

That rough-and-ready process had worked well enough for wartime calculations pertaining to fission bombs. But hydrogen bombs were a whole different beast – not just in potential explosive power, but computationally as well. Their internal dynamics, driven by a subtle interplay between roiling radiation, hot plasma and nuclear forces, were far more complicated to decipher. Trying to determine whether a given design would ignite nuclear fusion – in which lightweight nuclei fuse together as they do inside stars, unleashing far more destructive power than the fission bombs that were dropped on Hiroshima and Nagasaki – or fizzle out posed computational challenges that could never be met by teams of people brandishing Marchant calculators. They required, or so von Neumann concluded, a fully automated means of solving many complicated equations at once. They required an electronic, digital computer that could execute stored programs.

Some of the original ideas behind stored-program computation had been thought up before the war by the British mathematician and cryptologist Alan Turing; indeed, the world’s first instantiation of Turing’s ideas was completed by a team in Manchester in 1948. Hence the title of Dyson’s book, though much of it is taken up with von Neumann and his group. As with the Manhattan Project, what had begun as a British idea was scaled up to industrial size by the Americans. One group, based at the University of Pennsylvania and sponsored by the US Army, had been at work on a similar device, codenamed Eniac (for ‘Electronic Numerical Integrator and Computer’), since 1943. Immediately after the war von Neumann entered into competition with the Pennsylvania group, accumulating government contracts to build his own computer at the institute.

Von Neumann had rubbed shoulders with Turing in the 1930s, when Turing was working on his dissertation at Princeton; during the war he also consulted on the Eniac in Pennsylvania. Indeed, he helped to redirect the project from its original mandate – calculating artillery tables for the army’s ballistics laboratory – to undertaking calculations on behalf of Los Alamos for its nuclear weapon designs. At the time, the Pennsylvania machine could execute only fixed programs, which had to be set in advance by physically rewiring components before any calculations could be performed. Changing the program took weeks of swapping cables, resetting switches, checking and rechecking the resulting combinations. Like the Eniac’s inventors, von Neumann sought some means by which a computer could store its program – its instructions – alongside the resulting data, in the same memory, just as Turing had envisioned.

Designed before the invention of the transistor, von Neumann’s computer required more than two thousand vacuum tubes to work in concert. Such tubes, by then a decades-old technology, produced electric current by boiling electrons off a heated chunk of metal – hardly the sleek image of today’s laptops and smartphones. To counter the build-up of heat from the tubes, the machine required a refrigeration unit capable of producing 15 tons of ice per day. With astounding dexterity, von Neumann’s small team coaxed their room-sized computer to life in the late 1940s. By the summer of 1951, the machine was chugging along full-time on H-bomb calculations, running around the clock for stretches of two months at a time. When operating at full capacity, it could boast five kilobytes of usable memory. As Dyson notes, that’s about the same amount of memory that today’s computers use in one half-second when compressing music files into MP3 format.

The institute computer project was fuelled largely by contracts from the Atomic Energy Commission, the postwar successor to the Manhattan Project. The contracts stipulated that virtually no information about thermonuclear reactions was to be released to the public – a position Truman reiterated when, on the last day of January 1950, he committed the United States to the crash-course development of an H-bomb. As a cover for their main task, therefore, von Neumann’s team also worked on unclassified problems as they put their new machine through its paces. Meteorology, for example, featured many of the kinds of complicated fluid flow that von Neumann and Co had to understand if they were to design the innards of their H-bombs. Other early applications of the computer included simulations of biological evolution, whose branching processes fanned out like so many scattered neutrons inside a nuclear device.

At the end of the 1950s, C.P. Snow spoke of a clash between ‘two cultures’, between literary intellectuals and natural scientists. At the institute, von Neumann’s electronic monster brought about a clash along a different faultline: between notions of an independent scholarly life and the teamwork regimen of engineers. By 1950, the budget for von Neumann’s computer project, drawn almost entirely from government defence contracts, dwarfed the entire budget for the institute’s School of Mathematics. More than mere money was at stake: so, too, were ways of life. ‘In spirit we mathematicians at the institute would cast our lot in with the humanists,’ the mathematician Marston Morse had written to a colleague in the early 1940s. Mathematicians, he continued, ‘are the freest and most fiercely individualistic of artists’. Einstein agreed. Reviewing an application from a young physicist to the Guggenheim Foundation in 1954, Einstein judged that the proposed topics of study were worthy, but he considered the overseas trip unnecessary: ‘After all everybody has to do his thinking alone.’ At the institute, the computer project did not pit science against the humanities: the battle was between the Romantic Genius and Organisation Man.

That difference in temperament was expressed in more tangible ways as well. The computer project was originally housed in the basement of Fuld Hall – out of sight, even if the clanging made sure that the engineers were never quite out of mind. Soon the computer group was moved to facilities further from the institute’s more solitary scholars. But even the new building required a telling compromise. The drab, functional, concrete structure the government sponsors had in mind would never have satisfied the main residents, so the institute paid an additional $9000 (nearly $100,000 in today’s currency) to cloak the new building with a brick veneer.

In the end, the computer project became a victim of its success. Von Neumann moved more squarely into policymaking for the atomic age when, in 1955, he was chosen as one of the five members of the Atomic Energy Commission. He began to spend less and less time at the institute and in his absence, the computer project had no local lion to protect its turf. Other centres across the country, meanwhile, began to make fast progress on replica machines. One of them, at the air force-funded think-tank Rand, was called the ‘Johnniac’ in his honour. Von Neumann died of cancer in 1957; the institute computer project limped along for a few more months until it was finally shut down in 1958. By that time, computation no longer relied on solitary scholars poring over dusty reference volumes. The computer age had arrived.

I reread Bethe’s 1947 memo and the accompanying table of integrals this summer while travelling in rural Montana. A tyre on my rental car had picked up a nail somewhere between Bozeman and Kalispell. While I waited for a mechanic to get me on the road again, I flipped open my laptop and began to re-evaluate some of the integrals that had been so carefully guarded 65 years earlier. These days, run-of-the-mill software on an ordinary laptop can evaluate such integrals in microseconds – the only limiting factor is one’s typing speed. Whereas the 1947 table listed numerical answers to four decimal places, my laptop spat back answers to 16 decimal places faster than I could blink. A few more keystrokes and I could have any answer I wanted to thirty decimal places. No need for access to a fancy library; no need to slog through a savant’s treatise in Dutch. I could sit in the service station, beside a dusty back road, and calculate.
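By way of illustration – the choice of tool is mine, and any freely available computer-algebra package would serve as well – a few lines of Python using the mpmath library reproduce the closely guarded result of 1947 to thirty digits in a fraction of a second:

from mpmath import mp, quad

mp.dps = 30  # work to thirty significant digits
value = quad(lambda x: x / (1 + x)**2, [0, 1])
print(value)  # 0.193147180559945309417232121458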


Letters

Vol. 34 No. 19 · 11 October 2012

David Kaiser gives a faithful account of George Dyson’s recent book on John von Neumann and his role in the development of the US computer, but does too little justice to the contribution of Alan Turing, whose work was the intellectual foundation of the theory of modern computers (LRB, 27 September). Turing first rubbed shoulders with von Neumann in 1935, two years before they met in Princeton. Turing published a paper in March that year improving a result of von Neumann’s in group theory. Shortly afterwards, by coincidence, von Neumann arrived in Cambridge on sabbatical from Princeton to lecture on that subject. Although credit for engineering ‘firsts’ is often difficult to assign fairly, Dyson is at pains to acknowledge that Turing’s revolutionary ideas about a stored-program computer were first realised, not by the Eniac group, nor the Princeton machine, but by Manchester University’s 1948 prototype.

There seems to have been a stark contrast in viewpoint between Turing and the US pioneers. In reports to the US government, and in funding requests to the military (to calculate the effects of thermonuclear explosions), von Neumann and his colleagues expressed the view that ‘at most six or so machines should suffice for the whole country.’ Turing, in an interview with the Times in 1949, declared: ‘This is only a foretaste of what is to come, and only the shadow of what is going to be … I do not see why it should not enter any one of the fields normally covered by the human intellect and eventually compete on equal terms.’

Philip Welch
University of Bristol
