A lycée in Lyon, 1944. A young Polish refugee is hiding in the school. His identity papers are forged, and deportation to the death camps may await him if he is caught. His attention, however, is not on the dangers outside but on a mathematics lesson. When he first started taking the classes, ‘he sat uncomprehending before the meaningless words and numbers on the blackboard.’ Today, though, a picture has coalesced in his mind. It renders redundant the teacher’s protracted algebraic manipulations. He raises his hand: ‘Sir, you don’t need to make any calculations. The answer is obvious.’
In later decades, the young refugee, Benoit Mandelbrot, was most famously to deploy his talent for geometric intuition in what came to be known as ‘chaos theory’. To the search for deep pattern underlying apparently utterly irregular phenomena, Mandelbrot’s key contribution was ‘fractal’ geometry. The word is his coinage: it invokes the Latin frangere, ‘to break’ (past participle: fractus). His is not the geometry of Euclid’s and Plato’s absolutely straight lines and perfect circles. ‘How long is the coast of Britain?’ – a question asked by Mandelbrot in what became a famous article in Science in 1967 – raises issues of a different sort from ‘what is the circumference of a circle?’ One will get different answers if one measures a coastline from aerial photographs, by walking round it with a ruler, or by considering every grain of sand. Finding pattern in coastlines – and in other intrinsically rough, jagged, fragmented, irregular shapes – requires a geometry of a new kind. One can’t hope for a definitive measure of the length of a coastline but, Mandelbrot showed, one can characterise its degree of roughness by generalising the notion of ‘dimension’ so that fractional dimensions are possible.
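The scaling idea can be made concrete with a textbook fractal rather than a real coastline. What follows is an illustrative sketch, not anything from Mandelbrot's book: the Koch curve, whose measured length grows without bound as the ruler shrinks, and whose fractional dimension falls out of the scaling law relating length to ruler size.

```python
import math

# Illustrative sketch: the Koch curve, a classic fractal. At refinement
# level k the measuring 'ruler' has length 3**-k and the measured total
# length is (4/3)**k -- so the length grows without bound as the ruler
# shrinks, just as a coastline has no definitive length.
def koch_length(level):
    """Measured length of the Koch curve at a given refinement level."""
    return (4 / 3) ** level

rulers = [3 ** -k for k in range(1, 6)]
lengths = [koch_length(k) for k in range(1, 6)]

# The fractal dimension D follows from the scaling law L ~ ruler**(1 - D):
# the log-log slope of length against ruler size equals 1 - D.
slope = (math.log(lengths[-1]) - math.log(lengths[0])) / (
    math.log(rulers[-1]) - math.log(rulers[0])
)
D = 1 - slope  # log 4 / log 3, roughly 1.26: between a line and a plane
```

The Koch curve's dimension of about 1.26 is, suggestively, close to the value Mandelbrot's 1967 article reported for the west coast of Britain.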
Fractal geometry’s most widely known structure is the ‘Mandelbrot set’. This is a geometric configuration of enormous complexity – a ‘devil’s polymer’, in Mandelbrot’s words – generated by an algorithm so simple that it can be implemented in a few lines of computer program. The set looks as if it belongs not in the pages of a mathematics journal but in a gallery of abstract art, and LRB readers will almost certainly have seen its beautiful filigree intricacies in illustrations and screen savers. (If you haven’t seen it, Googling ‘Mandelbrot set’ will lead you to thousands of sites.)
The (Mis)behaviour of Markets, jointly written by Mandelbrot and a financial journalist, Richard Hudson, covers a much less well-known aspect of his work, his contributions to economics. At their core is an apparently esoteric issue that nevertheless has fundamental practical ramifications, intertwined as it is with the fate of pensions and of savings. The issue is how to characterise mathematically the nature of price changes in markets, especially financial markets. Such changes can reasonably be viewed as ‘random’ or stochastic: in retrospect, if we had perfect knowledge, we might fully understand their causes; in advance, with imperfect knowledge, they are unpredictable. But randomness of what kind?
One form of randomness is epitomised by what statisticians call the ‘normal distribution’. Imagine an archer aiming arrows at a vertical line on a wall that stretches to infinity in both directions. Only a small number of her arrows will hit the target line. (Mandelbrot and Hudson point out that the statistician’s term, ‘stochastic’, derives ultimately from stokhos, which is what the ancient Greeks called the stake that archers used for target practice.) Some arrows will veer to the left, others to the right, and if the archer’s aim is true, we expect their numbers to be roughly equal. Most of the archer’s misses will be close. Large misses will be infrequent: the larger the miss, the less frequent.
The extent to which arrows ‘spread’ around the target line has a measure that can be calculated even if the archer goes on firing for ever: the ‘standard deviation’, or its square, the ‘variance’. If the normal distribution applies exactly, and if the archer shoots endlessly, nearly a third of her arrows will land more than one standard deviation from the target line, but fewer than 5 per cent more than two standard deviations away, and only around 0.27 per cent more than three. Soon, one gets to frequencies so low that one could watch the archer shooting every second for as long as the universe has existed and not expect to see an impact that far out. Plot the frequency of arrow strikes against their positions along the wall, and the statistician’s famous ‘bell-shaped curve’ – the graphical representation of the normal distribution – appears. Its tails are ‘thin’: extreme events (large misses by the archer) happen very seldom.
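Those tail frequencies are not mysterious: they follow from the normal distribution's cumulative function, which modern languages expose through the error function. A few lines of Python suffice to check the figures quoted above (a sketch for the curious, nothing more):

```python
import math

def normal_tail(k):
    """Probability that a normally distributed shot lands more than k
    standard deviations from the target line (counting both tails).
    For the normal distribution this equals erfc(k / sqrt(2))."""
    return math.erfc(k / math.sqrt(2))

# Roughly 32 per cent of shots miss by more than one standard deviation,
# under 5 per cent by more than two, and about 0.27 per cent by more
# than three -- the 'thin tails' of the bell curve.
one_sd = normal_tail(1)    # ~0.3173
two_sd = normal_tail(2)    # ~0.0455
three_sd = normal_tail(3)  # ~0.0027
```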
Mandelbrot and Hudson ask us to consider a different archer, however, and their book would be worthwhile for this arresting image alone. This archer and her bow are infinitely strong – her arrows will fly any distance – but she has been blindfolded. She stands on a fixed spot opposite the target line, but she does not know where the wall is and fires arrows in random directions. Some, of course, will miss the wall entirely. Those that hit it will be distributed around the target line, but their distribution will differ from the thin-tailed, bell-shaped distribution of those fired by the archer who can see the target. The blindfolded archer’s arrows will not cluster according to the normal distribution: because some arrows will fly enormous distances almost parallel to the wall before hitting it, the probability that an arrow will strike the wall at a huge distance from the target line is no longer vanishingly small. The distribution’s tails therefore are ‘fat’, so fat that when the archer goes on firing for ever, the standard deviation and variance of the distribution are infinite. The mathematical expressions that define them do not converge to any finite value.
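The blindfolded archer is in fact the standard textbook construction of the Cauchy distribution, one of the ‘wild’ members of the stable family discussed below: a uniformly random firing angle, projected onto the wall, gives an impact point whose variance is infinite. A short simulation (an illustrative sketch, with an arbitrary wall distance of one unit) shows just how fat the tails are:

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def blindfolded_shot(distance_to_wall=1.0):
    """One arrow: a uniformly random angle towards the wall; the impact
    point is distance * tan(angle), a Cauchy-distributed variable."""
    angle = random.uniform(-math.pi / 2, math.pi / 2)
    return distance_to_wall * math.tan(angle)

shots = [blindfolded_shot() for _ in range(100_000)]

# Fat tails: the fraction of arrows landing more than three units from
# the target line is around 20 per cent for the blindfolded archer,
# against roughly 0.27 per cent for her sighted, 'normal' counterpart.
far = sum(1 for x in shots if abs(x) > 3) / len(shots)
```

However many arrows she fires, the running sample variance of `shots` never settles down: the occasional near-parallel arrow lands so far out that it swamps everything fired before it.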
Which archer offers the better metaphor for price changes in a financial market? The unblindfolded archer corresponds to the canonical model of price changes, which crystallised in the late 1950s: the ‘log-normal’ model, in which the logarithms of price changes are viewed as normally distributed. The log-normal model was of great importance in permitting the development of theories of markets that are sophisticated, mathematically elegant, plausible and economically insightful. One such theory is of options: contracts that give the right to buy (or, alternatively, to sell) an asset such as a block of shares at a set price at, or up to, a given future date. As I described in the LRB on 13 April 2000, the economists Fischer Black, Myron Scholes and Robert Merton drew on the log-normal distribution (and on a number of other assumptions about market conditions) to reduce the complicated problem of understanding the prices of options to a relatively simple differential equation.
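The ‘relatively simple differential equation’ has a closed-form solution for the simplest case, a European call option, and the resulting Black-Scholes formula really is only a few lines long. Here is a sketch (parameter names are mine; a production implementation would need far more care):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def black_scholes_call(spot, strike, rate, sigma, years):
    """Black-Scholes price of a European call option, assuming the
    log-normal model: constant volatility `sigma`, constant risk-free
    interest rate `rate`, and normally distributed log price changes."""
    d1 = (math.log(spot / strike) + (rate + sigma ** 2 / 2) * years) / (
        sigma * math.sqrt(years)
    )
    d2 = d1 - sigma * math.sqrt(years)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * years) * norm_cdf(d2)

# An at-the-money call on a 100-unit asset: 20% volatility, 5% rate, one year.
price = black_scholes_call(100, 100, 0.05, 0.2, 1.0)  # roughly 10.45
```

Everything Mandelbrot objects to is concentrated in one word of the docstring: ‘normally’. The formula's elegance rests on the mild randomness of the thin-tailed bell curve.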
The work, published in 1973, won Scholes and Merton the 1997 Nobel Prize in economics (Black died in 1995), but this is not just academic economics: it is fundamental to the huge markets in options and other financial derivatives. A derivative is a contract or security the value of which depends on the price of an underlying asset or on the level of an index or interest rate. Figures from the Bank for International Settlements suggest that at the end of June 2003 the total notional amount of financial derivatives outstanding worldwide was $208 trillion, the equivalent of about $34,000 for every human being on earth.
No financial economist or derivatives trader thinks that the log-normal distribution applies strictly: they are all aware that, at least over short periods of time, the distributions of price changes are fat-tailed, with extreme events happening with relatively high frequency. The canonical model can be elaborated to cope with this, for example by allowing the volatility of prices to change with time, which ‘fattens’ the model’s thin tails. Mandelbrot, however, thinks that this approach is intellectually unambitious and complacent: ‘The odds of financial ruin in a free, global-market economy have been grossly underestimated.’ As his and Hudson’s title indicates, markets misbehave, and the book sketches Mandelbrot’s current approach, rooted in his fractal geometry, to the mathematical analysis of their misbehaviour.
On the canonical model, price variations are, as Mandelbrot puts it, ‘mildly’ random, and successive changes in price are independent. In Mandelbrot’s approach, mildness and independence are mere special cases. His approach incorporates ‘wild’ randomness, akin to the shots of the blindfolded archer, and also ‘dependence’, when events even deep in the past still influence today’s price changes. Volatility clusters: ‘Periods of big price changes group together, interspersed by intervals of more sedate variation – the tell-tale marks of long memory.’ Even time as experienced by traders flows at variable rates, sometimes speeded up and sometimes stretched out.
Few recent scholars have been less constrained by the structure of academic disciplines than Mandelbrot. His work on the ‘long memory’ of price changes, for example, has its roots not in economics but in hydrology, in the research of H.E. Hurst, a scientific civil servant in colonial Egypt, on the pattern of dependence in the size of successive annual floods of the Nile. In no discipline has Mandelbrot been entirely at home. For example, the alleged lack of rigour of fractal geometry was the subject of fierce criticism in 1989 by a mathematician, Steven Krantz, who condemned its heavy use of ‘computer output’ and ‘pictures’ rather than conventional theorems and proofs.
As the historians of economics Philip Mirowski and Esther-Mirjam Sent have shown, however, it’s Mandelbrot’s relations to economics that are of greatest interest. Particularly fascinating is the way in which he was first embraced and then rejected by financial economics. The latter is a relatively young academic specialism, which began to coalesce only in the 1950s and 1960s. Mandelbrot’s intellectual longevity – at 80, he is still in post and productive at Yale – has meant that his work has run alongside mainstream financial economics throughout.
He was once almost an insider. The two main sites at which modern financial economics developed were MIT and the University of Chicago, and the economists at the latter were particularly attracted by Mandelbrot’s work. Not long before he died, I interviewed one of Chicago’s Nobel laureates, Merton Miller, for work I’ve been doing on the impact on markets of modern financial economics. ‘Benoit had a great influence on a lot of us,’ Miller told me. In the early 1960s, Mandelbrot was already beginning to assemble a number of components of his current approach. What particularly excited attention back then was his championing of the applicability to markets of a family of statistical distributions first identified by the great French probability theorist Paul Lévy, who had been one of Mandelbrot’s professors at the Ecole Polytechnique in the 1940s.
Lévy was not a charismatic teacher – from the back of a lecture theatre ‘he was near inaudible,’ Mandelbrot says – but Mandelbrot persisted, at one point as the only remaining student coming to Lévy’s lectures. Lévy’s family of distributions included the normal distribution, with its ‘mild’ randomness and finite standard deviation or variance. But that is the family’s only well-behaved member. All the others are ‘wild’: their variances are infinite. They are not all quite as wild as the particular family member that models the shots of the blindfolded archer, but the image conveys the flavour.
For Lévy, the topic was a theoretical one, an investigation into the foundations of probability and statistics. However, from the start of his career Mandelbrot pursued the applications of mathematics as well as its theory: in the study of turbulence in physical systems, of word frequencies in linguistics, of the distribution of income. Another blackboard epiphany was key: in 1961, in a diagram of cotton price data on the blackboard of a Harvard economist, Hendrik Houthakker, Mandelbrot glimpsed a pattern, a pictorial trace of Lévy distributions.
In the 1960s, Mandelbrot and those influenced by him found evidence of infinite-variance Lévy distributions in a variety of markets. Eugene Fama, for example, found it in the US stock market. Fama was en route to playing a key role in financial economics. It was he who, in 1970, most clearly formulated that field’s central tenet, the efficient market hypothesis: the view that prices in mature capital markets reflect all available price-relevant information. (The hypothesis helps explain why price changes are random. If all existing information is already incorporated into prices, the latter are affected only by the arrival of new information, which by virtue of being new is unpredictable or ‘random’.) Fama wasn’t alone in his interest in Lévy distributions: top economists such as Paul Samuelson also started to investigate their consequences.
There was a sense, however, in which the conversion to Mandelbrot’s wild randomness was only skin deep. There was no unequivocal empirical proof that patterns of price changes followed a Lévy distribution. In any particular sample of data, as in any given set of the blindfolded archer’s arrows, the variance has to be finite. The hypothesis that the underlying distribution has an infinite variance remains that: a hypothesis. It is, furthermore, a hypothesis that plays havoc with many standard statistical techniques, which it renders inapplicable.
Mandelbrot relished the resultant challenges, writing in 1963 in the Journal of Political Economy that ‘when one works in a field where the background noise’ follows an infinite-variance Lévy distribution, ‘one must realise that one faces a burden of proof that is closer to that of history and autobiography than to that of physics.’ Others, however, saw in the inapplicability of standard techniques a reason to question Mandelbrot’s hypothesis. In 1964, the financial economist Paul Cootner attacked Mandelbrot’s ‘messianic tone’ and ‘disturbingly casual’ use of evidence. If Mandelbrot was correct, and the variances of the underlying distributions are indeed infinite, Cootner wrote, ‘almost all of our statistical tools are obsolete – least squares, spectral analysis, workable maximum-likelihood solutions, all our established sample theory, closed distribution functions. Almost without exception, past econometric work is meaningless. Surely, before consigning centuries of work to the ash pile, we should like to have some assurance that all our work is truly useless.’
At the start of the 1970s, interest in Lévy distributions in economics effectively evaporated. A Chicago PhD student, Robert Officer, examined empirically a key feature of them: the characteristic chaos-theory property of ‘self-similarity’. If successive price changes are independent and follow a Lévy distribution, they should look similar at all levels of magnification: the shape of the distribution of price changes should be identical irrespective of whether one is studying changes over an hour, a day, a week or a month. Officer’s data suggested that this wasn’t so. As time periods lengthened, the tails thinned towards the mild randomness of the normal distribution. Economists could have regarded this as a mere empirical anomaly (other models continued to be embraced despite analogous empirical difficulties), or suspicion could have focused on the assumption of independence rather than that of infinite variance. Crucially, however, other, more orthodox ways of modelling fat tails were becoming available, and these preserved the applicability of standard statistical techniques. The balance tipped: Mandelbrot’s Lévy distributions ceased to seem attractive to financial economists.
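Officer's test can be run in miniature. For a genuinely stable distribution such as the Cauchy, the average of twenty ‘daily’ changes has exactly the same distribution as a single change – the self-similarity the theory predicts – whereas his stock-market data showed tails thinning as the period lengthened. The following sketch (mine, not Officer's procedure) simulates the stable case:

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility

def cauchy():
    """One 'wildly random' increment: a standard Cauchy variable,
    a stable Levy distribution with infinite variance."""
    return math.tan(random.uniform(-math.pi / 2, math.pi / 2))

def tail_fraction(samples, cutoff=3.0):
    """Fraction of samples landing beyond the cutoff in either tail."""
    return sum(1 for x in samples if abs(x) > cutoff) / len(samples)

# Self-similarity: the average of n i.i.d. standard Cauchy increments is
# again standard Cauchy, so the distribution looks the same whether one
# studies 'daily' changes or twenty-period aggregates.
daily = [cauchy() for _ in range(20_000)]
monthly = [sum(cauchy() for _ in range(20)) / 20 for _ in range(20_000)]

# Both tail fractions sit near the Cauchy value of about 20 per cent:
# the tails do not thin as the period lengthens, unlike Officer's data.
daily_tail = tail_fraction(daily)
monthly_tail = tail_fraction(monthly)
```

Had the increments instead had finite variance, the central limit theorem would have pulled the aggregates towards the thin-tailed normal – which is what Officer found, and why the Lévy hypothesis lost favour.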
Mandelbrot had no need to keep doing research in economics. His post from 1958 to 1993 was not in a university department but in the IBM Research Laboratory at Yorktown Heights, north of New York City – one of the islands of more-than-academic freedom that US industry once provided. Rebuffed by economists, or so he felt, Mandelbrot was at liberty to move to other fields, and fractal geometry’s original applications were mainly non-economic.
What took him back to the study of markets was an episode that can only be described as Mandelbrotian. In the week beginning 12 October 1987, the global stock-market boom that had started in 1982 began to experience turbulence, with a sharp fall on the Wednesday, high volatility on the Thursday, and an even larger fall on the Friday, when the Dow Jones industrial average lost 4.6 per cent. In Britain, the markets’ turbulence was even mirrored in the weather. On the Thursday night, largely unforecast, the most severe storm for some three centuries caused widespread damage across southern England.
Then came the worst ever day in global stock markets. On Monday, 19 October, the Dow fell 22.6 per cent, more than on even the direst individual days of the great crash of 1929. The fall was the equivalent of around 2400 points in the Dow’s level at the time of writing, and the US financial system buckled. The clearing systems of the main stock derivatives exchanges (through which those whose positions have lost transfer money to those who have gained) almost failed on the Monday night, as some of those who had incurred huge losses had great difficulty raising money to meet their obligations. If, in consequence, clearing had failed, the exchanges would have been forced to close, possibly for ever. The New York Stock Exchange itself came to the brink of closure on the Tuesday morning, as its specialists (who match buy and sell orders, and are supposed to trade with their own money if there is an imbalance) had almost exhausted their capital and feared bankruptcy. The exchange’s chair, John Phelan, later told the Wall Street Journal that he feared that if the exchange closed, ‘we would never open it.’
The crash of October 1987 had no clear-cut external cause: no oil shock, no massive government default, no Twin Towers. It was, however, precisely the kind of episode modelled by Mandelbrot’s mathematics: the punctuation of a long stretch of mild randomness by a brief period of extreme, concentrated turbulence, involving a price movement so huge that it was vanishingly unlikely if the log-normal distribution applied. Hence Mandelbrot’s return to finance, and to the work described in The (Mis)behaviour of Markets.
Mandelbrot’s followers are a minority. He estimates that there are ‘perhaps one hundred serious students of fractal financial and economic analysis around the world’; the proponents in academia and in the financial sector of more orthodox approaches probably number in the tens of thousands. To assess the value of his approach against more mainstream analyses requires comparison of their relative empirical adequacy, but that is not straightforward, given the flexibility both of Mandelbrot’s current models and of those of the mainstream, along with the difficulty of the underlying econometric issues. The comparison also involves judgments of relative mathematical tractability, which depend in part on one’s training, and here Mandelbrot may be at a disadvantage, since his techniques are not widely taught. An economist will in addition want to assess whether his models capture economically plausible processes. Although Mandelbrot has some suggestions in respect of the processes that may underpin his models, he is tentative about them. His tools, he writes, ‘permit me to describe the market in objective and mathematical terms as turbulent. Until the study of finance advances, for the how and why we will each have to look to our own imaginations.’
There is, however, a reason of a different kind for taking Mandelbrot’s approach seriously. I came across it when doing fieldwork in the Chicago derivatives markets. After the 1987 crash, the Options Clearing Corporation (the collective clearing house for all the options exchanges in the US; in other words, the ultimate guarantor of all the contracts traded on them) altered the mathematical basis on which it demands ‘margin’ deposits from market participants to cover the likely risks of options trading. Since 1991, the corporation has been using Mandelbrot’s wildly random Lévy distributions as the determinant of the sizes of the margin deposits it demands.
It is striking that the corporation should be doing so. Options, after all, are the site of what is perhaps the greatest triumph of mainstream, mildly random, financial economics, the Black-Scholes-Merton model; and by 1991 even Mandelbrot regarded Lévy distributions as largely superseded as a modelling tool. However, the corporation’s motivation was an understanding that economic models do not stand outside the world they analyse, but are part of it. A wildly random distribution with an infinite variance can be said to ‘expect’ extreme events. When such an event happens, the ‘estimators’ of the distribution’s parameters (the statistical algorithms that are used to work out the likely values of those parameters) are affected only modestly. So there is no enormous, sudden increase in the ‘margin’ demanded from market participants. One potential mechanism by which crises can be exacerbated – the forced sales that result from the need to meet big ‘margin demands’ – is thereby blocked.
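The contrast between estimators can be made vivid with a toy example. This is my illustration, not the Options Clearing Corporation's actual algorithm: a sample-variance estimate, appropriate under mild randomness, explodes when one crash-sized observation arrives, while a quantile-based scale estimate of the kind suited to infinite-variance distributions barely moves.

```python
import statistics

# A placid run of small daily price changes, then one crash-sized move
# (the -22.6 echoes the Dow's percentage fall on 19 October 1987).
quiet = [0.1, -0.2, 0.15, -0.1, 0.05, -0.15, 0.2, -0.05, 0.1, -0.1]
with_crash = quiet + [-22.6]

# The variance estimator -- the natural one under mild randomness --
# is dominated by the single extreme observation:
var_before = statistics.pvariance(quiet)
var_after = statistics.pvariance(with_crash)

def interquartile_range(xs):
    """A quantile-based scale estimate: the spread of the middle half of
    the data, largely insensitive to any single extreme value."""
    q = statistics.quantiles(xs, n=4)
    return q[2] - q[0]

# The quantile-based estimate shifts only modestly after the crash,
# so a margin rule built on it would not suddenly multiply its demands.
iqr_before = interquartile_range(quiet)
iqr_after = interquartile_range(with_crash)
```

Run it and the variance estimate rises by a factor of thousands while the interquartile range moves by about a tenth: an ‘expects the extreme’ estimator issues no panicked margin call.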
In contrast, risk management techniques based on the assumption of a mildly random distribution such as the log-normal have the potential to undermine that very assumption. An extreme event can have a huge effect on the estimator of the variance of such a distribution, and can thus trigger attempts quickly to liquidate positions that suddenly seem too risky. The 1987 crash may indeed have been exacerbated by a management technique based on mild randomness: portfolio insurance. This drew on option pricing theory to buy or sell assets in such a way as to create a ‘floor’ below which the value of a portfolio will not fall. As prices fall, portfolio insurers have to sell, and one conjecture about the huge price declines of 19 October 1987 is that they were amplified by waves of such selling.
Mathematical models of the stochastic dynamics of market prices are now essential to the management of money by banks, pension funds, insurance companies, hedge funds and university endowments. Such models are not just external representations of market processes, but integral to those processes. In building or choosing a model, thought thus needs to be given to how it will affect market processes if it is used widely, not just to its empirical adequacy and conceptual depth. A paradoxical virtue of Mandelbrot’s blindfolded archer is that she has the potential to make the catastrophes she models less likely. Of course, a great deal more than that is involved in the creation of a less fractured world, but Benoit Mandelbrot, the theorist of roughness and of wild randomness, has much to offer.