About five years ago, in the course of studying the commercial applications of psychological research, I contacted the agent of Dan Ariely, professor of psychology and behavioural economics at Duke University, to inquire whether Ariely might want to speak at a conference in London. It didn’t come off: I had to explain that my ‘conference budget’ didn’t stretch to the $75,000 speaker fee and first-class return airfare. Ariely is not a typical academic. He has given very popular TED talks, sells consultancy advice on behavioural prediction and has founded a number of companies to profit from his insights into the vagaries of human decision-making. The focus of Ariely’s research is human irrationality. His best-known book is Predictably Irrational: The Hidden Forces that Shape Our Decisions (2008), which was followed two years later by The Upside of Irrationality: The Unexpected Benefits of Defying Logic. The enthusiasm with which the marketing profession has greeted this sort of book requires little explanation: ever since the dawn of market research and advertising in the late 19th century, there has been a commercial interest in understanding what determines individual choices, as distinct from what economists or moral philosophers think should determine them.
Marketers aren’t the only ones hungry for these insights. The popularisation of behavioural economics was led by Richard Thaler and Cass Sunstein’s book Nudge (2008), which inspired the setting-up of ‘behavioural insights’ teams in governments around the world (with Cameron’s coalition government at the forefront), and has nurtured a view of policy that is attentive to our unconscious biases and irrational tendencies. Where orthodox economists explain behaviour on the basis of rational choice, assuming that each of us is a finely tuned calculator weighing the costs and benefits of every decision, the ‘nudgers’ are always on the look-out for anomalies, situations in which we habitually do things with harmful consequences: eating badly, neglecting to recycle, or failing to save for retirement. The job of policymakers, as nudgers see it, is to make minor adjustments to ‘choice architectures’ (how the various options are presented to us), which discreetly steer us towards the path of rationality. In areas such as personal finance and nutrition, employers and businesses are encouraged to change the default option to the ‘good’ one. Pension schemes are made opt-out, rather than opt-in, so that people end up with a better retirement pot by default. The touchscreen menu ordering system at McDonald’s is now so determined to steer customers towards its range of salads and sugar-free drinks that choosing to sabotage one’s own health with a burger, chips and Coke requires considerable perseverance. These are efforts to protect us from our own irrationality, but some may feel a sense of unease that the powers that be are treating citizens like children.
This new fascination with irrational decision-making coincided with the global financial crisis. Gurus such as Ariely, Thaler and Sunstein helped to modify a free-market ideology according to which consumers and investors are smart enough (smarter than regulators, at least) to calculate risks for themselves. But this sort of thinking also gained popularity just as the combination of social media and smartphones was first ensnaring hundreds of millions of people into an unremitting system of data capture and feedback. The extent to which we are predictably irrational is conditioned by how much of our behaviour the predictor is privy to. And since the launch of the iPhone and the lift-off of Facebook in 2007, the quantity of human irrationality available to be scrutinised and exploited has grown exponentially.
Given this vast and lucrative psycho-industrial complex, you might think that the human mind could hold few surprises for business and policy elites. It has become something of an orthodoxy in these circles that our behaviour is rarely governed by rational self-interest, but is swayed by norms, habits, instincts and emotions. Yet when such irrational forces, combined with techniques of psychological experimentation and influence of the sort used by nudgers on social media platforms, disrupted the democratic arena in 2016, it was as if Dionysus himself had hurtled dancing into the room. Irrationality is predictable, maybe, but not to the point of putting Donald Trump in the White House.
The myopia of the nudgers is in their assumption that irrationality is a ‘behaviour’ like any other, which can be tracked and controlled – that is, rationalised. It’s true that the relation between rationality and irrationality is ultimately one of power: which part of society (or the self) is able to boss the other? But Trump’s election demonstrated the naivety of assuming that in the end reason will always come out on top. Perhaps with sufficient surveillance, the logic of our present madness can be divined; maybe Jeff Bezos, the founder of Amazon (presently earning $75,000 every thirty seconds), with his global network of household sensors and consumer tracking, is the real rationalist now. But if Silicon Valley (not, say, the university) is now the seat of objectivity and reason, that puts rationality out of sight and out of mind for the vast majority of us. The anxiety provoked by the use of Facebook to influence elections isn’t so much that the platform is all-powerful, but that we have no way of knowing what its true capabilities are. Paranoia is the rational response to a system whose rules and goals are shrouded in secrecy.
The new dominance of such giant technology platforms as Facebook and Google represents a distinctive threat to the status of reason in society. These are businesses that make money by collecting data about our behaviour, then exploiting the intelligence that results: what Shoshana Zuboff refers to as ‘surveillance capitalism’. But as we know from controversies over ‘fake news’ circulating on Facebook and extreme content on YouTube, the platforms have no interest in establishing norms of behaviour, only in maximising engagement with the platforms themselves. As far as Mark Zuckerberg’s business interests are concerned, it doesn’t matter how absurd, stupid, dangerous or mendacious a post is, just so long as it takes place on Facebook.
The iconic model of a surveillance technology is Jeremy Bentham’s panopticon design for a prison, in which prisoners would feel visible to the prison guards at all times whether or not they were actually being watched. As Michel Foucault noted, the panopticon was a disciplinary tool, which sought to bolster the moral conscience of the prisoner to the point at which he was policing his own behaviour, and could be released back into society as a good and rational individual. But the platforms are different: they don’t aim to discipline us, merely to learn about us. And the weirder and crazier we get, the better their psychological insights become. We aren’t inhabiting a moralistic detention centre so much as joining a hedonistic and chaotic focus group, in which we are prompted to throw off our inhibitions for the benefit of the observer on the other side of the mirror. The most valuable data point in this economy is the barely conscious reaction: the ‘like’, swipe, scroll or emoji that reveals some underlying truth about why we behave as we do. The less rational we are, the more the data analyst stands to learn.
Nudgers, like Bentham, see rationality as a habit in which to be trained by a benevolent government in order to steer us towards health and happiness. My preference for a Big Mac may one day be conditioned out of me altogether. Platform giants, by contrast, regard rationality as their intellectual property, the product of a calculation going on in secret. If they can discern the underlying mathematical logic of society, they have no intention of disclosing it. Neither of these attitudes to rationality is politically attractive. The problem lies with behaviourism itself, and its assumption that human freedom is programmed and programmable: ‘rationality’ is in the eye of the all-seeing observer, not a property of conscious action at all. In this conception of reason there is no place – and crucially no time – for thinking, reflection or deliberation: each of us is reduced to a node in a network, bombarded by stimuli to which we can react only as automatons. Behaviourism flips rationalism into irrationalism. Surveillance capital treats the global population as if it were a vast zoo, spotting patterns of behaviour that the specimens themselves will never know about. Once democracy and public argument are premised on the logic of the platform, it simply doesn’t matter what anyone says or does, so long as they remain engaged and engaging. President Trump is the symptom of a society that treats rationality as a property of machines, and not people.
Justin Smith’s Irrationality is one of many books provoked by the political eruptions of 2016. Trump is a recurring preoccupation, but so is the internet and the carnival of quickfire nonsense it hosts. Taking these two themes together – the absurd liar in the White House, and the sarcastic meme culture that helped put him there – suggests that something distinctly new and dangerous has arisen. Trump, it seems, outstrips any previous conspiracy theorist or demagogue. His election means ‘the near-total disappearance of a shared space of common presuppositions from which we might argue through our differences’. In 2016, we saw ‘the definitive transformation of the internet, from vehicle of light to vehicle of darkness’. Trump’s pre-eminence forces us to defend principles and institutions we shouldn’t have to defend. We find ourselves having to assert that good reasons are better than bad reasons, that rational government policies are better than irrational ones. Distinctions between scientific fact and conspiracy theory now have to be explained and justified. These are tasks that many rationalists, in the ‘new atheist’ tradition of Steven Pinker and Richard Dawkins, have been happy to pursue. Arguing as much with (what they perceive as) the relativism of the left as with the dogmatism of the right, these bombastic defenders of Western reason exhibit a spirit of hostility towards anyone daring to question the benefits and rectitude of the natural sciences. Dawkins in particular has converted a defence of scientific method into a defence of cultural hierarchy, with ‘the West’ at the top. Pinker clings to a form of Benthamism, in which statistical data prove that modernity is still on the right track, regardless of what political or cultural anguish might be at large.
Faced with a choice between a world governed by brute Pinker-esque reason and the Dadaist nightmare of fantasy and propaganda emanating from the White House, Smith seems in no doubt where he stands. Yet Irrationality is unique among recent paeans to Enlightenment and liberalism in marrying a resolute defence of reason with a recognition of how futile such defences tend to be. What troubles Smith is that ‘rationality’ means nothing without some ‘irrationality’ from which to distinguish itself, yet the precise nature of this distinction is impossible to establish. Whenever some apparently ‘rational’ activity or epoch is inspected further, it turns out that ‘irrationality’ isn’t so much absent as hidden or ignored. Take the Enlightenment, the period so celebrated by Pinker in particular. As many of Pinker’s critics pointed out in response to his book Enlightenment Now (2018), no sooner had principles of scientific reason apparently triumphed than a romantic counter-Enlightenment was reshaping cultural sensibilities. It wasn’t until the late 19th century, with the codification of academic disciplines, that science was fully segregated from the fields of philosophy and the humanities. But the institutional autonomy of ‘science’ was then immediately challenged by the insurgency of psychoanalysis, modernism and Continental philosophy, which set out to challenge the separation of truth from aesthetics and desire. As Smith expertly reveals, wherever one looks in the history of Western philosophy, rationality is haunted and teased by its other.
The aspect of Enlightenment lauded by Pinker – namely, the rising power of scientific method – was an achievement of hubris and force as much as reason. Building on the foundations laid by Descartes, a new strain of ‘aggressive reason’ took hold in Europe at the end of the 17th century, which regarded Europe as representative of all humanity, European history as world history, and mathematical rationality as a universal language. The imperialist implications are clear. Purveyors of ‘aggressive reason’ had (and, as the example of Dawkins shows, still have) a very simple way of distinguishing rationality from irrationality: rationality is something that belongs to ‘the West’, which the rest of the world is urged or forced to import. In the hands of modern states, rationality risks becoming a tool of colonialism and tyranny, especially where no curiosity regarding the lives and cultures that exist beyond the perimeter of ‘Western’ civilisation is involved. The more Enlightenment is wedded to forceful universalism, the more urgently it will be met with seemingly irrational reactions of romanticism and nationalism. Rationality can become a dogma that brooks no dissent. This is the entangling of reason and myth – the dialectic of Enlightenment – that provoked such pessimism on the part of the Frankfurt School in the 1930s and 1940s, and which Smith partly inherits.
Away from the frontiers and mythology of Enlightenment, the meaning of ‘rationality’ (and hence ‘irrationality’) becomes difficult to pin down. You can resort to the otherworldly ideas of logic and mathematics floating free from all politics and culture. But the academic study of ‘rational choice’ makes little sense once diverted from the kinds of strategic problem – war and profit – it has long been tasked with solving. When we reflect on how we actually live, it becomes all the harder to identify what an ‘irrational’ action or choice might be. Smith wonders ‘whether an anthropologist external to our cultural world would, in studying us, be able to make sharp distinctions among the horoscope, the personality quiz and the credit rating’, or even be able to tell ‘whether we ourselves clearly understand how they differ’. Equally, it isn’t clear how one would distinguish between the scientific societies of the 17th century, to which so much subsequent progress is owed, and, say, a website dedicated to picking through the evidence that vaccines cause autism. Understood purely as ‘culture’ or as ‘behaviour’, rationality becomes ritual or (as the nudgers have it) habit, and ‘irrationality’ is just a pejorative term for the habits we consider bad.
A democratic as well as pragmatic response to this is to shrug, and accept that ‘rationality’ is broadly what people mean by it, even if they never quite reach a consensus. Truth, as Richard Rorty put it, is whatever my contemporaries will let me get away with. The authority of experts in society ultimately depends on people trusting those who are designated as experts. It’s hard to say precisely why it is ‘rational’ to have one’s child vaccinated and ‘irrational’ not to, given that so few of us have any understanding of disease or immunology. Established truths in modern secular societies are no less dependent on hierarchy and trust than those of religious societies. Trust in experts, Smith writes, ‘is a commitment that is more likely to be threatened or rendered fragile by changes in the social fabric than by new empirical evidence about the scientific truth of the matter’. The apparently epistemological problem of ‘post-truth’, which became all the rage in 2016, is ultimately a political problem of ‘post-trust’. In the absence of any conclusive definition of ‘rationality’, some kind of Rortyan pragmatism may be the best political option. If everyone disagrees on what counts as ‘irrational’, then we will have to accept that some quantity of it will have to be part of any decent democratic society. Maybe the odd benign ‘nudge’ here and there, on the basis of expert calculation, is OK, but ‘if we were exclusively to do things that are good for us, this would in itself not be good for us.’
Yet two sources of anguish run through Irrationality that stand in the way of this laissez-faire contentment. The first is the internet. Regardless of how much specific influence over the political upsets of 2016 one attributes to Facebook or Cambridge Analytica, it is clear that social media has contributed to a radical reshaping of the public sphere, in a way that benefits such figures as Trump. The so-called ‘attention economy’, in which every brand, political party and online influencer is competing for the attention of internet users, privileges the type of outrage, hilarity and shock that Trump is guaranteed to produce. In Smith’s account, what is so devastating about online discourse is that the (always ambiguous) boundary between ‘rationality’ and ‘irrationality’ stands to be abandoned altogether, out of a mixture of gamesmanship and malice. The internet is the perfect resource for anyone wishing to fool around, invariably at someone else’s expense, and to the detriment of democracy and mutual understanding. Dialogue becomes performance art, and the irrationalist can dress up as a rationalist if and when it suits them. The threat posed by pseudoscience and Trump himself is that, in their behaviours and appearances, they occasionally resemble normal science and normal presidents. Contrary to the comforting myths of Enlightenment, irrationality is no longer peripheral to science or liberal democracy, but appears swirling around in their core.
Smith is especially troubled by the elevated status of play and humour in this dystopia (‘Jokes are like little morsels of condensed irrationality’), noting that ‘the internet troll armies of the alt-right, a decisive force in the success of Trump’s campaign, shared more in the spirit of explosive hijinks of Woodstock than they did with the Young Republicans’ associations of old.’ The irony and attention-seeking of the online humorist is in many ways an appropriate cultural and political response to a public sphere in which it simply does not matter what anyone says, means or does. In the face of this performance, liberal humour seems to lack political bite, and becomes ‘merely palliative’. Smith’s own writing is wry and sometimes funny, but maybe that’s a sign of how stuck we all are.
Silicon Valley is culpable. The collision of unaccountable software engineers, blinkered by delusions of rationality, with a polity already struggling with lies and fictions, produced the explosions of 2016. The behaviourist hubris which shapes the giant platforms is no more reasonable than the conspiracy theories and ‘fake news’ that they circulate, and no less dangerous. Once reason becomes an instrument of monopoly power, whether in the hands of the state or a corporate platform, it ceases to be a means of understanding, and instead turns into something manipulative and potentially violent. This is the new face of ‘aggressive reason’, which leaves no thought, feeling or action alone, insisting that everything must be computed, so as to render the world more predictably irrational under the eye of the tech giants. In the age of the platform, rationality is hoarded as a form of competitive advantage, generating a type of ‘epistocracy’ with the likes of Bezos and Zuckerberg in control. A schism is imposed between the algorithmic conditions of social life (which are invisible to the vast majority of us), and the visible and deranged ‘content’ that zips around the network, leaving a trail of havoc and LOLs behind it.
This is the nub of the crisis that we’re in. It can’t be that irrationality is simply bad or dispensable: every society – indeed every psyche – must find a place for it, in the arts, in sleep or on the analyst’s couch. The colonial Enlightenment fantasy was that unreason was external to the white European male; the digital-behaviourist fantasy is that unreason is external to the algorithm. One crucial difference is that, as the standard of reason shifts from white, male, European minds to the machines of Silicon Valley, our society no longer demands that white men provide a model of rational behaviour. Hence President Trump.
The second source of unease in Irrationality seems to lie somewhere within Smith himself, or at least in his relationship to the philosophical canon, giving the book an existentialist subplot. Hints are scattered throughout of the way the events of 2016 affected Smith personally, in particular how exposure to some of the darker, weirder corners of online culture overhauled his core assumptions. The alt-right meme culture that engulfed Trump’s 2016 campaign seems to have shaken Smith, transforming his view of comedy and the place of reason in society, and awakening a new seriousness in the process. In an interview published in October in the Point, a Chicago-based journal that shares Smith’s intellectual ethos, he describes the dilemma posed by ‘playfulness’: how to name and criticise the dangerous frivolity of so much online discourse without simply turning into a curmudgeonly conservative? How, as he puts it, to avoid becoming Theodor Adorno, who died of a heart attack in 1969 shortly after a brush with the ‘trolls’ of his day, when his lecture was disrupted by bare-breasted student protesters scattering flower petals on his head? This is partly just the cycle of intellectual life, in which the one-time radical suddenly realises that he is perceived as part of the establishment by the younger generation. For Smith, the dilemma arises acutely because he is highly cognisant of, perhaps even somewhat mesmerised by, the culture of online playfulness and stupidity that he sees as the threat. He seems unwilling or unable simply to ignore social media altogether, either out of ethnographic commitment, or because he is as vulnerable to the lure of platforms as the rest of us. The question is how much, if any, of a pre-internet culture of public critique can survive in an age where every intellectual exchange can swiftly be derailed by a joke, a personal attack, a cry of victimhood or a strategic misunderstanding of the other’s argument. What if none of it can?
What if, as the title of one of Smith’s essays in the Point puts it, ‘It’s All Over’?
As academic disciplines, philosophy and the natural sciences will survive the age of Trump and Facebook. Game theorists and economists in universities will continue to model ‘rational’ choices in abstract mathematical terms. Yet the message of Irrationality is partly about philosophy’s repeated failure ever to impose sufficient clarity and reason on the world. Efforts to distinguish philosophy from mere sophistry (wordplay) or mystical revelation are never completely satisfactory; philosophy struggles to secure its own foundations to the extent that it pretends. And philosophers’ reliance on a model of honest, egalitarian deliberation as the test of a ‘reasonable’ argument underestimates the obstacles that any such model faces in the real world. Smith is admirably open about this problem, but is convinced that it is still worth arguing back. At the very least, the philosopher retains the power to narrate the cultural apocalypse, as Adorno once did. Much of Irrationality consists of outraged, often startling, declarations of abject defeat, echoing the rhetoric of the early Frankfurt School. There’s a noble if self-punishing streak to Smith’s enterprise, in seeking to bring philosophy into a public sphere dominated by jokers and trolls, where the rules of intellectual exchange are in constant flux. He resembles a teetotaller who insists on remaining at a house party until two in the morning to argue with people, growing increasingly frustrated with the slurred voices and noise all around him, wondering why nobody is listening to anyone else. The risk of this position, and perhaps the risk of philosophy in general, is that you end up taking yourself too seriously, provoking yet more laughter. On the other hand, it may be the only alternative to the hijinks and fakery that has swept digital democracy these past few years.