Creditworthy: A History of Consumer Surveillance and Financial Identity in America 
by Josh Lauer.
Columbia, 352 pp., £27, September 2018, 978 0 231 16808 3

When the Chinese e-commerce platform Alibaba, the biggest retailer in the world, launched an app allowing its customers to buy products and transfer money instantaneously, it understood that the resulting data – who buys what and who sends money to whom – could also be used to underwrite credit. In a society where grandmothers use smartphones to pay for groceries and beggars have their own barcodes to help them accept donations, there’s plenty of data to make use of. Alibaba’s three-digit ‘Sesame score’ sets the terms of a loan according to variables such as the academic degrees a customer holds, their activity on social media and how many video games they have purchased in the past month. A person’s score affects their employability and likely success in applying for a Schengen visa; it also affects how prominently their profile is displayed on Chinese dating sites. In Europe, meanwhile, Kreditech, a lending service that aims to become the ‘Amazon of consumer finance’, is mining Facebook data its customers voluntarily share. In the US, ZestFinance, the brainchild of Google’s former chief information officer Doug Merrill, is using big data to rank customers whose credit rating has never recovered from the subprime mortgage crisis.

To ZestFinance, ‘all data is credit data.’ It claims to have built a system capable of gathering thousands of details about an individual from their online activity, on the basis of which it makes split-second decisions as to whether they should be given credit. These details include the content of text messages; online browsing behaviour and purchases; education history; records of online money transfers; data use; phone type; and activity on Facebook, Twitter and Pinterest. ZestFinance also gathers similar information from the borrower’s social circle – the ‘data shadows’ cast by friends and acquaintances. It’s unclear how having wealthy Facebook friends, playing Pokémon Go or using an Android phone translates into creditworthy behaviour, but a decade after the financial crash, we are being urged to trust the numbers.

Credit is an exchange made on a promise to pay at some future point. The provision of credit depends, therefore, on an assessment of whether or not to trust that promise. It’s a risky business. Josh Lauer traces the history of credit scoring – a means of quantifying this risk – back to the 1840s. Of course, merchants had always had ways of deciding which among their customers should be given credit, and ledgers are full of accounts of local blacksmiths who couldn’t repay their loans on time. But by the 1830s such personal assessments weren’t always possible. Migration and trade between cities in the newly industrialised economy had made it harder to establish the personal relationships in which people were willing to wait to be paid or paid back, and to trust that this would happen. Credit deals could still be made locally, but regional and national traders found it increasingly difficult to gauge the worth of trading partners. In 1837, Lauer writes, ‘a cascade of defaulted debts wiped out investments, wrecked business, and crippled the American economy.’ The panic was the reflection of the changing social order, but it manifested as a crisis of confidence in finance; inflated land values and wildcat banking had eroded trust in the system. Merchants were left clutching worthless paper promises from virtual strangers. One of them, Lewis Tappan, a silk merchant from New York State, was bankrupted. Four years later he established the Mercantile Agency, which collected detailed information about businessmen in order to get a clearer picture of the risk they represented.

By the 1870s Tappan’s agency had expanded massively. It now also collected information about consumers, a relatively new (and much larger) category, and employed more than ten thousand correspondents to carry out the work. Credit evaluation had progressed from straightforward ‘blacklisting’ – the ‘little lists of names’ of poor prospects – to an ‘affirmative/negative’ system. It was no longer enough to single out the worst offenders; the system now attempted to create a ranking that ran from deadbeat to upstanding citizen. Tappan was quick to defend the bureau against its critics, claiming in one advertisement that it was ‘not a system of espionage, but the same as merchants usually employ – only on an extended plan – to ascertain whether persons applying for credit are worthy of the same and to what extent’. Lauer thinks differently. Credit scoring was, he argues, the birth of a new form of surveillance, with data gathered and monetised at the behest of a private organisation.

In Britain, lines of credit were largely dictated by class and capital (assets, properties, wealthy relatives), but the new American system also examined ‘capacity’ (age, experience, past success or failure in business) and ‘character’, a nebulous category that was shorthand for all kinds of things, including local reputation, work ethic and personal disposition. A credit assessment didn’t just establish whether you had the means to make good on your debts, but also whether you were the kind of person likely to do so. It is as if Benjamin Franklin had anticipated this intensity of scrutiny when he wrote, in his ‘Advice to a Young Tradesman’ in 1748:

The most trifling actions that affect a man’s credit are to be regarded. The sound of your hammer at five in the morning, or nine at night, heard by the creditor, makes him easy six months longer; but if he sees you at the billiard-table, or hears your voice in a tavern, when you should be at work, he sends for his money the next day.

As part of the shift from merchant to consumer credit, credit agencies and department stores hired agents to dig into their customers’ lives. The details in their reports were mostly gleaned from local gossip and coloured by the speculations of the mainly white, middle-class professionals who put them together. One reporter from Buffalo, New York, believed that ‘prudence in large transactions with all Jews should be used,’ while another in Georgia described a liquor store he was profiling as ‘a low Negro shop’. In the 1950s, Lauer writes, agencies began to offer ‘welcoming services’ for newcomers to their district. A hostess would visit, bearing gifts and promotions for local businesses as well as a credit application. She also cast a trained eye over their living circumstances and reported back to the agency. The Dallas Chilton Bureau even made a foray into babysitting services.

Besides poor housekeeping, credit agents took note of physical appearance, religious faith, marital problems, drinking habits and the views of neighbours, employers and acquaintances. Occupation was obviously significant and not only because it indicated income. More important was what a profession implied about a person’s class and sense of social responsibility. Teachers, nurses and doctors were highly rated, while hotel employees, waiters, janitors and painters were riskier prospects. Shopping habits were also factored in: if a man who had purchased only necessities suddenly began to buy ‘jimcracks’, it was thought likely that he had fallen into a foolish courtship or taken to abusing his credit privilege. Another key metric was geography, and long before zip codes, credit managers had ‘detailed mental maps’ of their districts, with particular neighbourhoods and streets effectively blacklisted. ‘Nationality’, a proxy for ethnicity, was also seen as significant, with ‘Negroes, East Indians and Foreigners’ ranked bottom of the list.

Gradually, the handwritten reports housed in vast reference books in the New York offices of Tappan’s agency were translated into code: ‘1 6 8 11 14 17 21 25’ after a dealer’s name stood for ‘making money’, ‘economical’, ‘business not too much extended’, ‘does not pay large interest’, ‘good moral character’, ‘credits prudently’ and ‘not sued’. Shops gave colour-coded cards to customers: in one store the key was ‘red for no credit, black for $25, blue for $50, green for $100 and gold for $150 or more’. In the 1920s, Rand Visible Filing Systems were marketed with transparent celluloid tabs in various colours for this purpose. In another system, all the files started out white, but were permanently stained blue or red in the event of slow or non-payment.

From the 1950s onwards there were several attempts to subject underwriting to mathematical modelling. The most popular method was developed by the English statistician and eugenicist Ronald Fisher. ‘Discriminant analysis’ allowed researchers to boil down the information from credit applications to a set of discrete variables, each allocated a value and weight reflecting its statistical association with payment or default. A statistician could select a random sample of ‘good’ and ‘bad’ loans and attempt to read backwards the significance of easily coded attributes like age, occupation and income in predicting risk. In the 1960s, Fair, Isaac and Co., whose FICO score is now a fixture of consumer lending in the US, used the technique to test variables – bank accounts, income, family size – and correlations between them (e.g. number of bedrooms combined with family size). Income, it turned out, wasn’t a good predictor of anything, and neither was marital status, but things like room-per-child ratio and having a telephone in your house were. And yet, while statistics could tell credit analysts which variables were significant, they were often hard put to explain why.
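Stripped of the actuarial language, the arithmetic is compact enough to sketch. The fragment below, in Python, uses invented applicant data and invented variables rather than anything drawn from Fair, Isaac’s files; it simply shows how Fisher’s method reads a sample of past ‘good’ and ‘bad’ loans backwards into a set of weights and a cut-off for scoring a new applicant.

# A minimal sketch of Fisher's discriminant analysis applied to credit data.
# The variables and figures are invented for illustration only.
import numpy as np

# Each row is one past loan: [has_telephone, rooms_per_child, income_in_$000]
good = np.array([[1, 1.5, 40], [1, 2.0, 25], [1, 1.0, 60], [0, 1.8, 35]], float)
bad = np.array([[0, 0.5, 45], [0, 0.7, 30], [1, 0.4, 55], [0, 0.6, 20]], float)

# Fisher's rule: weight each variable by w = S^-1 (mean_good - mean_bad),
# where S is the pooled within-class covariance of the two samples.
mean_good, mean_bad = good.mean(axis=0), bad.mean(axis=0)
pooled_cov = (np.cov(good, rowvar=False) + np.cov(bad, rowvar=False)) / 2
w = np.linalg.solve(pooled_cov, mean_good - mean_bad)

# Score a new applicant; above the midpoint threshold counts as a good risk.
threshold = w @ (mean_good + mean_bad) / 2
applicant = np.array([1.0, 1.2, 30.0])
print('weights:', w.round(2))
print('accept' if w @ applicant > threshold else 'decline')

The weights fall out of the algebra whether or not anyone can say why a telephone should matter: the model ranks, it does not explain.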

Scholars often present these techniques as having been so persuasive that they swept away all trust in human judgment. Lauer shows, however, that credit men were reluctant to give up on the old belief that one could rely on a ‘feeling, perhaps rather vague and indefinable’, that someone was a good or a bad bet. Many agents were suspicious of any technique that strayed too far from ‘good old-fashioned sense’. But as time went on, statistical scores came to be thought of as more accurate than human insight, so easily affected by prejudice. Numbers were impartial. Or so it seemed. In truth the practice of scoring had never been only about ‘economic facts’. The score took qualitative first impressions – ‘effeminate gestures’, poorly kept back yards, neighbourhood grudges – and made them quantitative, so that they had the appearance of facts. By these means it could easily become accepted institutional knowledge that ‘coloured ministers’ were poor credit prospects, or that certain ethnic neighbourhoods should be avoided at all costs.

It was a few decades before these codified systems crossed the Atlantic. Until the 1980s, UK bank managers had to give personal approval to loans for customers at their branch and were trained in underwriting from the start of their careers. Then, following the expansion of financial services and banks’ customer bases, these underwriting capacities were transferred to centralised credit departments.

Some thirty years later, credit scoring allowed the securitisation of home mortgages into risk-structured financial instruments. The subprime lending boom distributed this risk by means of complex instruments such as collateralised debt obligations and credit default swaps. But after the system went into crisis in 2008, financial institutions went through what the economic anthropologist Bill Maurer calls a period of ‘re-risking’. The old scoring mechanisms had failed. New financial technologies emerged, claiming to establish, once and for all, an accurate measure of risk.

Companies like ZestFinance or Branch, a for-profit firm that assesses ‘people’s credit based on smartphone data’, use machine learning, which doesn’t merely establish the rules by which an algorithm acts on datasets (charitable donations = good; video games = bad), but teaches the algorithm to refine and refine again the functions that produce decisions, often in inscrutable ways. For example, people who fill out the ZestFinance loan application in capital letters are deemed to be riskier borrowers than those who write in a combination of upper and lower case. Doug Merrill says he doesn’t know why. Others have discovered that such categories as browser type and screen resolution play a role in determining credit scores.
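The shift is easier to see in miniature. The sketch below, again in Python, is a toy rather than any lender’s actual system: the behavioural features (share of text in capitals, browser type, screen resolution, friend count) and the outcomes are synthetic assumptions, made up to show how a boosted model assembles its decision function out of hundreds of small corrections, none of which amounts to a rule a borrower could inspect.

# A toy sketch only: invented features and synthetic outcomes, standing in
# for no real lender's data or model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 1, n),    # share of the application typed in capitals
    rng.integers(0, 4, n),   # browser type, coded 0-3
    rng.integers(0, 3, n),   # screen-resolution band
    rng.poisson(200, n),     # number of social media friends
])
# Synthetic 'ground truth': default is loosely tied to the first feature.
y = (X[:, 0] + rng.normal(0, 0.3, n) > 0.8).astype(int)

# Each of the 300 shallow trees nudges the decision function a little further;
# the final score is the sum of all those nudges.
model = GradientBoostingClassifier(n_estimators=300, max_depth=2)
model.fit(X, y)

applicant = np.array([[0.9, 1, 2, 150]])  # writes mostly in capitals
print('predicted default probability:', model.predict_proba(applicant)[0, 1].round(2))
print('relative weight of each feature:', model.feature_importances_.round(2))

The model can report how much weight it gave each variable, but not why; the inscrutability Merrill admits to is built into the method.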

Although nobody really knows how these automated scores work, Branch, Kreditech and ZestFinance argue that they are not only more accurate, but fairer: the algorithm ‘sees beyond’ the prejudices that skewed access to credit in the past. And yet, studies of machine learning algorithms show that they mirror historical forms of racial, economic and gender discrimination. Critics point to the inherent biases of those who write the code, and note that the machine learns from historical data riddled with prejudiced inferences and old resentments. It has been shown that having an African American name negatively affects the new credit scores, and that Amazon Prime’s same-day delivery service is less likely to be offered in areas where people have tended to be excluded from credit. Since Amazon is rumoured to be applying for a financial licence in order to set up bank accounts for its 300 million customers, this automated bias could have a huge impact. And a mathematical ‘score’ is harder to contest than a decision made by a person.

The maths behind the assessment of our worth is becoming harder to untangle and dispute at the very time it is playing an increasing part in shaping our future. Facebook is experimenting with a ‘trustworthiness score’ for its more than two billion users. And the Chinese government is developing a ‘social credit’ system that will assign a score to each of its 1.4 billion citizens based on an aggregation of economic and social factors. While it remains unclear what kinds of ‘social’ behaviour will be rewarded or penalised (one journalist wondered whether women would be penalised for remaining single or not having children), the government has made it clear that poor scores will affect people’s access to public transport, employment and basic social services. The pilots are up and running, and penalties are already being applied. Figures released recently by the National Public Credit Information Centre show that by the end of 2018, the courts had banned 17.5 million ‘discredited’ citizens from buying plane tickets, and a further 5.5 million from travelling by train.

Send Letters To:

The Editor
London Review of Books,
28 Little Russell Street
London, WC1A 2HN

letters@lrb.co.uk

Please include name, address, and a telephone number.

Read anywhere with the London Review of Books app, available now from the App Store for Apple devices, Google Play for Android devices and Amazon for your Kindle Fire.

Sign up to our newsletter

For highlights from the latest issue, our archive and the blog, as well as news, events and exclusive promotions.

Newsletter Preferences