Towards the end of the 19th century an existential crisis hit China. How could the hanzi script – the system of characters used for millennia across China, Japan, Korea and Vietnam – survive into the modern era, with its global communications networks enabled by those transformative Western inventions, the telegraph and the typewriter? Hanzi script had tens of thousands of characters, creating obvious difficulties for its use in these new technologies. Lu Xun, a celebrated essayist and unforgiving critic of his own society, issued a call for action in 1936: ‘If the hanzi does not go, China will surely perish’ – though he wrote these words in hanzi script. The comment is typical of Lu, and I suspect there is a joke here somewhere, though one almost certain to be missed.
The members of the educated class, lamenting the loss of territories, markets, ports, transportation and other key assets after China’s defeat in the Opium Wars, moved to modernise the state. Mastering telecommunications, at a time when the rapid transmission of information was remaking power and commerce, was a necessary first step. Particular questions required urgent attention: what would a simple, elegant Chinese telegraphic code look like? Was it possible to design a Chinese typewriter that would replicate the success of the Remington or Underwood? And how might a phonetic alphabet be developed?
Missionaries, merchants and Western diplomats had long complained about what they saw as China’s excessively difficult and rather backward script. But hanzi was only one of several scripts recognised by the multilingual Qing dynasty, which controlled China from the mid-17th century until 1912. The ruling Manchus – a nomadic people from the north-east – made sure that the Manchu alphabet as well as Mongolian and Tibetan scripts had equal prestige, at least in theory. This political accommodation of multilingualism and multiple scripts was more or less maintained in the official language policy of the People’s Republic of China, which incorporated the Roman alphabet for ethnic minority languages as well as pinyin, the phonetic notation system introduced in 1958 to transcribe hanzi characters. In other words, the idea that China did not have an alphabet is an exaggeration, but historical accuracy is not the point. What matters is that China did not universally employ the Roman alphabet, and it was the Roman alphabet that took over the world.
Historians such as Harold Innis, writing in the late 1940s, showed the extent to which empires depended on effective long-distance communication, and with the expansion of modern European empires to the rest of the world in the 19th and early 20th centuries the Roman alphabet emerged as the dominant medium of this communication. Others have wondered whether there might be more to it, arguing that it was the phonetic alphabet itself that was transformative. Drawing on evolutionary theory, linguists and media pundits like Marshall McLuhan – who took Innis’s work in a different direction – sought to demonstrate that hieroglyphs, pictographs or ideographs are inevitably replaced by more advanced systems as societies evolve. The phonetic alphabet, in their view, epitomised the noblest achievement of human rationality by virtue of its simplicity, efficiency and speed of acquisition. It could be taught to young children in weeks rather than years. Hanzi, by contrast, relied on concepts – pictography, ideography, logography – that the phonetic alphabet had superseded. The Roman alphabet, it was argued, had prevailed not because of its association with the British and American empires but because it was intrinsically superior to all other systems of inscription. Who in their right mind would argue against simplicity, efficiency and speed?
I.A. Richards, who taught in China in the 1920s and 1930s, met enthusiasts for Romanisation from a Kuomintang-supported movement called Gwoyeu Romatzyh. They wanted to eliminate hanzi and adopt the Roman alphabet for writing Mandarin and other Chinese languages. But Richards felt he had something better to offer when he arrived at Tsinghua University. As well as English literature, he taught Basic English. This stood for ‘British, American, Scientific, International, Commercial’ and was a language of 850 words that he had developed in Cambridge with C.K. Ogden, with the object of facilitating better global communication. Richards hoped it might help solve the perpetual misunderstandings between China and the West, which troubled him deeply. Driven by his obsession, he stayed in Beijing even after Japanese troops descended on the city in 1937, and was among the last academics to evacuate to the south.
In the southern city of Guangzhou, Du Dingyou was engaged in similarly quixotic linguistic pursuits until he, too, had to flee Japan’s bombing campaigns in 1938. As Jing Tsu points out in Kingdom of Characters, Du may not have been aware of Basic but as the librarian of Sun Yat-sen University he knew the Dewey Decimal Classification. He invented a new indexing system to bring order to the unruly world of Chinese characters by imposing the equivalent of alphabetical order – apart from anything else, this would speed up the retrieval of books from the stacks. Unlike Richards or the advocates of Gwoyeu Romatzyh, Du retained his faith in the hanzi script, coming up with an ingenious scheme for ordering the characters according to geometric shape. He believed his system would play an important role in building trust among nations and be a harbinger of world peace.
To the male Chinese elite – educated women are seldom mentioned in this context – the survival of Chinese civilisation was at stake. Other countries seemed to feel a similar anxiety. Turkey made the use of the Roman alphabet compulsory in 1928, abandoning the Ottoman Perso-Arabic script. Japanese genbun itchi reformers were busy updating their hybrid writing system – which included phonetic and syllabic scripts (hiragana and katakana) as well as kanji, their word for hanzi – in an attempt to eradicate characters imported from China. In the early years of the Meiji Era, some radicals had started a rōmaji movement that still had its adherents. By the 1920s, French colonial administrators had mostly succeeded in abolishing Vietnam’s chữ nôm and chữ hán writing systems, which were based on hanzi. To the north, in the Central Asian Turkic republics that had become part of the Soviet Union, the momentum for a New Turkic Alphabet was gathering force. Soviet linguists in St Petersburg developed a script that Qu Qiubai, a member of the Chinese Politburo, attempted to adapt for China. The First Conference on the Latinisation of Chinese Writing took place in Vladivostok in 1931. It was clear that a code switch was taking place around the world, with political support from both left and right, and that literacy was increasingly becoming identified with the use of the Roman alphabet.
There had already been intense competition between different phonetic scripts. One was the Mandarin Alphabet, devised by the political activist Wang Zhao at the turn of the 20th century, which contained 62 discrete phonetic symbols, fifty borrowed from the Japanese katakana and twelve modelled on the Manchu alphabet. Tsu has some entertaining stories about the adventures of the self-aggrandising Wang, as related in his multi-volume memoir. After being charged with treason in 1898 for opposing what he saw as the Qing dynasty’s capitulation to the Western world, Wang fled to Japan; two years later he returned in disguise as a Buddhist monk, proficient in martial arts, and made his way across the country on foot, sheltered by uneducated peasants whose habits he described disdainfully. Eventually arriving in Beijing, he presented the governor general of the capital province with a pamphlet he had concealed in his robes: the Mandarin Combined Tone Alphabet, his proposal for a phonetic system that would see Mandarin – a lingua franca based on the languages and dialects of the north – imposed as the national language of officialdom and education.
But Wang’s scheme didn’t win out when the Ministry of Education approved a phonetic alphabet in 1913, a year after the founding of the Republic of China. At the turn of the century, the anarchists and revolutionaries Zhang Taiyan, Wu Zhihui, He-Yin Zhen and Liu Shipei were living in Paris and Tokyo, where they wrote in Chinese-language journals – Tianyi (or Natural Justice) in Tokyo, Xin Shiji (or Le Nouveau Siècle) in Paris – on such issues as female labour, anti-militarism and anarcho-syndicalism, and published Chinese translations of Kropotkin, Proudhon and Bakunin. They felt that a common language was urgently needed in order to bring revolutionary thought to China. Both of the shortlived journals associated with these revolutionaries – Tianyi was banned by the Japanese authorities after it published a translation of the Communist Manifesto in 1908 – debated the merits of Esperanto, and both advocated the use of a phonetic script that could be used by the masses: Romanisation wasn’t necessary, but standardisation and ease of use were.
It was with this in mind that Zhang Taiyan, a philologist as well as an activist, developed Bopomofo, a system of 37 phonetic symbols and four tone marks that could be used to transcribe every possible sound in Chinese languages. As with the Roman alphabet, its name was derived from the sounds of the first four letters of the system: ㄅ ㄆ ㄇ ㄈ – bo, po, mo, fo. After returning from Paris in 1912, Wu Zhihui led the new republic’s Commission on the Unification of Pronunciation, which sought to abolish the linguistic chaos of the imperial era by adopting Bopomofo as the National Phonetic Alphabet. It would never replace hanzi, but Bopomofo, rather than any of the Romanisation schemes, enabled China to master modern communications technologies. Methods based on the Roman alphabet – the Kuomintang’s Gwoyeu Romatzyh, the communists’ Latin New Script, the pinyin that succeeded them under Mao – made it possible to use a Qwerty keyboard to transcribe spoken Chinese. But as a system of encoding language, Bopomofo was considerably more powerful. Until the 1920s, Chinese telegraphy relied on a vast codebook that assigned an arbitrary number to each of hanzi’s six thousand characters; now any hanzi character could be generated using the 37 Bopomofo symbols. Bopomofo is still widely used in Taiwan, and it became one of the primary ways to input hanzi on a computer, its symbols mapped onto the letters of a standard keyboard.
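The difference between the two approaches described above can be made concrete. The following is a toy sketch, not a reconstruction of any historical system: the four-digit codebook numbers are illustrative stand-ins for entries in the real telegraph codebook, while the Bopomofo spellings of the two characters are genuine.

```python
# A codebook assigns each character an arbitrary number: tens of
# thousands of entries, one per character. (These numbers are
# illustrative, not the historical telegraph code.)
CODEBOOK = {"中": "0022", "文": "2429"}

# Bopomofo spellings are compositional: the same 37 phonetic symbols
# (plus tone marks) spell out any syllable in the language.
BOPOMOFO = {"中": "ㄓㄨㄥ", "文": "ㄨㄣˊ"}

def telegraph_encode(text):
    # Every character requires its own memorised or looked-up entry.
    return " ".join(CODEBOOK[ch] for ch in text)

def bopomofo_encode(text):
    # Every character is generated from the same small symbol set.
    return " ".join(BOPOMOFO[ch] for ch in text)

print(telegraph_encode("中文"))  # 0022 2429
print(bopomofo_encode("中文"))   # ㄓㄨㄥ ㄨㄣˊ
```

The point of the contrast is scale: the codebook grows with the character set, while the Bopomofo inventory never exceeds 37 symbols, which is what made it possible to map them onto a standard keyboard.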
Typing Chinese was always a challenge. In 1900, the San Francisco Examiner spread a rumour that a Chinese typewriter had been spotted in the back room of a newspaper office on Dupont Street: it was twelve feet long, with five thousand keys. As Thomas Mullaney explained in his exhaustively researched The Chinese Typewriter (2017), the first real progress came in the 1910s when two engineering students in the US – Zhou Houkun at MIT and Qi Xuan at NYU – refused to accept that the problem lay in hanzi and, rather than turning to phonetic transcription, came up with prototypes that understood hanzi characters not as irreducibly complex symbols but as combinations of modular units. The sound of a word was irrelevant if a typewriting machine could print a character by assembling the shapes from which it was constituted.
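The modular insight can be sketched in a few lines. The component decompositions below are real (好, ‘good’, is written as woman plus child; 明, ‘bright’, as sun plus moon; 林, ‘forest’, as tree doubled), but the data structure and function are invented for illustration, not a description of how any actual machine worked.

```python
# Treat a character not as an indivisible symbol but as an assembly
# of reusable components, as Zhou Houkun and Qi Xuan proposed.
COMPONENTS = {
    "好": ["女", "子"],  # 'good' = woman + child
    "明": ["日", "月"],  # 'bright' = sun + moon
    "林": ["木", "木"],  # 'forest' = tree + tree
}

def parts_needed(text):
    """Return the distinct modular units a machine would need in its tray."""
    units = set()
    for ch in text:
        # Characters without a listed decomposition stand for themselves.
        units.update(COMPONENTS.get(ch, [ch]))
    return units

# Three characters, but only five distinct components to cast in metal.
print(sorted(parts_needed("好明林")))
```

The economy is the point: a tray of shared components can print far more characters than it holds pieces, which is why decomposition, not phonetics, was the first workable route to a Chinese typewriter.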
In Kingdom of Characters, Tsu makes it clear that the development of the Chinese typewriter was in part an American story: those early prototypes were followed by other models. Lin Yutang, a Chinese American writer whose book My Country and My People was a bestseller thanks to Pearl Buck’s sponsorship, was a vocal critic of Richards’s Basic English and criticised the attempt to abolish hanzi, describing it as an act of trimming the foot to fit the shoe, the shoe being the technology of Remington and Underwood. In 1946 he filed a US patent for the Mingkwai typewriter, which allowed ninety thousand different characters to be generated on a standard-size keyboard, thanks to a ‘magic viewer’ that enabled the typist to select from the possible alternatives each combination of key presses brought up. Obstructive Remington executives made sure the device was never manufactured, but the technology was acquired by the US air force and applied during the Cold War to further US interests in machine translation, data storage and optical information retrieval.
There is one significant absence in Tsu’s story: the role of imperial Japan. In the 1920s the Chinese market in typewriters was dominated by the Commercial Press in Shanghai, whose machines were based on the work of Zhou Houkun. But in 1932, in the wake of the invasion of Manchuria, Japanese agents provocateurs in Shanghai’s International Settlement sparked a series of demonstrations, providing the pretext for Japan to launch an aerial attack on the city. On 28 January six bombs landed on the offices of the Commercial Press. They narrowly missed the machine shop, but the ensuing fire destroyed an untold number of manuscripts and brought the company’s typewriter business to a halt.
As a result, by the start of the Pacific War the Nippon Typewriter Company had a near monopoly in the East Asian market. Its flagship machine, the Bannō (All-Purpose) typewriter, accommodated multiple scripts – Japanese, Manchu, Chinese, Mongolian – and was a sign of Japan’s rapid appropriation of the kanjisphere: the societies that used the hanzi script. Japan became its self-appointed guardian, seeking to bring the people of ‘the same script’ and ‘the same race’ together in a struggle against the Western powers. Its technolinguistic domination extended to control of key infrastructural assets linking the imperial centre to the continent: submarine cables, the wireless, telephone and railways, as well as the expanding network of postal and telegraph offices. Information and communication technologies were vital in sustaining Japanese power.
It could be argued that it was the challenge of hanzi that first showed up telegraphy and the typewriter as imperfect technologies: they were good at the efficient management of information and instant communication, but fell short when faced with complex scripts, images or systems of inscription. Phototypesetting (Germany’s Digiset), advanced laser typesetting (the UK’s Monotype), the fax machine and the microcomputer all brought new possibilities. The fax in particular made it easy to send documents of any kind across the globe in an instant: images of legal documents, marked-up text, drawings. Whether something was written in hanzi script or Roman letters was no longer relevant, and there was no need to reduce every piece of text to a uniform code. It’s no surprise that Japanese companies dominated the fax machine industry in the 1980s and 1990s, accounting for more than 90 per cent of worldwide sales.
With the rise of the computer all previous technologies became obsolete. Whether by voice, handwriting recognition or direct character input, hanzi can be transmitted in any way you choose. Technically, there is no longer any reason for one system of writing to dominate all others, and this finally put China, Japan, Korea and the rest on an equal footing with the West, at least in terms of input systems. In 1992, CJK characters – encoding all languages written with hanzi-based scripts – were incorporated into Unicode, the worldwide standard that enables all computing platforms to display any character in exactly the same way.
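What Unicode’s incorporation of CJK characters means in practice is that every platform agrees on the number behind each character. The character 漢 (the ‘han’ of hanzi) is code point U+6F22 in the CJK Unified Ideographs block wherever it is displayed; the snippet below simply inspects that agreement from Python.

```python
# Every character has one agreed code point, and a standard byte
# encoding (UTF-8) derived from it, on every computing platform.
for ch in "漢字":  # 'hanzi' written in hanzi
    print(ch, f"U+{ord(ch):04X}", ch.encode("utf-8"))
# 漢 U+6F22 b'\xe6\xbc\xa2'
# 字 U+5B57 b'\xe5\xad\x97'
```

It is this shared numbering, rather than any input method, that guarantees a character typed in Taipei renders identically in Tokyo or New York.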
The temptation to tell a success story is irresistible. Like the computer scientist Zhi Bingyi, who pioneered the digital processing of hanzi characters in 1979 and suffered terribly during the Cultural Revolution (he wrote his pathbreaking code on the lid of a teacup for want of paper), China has had to overcome many trials to take its place on the world stage. But this would be to ignore the inconvenient fact that, as with the majority of IT standards, Unicode is governed not by China or another country in the kanjisphere but by Silicon Valley.