New Tech, Old War
Israel’s latest assault on Jenin killed twelve Palestinians, injured a hundred and temporarily displaced three thousand in just over 48 hours. Some had fled Israeli military violence once or twice before, in 1948 and 1967. Most had their homes razed in 2002 during a ten-day bombardment that left 56 Palestinians and 23 Israeli soldiers dead.
The destruction was at odds with recent IDF press releases, which have claimed Israel is on the brink of revolutionising warfare. Israel’s military has cast itself as an ‘artificial intelligence superpower’, promising that automated weapons would make warfare more precise and, by implication, more humane. But AI has not revolutionised warfare for the Palestinians in Gaza and the West Bank who will continue to live under its terrorising effects.
Military think-tanks and arms industry leaders were trumpeting machine learning algorithms and large language models long before the appearance of ChatGPT, which sparked a flurry of AI hype across the world’s media. In 2021, the US National Security Commission on Artificial Intelligence announced that dominance over AI innovation was the only way to save American civilians from Chinese military aggression. Google’s Eric Schmidt and Facebook’s Sheryl Sandberg proclaimed that any attempts to regulate big tech could severely limit the US military’s capabilities.
Following the United States’ lead, the Israeli army claimed in May 2021 that it had waged ‘the world’s first AI war’: algorithms trawled through troves of surveillance data to determine where drones would drop bombs in eleven days of fighting that killed more than 230 Palestinians in the Gaza Strip and injured more than two thousand. In the two years since, military spokespeople have taken the two subsequent assaults on Gaza – which have left 82 dead and thousands homeless – as an opportunity to advertise the IDF’s cutting-edge machine learning capabilities. Last month, the information technology and cyber commander, Eran Niv, promised that soon ‘the entire IDF will run on generative AI.’
Such announcements are never divorced from the economic incentives of Israel’s arms industry, which is closely tied to its military apparatus. Still, AI’s impact on warfare over the past few years should not be underestimated: technological advances have further increased the asymmetry of already asymmetric conflicts. Last September, Israel installed a remote-controlled riot gun, which uses AI to track its targets, at a checkpoint in Hebron. Biometric recognition cameras track civilians through urban spaces, sustaining a regime of mass surveillance with minimal human intervention. Large language models determine where autonomous drone swarms should drop missiles over crowded refugee camps, minimising the number of UAV operators exposed to the bloodshed of aerial warfare.
Israeli military spokespeople like to cast the IDF as a pioneer, exploring the uncharted territory of automated warfare. These technologies, however, are now ubiquitous on battlefields worldwide. In 2020, the government in Tripoli deployed small Turkish-manufactured drones that hunted down and killed militants in western Libya without human intervention. In Ukraine, both sides have dispatched autonomous weapons to kill enemy soldiers and defend critical infrastructure, from small kamikaze drones to unmanned ground vehicles fitted with machine-guns and explosives.
Weapons manufacturers advertise their technologies as efficient and humane security solutions. But as machine learning becomes a hallmark of military violence, the new autonomous weapons – like their predecessors throughout the history of arms development – have failed to deliver on that promise. For armies with advanced technical arsenals, military offensives have become easier to wage without the need to muster political support from a public divorced from their effects. For civilians living under the incessant threat of aerial bombardment and sniper fire, war has become a chronic condition.
Last week’s assault on Jenin showed once again the cruel asymmetry of today’s automated wars. The 48-hour offensive was the largest IDF operation on the West Bank in decades, yet it barely punctured the routines of Jewish Israelis beyond the Green Line, long sheltered from their government’s policies towards Palestinians. Of the two thousand Israeli soldiers mobilised in the latest assault, many were sitting behind computer screens in fortified bases, watching as algorithms planned missile strikes and directed relatively small numbers of ground troops across a crowded refugee camp. The IDF came away with only one casualty, a soldier probably killed by friendly fire.
For Palestinians, innovations in autonomous warfare have only compounded the terrorising impact of Israel’s military operations. According to Jenin’s deputy governor, 80 per cent of Palestinian homes in the city were damaged in the raid last week. United Nations officials described the attacks as a form of ‘collective punishment’ and warned that the scope of destruction may amount to a war crime, citing reports of Israeli troops blocking ambulances from evacuating the wounded and shooting at Palestinian journalists.
As Palestinians returned to their homes this week, older residents described having the same nightmares as after previous raids. Children were afraid of sleeping alone in their bedrooms. ‘The trauma is enduring, it’s chronic, it’s historical and it’s intergenerational,’ Samah Jabr, the head of the Palestinian Authority’s mental health services, told al-Jazeera.
None of the technologies adopted by Israeli forces over the past two decades stopped the children who saw their homes bulldozed in 2002 from growing up to lead the militant groups targeted in the latest operation. Nor did they prevent last week’s assault from subjecting another generation of Palestinians to the same terror. As new technologies allow an old war to drag on, the human cost of automated warfare is increasingly evident.