Blogger’s note: nice work by Arquilla … all the pessimism I’ve been moaning about for weeks, with a healthy dose of optimism. I like it!
June 24, 2012
Technology has made conflict cheaper, safer and faster; the world is better for it.
“It is well that war is so terrible,” Confederate General Robert E. Lee once said, “lest we should grow too fond of it.” For him, and generations of military leaders before and since, the carnage and other costs of war have driven a sense of reluctance to start a conflict, or even to join one already in progress.
Caution about going to war has formed a central aspect of the American public character. George Washington worried about being drawn into foreign wars through what Thomas Jefferson later called “entangling alliances.” John Quincy Adams admonished Americans not to “go abroad in search of monsters to destroy.” Their advice has generally been followed. Even when it came to helping thwart the adventurer-conquerors who started the 20th century’s world wars, the United States stayed out of both from the outset, entering only when dragged into them.
This pattern briefly changed during the Cold War, with the launching of military interventions in Korea and Vietnam. The former was fought to a bloody draw; the latter turned into a costly debacle. Both were quite “terrible,” together costing nearly 100,000 American lives and vast sums of treasure, reaffirming Lee’s reservations.
Operation Desert Storm — a lopsided win against a weak opponent in Iraq — seemed to break the pattern, ushering in President George H.W. Bush’s “new world order.” But the military experiments in regime change begun by his son — an unexpectedly long and bloody slog through Iraq and Afghanistan — reawakened traditional concerns about going to war, propelling Barack Obama to the presidency and energizing Ron Paul’s support within the GOP.
Even Obama’s “intervention-lite” in Libya proved unsatisfying, unleashing much suffering and uncertainty about the future of that sad land. And a furious debate rages about the practical and ethical value of drone bombing campaigns and “targeted killing” of our enemies — due in part to the deaths of innocents caught up in these attacks, but also because of the possibility of fomenting rabidly anti-American sentiments, perhaps even revolution, in places like nuclear-armed Pakistan.
But now, somehow, war may no longer seem so terrible.
How has this come to pass? The culprit is the bits and bytes that are the principal weapons of cyberwar. It is now possible to intervene swiftly and secretly anywhere in the world, riding the rails of the global information infrastructure to strike at one’s enemies. Such attacks can be mounted with little risk of discovery, as the veil of anonymity that cloaks the virtual domain is hard to pierce. And even when “outed,” a lack of convincing forensic evidence to finger the perpetrator makes heated denials hard to disprove.
Beyond secrecy, there is also great economy. The most sophisticated cyberweaponry can be crafted and deployed at a tiny fraction of the cost of other forms of intervention. No aircraft carriers needed, no “boots on the ground” to be shot at or blown up by IEDs. Instead, there is just a dimly lit war room where hacker-soldiers click for their country, and the hum of air conditioners keeping powerful computers from overheating. Cool room, cool war.
The early returns seem to suggest the great efficacy of this new mode of conflict. For example, the Stuxnet worm, a complex program of ones and zeros, infected a sizable proportion of Iran’s several thousand centrifuges, commanding them to run at higher and higher speeds until they broke. All this went on while Iranian technicians tried fruitlessly to stop the attack. The result: a serious disruption of Tehran’s nuclear enrichment capabilities — and possibly of a secret proliferation program.
The sabotage occurred without any missile strikes or commando raids. And, for now, without any open acknowledgment of responsibility, although reporters and others have pointed their fingers at the United States and Israel. It is loose lips in high places, not sophisticated “back hacking,” that seem to have divulged the secret of Stuxnet.
Another example of the looming cool war is the malicious software known as Flame, which sought information via cybersnooping from target countries in the Middle East. The code that constitutes it makes the point that we no longer need physical agents in place when we can rely on artificially intelligent agents to dredge up the deepest secrets. There will be no new John le Carré to chronicle this era’s spies. Not when the closest thing to George Smiley is a few lines of source code.
Beyond Stuxnet-like “cybotage” and software-driven spying, the coming cool war might also influence whether some traditional wars are even going to break out. The good news is that a preemptive cyberattack on the military command-and-control systems of two countries getting ready to fight a “real war” might give each side pause before going into the fight. In this instance, the hackers mounting such attacks should probably publicize their actions — perhaps even under U.N. auspices — lest the disputants think it was the enemy who had crippled their forces, deepening their mutual antagonism. There are no doubt some risks in having a third party mount a preemptive cyberattack of this sort — but the risks are acceptable when weighed against the chance of averting a bloody war.
The other potential upside of cool war capabilities, in addition to tamping down military crises between nations, would lie in multilateral tracking of transnational criminal and terrorist networks. These villains thrive in the virtual wilderness of cyberspace, and it is about time that they were detected, tracked, and disrupted. Think of Interpol, or an international intelligence alliance, using something like Flame to get inside a drug cartel’s communications network. Or al-Qaeda’s. The potential for illuminating these dark networks — and bringing them to justice — is great and should not be forgone.
On balance, it seems that cyberwar capabilities have real potential to deal with some of the world’s more pernicious problems, from crime and terrorism to nuclear proliferation. In stark contrast to pitched battles that would regularly claim thousands of young soldiers’ lives during Robert E. Lee’s time, the very nature of conflict may come to be reshaped along more humane lines of operations. War, in this sense, might be “made better” — think disruption rather than destruction. More decisive, but at the same time less lethal.
Against these potential benefits, one must also weigh the key downside of an era of cyberconflict: the outbreak of a Hobbesian “war of all against all.” This possibility was first considered back in 1979 by the great science-fiction writer Frederik Pohl, whose dystopian “The Cool War,” a descriptor that might end up fitting our world all too well, envisioned a time when virtually every nation fielded small teams of hit men and women, their repertoires including computer viruses to crash stock markets and other nefarious, disruptive tools.
In Pohl’s novel, the world system is battered by waves of social distrust, economic malaise and environmental degradation. Only the rebellion, at the end, of a few cool warriors, some but not all of them hacker types, offers a glimmer of hope for a way out and a way ahead.
The question that confronts us today is whether to yield to the attractions of cyberwar. We have come out of one of mankind’s bloodiest centuries, and are already in an era in which wars are smaller — if still quite nasty. Now we have the chance to make even these conflicts less lethal. And in reality, there may be no option. Once the first network or nation takes this path — as some observers believe the United States is doing — others will surely follow, starting a new arms race, this time not in weaponry, but in clandestine and devastating programs like Stuxnet and the Flame virus.
It is a curious irony that the United States, a power traditionally reluctant to go to war but furious in its waging, is now seemingly shifting gears. It is becoming a nation with the capability to go to war easily, while at the same time far less ferociously. Is this an improvement? Perhaps. Delaying Iranian proliferation with bits and bytes seems far superior to the costs and risks that would be incurred, and the human suffering inflicted, by trying to achieve such effects with bombs and bullets.
But looking ahead, how will Americans respond when others begin to employ cyber means to achieve their ends, perhaps even by attacking us? After all, Stuxnet escaped from that Iranian facility into the wild, and is certainly being studied, reverse engineered and tweaked by many around the world. No country may be foolish enough to engage the incomparable U.S. military in open battle, but we seem like fairly easy pickings to the computer mice that may soon roar.
Despite all these concerns, though, a Cool War world will be a better place to live in than its Cold War predecessor. Yes, conflict will continue in the years to come, but it will morph in ways that make our self-destruction as a civilization less likely — even if it means living with occasional disruptions to vulnerable high-tech systems.
The bargain made when “cyber” and “war” came together need not turn out to be Faustian. This story can still have a happy ending: As war becomes “cooler,” mankind’s future may edge a bit closer to the utopian end that all of us, secretly or not so secretly, truly desire.
John Arquilla is professor and chair in the department of defense analysis at the U.S. Naval Postgraduate School, and author, most recently, of “Insurgents, Raiders, and Bandits.”