DeM Banter: This is something we have been pondering here at the MCG for a long time…it is not difficult to see where this line is going. The bigger issue… the “train has left the station,” and it is not going backwards. So how do we embrace this change and lead responsibly? The technology is here to stay…when we look at cutbacks, these robotic weapon systems are oh so attractive. I don’t think we can say they are cheap…but they don’t eat, they don’t have families, they don’t need medical care, they don’t need a retirement… all the things we hear in the press that are “issues” for the military. Where do you see this going? Maybe I watch too much SciFi Channel?
Wall Street Journal
March 15, 2013
Pg. 13
If the moral dilemmas now seem difficult, wait until robotic armies are ready for deployment.
Recent reports on the Obama administration’s use of military drones to fight terrorism sparked controversy about foreign policy and about international and constitutional law. Yet the development of drones is just one part of a revolution in war-fighting that deserves closer examination—and considerable soul-searching—about what it will mean for the moral and democratic foundations of Western nations.
Drones are unmanned aerial vehicles that, together with unmanned ground and underwater vehicles, constitute primitive precursors to emerging robotic armies. What now seems like the stuff of Hollywood fantasy is moving toward realization.
Over the next two to three decades, far more technologically sophisticated robots will be integrated into U.S. and European fighting forces. Given budget cuts, high-tech advances, and competition for air and technological superiority, the military will be pushed toward deploying large numbers of advanced weapons systems—as already outlined in the U.S. military’s planning road map through 2036.
These machines will bring many benefits, greatly increasing battle reach and efficiency while eliminating the risk to human soldiers. If a drone gets shot down, there’s no grieving family to console back home. Politicians will appreciate the waning of antiwar protests, too.
The problem is that robotic weapons eventually will make kill decisions on the battlefield with no more than a veneer of human control. Full lethal autonomy is no mere next step in military strategy: It will be the crossing of a moral Rubicon. Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any object.
When robots rule warfare, utterly without empathy or compassion, humans retain less intrinsic worth than a toaster—which at least can be used for spare parts. In civilized societies, even our enemies possess inherent worth and are considered persons, a recognition that forms the basis of the Geneva Conventions and rules of military engagement.
Lethal autonomy also has grave implications for democratic society. The rule of law and human rights depend on an institutional and cultural cherishing of every individual regardless of utilitarian benefit. The 20th century became a graveyard for nihilistic ideologies that treated citizens as human fuel and fodder.
The question now is whether the West risks, however inadvertently, going down the same path.
Unmanned weapons systems already enjoy some autonomy. Drones will soon navigate highly difficult aircraft-carrier takeoffs and landings. Meanwhile, technology is pushing the kill decision further away from human agency. Robotic systems can deliver death blows while operated by soldiers thousands of miles away. Such a system can also easily be programmed to fire “based solely on its own sensors,” as stated in a 2011 U.K. defense report.
The kill decision is still subject to many layers of human command, and the U.S. Defense Department recently issued a directive stating that emerging autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
Yet this seems more like wishful thinking than realistic doctrine. Military budget cuts are making robotic autonomy almost fiscally inevitable. A recent study by the Reserve Forces Policy Board concluded that current military-personnel levels are unsustainable, consuming half the Defense Department budget. The Center for Strategic and Budgetary Assessments, in a study published in July, found that military-personnel costs will account for the entire defense budget by 2039, if costs continue growing at the current rate and defense spending increases only by inflation. Many robotic units cost one-tenth of what it takes to put a human soldier in the field.
In possible future military engagements with antagonists such as Iran, North Korea or China, the unfettered air superiority that the U.S. and its allies enjoyed in Iraq and Afghanistan will be challenged. It will be far more difficult for human operators to communicate reliably with remote unmanned weapons in war’s chaos. The unmanned weapons will be impossible to protect unless they are made autonomous.
Recently the military verbiage has shifted from humans remaining “in the loop” regarding kill decisions, to “on the loop.” The next technological steps will put soldiers “out of the loop,” since the human mind cannot function rapidly enough to process the data streams that computers digest instantaneously to provide tactical recommendations and coordinate with related systems.
Fully autonomous weapons systems have already been deployed. Israel’s Iron Dome antimissile system automatically shot down dozens of Hamas rockets in November. Iron Dome (and similar systems protecting U.S. Navy ships) would respond autonomously to inbound manned fighter jets and make the kill decision without human intervention.
Since these systems are defensive and must be autonomous to protect the innocent effectively, do they pose the same moral dilemma as offensive weapons? Should lethal autonomy be restricted to defensive weapons? At what point do defensive capabilities embolden offensive operations?
So far, debate about robotic autonomy has focused solely on compliance with international humanitarian law. In December, Human Rights Watch released a report calling for a pre-emptive ban on autonomous weapons, noting that “such revolutionary weapons” would “increase the risk of death or injury to civilians during armed conflict.”
Michael N. Schmitt, chairman of the U.S. Naval War College’s International Law Department, responded that war machines can protect civilians and property as well as humans. This assurance aside, it is far from clear whether robots can be programmed to distinguish between large children and small adults, and in general between combatants and civilians, especially in urban conflicts. Surely death by algorithm is the ultimate indignity.
Time is running out for military decision makers, politicians and the public to set parameters for research and deployment that could form the basis for national policy and international treaties. The alternative is to blindly accept as inevitable whatever technology offers. Let’s not be robotic in our acquiescence.
Maj. Gen. (Ret) Latiff, a consultant on national defense and intelligence technology, is an adjunct professor at the University of Notre Dame. Mr. McCloskey, the author of “The Street Stops Here: A Year at a Catholic High School in Harlem” (University of California, 2010), serves on the faculty at the School of Education at Loyola University Chicago.
Interesting article. My thoughts, as I’ve been mulling this broad topic quite a bit recently…
1) Why should drones be treated differently than any other use of force by the US government? A target is either legal or illegal… appropriate or inappropriate. It doesn’t matter whether an F-16, F-22, B-1 or MQ-9 is unloading the armament. The discussion should center on processes built to effectively identify targets.
2) Computer identification of targets is not necessarily bad. As the author stated, it can process data more accurately and effectively. What does it matter if the target is a large child or small adult? If it is firing a gun at American troops, then I don’t have a problem with the target’s legality. If smart people figure out the technology to identify enemy posture as effectively as the most accurate human, then why not use it? Where is the moral dilemma?
Brian: Great question and I wish I had a clear answer. What I would offer is that we can take more risk in targeting using an RPA or robotic weapon system. I am not sure we would be striking targets in certain countries if we had to put a jet and a pilot over the target. With a robot there is less risk, and less risk alters the calculus in striking the target…and thus begins an escalation in the number of strikes and where/when they occur.
So it is not really about the platform that delivers the weapon…it is more about placing the platform in harm’s way. Just me…but it seems we are engaging in more strikes with less risk… I don’t have the numbers to back that up. But I do know there are a lot fewer folks heading to Gitmo and more meeting Allah… is that due to RPAs? Or is it something else? Whatever it is, we can send more folks to their maker with a lot less risk.
I might be wrong… but less risk seems to equate to more conflict. I think there was a Star Trek episode on that one…
I don’t necessarily disagree with your explanation of the executive’s logic, but I still stand by the fundamental logic itself. Either striking the target is the right thing to do, or it is not. If it is the right thing to do, the mechanism of the strike should be irrelevant. So the increase in strikes due to lower risk merely unmasks the underlying logic behind our foreign policy. The problem with drone strikes isn’t so much that drones lower risk, but that the government thinks this is an appropriate use of force. It’s a problem that we don’t acknowledge a state’s sovereignty; a lower risk of failure SHOULD not lead to an inappropriate decision to strike. Instead, we’re finding that the government has very low standards for the use of force, and it is precisely this that should be criticized, rather than the tool implementing the policy.
Correction: the mechanism of strike isn’t irrelevant, but the practical cost-benefit analysis of a wartime encounter should now be the driving logic. But only after the decision that a target is legal and desirable has been established.
I don’t disagree, but I don’t know if this is what’s currently in the decision-making process. The threshold has been lowered. Heck, you are stepping into the RPA world…let me know. Vooj and I have had some great convos on the matter…