This essay refocuses the debate over autonomous weapons systems to consider the potentially salutary effects of the evolving technology. Law does not exist in a vacuum and cannot evolve in the abstract. Jus in bello norms should be developed in light of the overarching humanitarian goals, particularly since such weapons are not "inherently unlawful or unethical" in all circumstances. This essay considers whether a preemptive ban on autonomous weapons systems is likely to be effective and enforceable. It examines the grounds potentially justifying a preemptive ban, concluding that there is little evidence that such a ban would advance humanitarian goals because of a foreseeable lack of complete adherence. The essay concludes by suggesting three affirmative values that would be served by fully vetted and field-tested technological advances represented by autonomous weapons. Properly developed and deployed, autonomous weapons might well advance the core purposes of jus in bello by helping to balance the twin imperatives of military necessity and humanitarian interests.
CONTENTS

I. INTRODUCTION
II. THE LAW AND TECHNOLOGICAL ADAPTATION
III. AUTONOMOUS WEAPONS AS THE NEW LEGAL FRONTIER
IV. HOW LAW AND TECHNOLOGY CAN DEVELOP CONCURRENTLY
V. AUTONOMOUS WEAPONS IN COMPLIANCE WITH JUS IN BELLO
VI. CONCLUSION

I. INTRODUCTION
The prospect of autonomous weapons as a regularized feature of future wars carries existential implications for the entire field of law regulating the use of force. Such autonomous weapons may over time become so commonplace and so uncontrollable that the idea of human decision-making, based on good faith efforts to comply with legal norms, as an outer limit to war making becomes eviscerated. At the time of this writing, autonomous weapons thus face preemptive demonization aimed at freezing their development or proliferation. Proponents of a complete ban on autonomous weapons frame the issue as centering on the prohibition of so-called "killer robots." (2) This notion admittedly strikes a visceral nerve among the public at large, not to mention military experts and ethicists. The law of war is an inescapable aspect of the dialogue among people of good conscience who appreciate the awful consequences inherent in the waging of modern wars. For those seeking a preemptory ban on autonomous systems, such unease overshadows the costs of failing to explore the limits of technology or to develop a full understanding of the information interface between humans and autonomous weapons that might well alter the ethical compass.
Indeed, the Secretary General of the United Nations questioned whether it can ever be "morally acceptable to delegate decisions about the use of lethal force to such systems" and wondered aloud whether the lack of individual culpability against a machine launching lethal force would ever make it "legal or ethical to deploy such systems." (3) The U.N. Special Rapporteur on the subject goes a step further to specifically recommend an immediate "national moratorium on at least the testing, production, assembly, transfer, acquisition, deployment of LARs [lethal autonomous robotics] until such time as an internationally agreed upon framework on the future of LARs has been established." (4) When the debate is framed as one in which "tireless war machines" hold the potential to "take decisions of life or death out of human hands," (5) such concerns seem eminently appropriate for preserving the integrity of international efforts to protect innocent lives during conflicts.
This short essay nevertheless aims to refocus the debate. Law does not exist in a vacuum. Legal norms always operate in synergy with changing contexts and often rapidly emerging challenges. They establish societal expectations and shape correlative rights in accordance with the shared experiences of other states. The experience of warfighters is an essential component of the effort to regulate armed conflicts. As with any emerging technology for waging war, the legal regime serves to reinforce accepted value structures. In perhaps his most famous observation, Justice Oliver Wendell Holmes noted that
"the life of the law has not been logic: it has been experience. The felt necessities of the time, the prevalent moral and political theories, intuitions of public policy, avowed or unconscious, even the prejudices which judges share with their fellow-men, have had a good deal more to do than the syllogism in determining the rules by which men should be governed." (6)
Because normative shifts in the law never serve as a complete tabula rasa, they, as well as the policy preferences that animate such legal reforms, do not march with the linear certainty of mathematical extrapolation or algebraic formulae. They move instead in episodic response to shifting valuations and perceptions in light of the ever-changing tide of human events and the inevitable technological innovations that have shaped our world since the Enlightenment.
This essay will question the overall validity of the legal assumptions marshalled to support a preemptive ban on autonomous weapons. It concludes by postulating some salutary purposes that could be served by the development of lethal autonomous technologies, or that could at a minimum guide future research toward solving some of the most vexing issues the warrior faces in modern combat. Autonomous weapons platforms that operate in conformity with international humanitarian law could actually advance compliance with the law in the aggregate, while minimizing the human costs of conflict. This essay will first consider the arguments for a preemptive ban and conclude by postulating three possible contributions that fielded autonomous weapons could make to legal norms. Such salutary considerations have been almost entirely overlooked in the passionate positions already staked out. Given the seemingly inevitable pace of technological change, it seems most opportune to think prospectively about the affirmative role that autonomous weapons could serve in actually reinforcing compliance with the laws and customs of warfare, rather than simply assuming that they cannot be compatible. If we know anything about the pace of technological innovation over the past two decades, we should recognize that it proceeds far faster than preconceptions predict. An affirmative vision of how to make technology serve larger societal interests is far more valuable when it is articulated early enough to actually shape innovation toward social goals, rather than forcing legal and social norms to mold themselves around newly fielded technologies. So it should be with the interface between autonomous weapons systems and the laws and customs of warfare.
II. THE LAW AND TECHNOLOGICAL ADAPTATION
At the outset, it must be understood that the campaign for a moratorium on all autonomous weapons systems must be assessed against the backdrop of personalities and politics. Some of the same people who championed the 1997 Ottawa Convention, (7) banning the use, production, or transfer of anti-personnel landmines, figure prominently in similar efforts regarding autonomous weapons. The game is the same, but played on a different field before a different audience and with different rules. The minimal decline in landmine use worldwide over the past two decades, (8) and the devastating spread of improvised explosive devices (which are their functional equivalent), (9) should provide an instructive and cautionary example for advocates who automatically assume the utility of a total ban on autonomous weapons systems. Total bans have never been wholly effective in changing state practice as a standalone matter. This is partially true because there will always be an incentive for some actors to gain disproportionate advantages by ignoring such constraints. (10) Similarly, the strident movement for a moratorium on the development or deployment of autonomous weapons is squarely situated against the international angst caused by the widespread use of drones to strike enemies of the United States on foreign soil without the public expression of support by the affected sovereign state. The push for a ban on autonomous weapons has at times had the flavor of a politicized push to punish the United States for its controversial use of extraterritorial drone strikes. (11) There is a strong waft of politics that cannot be entirely disentangled from the legal posturing. Legal evolution never serves as a tabula rasa; thus, the lawyer/counselor must be as objective as possible in framing the issues based on law and precedent rather than passion and politics.
In this vein, the history of the law of armed conflict itself provides reason to believe that the law will adapt as needed in the wake of technological innovation. Throughout its history, jus in bello (12) has adapted at the precise points of friction with changing technology in order to reinforce the validity of legal regulation, while preserving the ability to wage war lawfully. Esteemed scholars have advocated precisely this view on the basis that autonomous weapons "are not inherently unlawful or unethical," and that their responsible and effective use can conceivably be situated within a modified understanding of legal and ethical norms. (13) It cannot be overstated that the moral tension between evolving technology and the application of the precise legal rules designed to minimize the cruelty and inhumanity during conflict represents one of the enduring threads within the field. Shifting legal norms have always echoed a strand of Just War thinking that has sought to define the proper bounds for waging war. (14) St. Augustine wrote that peace "is not sought in order to provoke war, but war is waged in order to attain peace. Be a peacemaker, then, even by fighting, so that through your victory you might bring those whom you defeated to the advantages of peace." (15)
Law and technology have operated in a healthy tension with each...