Autonomy or Humanity: Robotic Interrogators, Weaponry, and Police Forces

Few games have inspired as much devotion and careful study throughout history as chess. Yet chess has changed drastically in the past 20 years. Advances in computing have produced chess engines that can beat even the greatest grandmasters, such as Garry Kasparov. That same evolution enabled a new form of the game to emerge: Advanced Chess, which pairs human players with supporting computer systems to reach a level of play that has generally surpassed both the most sophisticated engines and the most skilled humans playing alone. The lesson of chess is that, in many cases, the most effective systems are those that combine humans with powerful computers. The same may hold true for the military, big data, and the police.

In the military, autonomous robots pose very real threats to humanitarian law and democracy. Ian Kerr and Katie Szilagyi divide robotic weaponry into three categories: remote-controlled weapons, semi-autonomous robots, and lethal autonomous robots. While the first two are already widely used, the third is only now beginning to be deployed in places like the Korean DMZ. As Kerr and Szilagyi note, fully autonomous robots may violate humanitarian law by failing to properly discriminate between combatants and non-combatants. And even if the technology were mature enough to implement ethics on the battlefield, autonomous military robots would still distance humans from the costs of war by reducing casualties on only one side.

Are such robots truly more effective at waging war than a combination of humans and robots (semi-autonomous systems)? History suggests that robots may replace humans in some respects, but it is far more likely that a combination of humans and robots will yield the best outcomes. If a human supervises robots in the field, partially autonomous systems could be much safer and more likely to respect international law. A greater focus on semi-autonomous robots in warfare may also limit the social impacts that Daniel Suarez noted in his TED talk "The kill decision shouldn't belong to a robot." Keeping humans in the loop, so to speak, helps ensure that there is a real cost to war and that humans remain conscious of the impacts of military actions despite being somewhat removed from them.

Similar initiatives have also been proposed to introduce autonomy into police forces. These efforts focus on automatic citations that can be quickly processed and robotic interrogation units that use physiological information to detect whether a suspect is lying. Here, again, we find that a balance between robotic autonomy and human intervention will likely prove most effective at mitigating concerns. In "Confronting Automated Law Enforcement," the authors outline a variety of concerns and considerations in deploying automated law enforcement schemes. Their model suggests that existing technologies could be combined to automatically ticket people for speeding or to immediately catch criminals through automated surveillance.

Such an approach raises a variety of issues: privacy, the loss of legal discretion, a higher burden of processing infraction appeals, and more. If the system is implemented incorrectly or laws are improperly translated into computer code, it may enforce rules the law never intended. Again, relying on a human to arbitrate disputed cases would improve the system's effectiveness. Humans could apply their discretion to infractions flagged by computers, helping ensure that police forces ticket the right people and reducing the false-positive rates inherent in large automated systems.

Robotic interrogations are no different. Humans have an innate ability to steer a conversation, probing suspects where necessary to extract information. Robots, on the other hand, can collect extensive physiological data on suspects, delivering real-time indications of whether a suspect may be deceiving investigators. A human-assisted robotic interrogation may therefore prove the most effective.

Whether in the military or in law enforcement, full automation gives computers control over every decision: it grants robots the power to navigate ethical dilemmas or extract a confession. When we cede power to autonomous systems, though, we risk acting on data before understanding all the relevant information. This tendency toward preemptive action, well outlined in Ian Kerr and Jessica Earle's “Prediction, Preemption, Presumption: How Big Data Threatens Big Picture Privacy,” can go too far when legal punishment or human lives are on the line. The military and law enforcement agencies may want to take a cue from the world of chess when thinking about deploying autonomous systems. Oftentimes, a little humanity can help robots more consistently make the right decisions.