By Cruz Marquis
December 3, 2022
Under new city guidelines approved on Tuesday, police may deploy robots that can “be used as a deadly force option.”
Science fiction has metamorphosed into science fact in San Francisco, where new guidelines approved on Tuesday permit the use of robots capable of killing. A document produced by the state, entitled “Law Enforcement Equipment Policy,” outlines seven different robots possessed by the San Francisco Police Department and their characteristics. Among them are the “REMOTEC F5A,” a stair-climbing robot with an arm capable of lifting up to 85 lbs., and the “QinetiQ TALON,” which has explosive ordnance disposal (EOD) capabilities and can also perform “security” and “defense” operations. (The same document also casually admits the department’s ownership of grenades, fully automatic weapons, and Mine Resistant Ambush Protected (MRAP) vehicles, which seem more consistent with the Battle of Fallujah than with law enforcement.)
In the table entry labeled “authorized use,” the proposal graciously limits the conditions under which citizens may be slain by robots to situations where “risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.” Though this is far from carte blanche, there is a distinct horror in the fact that the option of automated killing is permitted at all.
Reason notes that the police definition of “risk of loss of life” to their agents is elastic and prone to liberal interpretation. Enforcing public order is a just and natural duty of the state, and in carrying it out the enforcers put themselves in danger every day. The very act of putting on a police uniform and patrolling a beat risks loss of life; the 54 LEOs killed by gunfire this year testify to that. By this rubric, the case can be made that virtually any interaction with the public puts officers in danger and thus justifies the deployment of the killer robots.
The Electronic Frontier Foundation (EFF), as quoted by Reason, gave a practical example of this: “police could bring armed robots to every arrest, and every execution of a warrant to search a house or vehicle or device … Depending on how police choose to define the words ‘critical’ or ‘exigent,’ police might even bring armed robots to a protest.”
Opposing killer robots should hardly be controversial, yet the problem of their development and deployment by states has grown so threatening that campaigns have sprung up to stop it. The whimsically named group Stop Killer Robots was founded a decade ago and now coordinates activism among more than 180 organizations. As they see it, the crux of the problem is “Digital Dehumanization”: the inability of machines to see people qua people, since they can only understand numbers. Reducing a person to empirical characteristics to be analyzed like a math problem diminishes one’s humanity, and this lack of recognition leads to a dearth of ethics: the machine cannot weigh the moral implications of using deadly force. Even if a person is directly controlling the robot, there is a layer of separation between the controller and the person subject to force. Simply put, killing from a distance through the mediator of a machine is psychologically easier than doing it face to face with nothing in between. It is basic economics that lowering the cost of something increases its occurrence; here, that means more use of deadly force.
Analogous to the use of killer robots for policing is their use in war. The justification for deploying unmanned weapons is the same in both cases: they reduce the risk of injury or death to the soldier or policeman while still allowing them to apply deadly force. Emblematic of this in America’s recent wars is the prolific use of drone strikes in the Middle East.
Upon then-President Trump’s ascension to the highest office in the land, the Council on Foreign Relations described the drone program as the “targeted killing program that has been the cornerstone of U.S. counterterrorism strategy over the past eight years.” These killer robots were deployed by his predecessor over 540 times, killing 3,797 people, 324 of whom were civilians.
The impact of killer robots on defense policy was predictable and conveniently summed up by President Obama in an off-the-cuff remark in 2011: “Turns out I’m really good at killing people. Didn’t know that was gonna be a strong suit of mine.” Needless to say, speaking so callously of unmanned assassination from the air is ghoulish in the extreme and contrasts with the wars of generations gone before. The Second World War histories of Stephen Ambrose, All Quiet on the Western Front, Seven Pillars of Wisdom, or any other book dealing with killing in war before this automation never describe the act of taking human life in terms like those the 44th President used. With the advent of drone warfare, the psychological impact of taking human life was regimented and turned into a bureaucracy in which no one shoulders responsibility.
Importing killer robots into domestic policing will do the same thing drones did to the US military. Regimenting the application of force, separating the controller from the trigger pulling, and reducing the citizenry to so much data devoid of humanity leave the outcome in no doubt: the police will use more force.
Now is the time to stop the proliferation of killer robots in police departments, before a localized problem becomes a general one. San Francisco made the wrong choice in approving the use of deadly force by robots, regardless of what rules of engagement are placed on them. This must be a wake-up call for those who cherish liberty and the sanctity of human life: the killer robots must be outlawed.