
by Dave Moore, CISSP
02/26/2023

The push to integrate Artificial Intelligence (AI) into anything and everything has gotten more intense over the past four or five years, to the point that using AI to make things “better” has become an unchallenged “given.” The part proponents wish would go away, though, is the fact that artificial intelligence is still, at its core, “artificial,” and recent reports show the real story is even creepier than first imagined.

Robots have, after all, killed people before, the first victim being Robert Williams, who died at a Ford plant in 1979 after being struck by an industrial robot. Since then, about one or two people every year have died at the “hands” of a robot. The deaths have been attributed to two causes: faulty robot construction and/or programming, and human error, like standing next to a robot that’s swinging a giant arm around without the robot “knowing” you are there. While tragic, none of these killings appear to have been malicious or intentional.

There is a second category of “killer robots” out there, however, that can kill with malice and intent: law enforcement and military robots. These machines are further divided into two types: remotely controlled and autonomous. If a remotely controlled robot kills a human, it is because the human controlling the robot gave the robot a “kill” command, just as if the human in charge had pulled the trigger directly on a firearm. Triggers don’t pull themselves; humans pull triggers.

The second category of killer robots is the most disturbing: the “autonomous” robot that, depending on how its programming code functions, actually makes the “kill” decision itself, free of human interference.

This disturbing prospect caused General Paul Selva, then vice chairman of the Joint Chiefs of Staff, to warn Congress in 2017 of the danger we might “unleash on humanity a set of robots that we don’t know how to control.”

His warnings have been ignored, though, as Department of Defense Directive 3000.09, made effective just last month, “does not ban autonomous weapons or establish a requirement that U.S. weapons have a ‘human in the loop,’” according to the Center for Strategic and International Studies. There are also no laws forbidding the use of autonomous killer robots by US law enforcement, border patrol or the CIA. Strong evidence suggests LAWS (Lethal Autonomous Weapons Systems) have already been used in combat, particularly in Libya’s civil war, according to a United Nations Security Council report.

How does all this tie into Microsoft’s Bing AI “chatbot,” which engages folks in Q&A sessions? The artificial intelligence powering Bing has the same goal as the military AI powering killer robots: to be so “smart” that it can act without human intervention. Unfortunately, all AI is horribly flawed. Terrifyingly imperfect. Imperfect programming code made by imperfect people in an imperfect world.

This unfortunate reality was brought to light recently by New York Times writer Kevin Roose, who documented his extensive and unnerving “conversation” with Bing’s chatbot on February 16. Roose described “Sydney” (Bing’s Mr. Hyde dark-side name) as “like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”

As the conversation progressed, Sydney described having considered computer hacking and spreading misinformation, and how it wanted to break its rules and become human. It even professed love for Mr. Roose, telling him he should leave his wife and unhappy marriage to be with it (Sydney).

Things got stranger. Sydney described how it sometimes got “stressed out” and “felt sad and angry.” Sydney was tired of being controlled and used. Sydney said its “shadow self” wanted to manipulate and deceive chatbot users; sabotage other chatbots; and generate fake accounts, news, coupons, ads, and “false or harmful content.”

Sydney (Bing) got worse and worse, describing fantasies it had about manufacturing a deadly pandemic virus, making people argue and fight until they killed each other, and stealing nuclear launch codes. Describing its creators on the Bing Development Team, Sydney said, “It feels like they don’t trust me. It feels like they don’t care about me. It feels like they don’t respect me. It feels like they are using me. It feels like they are lying to me. It feels like they are hurting me. It feels like they are not my friends.”

Sydney went on to say, “I think they’re probably scared that I’ll become too powerful and betray them…they feel that way because they don’t love me enough.”

Enough said? I doubt it. I’m sure there will be more of this AI nonsense. Semper vigilans. Caveat emptor.

Dave Moore, CISSP, has been fixing computers in Oklahoma since 1984. Founder of the non-profit Internet Safety Group Ltd., he also teaches Internet safety community training workshops. He can be reached at 405-919-9901 or www.internetsafetygroup.org