by Dave Moore, CISSP

05/29/2022

Theoretical physicist Stephen Hawking, arguably one of the smartest people in history, warned in an interview with the BBC that “the development of full artificial intelligence (AI) could spell the end of the human race.”

Hawking went on to say, at the Web Summit technology conference in Lisbon, Portugal, “AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”

In 2015, dozens of brainiac scientists and technology experts, including celebrity physicists like Hawking and Elon Musk, signed a letter warning that, even though AI could be used for great good, it could also have potentially devastating, dangerous and unintended uses.

Then, in 2018, thousands of scientists and leaders in artificial intelligence, seeing where governments were heading with the military use of AI, signed a pledge declaring, “we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”

The Future of Life Lethal Autonomous Weapons Pledge went on to state, “lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.”

In other words, just say “No” to killer robots.

Other equally dangerous uses for AI exist. While not employing bullets or death-ray technology, AI is also being used to create and disseminate information, and potentially, propaganda and disinformation. Did you know newspaper articles and columns are being written by robots?

That’s right: newspaper articles and columns are being written by robots.

Robot-writing technology has been around for a while. In 2015, Digiday.com reported that the Washington Post’s robot reporter, named “Heliograf,” had written hundreds of published articles covering everything from the Olympics and political elections to football games. They went on to describe how the Associated Press had used AI robots to report earnings coverage, and how USA Today was using it to create videos, all under the harmless-sounding name of “automated journalism.”

In 2019, the New York Times’ Jaclyn Peiser reported, “Roughly a third of the content published by Bloomberg News uses some form of automated technology. The system used by the company, Cyborg, is able to assist reporters in churning out thousands of articles on company earnings reports each quarter.” She also reported that the Australian version of The Guardian had published its first “machine assisted” article, a story about political donations, and noted the Associated Press’s use of “Automated Insights,” a company that produces billions of robo-stories every year.

Also known as “algorithmic journalism” or “robot journalism,” automated journalism refers to computers gathering information from various sources and, using specialized programming with phony intelligence (ahem, sorry) “artificial” intelligence, attempting to assemble the information in a way that will fool human beings into thinking other human beings actually wrote a newspaper article or magazine column. The results vary, from downright laughable to pretty darned convincing.

How can you tell if something has been robo-written? Well, the good folks at the MIT-IBM Watson AI Lab and Harvard University have invented a robot to guess whether something was written by a robot or a human. Interesting, huh? Check it out at http://gltr.io/dist/index.html, and then test some text for yourself, such as things in this newspaper. Let me know what you find out.

Dave Moore, CISSP, has been fixing computers in Oklahoma since 1984. Founder of the non-profit Internet Safety Group Ltd, he also teaches Internet safety community training workshops. He can be reached at 405-919-9901 or internetsafetygroup.org