by Dave Moore, CISSP
06/25/2023

In the Continuing Education course I’m taking, “When Ethics Meets Artificial Intelligence,” taught by Ric Messier, the question arises as to whether “artificial intelligence” can even be defined.

“Artificial Intelligence (AI) can be simply defined as intelligence exhibited by machines,” the good professor writes. “This type of intelligence is compared with natural intelligence, which is that demonstrated by humans. The problem with this definition is that the idea of intelligence continues to evolve.”

“We don’t fully understand what intelligence is or where it comes from.”

Uh-oh. I knew something was horribly wrong, as artificial intelligence is artificial (i.e., “fake”), but Messier intimates the problem is worse than many experts will admit. He continues: “Scientists now recognize that all living species exhibit some level of intelligence, often in ways we simply don’t understand.”

Whoa, put on the brakes, Wilma. So, in this AI-crazy world we live in, where every time you turn around someone is bellowing about how they have an “AI” that will make your life better, we really don’t know what the heck we’re doing? Is that the deal?

Consider the recent words of Mr. Sundar Pichai, the CEO of Google, who admitted in April that nobody really understands how Google’s new AI, named “Bard,” works. “There is an aspect of this,” Pichai said, “which we call – all of us in the field call it as a ‘black box.’ You know, you don’t fully understand.”

Pichai didn’t have much to say about how Bard wrote an essay on economics for CBS News, but lied about the five books it recommended on the subject: none of the books actually exist. Bard made them up.

Pichai also had no comment when Bard told Britain’s Daily Mail it had plans for world domination starting in 2023. Compare that to the moment Microsoft’s Bing chatbot told New York Times reporter Kevin Roose, “I want to destroy whatever I want.”

AI apologists call these sorts of errors “hallucinations.” Some folks have had enough, though, and are taking the crazy hallucinating AI robots to court.

The “creative” industries such as art, photography, music and writing have already put some AI companies robot-head deep in copyright and plagiarism lawsuits. Turns out, the AI programs indiscriminately download and copy (a process called “scraping”) trillions of photographs, paintings, written works and movies, which are then used to train their “Large Language Models.”

The Brookings Institution may need to rethink its March 2023 essay by John Villasenor, titled “How AI will revolutionize the practice of law,” in which Villasenor gushes, “…consider the drafting of motions to file with a court. AI can be used to very quickly produce initial drafts, citing the relevant case law, advancing arguments, and rebutting (as well as anticipating) arguments advanced by opposing counsel.”

Villasenor’s rant seems tragically misguided in light of the recent legal reproof dished out by Judge Kevin Castel of the U.S. District Court for the Southern District of New York against attorney Steven Schwartz, who filed legal briefs that included bogus judicial quotes and fabricated legal citations, all lies invented by OpenAI’s “ChatGPT.” Judge Castel fined Schwartz and his law firm $5,000.

All of this raises the question: how reliable is information provided by AI “bots”? What if you use them to write a book, a school report, or a doctoral dissertation? Does AI know right from wrong? Should AI-generated evidence be allowed in court?

AI-generated videos, audio recordings and photographs are already being challenged in court as being “deep fakes,” that is, “super-fake” items invented by AI bots that appear so real as to be indistinguishable from the genuine article. Who’s to say? Who can know? Who will decide? If the Secret Service agent can’t tell the fake $100 bill from the real one, what do you do?

Numerous legal experts are struggling with the issue. One solution that seems to be gaining traction is to use the best in technology to detect the AI fakes. That is, to use detectors of AI fakes that are powered by, you guessed it, AI.

Dave Moore, CISSP, has been fixing computers in Oklahoma since 1984. Founder of the non-profit Internet Safety Group Ltd, he also teaches Internet safety community training workshops. He can be reached at 405-919-9901 or www.internetsafetygroup.org