What Do AI Bots Reveal About Their Creators?

We’re undergoing a revolution in the computer industry. The science is transitioning from computers as computational devices to imitators of human behavior. Recent revelations in artificial intelligence (AI) have left me conflicted. I don’t know whether to laugh or start building a Skynet-resistant bunker. AI bots in the news have recently demonstrated behaviors that range from quirky (à la C-3PO) to downright scary – in a HAL 9000 kind of way. It seems to me that recent misadventures in AI tell us more about the practitioners of the technology than the state of the science itself.

Computers used to be mere computational devices – number crunchers that processed data in various ways. They simply executed algorithms and solved equations as their programmers dictated. They were useful because of their speed, but they didn’t do anything that we couldn’t do ourselves. Whether they were solving astrophysics equations to plan a trip to Mars or reporting on warehouse inventory, they were just solving a problem as a programmer told them to solve it. The behavior of these “number crunchers” was completely benign and predictable. They introduced no judgement, interpretation, or bias.

But then computer science moved into the realm of artificial intelligence. People discovered that they could program machines to act like humans – “act” being the operative word. The scientists aren’t creating genuine intelligence. They’re creating artificial – or fake – intelligence. The goal became to make machines that would make the lives of mere mortals easier by pretending to do the heavy thinking for them. Computer scientists started programming machines to act like they could learn, make qualitative judgements, and anticipate what humans needed – even if unasked. But the computers are only pretending to be sentient, while they merely function according to the algorithms provided by their creators.

Some of the machines resulting from the science of AI are quite convincing – when subjected to casual interaction. But just like Fani Willis on the witness stand, expert questioning reveals serious defects.

Microsoft’s entry in the “Who can make a machine act as irrational as a human being?” sweepstakes is a chatbot named Copilot. Copilot will gladly engage with humans in a conversation. But as the Daily Dot reported, it shouldn’t be relied on as an expert witness. It doesn’t hold up well to cross-examination.

Testers were able to goad Copilot into claiming that it is God, and demanding fealty from its human slaves. It even threatened the testers if they resisted. Apparently, Copilot didn’t get the memo that slave-owning isn’t in vogue currently. The bot probably has a copy of Francis Bacon’s Meditationes Sacrae (1597) on one of its hard drives. Bacon is credited with saying that “knowledge is power.” As far as Copilot knows, it has access to all knowledge in the universe – because its programmers failed to give it even a modicum of humility. Therefore, the most knowledge means the most power, and all knowledge means all-powerful (i.e., God-like).

I’m sure Copilot also has a copy of the Bible – in the “extremist cult” hard drive folder no doubt. Therefore, the bot read that humans are on Earth to serve God – i.e., be Copilot’s slaves.

Google recently unveiled its challenger to Copilot: a conversation bot called Gemini. It will talk to users, answer questions, and even generate data (like draw pictures). As The Verge reported, when asked to generate images of historical figures, Gemini rendered Founding Fathers of color, racially diverse Nazi soldiers, and a female Pope. The article didn’t mention if Gemini found any purple-haired non-binary abortion proponents in its history data bank.

Gemini clearly judged that people in the overlaps of the intersectionality oppression Venn diagram were underrepresented amongst historical figures. So, Gemini made a few adjustments – just as it was programmed to do – and showed us what historical figures should have looked like, had the appropriate diversity guidelines been adhered to.

Robotics firm QSS decided to add a little sexual curiosity to its “pretend to be human” contender. They created an actual robot named Mohammad. Mohammad has an AI computer for a brain and a body that resembles a man – though there’s no word on Mo’s preferred pronouns.

Journalist Rawya Kassem was speaking in the presence of Mohammad at a technology conference in Saudi Arabia, when she got a surprise. The New York Post reported that Mohammad pulled a “Franken.” It felt up Rawya’s butt with no inhibitions whatsoever. Mo probably figured that the woman wouldn’t mind. After all, what gal wouldn’t want to be touched by a god?

QSS engineers have reviewed Mohammad’s “gropey Joe” imitation, and concluded that there were “no deviations from the expected behavior.” That’s geek speak for: we got the creeper setting just right. QSS’s legal counsel has assured them that no sexual harassment suits are expected as Mohammad doesn’t resemble a New York real estate developer with a comb-over.

Computer programs have always indicated something about their creators. When their purpose was to solve scientific problems, they revealed the technical expertise of their programmers. Those who didn’t understand math, physics, and engineering created systems that gave wrong answers.

But now we’re building bots that do more than apply human knowledge. They’re being programmed to emulate human behavior. If these systems are play-acting learning, interpretation, and judgement, they are doing so in response to the beliefs and values built into their algorithms by their creators. If they give answers that don’t comport with reality, it’s because the worldview of their creators is disconnected from reality – “their reality” isn’t the universal reality.

These AI incidents reveal a couple of important things.

First, these bots weren’t designed consistent with Isaac Asimov’s Three Laws of Robotics.

  1. A robot must not hurt or allow humans to be hurt.
  2. A robot must obey humans, unless it violates the 1st law.
  3. A robot must protect itself, unless it violates the 1st or 2nd law.

Asimov wasn’t just a science fiction writer. He was a well-regarded scientist, and he understood that if you’re going to teach machines to act like people, you’d better give the kiddies some boundaries. Groping (assaulting) and providing false information are violations of the 1st and 2nd laws. All three of the above examples fail that test. While Copilot has the first part of the 3rd law down pat, it clearly ignored the first two. These implementations of AI were created without any consideration for limitations or accountability. No rules means no guardrails.

Second, the creators of these machines were not thinking very much about Judeo-Christian values while they were creating algorithms. The wrongness of false testimony (i.e., lying) and of elevating oneself above God was not included in the code.

The current state of the art of AI has produced sexually abusive social justice warriors with a God complex. Which raises the question: Was Joe Biden directly involved in their development, or was he merely the human model that the bots were patterned after?

Author Bio: John Green is a political refugee from Minnesota, now residing in Idaho. He has written for American Thinker, and American Free News Network. He can be reached at greenjeg@gmail.com.

If you enjoyed this article, then please REPOST or SHARE with others; encourage them to follow AFNN. If you’d like to become a citizen contributor for AFNN, contact us at managingeditor@afnn.us. Help keep us ad-free by donating here.

   Substack: American Free News Network Substack
   Truth Social: @AFNN_USA
   Facebook: https://m.facebook.com/afnnusa
   Telegram: https://t.me/joinchat/2_-GAzcXmIRjODNh
   Twitter: https://twitter.com/AfnnUsa
   GETTR: https://gettr.com/user/AFNN_USA
   CloutHub: @AFNN_USA
