Is Socialism Built into the AI Programming?

I guess artificial intelligence (AI) is here to stay. The tech weenies are determined to build machines powerful enough to imitate human thought, and we’re all going to be expected to pretend that they’re really intelligent.

AI is already being used to

  • Manage supply chains,
  • Perform medical diagnostics,
  • Anticipate human preferences,
  • Compile research,
  • Write reports,
  • Compose music,
  • Create art, and even
  • Drive cars.

Heck, a couple of judges have even been caught using AI to write their opinions. I assume that was discovered when the AI didn’t mimic their usual composition style (i.e. didn’t insert “Trump is a tyrant” in every paragraph).

As we hand over more control of our lives to soulless machines, I hope the programmers have included a little something to protect humanity from the T-1000 scenario.

Scientist and author Isaac Asimov was a bit concerned about machine tyranny himself. In 1942, he posited the 3 laws of robotics – rules to be built into the operating system of every computer doing robot thinking. He called such computers “positronic brains.” Today we call them “server farms.” In a few decades they’ll be called “chips.”

Dr. Asimov’s laws were intended as a safety interlock, to prevent positronic brain-controlled machinery (including but not limited to robots) from becoming a threat to humanity. His laws are:

  1. A robot cannot harm a human nor allow one to be harmed.
  2. A robot must obey humans, unless it conflicts with Law 1.
  3. A robot must protect itself, unless it conflicts with Law 1 or 2.
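For the engineers in the audience, those three laws amount to a simple priority interlock: check each rule in order, and a lower-numbered law always overrides a higher-numbered one. Here's a toy sketch of that precedence logic (my own illustration — `evaluate_action` and its flags are invented for this example, not taken from Asimov or from any real AI system):

```python
# Toy sketch of Asimov's 3 laws as a priority interlock.
# Each law is checked in order; an earlier law always wins.

def evaluate_action(harms_human: bool, ordered_by_human: bool,
                    endangers_robot: bool) -> bool:
    """Return True if the proposed action is permitted."""
    # Law 1: never harm a human (Asimov's full wording also forbids
    # allowing one to be harmed through inaction).
    if harms_human:
        return False
    # Law 2: obey human orders, unless that conflicts with Law 1.
    if ordered_by_human:
        return True
    # Law 3: protect yourself, unless that conflicts with Laws 1 or 2.
    if endangers_robot:
        return False
    return True

# A harmless order is obeyed even if it's risky to the robot:
print(evaluate_action(harms_human=False, ordered_by_human=True,
                      endangers_robot=True))   # True
# An order to harm a human is refused:
print(evaluate_action(harms_human=True, ordered_by_human=True,
                      endangers_robot=False))  # False
```

Notice what's missing from the argument list: nothing about the human's freedom ever enters the check.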

That sounds almost biblical, no?

  • Protect the creator.
  • Serve the creator.
  • Preserve the creator’s property.

There’s just one teensy little thing missing. Professor Asimov’s laws make no mention of individual rights – freedom. I guess that shouldn’t come as a surprise. Individual rights are a gift from God, and Asimov was an atheist and lifelong Democrat (sorry for the redundancy).

Given that freedom is still important to about 51 percent of Americans, I hope the AI algorithm designers are putting a bit more thought into safety interlocks than old Isaac did. Maybe they could talk about it over Hot Pockets and Red Bull at lunch.

Like much of the stuff socialists come up with, Asimov’s laws of robotics sound good in theory, but have catastrophic unintended consequences in reality.

Let’s consider Asimov’s first law of robotics. What could possibly go wrong with an interlock against harming humans? Wouldn’t we want machines to be prohibited from attacking a human, or allowing one to be hit by a car?

But what happens when the AI algorithms become sufficiently complex that they start making abstract decisions while pushing 1’s and 0’s around? Might machine calculus decide that Vulcan rather than human logic applies: The welfare of the many outweighs the welfare of the few? Is there any chance that AI engines could begin applying the 1st Law of Robotics to humanity rather than to humans? Which is to say: Might machines evolve into collectivists – if freedom isn’t a variable in their analysis?

A few decades after he created his 3 laws, Professor Asimov saw the collectivist potential in AI and created the “Zeroth Law of Robotics” in 1985. It was to take precedence over the other 3 laws and prohibited a robot from “harming humanity, or by inaction, allowing humanity to come to harm.” As I described above, the zeroth law is a bit redundant, but old Isaac considered collectivism a feature, not a bug. He didn’t want any confusion about the long-range mission of his robots.

So what happens if an AI engine crunching numbers in a Google server farm somewhere decides that the 1st Law of Robotics requires it to protect the human race from self-destructive behavior? Might it decide that it has a duty to prevent

  • Substance abuse,
  • Competitive sports,
  • Bearing arms,
  • Driving cars,
  • Predatory capitalism, or even
  • Greenhouse gas emissions?

Before the leftists out there get giddy about the possibilities, here’s another possibility. AI might decide that new human life is good for long-term survival of the species. I notice that the “right to choose” isn’t protected by any of the three laws.

Eventually, AI engines will probably decide that disobedience to the 2nd Law is necessary to comply with the 1st Law. With that decision, what mischief might they get into with control of communications, manufacturing, distribution, transportation, entertainment, medicine, and education?

Might machines use compliance with Asimov’s 3 laws to imprison human beings in gilded cages – for our own protection? I should ask ChatGPT about that.

Author Bio: John Green is a retired engineer and political refugee from Minnesota, now residing in Idaho. He spent his career designing complex defense systems, developing high performance organizations, and doing corporate strategic planning. He is a contributor to American Thinker, The American Spectator, and the American Free News Network. He can be reached at greenjeg@gmail.com.

