In the 20th century pre-woke days, human beings had a healthy distrust of machines. We took the time to think about scenarios in which technology would work against, rather than for, our welfare.
George Orwell’s 1984 envisioned a world of mechanized oversight. Overlords used technology to monitor and enslave mankind.
Arthur C. Clarke’s 2001: A Space Odyssey featured an interplanetary spacecraft controlled by a computer named HAL 9000. HAL tried to kill an astronaut when it decided he was jeopardizing the mission.
James Cameron’s The Terminator gave us an AI named Skynet. It tried to exterminate all humans with an army of robot hitmen, because that’s what the law of nature says apex predators do.
And John Badham’s WarGames featured a supercomputer that took the world to the brink of WWIII.
In WarGames, America’s fictional military leadership shows all the smarts of General Milley, who thought his President was named Xi Jinping rather than Donald Trump. They place the country’s nuclear arsenal under the control of a super-computer named WOPR: War Operation Plan Response. Their plan is to prevent human error (and human wisdom) when the “big one” starts.
But the geniuses who gave WOPR the ability to end mankind forgot to disconnect NORAD’s phone modem – this was 1983, after all. Naturally, a bored teenaged hacker named David breaches security and dials up WOPR to poke around.
Thinking the supercomputer in Cheyenne Mountain is a gaming console, David looks over the menu of games to play and decides to give Thermonuclear War a try. What could go wrong?
As the nation’s strategic forces begin to deploy, David realizes that Thermonuclear War isn’t a game; it’s a command decision. He tries to end the “game,” but WOPR refuses to stand down. It seems surrender wasn’t included in its list of tactical options.
Eventually David stops global Armageddon by teaching WOPR about no-win outcomes. After running through millions of scenarios, WOPR concludes that there is no winning move in Thermonuclear War. It spins down the missiles and asks David, “How about a nice game of chess?”
Well, AI has come a long way in the last 40 years, and a group of researchers decided to see how the bots of the 21st century would do with the WarGames scenario. Would our “new and improved” AI engines conclude a game of Thermonuclear War as peacefully as WOPR did?
Kenneth Payne at King’s College London used ChatGPT-5.2, Claude Sonnet 4, and Gemini 3 Flash to run 321 iterations of a war game. In 95 percent of the simulations, the bots didn’t seek a diplomatic solution, didn’t attempt to de-escalate tensions, and most definitely didn’t surrender. They went for the win … by going nuclear.
AI bots are capable of theorizing millions of scenarios with billions of variables to calculate probabilistic outcomes. But they can’t calculate the incalculable – the value of human life. To a machine, winning doesn’t require winning with humanity, because morality is not a variable in its calculus.
To a machine, pawns aren’t people with loved ones. They are a means to an end. If the King is still standing at the end of the game, it doesn’t matter if the rest of the board has been wiped clean. A win is a win, and as Vince Lombardi said, “Winning isn’t everything; it’s the only thing.”
Payne’s testing should remind us that “artificial intelligence” isn’t “intelligence.” It is programming that makes a machine act human convincingly enough to fool humans. Unfortunately, humans are all too easy to fool (see the Biden administration, 2021-2025). AI bots have no morality, because they have no conscience.
In fact, AI programmers aren’t even trying to incorporate morality into the code of something which has no consciousness. Instead, they’re trying to include “guardrails” by programming human values into the algorithms. Given our embrace of moral relativism, I’m not convinced that “human values” are adequate to prevent Skynet from opening a Model 101, Series 800 assembly line. Will AI bots decide that the killing of babies is a matter of context – whether they’re kings or pawns? That is the “human value” of a disturbingly high percentage of people in our current civilization.
Thankfully, we only use these creepy talking computers to make funny memes, write judicial opinions, and cheat on college term papers. We’re not stupid enough to give them the means to end us. That would be like giving a toddler a book of matches, a stick of dynamite, and orders not to throw a tantrum.
Wait! What’s this Collaborative Combat Aircraft (CCA) I’m reading about? I just read that the Air Force is fooling around with an AI wingman – a bot-controlled aircraft to provide combat air support to manned aircraft.
Did I say we’re not stupid enough to give AI bots weapons? Scratch that. Apparently, our military decision makers have SAT scores somewhere south of Gavin Newsom’s. These bad-boy CCAs will be armed, AI-piloted aircraft that will accompany manned aircraft and assist a human pilot in the completion of his mission (i.e., win the game).
As I write this, I’m sure there’s a junior officer in the bowels of the acquisition command pointing at “collaborative” in the name while writing a white paper on the issue. He’ll explain that an amoral (AI bot) warrior isn’t a big deal, because it will operate according to the wishes of a moral (human) warrior. But is that good enough to guard against the lack of morality while engaged in the business of killing?
When Col. Santini orders his wingman to destroy an anti-aircraft battery over the horizon, can he trust it to be as wise as he would be as it approaches its target? Will his CCA wingman abort the attack when it discovers that the missiles are on the roof of an occupied school? Or will the bot assess the anti-aircraft battery as the “King” and the school as a “pawn,” and go for “the only thing” – a win?
If we decide that the risks of amoral wingmen are acceptable, what other lethal capabilities might we decide to risk? Before you answer that, factor in the historical wisdom of our elected leadership. Will the same slippery slope that gave us abortion an hour before birth, give us a super-computer with the capabilities of WOPR and the morality of Skynet?
Before I worried myself into an ulcer, I decided to take my mind off CCAs and the human apocalypse by hopping over to the Debt Clock to see how our paid public servants are doing with our money. Boy, am I glad I did. After seeing our current debt, I surrendered to serenity – accepting the things I cannot change. AI may be advancing at a fever pitch, but Congress’ spending is far outpacing it. The debt bomb is going to raze civilization long before an atom bomb does.
Author Bio: John Green is a retired engineer and political refugee from Minnesota, now residing in Idaho. He spent his career designing complex defense systems, developing high performance organizations, and doing corporate strategic planning. He is a contributor to American Thinker, The American Spectator, and the American Free News Network. He can be reached at greenjeg@gmail.com.