AI for Me, Not for Thee: When Students Get Punished for What Leaders Get Paid to Do

The irony here is so thick you could spread it on toast.

A couple of years ago, when ChatGPT first exploded onto the scene, I was teaching at a Christian school. My philosophy with technology has always been simple: learn it before you fear it. Every major technological shift in history has followed the same pattern—first confusion, then panic, then acceptance once people realize it’s not going away.

So I did what teachers are supposed to do. I explained the technology to my students.

I showed them what ChatGPT was capable of. I explained, in simple terms, how large language models work—basically very sophisticated prediction machines trained on massive amounts of text. I told them this tool was free and that they should talk to their parents about using it. The school laptops even had access to OpenAI at the time.
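For readers curious what "prediction machine" means in practice, here is a toy sketch of the idea. This is a simple bigram model, not how ChatGPT actually works under the hood—real LLMs use enormous neural networks—but the core task is the same: given the words so far, predict what comes next. The training text and function names below are just illustrative.

```python
from collections import Counter, defaultdict

# A toy "prediction machine": count which word tends to follow which
# in a small training text, then predict the most likely next word.
# Real LLMs do this with neural networks over billions of documents,
# but the basic job -- predict the next token -- is the same.

training_text = (
    "the student wrote a speech and the student gave a speech "
    "and the teacher read the speech"
)

# Build a table: for each word, count the words seen right after it.
followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "student" follows "the" most often here
```

Scale that table up by a few billion and add a lot of math for handling context, and you have the rough shape of what my students were looking at.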

In short, I treated AI like a new calculator for writing and research.

Not long after that, student government elections rolled around.

One of my students—an exceptional kid, straight-A student, never been in trouble in her life—decided to use ChatGPT to help organize and polish her campaign speech.

Not write it for her.

Help structure it.

Think of it as the digital equivalent of asking a parent or teacher, “Does this sound okay?”

Well, the administration found out.

And the hammer dropped.

The student was suspended.

This was a kid who had never been reprimanded before. She was devastated. Her parents were understandably upset. And I felt responsible because I had encouraged students to explore the technology responsibly.

But the decision stood.

Using AI to help organize a speech was treated as if she had robbed a bank.

Fast forward to today.

Watch the State of the Union.

Watch speeches coming out of Washington.

Watch the carefully structured messaging from federal agencies and the Pentagon.

If you’ve spent even five minutes around AI writing tools, the fingerprints are obvious. The tone, structure, and phrasing have that familiar polished rhythm. And that’s not speculation—the Department of Defense openly works with AI systems like Anthropic’s models for analysis, drafting, and communications support.

In other words, AI is now a standard tool of government messaging.

So let’s review the rules.

A high-school student uses AI to help write a speech for student government…

Suspended.

But if a politician uses AI to help craft a speech that millions of Americans will hear…

They get a standing ovation and a taxpayer-funded salary.

Apparently the technology becomes morally acceptable somewhere between detention and Capitol Hill.

This isn’t really about AI, of course.

It’s about institutions doing what institutions have always done when new tools appear: panic first, understand later.

We’ve seen this movie before.

Calculators were once banned in classrooms because teachers thought students would “never learn math.”

Word processors were treated like cheating because typing made editing easier.

Google was accused of destroying research because suddenly information wasn’t trapped inside encyclopedias.

Now it’s AI.

Every generation reacts the same way. The first people to use a new tool are treated like they’re breaking the rules. Then the institutions quietly adopt the tool themselves once they realize it’s too useful to ignore.

The real lesson here isn’t about artificial intelligence.

It’s about institutional hypocrisy.

Students are told they must operate under strict purity rules while the adults running the world operate under a completely different standard.

In school, using AI is framed as academic dishonesty.

In government, it’s called innovation.

In classrooms, it’s cheating.

In Washington, it’s efficiency.

And in the Pentagon, it’s apparently a strategic capability.

The truth is that AI is exactly what it appears to be: a powerful tool. Like calculators, search engines, and word processors before it, it will eventually become part of everyday life.

Students will use it.

Businesses will use it.

Governments will definitely use it.

The only question is whether our institutions will teach people how to use the tool responsibly—or continue pretending it’s forbidden until the adults in charge decide they need it.

Because right now the rule seems pretty clear.

If you’re a teenager trying to write a better student government speech, AI is unacceptable.

But if you’re writing a speech for Congress, the Pentagon, or the State of the Union…

Apparently the robots are welcome to help.

If you enjoyed this article, then please REPOST or SHARE with others; encourage them to follow AFNN. If you’d like to become a citizen contributor for AFNN, contact us at managingeditor@afnn.us. Help keep us ad-free by donating here.

Substack: American Free News Network Substack
Truth Social: @AFNN_USA
Facebook: https://m.facebook.com/afnnusa
Telegram: https://t.me/joinchat/2_-GAzcXmIRjODNh
Twitter: https://twitter.com/AfnnUsa
GETTR: https://gettr.com/user/AFNN_USA
CloutHub: @AFNN_USA
