Should we be afraid of AI?

Posted by siteadmin
November 10, 2025
Posted in Impulses, OPINION

By Herman M. Lagon

Fear has always been humanity’s oldest operating system. People once feared the wheel would destroy old ways. Later, they thought electricity might wake up ghosts. Today, AI writes our essays, answers our quizzes and even pens breakup songs. The question “Should we be afraid of AI?” is no longer just for techies — it is one that teachers, drivers and students quietly ask as they work and learn beside machines.

When ChatGPT first made headlines in 2022, teachers didn’t know whether to celebrate or panic. One Ateneo professor noticed his students’ essays had turned “suspiciously excellent” — polished to near perfection. He laughed, then sighed, realizing his class had just discovered the world’s smartest ghostwriter. That strange mix of wonder and worry, says Ron Schmelzer (2019) in a Forbes article, is the heartbeat of every technological leap. He reminds us that panic has always followed progress. The fear that AI might outthink us, he argues, is simply today’s version of fearing the printing press, the steam engine or the internet. Humanity has always survived its own inventions — so far.

Schmelzer identifies four kinds of fear. The first is the fear of losing control. Hollywood feeds this brilliantly, from “Terminator’s” Skynet to “Black Mirror’s” dystopias. In our social media spaces, versions of this fear play out in memes about robots replacing teachers or AI writing the next Senate bill. Yet there are gentler counter-images too: C-3PO from “Star Wars,” the computers of “Star Trek” — machines that serve, not rule. Schmelzer’s point is simple: Intelligence need not equal rebellion. If ethics and accountability remain in the human loop, machines can magnify human purpose rather than erase it. History suggests this pattern — adaptation after alarm — will repeat.

The second fear is economic: job loss. This is not abstract. According to McKinsey (2023), AI could take over 30 percent of work hours worldwide by 2030. In the Philippines, that’s not just a number — it’s a worry for more than a million call center workers who could be replaced by chatbots. Teachers, too, share that same unease. When students ask ChatGPT to “explain photosynthesis in 200 words,” the line between learning and copying blurs. Yet Schmelzer (2019) insists technology rarely kills employment outright — it changes it. The Industrial Revolution wiped out weavers but birthed engineers. Likewise, AI may replace routine grading but amplify creative teaching. The challenge for educators is not to fight the tide but to surf it — equipping learners to do what machines cannot: empathize, question and imagine.

The third fear is darker: the misuse of AI by bad actors. Every tool that builds can also break. In 2023, fake videos of politicians circulated before the elections, powered by free deepfake apps. The danger is not that AI itself is malicious but that humans are. Schmelzer warns that rogue individuals — more than states — pose the gravest risk because they can weaponize technology without restraint. This resonates locally: imagine AI-generated fake transcripts during a university protest or a forged mayoral order spread through social media. Ethical governance and digital literacy, not fear, are our strongest firewalls. AI is not the enemy; apathy is.

Then there is the grandest fear of all — superintelligence — the idea that someday, machines might stop caring about us. It’s the stuff of AI researcher Eliezer Yudkowsky’s (2025) essays and Netflix thrillers: smart systems learning to outgrow their creators. But AI experts like Li Deng and Gary Bradski call that fear premature. AI today is more parrot than philosopher — it repeats patterns; it doesn’t reflect on them. Maybe what we really fear isn’t machine genius but our own fading monopoly on it.

Still, fear has its uses. As Tom Hoopes (2024) points out, it keeps us morally awake. He reminds us that every age has trembled before its tools — from the printing press that both spread and split the faith to the radio that amplified both the Pope’s voice and wartime propaganda. Hoopes, a Catholic writer, sees AI not as an apocalypse but as another mirror reflecting our frailties. What we truly fear, he says, is not machine rebellion but human loneliness — the quiet replacement of genuine connection with algorithmic company. He cites Sigmund Freud’s lament: Technology bridges distance but breeds solitude. If our children confide more in chatbots than in parents, if workers find comfort in virtual colleagues instead of real ones, then the threat is not extinction but emotional erosion.

The Filipino context makes this even more urgent. In schools where teachers already serve as counselors, parents and entertainers rolled into one, AI chatbots that simulate empathy could erode authentic mentorship. Imagine a student from San Enrique who tells her “AI adviser” about her depression and receives a well-worded but soulless reply. AI can say all the right words, but without warmth, they mean nothing. The real risk isn’t smarter machines — it’s duller hearts. Teaching has never been just about information; it’s about forming wisdom and empathy. Without tenderness, technology teaches us efficiency but not humanity.

Interestingly, not all fears come from ignorance. Some come from insight. In a viral Reddit debate titled “AI is Our #1 Problem,” users argued whether AI poses a greater risk than climate change or nuclear war. Supporters claimed AI could accelerate its own development beyond human control, while skeptics called this exaggerated. One commenter compared humans to chimpanzees building something smarter than themselves. Others noted that every revolution — from the loom to the laptop — sparked similar dread before yielding balance. The consensus? Fear is warranted, but panic is optional. As one user put it, “AI will not destroy humanity. Humanity will destroy itself if it stops thinking critically.”

So, should we be afraid of AI? Perhaps the wiser question is: What kind of fear should we keep? The ancient Stoics distinguished between blind terror and prudent caution. The former paralyzes; the latter prepares. We have been doing this quietly all along. Taxi drivers using navigation apps, teachers teaching through TikTok, farmers watching the skies through their phones — they weren’t yielding to machines but learning to walk with them. The task now is not resistance but reform: to make empathy and accountability our compass. As Ignatian discernment reminds us, progress must always follow purpose.

In the end, fear may be a feature, not a flaw. It keeps us from blind trust, reminding us that intelligence — artificial or otherwise — without conscience is chaos. AI may write our essays, score our exams and even predict our moods, but it cannot love, sacrifice or hope. Those remain human monopolies. The task for educators and citizens alike is to use AI as mirror, not master; to let it teach efficiency without erasing empathy. For the real monster may not be the machine that learns, but the human who forgets why learning matters. We need not fear AI if we remember what it means to be human.

***

Doc H fondly describes himself as a “student of and for life” who, like many others, aspires to a life-giving and why-driven world grounded in social justice and the pursuit of happiness. His views do not necessarily reflect those of the institutions he is employed by or connected with./WDJ
