By Michael-Patrick Moroney
I talk about AI frequently with friends, colleagues, even in-laws. Some want help using it to get ahead. Others want to know how to stop it before it changes everything. We all acknowledge we're in the middle of something monumental - maybe the civilizational equivalent of a potential asteroid strike, what astronomers call a PHO, a "Potentially Hazardous Object." What nobody agrees on is this: are we building a rocket ship - or lighting a fuse?
If you've spent time in the tech business - selling it, thinking creatively about it, pitching it to skeptics - chances are you've witnessed the gap yourself. On one end are the cautious visionaries - people like me. We tend to see AI as a force multiplier, something that augments brainpower and creativity. Back in the early 2000s, investors would ask: "Great idea - but can it scale?"
Now the question is: "Great person - but can they scale?"
And increasingly, the answer is yes - sometimes horrifyingly so.
On the other end are the rightfully unsettled. Not Luddites. Not fools. Some of the most apocalyptic warnings about AI today are being issued by the very people who built it.
Geoffrey Hinton standing at a whiteboard at the Google office in Toronto, 2017
In 2023, Geoffrey Hinton - one of the godfathers of deep learning - left Google to speak freely about what he helped unleash. In an interview with MIT Technology Review, he estimated there’s a 10–20% chance that advanced AI could drive humanity toward extinction. “We’ve never created something smarter than us,” he said. “We don’t know how to contain it.”
His colleague Yoshua Bengio echoed that warning: "It's scary," he told Time Magazine. "I don't know why more people don't get it."
And yet, despite all the dread, something peculiar is happening. Many of the people most afraid of AI are pulling away from it, hoping they can opt out. I understand the impulse - I clung to a flip phone for years because the iPhone felt like "too much." But AI gives me that feeling again - only this time it's woven into the fabric of civilization.
And here's the kicker: you cannot control what you will not confront.
The Divide
A coding bootcamp in Jakarta. In many parts of the world, AI is already solving real problems - and optimism follows usefulness.
Worldwide, sentiment about AI is split almost evenly between fear and enthusiasm. In a 2023 Pew Research survey, 52% of Americans said they were more anxious than excited about AI. In India, China, and Indonesia, however, Ipsos polls show that over 75% of people expect AI to prove more helpful than harmful.
In all these places, AI is already solving everyday problems. In India, rural students can study in their own language with the help of translation software. In Nigeria, farmers plant at the optimal time using prediction algorithms. In Brazil, chatbots are filling gaps in healthcare deserts.
In the United States and Western Europe, however, the prevailing sentiment is skepticism. A 2024 Ipsos poll found that global trust in AI companies fell from 50% to 47% in a single year. Yet many skeptics still haven't used the tools that give them pause. They're reacting to AI as a notion, not an application.
The Engagement Gap
Can we scale at the right pace?
That distance - between recognition and participation - is where the real divide resides.
Research from Stanford, MIT, and Wharton shows clear productivity gains from thoughtfully deployed AI:
Developers who used GitHub Copilot completed tasks 26% faster (NBER, 2023).
Professionals who used ChatGPT completed writing tasks 40% faster, and their output was rated 18% higher in quality (Science, July 2023).
Customer service representatives who used AI resolved more cases per hour and earned higher satisfaction ratings, with the biggest gains among junior staff.
These tools don’t replace us. They redistribute effort and raise the floor for those just stepping in.
And yet, a 2023 Gartner survey found that while 79% of corporate strategists believe AI is “critical to the future,” only 20% use it themselves. The awareness is high. The action is low.
If You’ve Stepped Back - You’re Not Alone
A friend of mine, an artist, fed some drafts into ChatGPT "just to see." They didn't like how good the results were. They haven't looked at it since. That isn't resistance. But it isn't protest, either.
Maybe it's something else. Maybe it's sadness. Or fatigue. Especially if you're older than 60 - or heading toward 80 - and you've endured enough tech frenzies to know that not all revolutions are good for all people. You avoided smartphones as long as you could. You avoided Facebook. You had every right to be skeptical about what social media - even algorithmic pornography - did to relationships, intimacy, and trust. Maybe this time around, you're just tired.
But those early warnings? They weren't unfounded. The disinformation machines, the shattered attention spans, the filter bubbles that now warp elections - they all arrived. And too often they arrived faster because too many people with well-founded reservations stayed out when they most needed to step in.
History attests: if you don't help build the tools, the tools will shape you.
What Can Be Done
So what do we do next?
If you're optimistic about AI:
Don’t just evangelize - educate.
Share the failures and tradeoffs.
Support regulation, not just release cycles.
Help others enter the conversation - especially non-technical peers and communities.
If you’re skeptical:
Try one prompt. Ask AI to rewrite your email or critique your argument. (A minimal code sketch follows this list.)
Get involved - at your local school board, your newsroom, your union.
Use open, slow, or community-centered AI. Shape it by participating.
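For the skeptics in that first bullet, here is roughly what "one prompt" looks like when you script it instead of typing it into a chat window - a minimal sketch, assuming the OpenAI Python SDK and an API key set in your environment; the model name is illustrative, and any chat-capable model would do.

```python
# A minimal sketch of "try one prompt" - not an endorsement of any vendor.
# Assumes: pip install openai, and an OPENAI_API_KEY environment variable.
# The model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Hi team, just circling back on the thing from before..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: swap in whatever model you can access
    messages=[
        {"role": "system", "content": "You are a blunt but helpful editor."},
        {"role": "user",
         "content": "Rewrite this email to be clearer and shorter:\n\n" + draft},
    ],
)

print(response.choices[0].message.content)
```

Swap the user message for "Find the weakest point in this argument" and you've run the second experiment too. The point isn't the code - it's that a dozen lines turn AI from a notion you fear into an application you can judge.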
Audrey Tang, Taiwan's digital minister, put it nicely: "When we think of AI as a partner - not a product - we begin to regulate it like a relationship."
Historical Echoes
Every new technology provokes moral, artistic, and political panic. In 1492, a Benedictine abbot prophesied that the printing press would kill memory. "Books will be cheap," he wrote. "Learning will be too easy." He wasn't wrong. He just misjudged the kind of revolution.
AI is new in scope but not in form. It asks the same questions: Who benefits? Who is left behind? Who gets to decide?
The Moral Authority Gap
Disengagement is most dangerous where meaning is made - classrooms, newsrooms, pulpits, museums, studios. If teachers ban AI, students will use it surreptitiously. If artists avoid it, their aesthetics will be sampled anyway. If moral leaders stay silent, the narrative will be written by those who won't even ask permission.
The AI conversation is already underway. The only question is: who is still at the table?
The Stakes
This isn't a choice between utopia and apocalypse. It's a choice between sitting out and sitting in.
As Reid Hoffman warned: "Some say that society shouldn't be involved with AI. That path would be devastating."
Hope and fear are both strategies. But only one gives you a stake in the outcome. AI won't wait for us to sort out how we feel. It's already here. And the only question left is: will we help write the next chapter - or just read it when it's too late?