
SuperString Theory

Helping Creatives Thrive at the Intersection of Art and Technology


When Reality Fractures: Why AI Feels Like a Mass Delusion - and What We Can Do About It

September 2, 2025 Michael Moroney

By Michael-Patrick Moroney

In August 2025, The Atlantic called AI a “mass-delusion event.” The phrase stuck with me. Not because it was hyperbolic, but because it was historically familiar.

When technology becomes intimate - when it doesn’t just broadcast to us but whispers to us - it stops being a tool and starts being a mirror. AI doesn’t simply provide facts. It animates memories, impersonates the dead, and confirms our suspicions, no matter how false.

This is not science fiction. It is happening now.

“If left unchecked, we risk not a sudden dystopia, but a slow erosion of certainty itself.”

We have been here before. Every medium of mass communication has followed the same pattern: first freedom, then chaos, then rules.

  • The printing press unleashed scripture and heresy until libel laws and professional journalism appeared.

  • Radio swayed nations, forcing us to create regulators and public broadcasters.

  • Social media connected the world - and polarized it - prompting debates over moderation and digital literacy.

AI is no different. But the pace is faster. And the stakes are more intimate.

Three Layers of Defense

If reality itself is at stake, we need a framework as ambitious as the threat. I see three layers of defense.

1. Infrastructure: Provenance by Default
Every AI-generated photo, video, or sound must carry a verifiable signature of origin. Efforts like Content Credentials and watermarking should be as ubiquitous as a nutrition label.
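To make the idea concrete, here is a toy sketch of what "provenance by default" means mechanically. It is not the real Content Credentials (C2PA) format, which uses cryptographic signatures and a full manifest schema; this hypothetical version simply binds a claim of origin to the exact bytes of a file with a hash, so any later edit breaks the binding.

```python
import hashlib


def make_provenance_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Toy stand-in for a Content Credentials-style manifest.

    Binds a claim about who made the content, and with what tool,
    to a SHA-256 digest of the content itself.
    """
    return {
        "claim": {"creator": creator, "generator": tool},
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute the digest; any alteration to the media invalidates the claim."""
    return hashlib.sha256(media_bytes).hexdigest() == manifest["content_sha256"]


# Illustrative names only: "studio-x" and "gen-model-v1" are placeholders.
image = b"...synthetic image bytes..."
manifest = make_provenance_manifest(image, creator="studio-x", tool="gen-model-v1")
print(verify_manifest(image, manifest))             # intact content checks out
print(verify_manifest(image + b"tamper", manifest))  # edited content does not
```

The real standard adds digital signatures so the claim itself cannot be forged, but the core idea is the same: the label travels with the content and fails loudly when the content changes.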

2. Norms: Draw Bright Lines
We must agree that some uses cross human boundaries. Synthetic resurrection without consent should be off-limits. Political ads using AI must carry bold, unavoidable labels.

3. Resilience: Cognitive Immunity
We inoculate against disease. We can inoculate against delusion. Prebunking - teaching people the tricks of manipulation before they encounter them - has already proven effective. AI literacy must become as basic as math or reading.

The Next 36 Months

Within 3 months:
Platforms and model-makers embed provenance standards by default. Governments outlaw non-consensual resurrection.

Within 12 months:
Platforms enforce labeling across all formats - text, audio, video. Developers publish risk assessments of their most powerful models.

Within 24 months:
Independent audits of provenance systems. Political ads with synthetic content carry mandatory disclosures. Schools add prebunking to curricula.

Within 36 months:
International AI Safety Institutes share incident reports, like aviation agencies do with crashes.

The Stakes

AI won’t end civilization in a single stroke. It will chip away at trust, fragment memory, and make every truth negotiable.

The historian in me recalls that each technological upheaval carried the seeds of both progress and peril. The pragmatist knows this one moves at the speed of thought itself.

If we act quickly, we can build guardrails strong enough to preserve a baseline of reality. Enough to grieve honestly. Enough to govern fairly. Enough to live together.

If we don’t, we risk something worse than dystopia: madness, at scale.
