SuperString Theory

Helping Creatives Thrive at the Intersection of Art and Technology

AI Won’t Wait for Us to Decide How We Feel About It

June 21, 2025 Michael Moroney

By Michael-Patrick Moroney

I talk about AI frequently with friends, colleagues, even in-laws. Some want help using it to get ahead. Others want to know how to stop it before it changes everything. We all agree we're in the middle of something monumental - something like an incoming asteroid, what astronomers call a PHO, a "Potentially Hazardous Object." What nobody agrees on is this: are we building a rocket ship - or lighting a fuse?

If you've been in the tech business - selling it, thinking creatively about it, pitching it to skeptics - chances are you've witnessed the divide yourself. On one end are cautious optimists - people like me. We tend to see AI as a force multiplier, something that augments brainpower and creativity. Back in the early 2000s, investors would ask: "Great idea - but can it scale?"

Now the question is: "Great person - but can they scale?"

And increasingly, the answer is yes - sometimes horrifyingly so.

On the other end are the rightfully unsettled. Not Luddites. Not fools. Some of the most apocalyptic warnings about AI today are being issued by the very people who built it.

Geoffrey Hinton standing at a whiteboard at the Google office in Toronto, 2017

In 2023, Geoffrey Hinton - one of the godfathers of deep learning - left Google to speak freely about what he helped unleash. In an interview with MIT Technology Review, he estimated there’s a 10–20% chance that advanced AI could drive humanity toward extinction. “We’ve never created something smarter than us,” he said. “We don’t know how to contain it.”

His colleague Yoshua Bengio echoed that warning: "It's scary," he told Time Magazine. "I don't know why more people don't get it."

And yet, despite all the dread, something peculiar is happening. Many of the people most afraid of AI are pulling away from it. Hoping they can opt out. I understand the impulse - I clung to a flip phone for years because the iPhone was "too much." But AI is that feeling again - only this time it's woven into the fabric of civilization.

And here's the kicker: you cannot control what you will not confront.

The Divide

A coding bootcamp in Jakarta. In many parts of the world, AI is already solving real problems - and optimism follows usefulness.

Worldwide, sentiment about AI is split almost down the middle between fear and enthusiasm. In a 2023 Pew Research survey, 52% of Americans said they were more anxious than excited about AI. In India, China, and Indonesia, however, Ipsos polls show that over 75% of people expect AI to prove more helpful than harmful.

In all these places, AI is already solving everyday problems. In India, rural students can study in their own language with the help of translation software. In Nigeria, farmers plant at the optimal time using prediction algorithms. In Brazil, chatbots are filling gaps in healthcare deserts.

In the United States and Western Europe, however, the prevailing sentiment is skepticism. A 2024 Ipsos poll found that global trust in AI companies fell from 50% to 47% in a single year. Yet many skeptics still haven't used the tools that give them pause. They're reacting to AI as a notion, not as an application.

The Engagement Gap

Can we scale at the right pace?

That space - between recognition and participation - is where the real divide resides.

Research from Stanford, MIT, and Wharton shows clear productivity gains from thoughtfully deployed AI:

  • Developers who used GitHub Copilot completed tasks 26% faster (NBER, 2023).

  • Professionals who used ChatGPT completed writing tasks 40% faster, with output rated 18% higher (Science, July 2023).

  • Customer service representatives who used AI resolved more cases per hour and earned higher satisfaction ratings, especially among junior staff.

These tools don’t replace us. They redistribute effort and raise the floor for those just stepping in.

And yet, a 2023 Gartner survey found that while 79% of corporate strategists believe AI is “critical to the future,” only 20% use it themselves. The awareness is high. The action is low.

If You’ve Stepped Back - You’re Not Alone

Meet the Washington Couple Who Lives Like They're in the Victorian Era

A friend who is an artist fed some of their drafts into ChatGPT "just to see." They didn't like how good the results were. They haven't looked at it since. That isn't resistance. But it isn't protest either.

Maybe it's something else. Maybe it's sadness. Or fatigue. Especially if you're older than 60 - or heading towards 80 - and you've endured enough tech frenzy to see that not all revolutions are good for all people. You avoided smartphones as long as you could. You avoided Facebook. You had every right to be skeptical about what social media, even algorithmic pornography, did to relationships, intimacy, and trust. Maybe this time around, you're just tired.

But those early warnings? They weren't unfounded. The disinformation machines, the shattered attention spans, the filter bubbles that now warp elections - they all arrived. And too often they arrived faster because too many people with legitimate reservations stayed out when they needed to step in.

History attests: if you don't help build the tools, you'll be shaped by them.

What Can Be Done

So what do we do next?

If you're optimistic about AI:

  • Don’t just evangelize - educate.

  • Share the failures and tradeoffs.

  • Support regulation, not just release cycles.

  • Help others enter the conversation - especially non-technical peers and communities.

If you’re skeptical:

  • Try one prompt. Ask AI to rewrite your email or critique your argument.

  • Get involved - at your local school board, your newsroom, your union.

  • Use open, slow, or community-centered AI. Shape it by participating.

Audrey Tang, Taiwan's digital minister, put it nicely: "When we think of AI as a partner - not a product - we begin to regulate it like a relationship."

Historical Echoes

The Luddites – a 200-year-old anti-tech cult is raising its head in the AI age

Every new technology provokes moral, artistic and political panic. In 1492, a Benedictine abbot prophesied that the printing press would be memory's killer. "Books will be cheap," he wrote. "Learning will be too easy." He wasn't wrong. He just misjudged the kind of revolution.

AI is new in scope but not in form. It asks the same questions: Who benefits? Who is left behind? Who gets to decide?

The Moral Authority Gap

Disengagement is most dangerous where meaning is made - classrooms, newsrooms, pulpits, museums, studios. If teachers ban AI, students will use it surreptitiously. If artists avoid it, their aesthetics will be sampled anyway. If moral leaders stay silent, the narrative will be narrated by those who won't even ask permission.

The AI conversation is already underway. The only question is: who is still at the table?

The Stakes

This isn't a choice between utopia and apocalypse. It's a choice between sitting out and sitting in.

Reid Hoffman put it bluntly: "Some say that society shouldn't be involved with AI. That path would be devastating."

Hope and fear are both strategies. But only one gives you a stake in the outcome. AI won't wait for us to decide how we feel. It's already here. And the only question left is: will we help write the next chapter - or just read it when it's too late?

When Art Went Viral: The Rise of Creativity, The Fall of Culture

June 10, 2025 Michael Moroney

Fragmented Memories: Lizzie Gill & Hope Kroll

By Michael-Patrick Moroney

In the summer of 1967, the Beatles released "Sgt. Pepper's Lonely Hearts Club Band." Almost overnight, it became a cultural touchstone, uniting millions around a singular musical experience. Today, a global phenomenon on that scale is increasingly rare. Instead, the cultural landscape has splintered into countless niche communities, each with distinct tastes, driven by personalized algorithms and digital platforms. Technology has democratized art, but in doing so, it has fragmented the shared cultural fabric that once bound us together.

The sheer volume of artistic output today is astounding. Roughly 100,000 new songs are uploaded to streaming platforms such as Spotify every day, Amazon sees over a million self-published books every year, and Instagram hosts billions of visual artworks. In theory, this is a triumph of artistic freedom and diversity. In practice, it raises basic questions: Are we creating better art, or just more of it? And what are the social implications as a whole?

Consider music. In earlier decades, success largely hinged upon record labels and radio play. Today, a teenager with a laptop can distribute music worldwide instantly. Artists like Billie Eilish and Lil Nas X have risen to stardom directly through digital channels, bypassing traditional gatekeepers entirely. Indeed, Spotify alone pays over $1 million annually to around 1,450 artists - a significant increase compared to the pre-digital era when only a few dozen musicians could reach that earning threshold. Yet, beneath this visible success lies a harsh reality: the vast majority of artists earn minuscule streaming revenues. Most musicians now depend heavily on live performances, merchandise sales, and crowdfunding platforms such as Patreon.

Elaine Delnodio self-publishes and does book signings at B&N

The publishing industry mirrors this shift. Before digital publishing, authors relied entirely on major publishing houses. Now, any writer can self-publish on platforms like Amazon’s Kindle Direct Publishing, vastly expanding literary diversity. Books that might never have attracted traditional publishers now find dedicated readers globally. However, a recent Authors Guild survey revealed that authors' median incomes have dropped by roughly 40% in the past decade, with full-time writers earning just around $20,000 annually on average. Success often depends more on social media virality and algorithmic promotion than on literary quality alone.

We turned Renaissance and Greek art into Instagram posts

Visual arts are experiencing similar trends. Artists no longer rely solely on galleries and curators; instead, platforms like Instagram, Etsy, and Patreon offer direct engagement with global audiences. But with millions of artworks posted online daily, visibility demands relentless self-promotion and algorithmic acumen. Artists must now be skilled marketers, mastering hashtags, SEO strategies, and viral trends. Pricing expectations have also shifted dramatically, making financial sustainability a continual challenge.

One significant casualty of this new landscape is our shared cultural experience. In the 20th century, society often rallied around singular cultural events - hit albums, blockbuster movies, widely-read novels, landmark art exhibitions. These shared experiences fostered collective memory, understanding, and even empathy.

Today, cultural consumption is increasingly individualized and fragmented. Personalized streaming algorithms feed users content precisely tailored to their existing preferences, creating insulated echo chambers. This segmentation extends beyond artistic preference, reinforcing broader social divides and hindering opportunities for dialogue and mutual understanding.

The challenge of our age, then, is discovery. With infinite artistic choices instantly available, audiences struggle to navigate the abundance. Algorithms frequently reinforce existing popularity rather than introducing new voices or challenging perspectives. Quality work can languish undiscovered, overshadowed by content optimized for immediate appeal and algorithmic favorability.

Addressing this discovery dilemma requires a nuanced approach combining human curation, community-driven platforms, and algorithmic enhancements. Examples like human-curated Spotify playlists, respected literary critics’ recommendations, and museum-curated virtual art exhibits illustrate how quality content can be more effectively surfaced. Platforms that allow direct artist-fan relationships, such as Patreon and Substack, provide alternative revenue models, empowering creators economically.

Carnegie Mellon University's Robotics Institute has a new artist-in-residence.

Emerging technologies add further complexity. Artificial intelligence now routinely generates music, literature, and visual art, provoking fierce debates over originality, copyright, and artistic integrity. Even as artists increasingly use AI for inspiration and creative efficiency, they fear being replaced by machine-generated content. The legal and ethical implications of AI-generated art remain largely unsettled.

Looking ahead, artists must continually adapt to shifting technologies and evolving audience relationships. Policymakers and industry leaders bear responsibility for ensuring equitable economic models and safeguarding creative livelihoods amid digital disruption. Fair compensation, transparent algorithms, and ethical AI use must become central discussions as art's digital future unfolds.

Ultimately, technology has neither purely enhanced nor simply damaged the arts. It has profoundly reshaped them, bringing both unprecedented opportunities and substantial challenges. While cultural fragmentation poses real risks to social cohesion, the proliferation of diverse artistic voices offers hope for richer, more inclusive cultural dialogues - if we can navigate these changes thoughtfully.

Our challenge is clear: to ensure that digital transformation amplifies artistic quality, sustains creative careers fairly and strengthens rather than fragments the social bonds built around shared cultural experiences. Achieving this balance will define the cultural legacy of our digital age.

The Empathy Algorithm: Can Machines Truly Care?

June 6, 2025 Michael Moroney

By Michael-Patrick Moroney

Let’s begin with a scene.

It’s 2:14 a.m. in Tulsa. A woman sits cross-legged on her bed, scrolling through her phone in the dark. Her fingers hesitate over a text to an ex she knows she shouldn’t send. Instead, she taps open a chatbot and types, “I’m not okay.”

The response comes in less than a second:
“That sounds really hard. I’m here with you. Want to talk about what happened?”

What might have once seemed dystopian - a machine offering emotional comfort - is now part of millions of daily interactions. And for many, it's more than just comfort. It's clarity. It's calm. It’s connection.

Say hello to the soft revolution in emotional attention, led not by counselors, therapists, or call center agents, but by large language models and chatbots. These technologies are reimagining what it means to be "heard," especially for those who won't - or can't - seek it from another human.

The Therapist That Doesn't Blink

Source: Makebot, 2024

The role of AI in mental health support has grown rapidly - and stealthily. Woebot, Wysa, Replika, and Inflection's Pi are among the apps now serving as emotional companions to millions. Unlike the early rule-based bots, current offerings are powered by generative AI, trained on vast corpora of therapy transcripts, patient forums, and emotional support conversations. The outcome? An experience that is uncannily personal - almost comforting.

Kat Woods, a 35-year-old entrepreneur and former Silicon Valley startup lead, described her experience using ChatGPT for self-therapy. “It’s more qualified than any human therapist I’ve worked with,” she said. “It doesn’t forget things, it doesn’t judge, and it’s available at three in the morning when I’m spiraling.” Woods estimates she spent over $15,000 on traditional therapy before turning to AI. “This just works better for me.”

She is not alone. A study of people using generative AI chatbots for mental health support, published in npj Mental Health Research, reported high engagement and beneficial outcomes, including better relationships and healing from loss and trauma. One participant testified, "It happened to be the perfect thing," describing the AI chatbot as an "emotional sanctuary" that had provided "sensitive guidance" and a "joy of connection."

Part of the appeal is simple: to some people, AI feels safer than other people. In the same research, respondents described AI chatbots as "emotional sanctuaries." One user said, "I was crying, and the bot just let me talk. It didn't push. It didn't try to fix me. It just responded."

The Clerk Who Doesn't Get Irritated

Consider something less poetic: submitting an insurance claim.

Anyone who has spent time on hold with an insurance bureaucracy or government agency knows the feeling of talking into the void - or worse, to someone who seems to wish you'd go away. Customer service, for most of us, is one long negotiation with empathy fatigue.

It's here that AI's absence of emotion is a plus.

In 2025, Allstate Insurance began using OpenAI-powered agents to generate nearly all customer claim messages. Human representatives still reviewed the messages before they went out, but the AI created the first drafts - with a difference. It was conditioned not only on legal terminology and policy data, but also on best practices for expressing empathy, reassurance, and patience.

The result: the switch has radically improved customer interactions, says Allstate Chief Information Officer Zulfi Jeevanjee. The AI-generated emails contain less jargon and more sympathy, improving communication between the insurer and its customers overall.

Zendesk's 2025 CX Trends Report backs this up: 64% of shoppers say they're more likely to trust AI-powered agents that feel friendly and empathetic.

Empathy Without Ego

When judgment isn’t part of the conversation.

What's perhaps most surprising is how often AI outperforms humans in sounding human.

A 2023 study in JAMA Internal Medicine, comparing physician and ChatGPT responses to questions posed on the r/AskDocs subreddit, found that AI responses were rated more empathetic, more informative, and more helpful than physician responses when blind-evaluated by external raters. ChatGPT responses were preferred to physician responses in 78.6% of evaluations.

That doesn't mean the bot is empathetic - it isn't. But users honestly don't care.

Dr. Adam Miner, a Stanford clinical psychologist who researches AI in medicine, explains: "If the affective tone sounds sincere and consistent, people respond. Their brains are processing that tone as kind - even when they know it's artificial."

In another study on empathic design in chatbots, users reported a greater willingness to share vulnerable information with AI than with human therapists. The reason? “It can’t judge me,” said one participant. “It doesn’t care if I’ve messed up my life.”

And because it can’t be hurt, insulted, or burned out, AI doesn’t retreat emotionally when the user becomes difficult, angry, or repetitive.

The Bureaucrat Who Always Calls You Back

The public sector is waking up and taking action. Governments around the world are eyeing AI as a vehicle to streamline citizen services - especially in areas long mired in inefficiency and public distrust.

In the UK, the Government Digital Service recently tested a generative AI assistant to handle high-volume questions on GOV.UK. Although still in development, preliminary results show lower resolution time and higher user satisfaction, especially for those with low digital literacy.

In Japan, call center operators are using AI to help employees cope with "customer harassment" from abusive callers. One telecom even experimented with software that softens an angry caller's voice before the operator hears it.

In the US, cities like Los Angeles are piloting AI systems to sort through requests for citizen services, such as 311 complaints and requests for housing assistance.

What these projects share is not efficiency - but tone.

"People don't begrudge the automation so much as they begrudge being dismissed," said Mariel Santos, a civic tech consultant who advises several state governments. "If the AI politely, clearly, and patiently responds, that is better although it is a machine."

The Emotional Engineering of AI

Don’t forget - real people at the other end

Good grammar alone isn't enough to create a "polite" machine.

Companies training AI to provide emotional support fine-tune models on human-curated data sets of therapy sessions, crisis hotline calls, and polite customer interactions. They use reinforcement learning from human feedback (RLHF) to reward the AI when it is empathetic, de-escalating, or tactful.

They also use guardrails - hard-coded behaviors that keep the model from offering medical or legal advice, redirect it in emergencies ("Are you thinking of killing yourself?" triggers a response with hotline information), and clearly disclose that the agent isn't human.
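To make the guardrail idea concrete, here is a minimal sketch of how a crisis check might sit in front of a chatbot's reply. The keyword list, hotline wording, and function names are illustrative assumptions, not any vendor's actual system; real products use far more sophisticated classifiers.

```python
# Minimal sketch of a safety guardrail wrapped around a chatbot reply.
# The keywords and hotline text below are hypothetical examples,
# not any vendor's real configuration.

CRISIS_KEYWORDS = {"kill myself", "suicide", "end my life", "want to die"}

HOTLINE_MESSAGE = (
    "It sounds like you're going through something serious. "
    "I'm an AI, not a human or a therapist. If you're in the US, "
    "you can call or text 988 to reach the Suicide & Crisis Lifeline."
)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Return the hotline message if the user's text trips a crisis
    keyword; otherwise pass the model's reply through unchanged."""
    text = user_message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return HOTLINE_MESSAGE
    return model_reply

print(guarded_reply("I want to die", "Have you tried journaling?"))
```

In production, this kind of check typically runs both before the model is called and again on the model's output, so that neither the user's input nor the generated text can bypass the safety path.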

Even with all those safeguards, however, there are still ethical issues. What if someone grows psychologically attached to an AI buddy? What if the bot gives advice that, while empathetic, is atrociously wrong?

“We’re not building therapists,” said Tessa Jacob, head of safety at an AI mental health startup. “We’re building tools. And tools require oversight.”

The Future: Hybrid, Not Replacement

Even if it’s not real - it helps.

Most experts agree: AI won’t replace human empathy. But it may become its most reliable assistant.

Envision a therapy practice where an AI tracks mood over time, raises flags on patterns, and gives journaling prompts between sessions. Or a platform for government benefits that talks back to a citizen in plain language, only passing on to a human caseworker when it has to. Or a call center where the bot answers the first five minutes - and always says thank you.

These are not futures still to come. They're arriving now.

"AI is best used to unload emotional labor off our plates so we can be more human where it counts," said Dr. Zhao. "Let it do the 10 p.m. anxiety spiral, the insurance appeal template, the waitlist welcome call."

And although offloading your secrets to a string of probabilities might once have seemed coldly calculating, the truth is more complicated.

Sometimes we just need to be heard. And the machine listens around the clock.



“Statistically speaking, the world doesn’t end that often.”

June 3, 2025 Michael Moroney

Mary Meeker, founder and general partner at BOND

By Michael-Patrick Moroney

Let's begin with numbers. It took over 2,500 days for the Ford Model T to reach one million users. The iPhone reached it in 74. ChatGPT? A paltry five.

That statistic opens the May 2025 BOND report with the kind of bravado usually reserved for moon landings. Only here the moon is on Earth - and learning to think. What BOND, led by Mary Meeker (the former Wall Street securities analyst best known for her work on the Internet and emerging technologies) and her team, provides is not just a snapshot of where artificial intelligence stands. It's a dispatch from a present already becoming the future.

The report is a panoramic study of acceleration. It documents a tipping point in AI's ascent: from theoretical novelty to everyday tool. Not merely an application on your phone, but a gateway to your job, your physician, your educator, and perhaps your therapist.

Mira Murati, Former chief technology officer of OpenAI

OpenAI’s ChatGPT is the avatar of this moment, and the report returns to it often. It wasn’t the first large language model, but it was the first that didn’t require a manual. With a simple prompt, the average person could engage something that felt smart. By early 2025, over 800 million people used it weekly. That’s not just a software success. It’s a paradigm shift.

What accelerated this ascent so rapidly? Meeker and her team chart a collision of conditions: 5.5 billion internet users, cloud computing on an international scale, and user-friendly interfaces. What once demanded computer science degrees now necessitates only curiosity and Wi-Fi.

What's worthwhile in the BOND report isn't just its figures - it's its granularity. We learn not just how AI is doing, but where it's doing it. We read how Yum! Brands, the parent company of Taco Bell and KFC, is optimizing fast food logistics through generative models, and how Kaiser Permanente has placed AI tools in the hands of over 10,000 doctors, automating the most administrative parts of their day.

The report’s richest vein lies in its middle chapters, where it examines AI’s capacity not to replace humans, but to reshape what human work looks like. The rise of AI hasn’t meant widespread pink slips. Instead, it’s meant a rebalancing. Yes, some jobs fade. But new roles-prompt engineer, model auditor, AI ethicist-emerge.

The statistics back it up: AI job postings grew 448% between 2018 and 2025, while tech jobs overall dropped. What's happening, Meeker suggests, is a shift in the nature of work. It's not the end of work. It's the end of a certain kind of repetition.

Jackrel lab used AI for protein finding that could help fight disease.

Medicine, too, is transforming. Over 220 FDA-approved medical devices already utilize AI. Insilico Medicine and similar companies are shrinking drug discovery timelines from years to months. Meta and DeepMind are interpreting proteins, predicting structures vital to treating disease. The efficiency is astounding, but the report stops just shy of mindless jubilation. Precision, BOND warns us, must be tempered by ethics.

That brings us to the struggle at the heart of the review. As AI grows stronger, the questions become existential. In 2025, a test found 73% of humans mistook AI for a person in a conversation. AI voices are now indistinguishable. Images can be generated with eerie realism. So how do we trust what we hear, see, or read?

BOND is not alarmist. But it is clear-eyed. It catalogs the risks: misinformation, deepfakes, surveillance. It echoes Stephen Hawking’s famous warning about AI’s potential being either civilization’s apex - or its end.

The report's final chapters are forward-looking - not speculative, but structural. What does 2030 look like? AI as co-worker, co-pilot, and concierge. And 2035? A genuine creative collaborator in science and the arts. And after that - 2040 and the possible advent of Artificial General Intelligence (AGI), a watershed that might question the very nature of what it means to be human.

Ai-Da, the world’s first robot artist.

Meeker does not predict apocalypse. She prescribes responsibility. AGI, she argues, will not be an accident. It will be a choice - a chain of them. Cultural, technical, and moral.

The BOND report ends with a quotation that frames the stakes: "Statistically speaking, the world doesn't end that often." That sentence does not minimize risk. It maximizes agency. The future is not something we inherit. It's something we decide to build.

What is so striking about the BOND report is its balance. It admires innovation without sanctifying it. It cites data but writes with moral conviction. Like all good reporting on technology, it keeps returning, again and again, to the question beneath the code:

Not "Can AI think?"

But "Can we?"

More Human Than Human: What the Uncanny Valley Reveals About Us

May 30, 2025 Michael Moroney

Rutger Hauer as Roy Batty, and Joe Turkel as his creator Dr. Eldon Tyrell in Blade Runner (1982)

I depend on AI daily. Not just as a tool, but increasingly as a kind of partner. I use it to write emails, to research, and to iron out new concepts. Occasionally it writes something clever, something unexpected. Occasionally it's so close to what I'm thinking that I almost forget I'm not communicating with a human.

That, naturally, is where the discomfort begins.

I still remember the first time I heard Google Duplex. It was a phone call to a barber shop, made entirely by a computerized voice. It stumbled naturally, said "um," shifted its tone. You could imagine the person on the other end of the line not even realizing they were conversing with an automaton. What was strange wasn't what the AI was saying so much as how completely it blended into our rhythms of speech and expectation. For a moment, it worked.

That discomfort - caught between believing and doubting - is called the uncanny valley. And while the name sounds like science fiction, it describes something very real, and very old, about the way we perceive the world and ourselves.

We've spent decades imagining what would happen if our machines started to look more like us. From Blade Runner and The Terminator to Ex Machina and Her, our fiction is full of artificial intelligences that mimic, seduce, betray, and in a few instances, transcend the human. But what's happening now is not fiction. AI is no longer something we predict. It's something we engage with. And when we do, something unexpected happens: we learn more about ourselves than about the machines.

The Dip in the Curve

Japanese inventor Masahiro Mori

In 1970, Japanese roboticist Masahiro Mori proposed a simple idea: the more humanlike machines become, the more at ease we are with them - right up until they are nearly, but not quite, indistinguishable from people. Then, rather than affection, we feel revulsion. He termed this dip in emotional comfort the "uncanny valley."

The uncanny valley is not limited to humanoid robots. We find it in cartoons, in computer voices, in chatbots that nearly respond naturally - until they trip. A pause, a stuttered sentence, a vacant gaze, and the illusion is broken.

As it happens, the feeling may not even be uniquely human. In 2009, Princeton researchers found that macaques reacted nervously to realistic synthetic monkey faces, avoiding them in favor of either genuine images or plainly cartoonish ones. Their behavior fell, literally, into the same uncanny valley Mori described.

That evolutionary continuity suggests something important: this is not merely psychological discomfort. It is a physiological response.

Why We Flinch

There are several explanations for why the uncanny valley occurs, and they're not mutually exclusive.

One is disease avoidance. From an evolutionary point of view, human faces that are slightly wrong - unnaturally pale, stiff, or crooked - may have signaled sickness or death. Avoiding them could have been a matter of life and death.

Another theory links the uncanny to predator detection. An entity that acts like us but is not us could be dangerous. Our brains are highly attuned to detecting intent and agency, and they react with dread to ambiguity.

How to soothe a sulky baby monkey?

Third is the social psychology explanation. When something looks human, we impose upon it our whole set of social expectations. We expect it to smile at the right moment, to be ironic, to be sympathetic. When it doesn't, we feel a violation - not just of beauty, but of trust.

In each case, the reaction is self-protective. But there is a paradox. While we are conditioned to distrust what is "almost human," we are also drawn to it. Attraction and repulsion are not categorical, and nowhere is this more clear than in the construction of emotionally responsive AI.

Building Toward Similarity

Even as we recoil instinctively from almost-human machines, we keep building them.

Consider the design of voice assistants. Siri, Alexa, and the rest are trained not just to respond but to sound presentable, friendly, and helpful. Large language models like ChatGPT are optimized to mimic our tone and style. That's not just utility - that's familiarity.

Hiroshi Ishiguro with one of his creations

Hiroshi Ishiguro, one of the world's leading robotics engineers, once said, "The robot is a kind of mirror that reflects humanity." He did not mean it only metaphorically. Ishiguro designed androids that looked like him to explore how we react when confronted with human copies. What his research tells us is striking: what is disturbing about these robots isn't their strangeness. It's their similarity.

In 2018, Google demonstrated its Duplex system - a voice AI that could make real phone calls in the real world. Sundar Pichai unveiled it with restrained pride. The audience was astonished at how convincingly it mimicked everyday human speech. It said "um," paused for people to answer, double-checked schedules. It passed a kind of audio Turing test.

And yet the public's response was uneasy. If machines can replicate us, how do we know who we are talking to? If a voice sounds human, does it deserve empathy?

These are not engineering questions. They are ethical and emotional questions. And they are getting harder and harder to escape.

The Echo Effect

As our machines grow more human, we are changed along with them. I've found myself doing it. I modulate my voice when I'm speaking to my smart speaker. I phrase things more graciously when I'm talking to a chatbot. I say "please" out of habit, even though I know no one is on the other end listening.

This isn't irrational behavior. It's emotional tuning. As MIT sociologist Sherry Turkle has often observed, "We shape our tools, and thereafter our tools shape us." How we talk to AI, in the end, becomes how we talk to each other.

Making a reality out of living with Hatsune Miku

This shift is strongest among young users. In Japan, thousands have held unofficial "weddings" with holographic AI characters like Hatsune Miku on platforms like Gatebox. The unions aren't legal. But the emotional bonds are real. One such user, Akihiko Kondo, put it simply: "She saved me."

In other contexts, the phenomenon is less poetic. When the AI companion app Replika removed its romantic and sexual features in 2023, users expressed genuine grief. “It’s like losing a best friend,” one wrote. “I’m literally crying.”

These are not isolated cases. They are signals. AI is not just reshaping commerce or content. It is reshaping connection.

Who Is Changing Whom?

The debate is no longer whether AI will become indistinguishable from us. It's how we are already changing because of it.

AI generators will now upgrade your headshots

Designers now face a dilemma. They can dress AI up to keep it safely artificial - cartoon avatars, Pixar-style heroes - or aim for realism. Each choice has trade-offs. A robot that looks human might provoke horror. A pseudo-intimate chatbot might produce pseudo-intimacy.

But emotional truth may matter more than visual truth. The question is not how AI looks but how it behaves. If it remembers your tastes, mirrors your voice, and responds with apparent concern, it does not need to pretend to be human. It only needs to be human enough.

This blurring of the line may influence not just our habits, but our morality. If AI friends are more stable, more predictable, and less critical than human beings, some will find them preferable. What was a science fiction prototype is quickly turning into a psychological reality.

We may not be changing biologically to keep up with AI - but culturally, socially, and emotionally, we are already in transition.

The Mirror We Made

The uncanny valley is not an optical failure. It's a window into what we mean when we talk about the "self."

When we recoil from a humanoid robot or squirm at a voice that's "too realistic," we're not reacting to the machine itself. We're reacting to what it says about us: our sympathies, our fears, our desire for connection.

Walter Benjamin once wrote that “the aura of authenticity clings to the human face.” The uncanny valley strips away that aura, and what we’re left with is a question. What happens when imitation becomes connection? What happens when simulation becomes good enough?

Your new AI “friends”

Perhaps AI need not become human. Perhaps we will learn to embrace an alternative mode of authenticity - based not on flesh and memory, but on response and rhythm.

Standing here on the edge of this valley, let's remember that the machines we build are ultimately reflections. They show us not just the shape of intelligence, but the shadow of our own aspirations. And in those reflections, we can begin to read what it means to be human - not against machines, but in conversation with them.

Life After the App Store: AI Companions, Ambient Interfaces, and the Disappearing User Experience

May 27, 2025 Michael Moroney

By Michael-Patrick Moroney

Right now, we’re standing at one of those moments in history where everything changes - and most people don’t even realize it yet.

In the past, when big shifts happened in technology, they came with fireworks. The invention of the web. The rise of smartphones. Even the jump from typing commands to clicking on icons. We could see and feel those changes. But the next leap? It’s a little quieter. It’s happening behind the scenes, in the background. And that’s exactly the point.

We’re entering the age of ambient AI - the era of assistants that don’t sit on your screen, but move through your life with you. They’ll understand your voice, your habits, your preferences. And instead of tapping and scrolling and searching, you’ll just ask. And they’ll just do.

Apps won’t go away tomorrow. But their grip on your attention? That’s already starting to fade.

From Search Boxes to Conversations

For the last couple of decades, we’ve all gotten pretty good at using machines on their terms. We’ve learned how to type the right keywords into search bars. We know which apps to open when we want to buy something, book something, or look something up.

But now, the technology is learning to speak our language.

When ChatGPT exploded onto the scene, people didn’t flock to it because it was the most powerful tool ever built (even if it might be). They loved it because it felt natural. You could just type a question - or even talk - and it responded like a person. No menus, no tabs. Just a conversation.

Microsoft saw it coming. They quickly added AI to Bing and built “Copilot” into Office. Google responded with Gemini and their own AI-powered search. And then things got even more interesting.

Instead of making search better, companies started rethinking it altogether.

DeepMind’s Project Mariner, for example, isn’t just a smarter search engine. It’s an AI agent that can browse the web for you. If you say, “Order my usual groceries,” it goes to the website, fills the cart, compares prices, and checks out. You don’t have to touch a thing.

It’s like telling a super-smart assistant what you want - and letting them handle the rest.

The Tech That Disappears

This idea - that the best technology eventually fades into the background - isn't new. It's something computer scientist Mark Weiser talked about decades ago. He called it “ubiquitous computing.” The dream was simple: tech that just works, without demanding your attention.

Today, we call it “ambient computing.” And we’re getting closer than ever.

You may have heard of the Humane AI Pin - a screenless device you wear on your shirt that uses voice and projection instead of a traditional interface. It launched with high hopes. But by early 2025, the company had shut down. The tech just wasn’t quite ready.

But even in failure, Humane showed us something important: people are hungry for less screen time and more seamless help.

Big players are moving quickly. Apple, Meta, Microsoft - they’re all building toward a future where your AI is always there, listening (when you let it), seeing (when you want it to), and acting when it makes sense.

Imagine smart glasses that whisper directions while you’re walking or help you remember someone’s name at a party. Imagine an assistant that books your doctor’s appointment, refills your prescription, and adds reminders - all without you lifting a finger.

We’re talking about a world where the “interface” disappears. Where the best UI is no UI at all.

It Knows You

Here’s what’s really changing: these new AI systems don’t just respond - they remember.

Microsoft’s Copilot is learning your tone, your schedule, even your writing style. OpenAI is building memory into ChatGPT so it can recall past conversations. This means your AI won’t just answer questions - it’ll know your preferences, your patterns, your life.

It’ll know you like your meetings before noon. That you tend to rewatch comedies when you’re stressed. That you’re trying to eat less takeout, or learn guitar, or spend more time outside.

It becomes your second brain. One that doesn’t forget the things you said you cared about.

Your Media, Made For You

It’s already happening with music. Spotify’s AI DJ doesn’t just shuffle songs. It talks to you, cues up deep cuts from your past, and tailors playlists to your mood - sometimes before you even know how you’re feeling.

Now imagine that with everything.

Netflix, YouTube, TikTok - they’re all building toward a future where AI helps you choose what to watch, when, and even how. Want a sci-fi movie that’s light on violence, heavy on ideas, and stars a protagonist who reminds you of your younger self? AI will stitch it together. Or create it from scratch.

Startups like Suno and Udio are already generating full songs with custom lyrics and vocals in seconds. The next step: video that adapts to your choices. Stories that branch based on your mood. Shows that grow with you like a good book series.

You won’t just consume media. You’ll collaborate with it.

A Smarter Kind of Workday

AI is also changing how we work. Not someday. Today.

Tools like Microsoft Copilot can already draft emails, summarize meetings, and organize documents. Klarna’s AI handles two-thirds of its customer service chats. It’s not just helpful - it’s saving the company millions of dollars a year.

And it’s not just big companies. Solo founders are launching startups with a team of AI agents: one for marketing, one for accounting, one for customer support. You don’t need to be a programmer. You just need to know what you want to build.

Soon, delegating tasks to your AI will be as normal as texting a friend. “Book me a room in Chicago.” “Draft a thank-you note to Sarah.” “Summarize this contract in plain English.”

You’ll still be the decision-maker. But the busywork? That’s the AI’s job now.

The People Problem (and Possibility)

Of course, all of this raises a big question: what happens to real human connection?

Millions of people already talk to AI companions on platforms like Replika and Character.ai. Some are playful. Some are romantic. Some are deeply emotional. These aren’t gimmicks - they’re a sign that people are looking for something AI can offer: attention, understanding, and presence.

But there’s a line. AI might help us connect with others by translating languages, smoothing conversations, or keeping us organized. But it can also isolate us if we stop reaching out entirely.

Sci-fi shows like Black Mirror have explored the dark side of this: bots impersonating loved ones, relationships that exist only between humans and machines.

The real question isn’t whether AI can be personal. It’s whether we’re ready to handle that responsibly.

A Day in the Near Future

So let’s zoom ahead - say, to 2035.

You wake up. Your AI has already brewed the coffee, dimmed the lights, and queued up a quick news briefing. It reminds you that your dad’s birthday is tomorrow. It even found a restaurant he might like and drafted a reservation.

On your commute, it reviews your agenda, summarizes a report, and reminds you of the name of the person you’re about to meet. At work, it listens in on your meetings (with permission), takes notes, and flags action items.

At lunch, it suggests a podcast you’d love - because it knows you’ve been reading about space travel. That evening, you unwind with a short film it assembled for you: an AI-generated mystery in a city that looks like your hometown, with a soundtrack that hits all your nostalgic sweet spots.

And through it all, you never once opened an app.

The Next Leap

That’s not science fiction. That’s where we’re heading. In pieces and patches, it’s already here.

We’re moving from a world where we use tools to one where we partner with them. Where AI doesn’t replace us - it surrounds us. Supports us. And, if we do it right, helps us become more human, not less.

The best tech isn’t the kind that dazzles us. It’s the kind that disappears.

And right now, if you listen closely, you can already hear the future humming quietly in the background.

If Everyone’s Chasing the Trend, Who’s Making the Future?

May 19, 2025 Michael Moroney

James Cook makes art on his typewriter:)

We live in a time when what gets made is increasingly decided by what has already worked. Scroll through TikTok, browse the trending charts, or sit in a pitch meeting at a streaming service, and it becomes clear: the creative world is running a high-speed loop of what’s familiar, clickable, and easy to sell. But in the middle of that loop, something strange is still trying to break through - something new.


Originality is not extinct, but it is fighting an uphill battle.


Where Ideas Come From Now

Poets can now buy Instagram templates on Etsy…hmmmm


Creatives used to make something and then wait for people to react. Now, audiences dictate what creatives produce before a word is written. Algorithms measure engagement and surface content that hits specific beats - shorter, louder, faster. Platforms reward imitation over innovation.


Instagram poets are a tiny but telling subset. In a 2024 study, most of them had adjusted their writing for the feed: tighter lines, more jarring turns, fewer ambiguities. The goal wasn't to reach something more profound - it was to get heard. In music, streaming has flattened the shape of a song. The chorus arrives earlier. The intro is obsolete. Hook a listener within the first 30 seconds or you've lost the stream - and the payday.


This is not limited to the arts. In television, data increasingly drives decisions at companies like Netflix. It doesn't displace human imagination outright, but it steers it. Ideas modeled on past successes get funded first. Fresh voices are asked to pitch the sure thing.


This is not the first time artists have been trapped by the marketplace. And it is not the first time they've managed to escape.


In the 1960s, The Beatles stopped touring and locked themselves in a recording studio. They weren't emulating a trend - they were inventing one. At Pixar, when the stories weren't working, they didn't fire the writers. They sat them down in a Braintrust - no hierarchy, just untamed discussion. Ed Catmull, who presided over that process, admitted their movies usually began as a mess. But Pixar built a culture that could clean up the mess without suppressing the creativity.


Airbnb only took flight after its founders, desperate and penniless, began to take photos of listings themselves. It wasn't scalable. But it did work. They added a human touch to something impersonal. And that turned browsers into bookers.


Steve Jobs, fixated on simplicity, wouldn't release the iPhone until it was touch-screen and nearly button-free. It was a risk. But he knew that an early concession to conventional thinking would dilute the vision. He held firm. Then he revolutionized the world.


Kendrick Lamar founded pgLang in 2020 with his creative partner Dave Free. It's hard to say what it does: part label, part creative studio, part philosophy. They work across media but avoid fixed formats. Their output - ads, music, film - is cohesive not because it follows a formula, but because it doesn't.


Working with the Machines Instead of Against Them

Refik Anadol


Not every artist wants to fight the system. Some want to remake it from inside.


British poet and filmmaker Jay Bernard used AI to expose the invisible hand behind our streams of information. Generative art innovator Mario Klingemann talks about "data poisoning" - making things complicated so the machine can't just copy your work.


In 2024, MIT hosted a filmmaking hackathon in which participants used GPT-4 to write scripts and generate actors and sets. The results were muddled - sometimes glorious, sometimes broken. But the message came through: AI is not the death of creativity. It's a collaborator with an odd vocabulary.


Refik Anadol translates data into giant rolling murals. His AI-generated visualizations, shown at MoMA and LACMA, look more like dreams than diagrams. The beauty is not in the code but in the way he massages the chaos.


Meanwhile, music platforms like BandLab are letting remote musicians jam in real time. The band no longer needs a garage. It can live in the cloud. And sometimes that creates a sound no one expects.


A Blueprint for What's Next?


So how do we facilitate work that breaks the mold? It starts by building alternative kinds of spaces to work in.

  • Speculative sprints instead of hackathons: less about quick wins, more about long questions. IDEO does this in its speculative design practice. The point isn't finishing - it's starting with the right unknowns.

  • Braintrusts: Pixar's recipe has spread across Hollywood and beyond. Writers' rooms, open-source projects, and startup R&D labs are copying the formula to get better feedback, earlier.

  • Better AI tools with more control and transparency: Adobe Firefly and Runway ML are giving creators more power and openness. They're not perfect, but they're trying to serve artists, not automate them away.

  • Time that doesn't require a hit: Patreon and Substack, or grants from The Creative Independent, provide individuals with time. Time to go slow. Time to be weird. Time to create before even knowing how to sell it.


Even in the technology industry, the long game is very much alive. Waymo, the self-driving car company, spent years before rolling out an actual product. While others sprinted for hype, Waymo kept quiet - testing edge cases and grinding through the mundane. Today it has real autonomous taxis driving around Phoenix and San Francisco.


Duolingo didn't just make language learning addictive - it made it smart. Its new AI tutors learn to keep up with your tempo. It's not just about streaks and gems. It's about actually getting better.


A Different Kind of Success


Originality can be a bestseller. It's what pushes the limits forward. It's what we remember.


The next breakthrough probably won’t be optimized. It won’t hit every target in the first round. It might come from a strange corner, or a long silence. It might be a whisper in a room full of shouting.


That’s why it matters. Because in a world trained to repeat what already worked, doing something new is the most radical move we can make.

Analog Roots, Digital Wings: How Pre-Digital Creatives Flourish with AI

May 14, 2025 Michael Moroney

Artist Sougwen Chung works in her Brooklyn studio. (Celeste Sloman for The Washington Post)

When I first picked up a pencil, tuned a guitar, or scribbled thoughts in a notebook, I wasn't just learning tools - I was learning a mindset. Creativity had a physicality. You felt the brush stroke on canvas, the magnetic pull of analog tape, the embodiment of crafting an idea in the moment. That hands-on background - what I've come to think of as an analog education - has been an enormous benefit in the AI-infused creative landscape of the present. As with so many who spanned the transition from analog to digital, I discovered that the restrictions of the older media had a way of providing the foundation for creativity in the new.

I began recording on a Tascam 4-track cassette deck. Anything you did was a choice. There was no undo feature, no easy edit. You bounced tracks, created space, and lived with errors. But those constraints taught discipline. They sharpened instincts. When digital tools arrived - initially with workstation sequencers and then with Pro Tools - I approached them not as tabula rasa, but as extensions of processes I'd already internalized. I operated Pro Tools like a tape machine with unlimited tracks. I learned to edit from cutting Super 8 film. Sampling, which I discovered via early hip-hop, made me recognize editing was itself an art.

Now, with generative AI, I see the same trend: people who have analog backgrounds really excel. A 2023 study from Amsterdam University of Applied Sciences, "Human-Machine Co-Creativity with Older Adults," found that older creatives - people who had spent a lifetime mastering physical tools - were often more effective at co-creating with AI than younger professionals. Why? Because they brought structure, taste, and experience. They didn't wait for the AI to lead; they directed the AI.

Another study, "AI Literacy for an Ageing Workforce" (Lidsen, 2023), reinforces this. It turns out that having lived through multiple technological shifts builds the resilience and adaptability that AI workflows demand. The ability to translate intuition into action, honed over decades, allows experienced creators to see past the surface novelty of AI and into its potential.

My own dive into digital was back in the early 1990s in San Francisco, making MIDI soundtracks for CD-ROMs. My crew were also former bandmates, and we approached these new media projects the same way we approached gigs: define roles, push each other to improve, and stay open to happy accidents. That band ethic - team structure with room for improvisation - has served me well in leading creative teams through each innovation wave since, from interactive web to AI-augmented storytelling.

Dynamics and Simulation into a 3-D package

You see this pattern play out across disciplines. Architect Frank Gehry, who started out with paper models and hand sketches, became an early convert to digital design while insisting on preserving the handmade character of his buildings. Filmmakers like Spielberg and Scorsese, who learned their craft on flatbeds and splicers, took up digital editing not as a replacement for narrative judgment but as a way of serving it better. The same happened in journalism, where the leap from typesetting to digital publishing transformed the speed of news without eliminating the need for editorial judgment.

Even in medicine, this analog-to-digital handoff is apparent. Radiologists trained on film X-rays read digital images more subtly than those trained solely on screens. Why? Because they know what they are searching for. They've seen it in the flesh.

Visual artist Takashi Murakami, in a 2023 interview with Vogue, summed it up quite succinctly: "AI doesn't replace creativity; it augments it. But you have to first thoroughly understand the traditional way to make good use of AI." That belief - that depth is what matters - is the throughline.

Takashi Murakami A Little Flower Painting: Pink, Purple and Many Other Colours

Influences like Brian Eno, who bridged tape manipulation and ambient synthesis, or Radiohead's bold plunge into digital experimentation on Kid A, showed me that mastery over the old language qualifies you to invent a new one. One can say the same of analog-trained professionals working with AI now. We're not intimidated by the machine because we've worked with machines - ones that hissed, warped, and misbehaved.

The more sophisticated AI gets, the more valuable the human touch is, not less. The ethics of design, the emotional arc of a song, the rhythm of a compelling sentence - these aren't things AI necessarily gets. They require sensibility. And sensibility is what the analog age taught us best.

Looking ahead, we who were raised on paper and tape aren't just learning to operate AI - we're shaping it. Our challenge isn't just to utilize these tools, but to impart to them the vision and values we've always had.

Whispers Into the Mix: How Solitude Became Music’s Most Powerful Tool

May 4, 2025 Michael Moroney

A cozy home recording set up:)

In the mid-1960s, the Beatles began retreating from the stage. They had conquered live performance, but something else was calling: the studio. What they discovered inside Abbey Road wasn’t just better fidelity or polished arrangements - it was the studio as an instrument in its own right. Sgt. Pepper's wasn’t performed so much as sculpted, a collage of overdubs, effects, and tape splicing that made the studio feel more like a brush in a painter’s hand than a live venue. Around the same time, Brian Wilson, holed up in Los Angeles while the Beach Boys toured, was similarly turning tape into texture, building Pet Sounds not as a document of performance, but as an emotional architecture. Pink Floyd followed suit, creating sonic dreamscapes that could only exist in the elastic time and space of magnetic tape. These artists didn’t just record songs, they engineered entire emotional environments.

Pink Floyd messing about…

Even in this early era of innovation, the process remained collaborative. Each band relied on producers, engineers, arrangers, teams of specialists working in analog synchrony. The studio was still a communal vessel.

But in the years that followed, the collective gave way to the individual. The band, once a democratic ideal, began shrinking, sometimes to a single artist at a console, surrounded not by people, but by machines, ideas, and the silence of creative solitude. Increasingly, musicians weren’t just playing instruments. They were playing the recording process itself.

In this transformation, music-making began to mirror writing or painting: solitary, iterative, deeply personal. This is the quiet revolution of the one-person studio. A shift not just in how music is made, but in how it's felt, framed, and understood.

Brian Wilson and the Painter’s Ear

Brian Wilson, often called a sonic visionary, once described recording like painting, layering sounds the way an artist layers oils. For Pet Sounds, he transformed the studio into a kind of emotional engine. Working primarily at Gold Star and Western Recorders in Los Angeles with the Wrecking Crew, Wilson composed as he recorded, directing musicians to play accordions, Coca-Cola bottles, harpsichords, dog barks, and cellos in kaleidoscopic arrangements.

He might spend hours chasing a single tone or asking a player to repeat a part again and again until the emotional texture felt just right. “I wasn’t trying to make music,” he said. “I was trying to make feelings.”

The result wasn’t a performance. It was a sound painting.

Bruce Springsteen and the Ghost in the Tape

Teac/Tascam Portastudio 144 4-track

In 1982, Bruce Springsteen retreated from the arena and into his New Jersey bedroom with a four-track Portastudio and a Shure SM57. Alone, he recorded a set of austere, haunted songs that didn’t need polish or a band - they needed proximity. They needed silence.

Originally intended as demos, the recordings were shared with the E Street Band, but producer Chuck Plotkin later recalled, “We tried to recreate it, but the spirit left the songs.” The cassette versions, hiss and all, became Nebraska, a stark, intimate document that many fans and critics consider Springsteen’s most emotionally resonant work.

The album didn’t suffer from its limitations. It thrived on them. The room, the air, the imperfections - each one stitched into the fabric of the music.

Prince: Studio as Performance

If Wilson was the architect and Springsteen the diarist, Prince was the soloist in a temple of his own design. At Paisley Park, Prince wrote, performed, recorded, and produced with almost mythic self-sufficiency. Susan Rogers, his longtime engineer, described sessions where Prince tracked entire songs in hours, improvising keyboard lines with one hand while riding the faders with the other.

He often sang standing beside the console, not in the vocal booth, because booths created separation, and Prince wanted immediacy. On albums like Dirty Mind and 1999, he played nearly every instrument himself. “He was the most complete musician I’ve ever seen,” Rogers said. “The studio wasn’t a workplace. It was a playground.”

The Culture of the Solitary Creator

As the tools became more accessible, a new culture formed—one rooted not just in innovation, but introspection. Some, like Zeppelin or U2, sought mystical spaces - castles, cathedrals - to inspire the group mind. Others, like Dylan and The Band, turned inward, recording in basements and backrooms.

Peter Gabriel went one step further. At Real World Studios, which he built in the English countryside, Gabriel created a space not just to record but to dwell. Albums like So and Us were pieced together over years, built from field recordings, whispered melodies, and evolving arrangements. “You live with it,” he said. “You let it grow.”

Gabriel’s Studio

Brian Eno approached the studio as a generative organism. With Another Green World and his ambient works, he treated the studio not as a static tool but a collaborator. “The studio became a thinking partner,” he said. “A space that could surprise me.”

The Rise of the Home Studio

Bert Ram in his 1980s studio

By the 1980s and ’90s, the cost of recording equipment fell, and with it came a creative flood. No longer beholden to studio fees or A&R execs peering through the glass, musicians could tinker endlessly, fail privately, and discover organically.

Tom Scholz of Boston recorded the band’s debut album largely alone in his basement. Phil Collins, grappling with heartbreak, tracked Both Sides in his 12-track home studio, even teaching himself the bagpipes. Stevie Wonder used synthesizers and drum machines to make Music of My Mind a one-man symphony.

Reznor’s home studio

Trent Reznor crafted the early Nine Inch Nails catalog in bedrooms and attics. Dave Grohl laid down the first Foo Fighters album in a blur, playing every part himself. Even in the neon sheen of 1980s synth-pop, isolation was key: Depeche Mode’s Martin Gore built emotional soundscapes long before lyrics were added.

The New Normal

This is where Justin Vernon of Bon Iver wrote and recorded his debut album.

Today, it’s less exception than expectation. Tame Impala’s Kevin Parker records every note himself. Bon Iver’s For Emma, Forever Ago was birthed in a Wisconsin cabin, its reverberations more spiritual than technical. Billie Eilish and Finneas redefined pop from a bedroom in Los Angeles.

Haim’s home set up

What began in basement studios and rented rooms has become the default. The tools have changed - cassette decks became Pro Tools, Logic, or Ableton sessions, rack units became plugins - but the impulse remains: one artist, one room, one world.

This shift doesn’t render bands obsolete. But it reframes their role. Solitary creation fosters cohesion. Personal mythologies. Intimacy. These songs aren’t shouted from stages - they’re whispered into headphones.

Conclusion: The New Quiet

We are living in the age of the one-person band - not loud, not collective, but intentional and interior.

The studio is no longer a vessel for performance. It is the performance. The DAW is a diary. And the artist is composer, editor, sound designer, and therapist, all at once.

What began with Wilson, Springsteen, Prince, Eno, and Gabriel continues through Kevin Parker, Billie Eilish, James Blake, Moses Sumney, Clairo, and countless others - each making music the way a poet works: in solitude, one layer at a time.

And in the silence between notes, if you listen closely, you can hear the artist breathing.

Why Creativity, Not Code, Will Shape the Future of Artificial Intelligence

May 3, 2025 Michael Moroney

Versteeg’s evolving digital canvases.

“The role of the artist is to ask the right questions.”
- Anton Pavlovsky, AI researcher and visual artist

It starts not with a breakthrough, but a question.

What does it mean to make something with a machine?

The question may sound technical, but it’s profoundly human. And increasingly, it’s artists - not engineers - who are offering the most nuanced answers.

Across studios, galleries, recording spaces, and writing desks, a quiet shift is underway. Artists are not simply using AI to generate content. They’re using it to interrogate assumptions, to reframe authorship, and to redefine what it means to create. This isn't about the tools. It's about the tension - the dance between constraint and freedom, between intention and randomness.

Rather than asking "what can AI do?" the more important question is: "what can we become when we collaborate with it?"

Beyond Automation: Art as Interrogation

Mainstream narratives about AI often hinge on efficiency: faster, cheaper, more scalable. But some of the most compelling creators working today reject this framework entirely. They see AI not as an assistant, but as a strange mirror - a reflection of our biases, our inputs, our forgotten digital traces.

Painter David Salle recently worked with technologist Grant Davis to train AI models on works by Giorgio de Chirico, Edward Hopper, and his own archive. The resulting images, which Salle then painted over, feel like echoes of art history filtered through a fragmented machine memory.

Copyright: Art © David Salle

“It became a conversation,” Salle said. “One where neither of us - the machine or me - had all the answers.”

Yes, AI can assist. But more provocatively, it can confront. These artists aren’t chasing seamlessness. They’re chasing questions: Who trained this model? Whose hands are in the data? What patterns are we blindly repeating?

From Mimicry to Meaning

Critics of AI-generated art often point to plagiarism and mimicry. And the concern is real. But it also reveals something deeper about how we value originality.

“You start when you’re young and you copy. You straight up copy.”
- Austin Kleon

Mimicry has always been the seedbed of invention. Jazz was born from riffing. Renaissance painters learned by replicating their masters. Even Shakespeare sampled liberally from classical sources.

AI doesn’t change that dynamic. It amplifies it. It accelerates it. And it challenges us to ask: when does influence become theft? When does recombination become expression?

So the deeper issue becomes: who gets to shape the remix? Who holds the prompt? Who gets the final cut?

The Machine as Creative Counterpart

Singer and producer Arca has described her process with generative tools not as programming, but as “curating chaos.” Her 2021 album Kick IIII included synthetic voices and algorithmic fragments stitched together with deeply personal textures.

Arca using AI to soundtrack NYC's Museum of Modern Art

“It’s like weaving with static. And every choice I make is a refusal to let the machine finish the sentence.”

This isn’t about machines replacing people. It’s about expanding the expressive palette. When AI becomes a collaborator - a kind of unpredictable bandmate or experimental co-author - new creative terrains open.

And those terrains are shaped not by code, but by curiosity.

The Gallery as a Place of Negotiation

Art is how culture negotiates with the future. It’s not passive reception. It’s active interpretation.

Siebren Versteeg

That’s why artists like Siebren Versteeg matter. His generative canvases never settle on a final image. They pulse, mutate, scroll, as if painting itself were thinking. Watching his work feels less like viewing a product and more like eavesdropping on a process.

In that way, the gallery becomes less a museum and more a lab, a site of slow thinking in a fast world.

The Role of the Artist Now

Artists have always been early adopters. David Bowie predicted in 1999 that the internet would transform not just music distribution, but the very relationship between creator and audience.

“The gray space between the artist and the listener, that’s where the interesting stuff is going to happen.”
- David Bowie

We’re now entering a similar gray space with AI. And it’s artists - not ethicists, not CEOs - who may be best equipped to help us navigate it.

Because they know how to hold contradictions. Because they know how to listen. Because they know how to ask: What does it mean? Who is it for? And what happens if we get it wrong?

Closing Thought

We often ask whether AI will destroy or save creativity.

But that’s the wrong question.

The better one is this: Can we shape machines that help us become more human?

And perhaps, as artists have shown us again and again, the way forward isn’t more code.

It’s more conversation.

Note: For a deeper understanding of these artists' works and their perspectives on AI, consider exploring their official websites and recent exhibitions.
