The Rise of Workslop and the Value of Experience: When AI Looks Productive but Isn’t

September 29, 2025

By Michael-Patrick Moroney

“The danger isn’t obvious errors. It’s plausible nonsense that slips through unchecked.”

In the 1980s, as the first wave of personal computers entered corporate offices, the word processor promised to banish wasted time. No more white-out, no more carbon copies. Editing was instant, printing was cheap. Managers assumed this would free employees to focus on higher-order thinking. Instead, inboxes filled with drafts and revisions. Reports proliferated, not because there was more to say, but because there was more capacity to churn. The tool amplified both efficiency and noise.

That pattern is repeating with artificial intelligence. Today, polished but shallow reports, emails, slide decks, and code snippets land in inboxes around the world. They appear professional: grammatically correct, cleanly formatted, even persuasive at a glance. But when colleagues try to use them, they discover gaps, errors, or generic filler. The work looks like progress but is not.

A joint study by Stanford University and workplace consultancy BetterUp, published in the Harvard Business Review, gave this phenomenon a deliberately ugly name: workslop. The researchers define it as output that “masquerades as good work, but lacks the substance to meaningfully advance a task.” Their survey of 1,150 desk workers found that fixing or redoing such content costs employees an average of $186 a month in lost time. The problem, they argue, isn’t just the quality of the AI. It’s that the gloss of plausibility hides the flaws until someone downstream has to clean them up.

The Study’s Warnings

“On average, workers lose $186 a month fixing content that looked useful but wasn’t.”

The Stanford/BetterUp study makes three core claims. First, that workslop is not rare. More than 40 percent of respondents said they had received AI-generated output from colleagues in the past month, and a significant portion of it fell into the “looks good but isn’t” category. Second, that the cost is measurable. If each instance takes two hours to fix (reviewing, correcting, or rebuilding from scratch), the time adds up quickly. Multiplied across teams and companies, it becomes billions of dollars in hidden productivity drain. Third, that reputations suffer. Workers who hand in AI-generated slop are seen as less reliable, less creative, and less capable.
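To see how quickly that multiplication compounds, consider a back-of-the-envelope sketch. Only the $186 monthly figure comes from the survey; the 10,000-person headcount is a hypothetical assumption for illustration, not a number from the study.

```python
# Back-of-the-envelope estimate of the hidden cost of workslop.
# Only the $186/month figure comes from the survey; the headcount
# is an illustrative assumption.
monthly_cost_per_worker = 186        # USD in lost time, per the survey
workers = 10_000                     # assumed headcount of one firm
annual_cost = monthly_cost_per_worker * 12 * workers
print(f"${annual_cost:,} per year")  # -> $22,320,000 per year
```

At that scale, a single large employer absorbs tens of millions of dollars a year in invisible rework, which is how the “billions” across an economy emerge.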

The study struck a nerve because it pushes back against the dominant narrative that generative AI is ushering in a productivity boom. Instead, it suggests that much of what AI currently produces is waste disguised as work.

Yet as with all surveys, the details matter. These numbers come from self-reports, not controlled observation. People may overstate the hours they spend “fixing AI” because the annoyance is fresh in their minds. And the survey doesn’t distinguish between novices and experts, or between domains like marketing copy, coding, or financial analysis. In that sense, the study is more a warning flare than a final judgment.

A Counterpoint: When AI Helps

To balance the picture, it’s worth examining where AI has been shown to improve output.

In one of the most widely cited field experiments, Stanford economist Erik Brynjolfsson and colleagues studied the rollout of a generative AI assistant to more than 5,000 customer service agents. Productivity rose by about 14 percent overall, but the gains were not evenly distributed. Newer, less-skilled agents saw performance jumps of 30 percent or more, as the AI offered suggested responses and summarized company policies. More experienced agents improved only modestly, but they also maintained quality and speed more consistently.

Software engineering has produced similar findings. GitHub’s Copilot tool, tested across firms like Accenture and ANZ Bank, lifted coding output by roughly 26 percent on average. Again, junior developers saw the largest boost, in some cases finishing tasks 39 percent faster, while senior engineers saw gains closer to 8 to 13 percent. The difference was not laziness or reluctance. It was that senior engineers already knew how to structure tasks, anticipate pitfalls, and double-check code. For them, Copilot shaved minutes off repetitive functions; for juniors, it filled knowledge gaps.

These results show that AI is not inherently a generator of workslop. Under the right conditions, it accelerates. But the right conditions almost always include human judgment, and judgment is what experience provides.

Why Experience Matters

“AI is not inherently slop. Under the right conditions, it accelerates.”

What separates useful AI from slop is not the model’s horsepower but the user’s discernment.

Experienced workers know how to frame the task. They provide context in prompts, demand reasoning steps, ask for citations, and specify the format they need. Just as important, they know what not to trust. A senior analyst can read an AI-generated market report and immediately sense when numbers are off, conclusions are generic, or recommendations lack grounding. They don’t pass it along without interrogation.
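What that framing looks like in practice can be sketched in a few lines of Python. The structure and wording below are illustrative assumptions, not a template from the study or from any particular tool:

```python
# A minimal sketch of deliberate task framing, as described above.
# Field names and wording are illustrative assumptions, not a standard.
def build_prompt(context: str, task: str, output_format: str) -> str:
    """Assemble a prompt that supplies context, demands reasoning and
    citations, and pins down the deliverable's format."""
    return "\n".join([
        f"Context: {context}",
        f"Task: {task}",
        "Walk through your reasoning step by step before answering.",
        "Cite a source for every factual claim, or mark it 'unverified'.",
        f"Format: {output_format}",
    ])

print(build_prompt(
    context="B2B SaaS firm; churn rose 2.1% after a March price increase",
    task="Draft talking points on churn drivers for a leadership review",
    output_format="Five bullet points, each under 20 words",
))
```

The point is not the code but the discipline it encodes: context first, reasoning and sourcing demanded up front, and the shape of the deliverable specified before the model writes a word.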

Less experienced workers, by contrast, may lack the radar to spot these deficiencies. They might treat the AI as an authority rather than an assistant, copying its output wholesale. They believe the polish signals competence, not realizing that competence lies in advancing the work, not in sounding good. That is how workslop proliferates.

This isn’t about generational divides. Some young workers are adept at interrogating AI; some older ones are credulous. What matters is domain knowledge and workflow fluency. Without those, the AI amplifies errors instead of insights.

A Familiar Cycle

“Generative AI doesn’t just make mistakes. It makes mistakes look polished.”

History offers ample reminders that new tools often overshoot their promise.

When photocopiers spread through offices in the 1960s, they were hailed as liberators of clerical staff. Instead, they enabled massive duplication. Memos and reports multiplied not because they were needed, but because it was easy. Managers complained of “information overload” decades before email arrived.

Spell-check and autocorrect in the 1990s created similar paradoxes. They reduced typos, but also eroded confidence in spelling. Worse, uncritical users accepted suggestions that changed meanings (“public” into “pubic”), embarrassing those who trusted the tool too much.

Even email, once touted as faster and more efficient than paper memos, spawned its own genre of “reply all” disasters and inbox bloat.

Each time, the pattern is the same: a tool lowers the barrier to production, output surges, and the signal-to-noise ratio collapses. Only after years of adaptation (new norms, etiquette, training, and sometimes regulation) do organizations learn to reclaim the promise without drowning in the noise.

Workslop is simply the AI era’s version of this cycle.

The Human Factor

The Stanford/BetterUp study highlights the burden of cleanup, but it is just as important to study the people who are not producing workslop. In almost every case, those are the employees with experience, domain mastery, and established workflows. They know that AI can draft an outline but not the final report. They know that AI can suggest code but not guarantee security. They know the difference between fluency and understanding.

In some cases, AI even magnifies their strengths. Experienced copywriters use it to brainstorm variations, then rewrite with voice and intent. Senior engineers use Copilot to skip boilerplate, then focus on architecture. Lawyers ask AI to summarize documents, then bring their judgment to what matters legally.

The contrast suggests a better way to frame the problem. AI is not a shortcut to competence. It is a force multiplier for those who already know what they’re doing. For those who don’t, it risks turning ignorance into polished-looking output.

Implications for Workplaces

If this diagnosis is right, then the challenge is not whether to adopt AI, but how to manage the variation in who uses it well and who produces workslop.

Some organizations are already experimenting with pairings, assigning senior staff to review junior workers’ AI outputs before they circulate. Others are creating explicit training programs in “AI literacy,” teaching employees how to structure prompts, verify sources, and flag uncertain results. A few are even measuring “rework time,” the hours spent fixing AI-generated drafts, as a key performance metric, just as factories track defective products.
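What such a rework-time metric might look like, reduced to its simplest form, is sketched below. The record fields and hours are invented for illustration, not drawn from any named company.

```python
# A minimal sketch of tracking "rework time" on AI-assisted deliverables,
# analogous to a factory's defect rate. All names and hours here are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ReworkRecord:
    deliverable: str
    hours_to_fix: float   # time spent correcting or rebuilding the draft

log = [
    ReworkRecord("Q3 market summary", 2.0),
    ReworkRecord("onboarding email sequence", 0.5),
    ReworkRecord("pricing-page copy", 1.25),
]

average = sum(r.hours_to_fix for r in log) / len(log)
print(f"Average rework: {average:.2f} hours per deliverable")
```

Tracked over time and by team, a number like this turns the invisible tax of workslop into something managers can actually see and manage down.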

The lesson is clear: AI cannot be left to individual initiative alone. Just as companies once had to invent email etiquette and version-control protocols, they must now invent AI usage standards.

Beyond Hype and Fear

The temptation is to swing between extremes: AI will either unleash a golden age of productivity or drown us in slop. The truth, as always, is more contingent. AI will amplify good habits and good judgment, and it will magnify bad ones.

Walter Benjamin, writing in the 1930s about mechanical reproduction, noted that new tools always democratize and debase simultaneously. Photography allowed more people to capture images, but also flooded the world with banal snapshots. AI is democratizing fluency, but it also floods the workplace with plausible nonsense.

The question is not whether AI will create value. It will: in pockets, in domains, under the stewardship of those who know how to manage it. The question is whether organizations will invest in the culture and training to prevent workslop from becoming the new background noise of modern work.

Conclusion

Every new tool arrives with hope and backlash. Word processors created clutter before they created efficiency. Spell-check embarrassed users before it empowered them. Email overwhelmed before it organized.

AI will follow that same trajectory. The Stanford/BetterUp study is a warning: the hidden cost of workslop is real, both in lost time and in damaged trust. But it is not the final word. Other studies show real gains when experienced workers use AI thoughtfully.

The difference comes down to judgment. Fluency without judgment produces slop. Fluency with judgment produces leverage.

That is why experience still matters, and why training, oversight, and culture will matter even more. If organizations want AI to be more than a generator of plausible waste, they will need to ensure their people, especially the less experienced, know not just how to use it, but when to distrust it.

History tells us that the cycle of hype and correction will continue. The challenge now is to shorten that cycle: to learn faster, adapt norms sooner, and remind ourselves that technology is only as productive as the humans guiding it.
