On March 1, 2026, The Washington Post broke a story that should make every student using AI to write essays sit up and pay attention: Cleveland’s Plain Dealer, a 184-year-old newspaper with professional editors, experienced journalists, and institutional resources, has been using AI to draft news articles. The results? Traffic is up. Quality is down. And the newsroom is in near-revolt.
If a professional news organization with trained editors reviewing every word can’t make AI writing work, what makes students think they can submit AI-generated essays and get away with it?
The Cleveland experiment isn’t just a journalism story. It’s a cautionary tale that perfectly mirrors what’s happening in academia right now. And the lessons are sobering.
The Cleveland Plain Dealer Experiment: What Actually Happened
The Plain Dealer didn’t jump into AI writing carelessly. They implemented what they believed was a controlled, editor-supervised system. Articles about ice carving festivals, medical research discoveries, and local news now carry a new byline: the reporter’s name paired with “Advance Local Express Desk”—code for “this was drafted by artificial intelligence.”
The paper’s leadership sold this as efficiency. AI could handle routine local coverage, freeing up human journalists for deeper investigative work. Traffic numbers initially supported the decision—clicks went up. But something else happened that the traffic metrics didn’t capture: quality collapsed, and the newsroom revolted.
Reporters watched AI-drafted articles go live with their names attached. The prose was technically correct but soulless. The coverage was comprehensive but generic. Sources were cited accurately but engaged superficially. Everything looked fine on the surface, but journalists who had spent careers developing their craft could see what readers might not immediately notice: the writing lacked the insight, context, and understanding that make journalism valuable. Human review could polish the prose, but it could not put into the drafts the thinking that was never there.
This sounds familiar, doesn’t it? It should. It’s exactly what’s happening in college classrooms across the country.
The Student Parallel: Your Professor Sees What You Don’t
Students submitting AI-generated essays often focus on the wrong metrics. Does it meet the word count? Check. Are sources cited? Check. Is the grammar correct? Check. Does it address the prompt? Check.
But professors, like the Plain Dealer journalists, can see what students miss: the writing lacks genuine engagement with ideas. The arguments are generic. The analysis is shallow. The voice is artificial.
The Cleveland experiment proves a crucial point: even with professional editors reviewing AI-generated content, the fundamental problems remain. The Plain Dealer has copy editors, fact-checkers, and experienced journalists reviewing these AI drafts before publication. They can catch errors, verify facts, and polish sentences. But they cannot inject the genuine understanding and original insight that only comes from actual human thinking.
If professional editors can’t fix AI writing, how effective do you think a quick proofread of your ChatGPT essay will be?

Why Traffic Metrics Don’t Tell the Whole Story
The Plain Dealer’s defenders point to increased traffic as evidence that AI journalism works. More people are clicking on articles, so the AI must be doing something right.
This is the same logic students use when they see their AI-generated essays get decent grades: “I got a B, so it must be fine.”
But both arguments miss the crucial long-term consequences. The Plain Dealer is trading institutional credibility and journalist morale for short-term traffic gains. Students are trading actual learning and skill development for short-term grade maintenance.
The traffic might be up today, but the newspaper’s reputation is eroding. Journalists are demoralized. Sources are noticing that coverage has become more superficial. And eventually, readers will recognize that the content lacks the depth and insight they expect from a 184-year-old institution.
Similarly, your AI-generated essays might earn passing grades initially, especially if professors are overwhelmed with grading. But the knowledge gaps compound. Your writing skills don’t develop. Your critical thinking stagnates. And eventually, in comprehensive exams, major papers, or professional work, the absence of genuine capabilities becomes painfully apparent.
The Sophistication Trap: When “Good Enough” Isn’t
One of the most dangerous aspects of modern AI writing is how sophisticated it appears on the surface. The Plain Dealer’s AI-generated articles aren’t obvious garbage. They’re grammatically correct, structurally sound, and factually accurate (mostly). They look like real journalism to casual readers.
This is the AI trap students fall into constantly. ChatGPT can generate essays that look like real academic work to untrained eyes. The sentences flow. The paragraphs transition. The thesis is stated clearly. Everything appears fine.
But appearance and reality diverge dramatically when experts examine the work. Journalists can spot AI journalism because they understand what genuine reporting looks like. Professors can spot AI essays because they understand what genuine engagement with ideas looks like.
The Plain Dealer experiment proves that surface-level sophistication doesn’t equal actual quality. You can have grammatically perfect sentences that convey no real insight. You can have well-structured paragraphs that demonstrate no genuine understanding. You can have properly cited sources that aren’t meaningfully engaged.
This is why editing AI output doesn’t work. The problems aren’t at the sentence level—they’re at the thinking level.

What Professional Journalists Are Saying (And Why It Matters)
The newsroom revolt at the Plain Dealer reveals something crucial: the people who actually understand journalism recognize that AI cannot do what they do. This isn’t Luddite resistance to technology. It’s expert recognition of fundamental limitations.
Experienced journalists understand that good reporting requires:
- Recognizing which details matter and which don’t
- Understanding context that sources might not explicitly state
- Asking follow-up questions that reveal deeper truths
- Synthesizing information in ways that create genuine insight
- Writing with a voice that reflects actual understanding
AI can do none of these things. It can assemble information. It can generate sentences. It can produce text that looks like journalism. But it cannot report.
Academic writing requires parallel capabilities:
- Recognizing which arguments matter in scholarly conversations
- Understanding theoretical frameworks deeply enough to apply them
- Developing original analysis that advances discussion
- Synthesizing sources in ways that create new insights
- Writing with a voice that reflects genuine expertise
The journalists at the Plain Dealer aren’t worried about losing their jobs to AI because they’re resistant to change. They’re worried because they’re watching their bylines get attached to work that doesn’t meet their professional standards. They’re seeing their craft degraded by administrators who mistake efficiency for quality.
Sound familiar? It should. Every student who submits an AI-generated essay is doing the exact same thing—attaching their name to work that doesn’t meet the standards of their field, degrading their own development in exchange for short-term efficiency.
The Editor Problem: Why Human Review Doesn’t Fix AI Writing
Here’s where the Cleveland experiment becomes especially instructive for students: the Plain Dealer has editors reviewing all AI-generated content before publication. These aren’t algorithms checking the work—they’re experienced journalists who should, in theory, be able to catch problems and improve quality.
Yet the quality problems persist. Why?
Because editing can only fix surface problems. An editor can correct grammar, verify facts, adjust tone, and restructure sentences. But an editor cannot add the genuine understanding and original thinking that should have been there from the start.
When AI writes an article about medical research, it might accurately report what a study found. An editor can verify that accuracy. But the AI cannot recognize which aspects of the research are genuinely significant, how they connect to broader medical trends, or what questions experts would want answered. An editor can’t add this understanding after the fact—it has to be there during the reporting and writing process.
This is why students who think they can “fix” AI-generated essays through editing are fundamentally misunderstanding the problem. You can polish an AI essay. You can adjust word choices, vary sentence structures, and add transitional phrases. You can run it through paraphrasing tools or “humanization” software.
But you cannot add the genuine engagement with ideas that should have been there from conception. You cannot retroactively inject the understanding that comes from actually reading and thinking about your sources. You cannot edit your way to authentic intellectual work.
The Plain Dealer proves this at scale. If professional editors can’t fix AI journalism, student self-editing definitely can’t fix AI essays.
The Detection Question: They Know
Students often ask: “But will they catch it?” The Cleveland story reveals how misguided this question is.
The Plain Dealer’s AI-generated articles are explicitly labeled as AI-assisted. Readers know these were drafted by AI. Yet journalists and media critics can still identify specific ways the quality falls short. They can point to exactly where genuine reporting would have gone deeper, asked better questions, or provided crucial context.
In academic contexts, professors might not always know with certainty which essays are AI-generated. But they can see the same quality gaps that journalists see in AI journalism: generic observations instead of original insights, surface-level engagement instead of deep analysis, adequate completion instead of genuine thinking.
More importantly, as detection technology improves—and it’s improving rapidly—the “will they catch it” question becomes irrelevant. Universities are deploying sophisticated detection tools. Professors are getting better at recognizing AI patterns. The window for submitting undetected AI work is closing fast.
But even if detection weren't improving, the Cleveland experiment reveals a deeper truth: the question of whether you get caught misses the entire point. The journalists at the Plain Dealer aren't primarily concerned about whether readers can detect AI writing. They're concerned that AI writing degrades the quality of their work and undermines their professional development.
Similarly, whether or not your professor catches your AI essay, you’re still degrading your own education and undermining your own development. Getting away with it doesn’t make it valuable.
What Students Should Learn From Cleveland’s Mistake
The Plain Dealer experiment offers several crucial lessons for students:
Lesson 1: Professional Resources Don’t Fix AI’s Fundamental Problems
The Plain Dealer has experienced editors, institutional knowledge, and quality control processes. None of these advantages solve AI’s core limitations. Students operating alone, without professional editing support, face even steeper odds of producing quality work from AI drafts.
Lesson 2: Short-Term Metrics Mislead
Traffic numbers don’t capture journalism quality, just as grades don’t always capture learning. Both can create false confidence that masks long-term damage.
Lesson 3: Expertise Matters More Than Ever
AI makes the absence of genuine expertise more obvious, not less. In journalism and academia alike, the people who actually know what they’re doing can immediately spot the difference between AI-generated content and expert work.
Lesson 4: Editing Cannot Replace Thinking
No amount of revision can transform AI-generated text into genuine intellectual work. The thinking has to happen first, during the writing process, not afterward during editing.
Lesson 5: Your Name Matters
When journalists see their bylines on AI-generated articles, they experience a disconnect between their professional identity and the quality of work bearing their name. Students should feel the same way about submitting AI essays. Your name on an assignment represents your thinking, your work, your development. AI-generated content betrays that representation.

The Alternative: What Genuine Expertise Looks Like
The Cleveland experiment is valuable precisely because it demonstrates what doesn’t work. But what’s the alternative?
At the Plain Dealer, the answer is obvious: actual reporting by experienced journalists who understand their beats, know their sources, and can provide the insight and context that makes journalism valuable. This isn’t a revolutionary solution—it’s the traditional approach that built the paper’s 184-year reputation.
In academic contexts, the parallel is equally clear: actual writing by students who read their sources, think through ideas, and develop their own arguments. Or, when students need support, work with genuine experts who can model what quality academic work looks like.
This is where Unemployed Professors enters the picture. Our service exists precisely because we recognize what the Cleveland experiment proves: AI cannot replace genuine expertise, and attempts to use AI as a shortcut inevitably degrade quality.
Our writers are PhD-level scholars who actually understand the subjects they write about. They conduct real research, develop original arguments, and write with the authentic voice that comes from genuine expertise. When you receive an essay from us, you’re not getting polished AI output—you’re getting work that demonstrates the kind of thinking and engagement that professors expect.
More importantly, our essays serve as models that show you what expert-level work looks like. Just as journalism students learn from studying excellent reporting, academic students can learn from studying expertly crafted essays. You see how genuine scholars approach topics, construct arguments, and engage with sources.
This educational function is what AI can never provide. AI can show you what algorithmic text assembly looks like. Expert scholars can show you what actual thinking looks like.
The Long-Term Perspective
The Cleveland Plain Dealer will likely continue its AI experiment for some time. Traffic metrics provide enough cover for administrators to justify the approach. But the long-term costs are accumulating in institutional credibility, newsroom morale, coverage quality, and professional development.
Students face parallel long-term costs when they rely on AI: knowledge gaps, underdeveloped skills, professional unpreparedness, and compromised intellectual development.
The irony is that in both cases, the short-term efficiency gains are illusory. The Plain Dealer isn’t actually saving money if the quality degradation eventually costs them readers and reputation. Students aren’t actually saving time if their lack of development eventually costs them career opportunities and professional success.
Real efficiency comes from genuine expertise working effectively. A skilled journalist can report a story more efficiently than AI plus editors because the journalist understands what matters from the start. A knowledgeable student can write an essay more efficiently than AI plus editing because the student actually understands the material.
And when students don’t yet have that expertise, working with actual experts—not AI—provides the learning foundation for developing it.
Conclusion: Choose Differently
The Cleveland Plain Dealer made a choice: prioritize traffic metrics and short-term efficiency over quality journalism and professional development. The consequences are playing out in real-time, and they’re instructive.
Students face the same choice every time they consider using AI for academic work. You can prioritize grades and short-term efficiency over actual learning and skill development. The Cleveland experiment shows where that path leads.
Or you can choose differently. You can engage genuinely with your work. When you need support, you can work with actual experts who demonstrate what quality looks like rather than AI that simulates it superficially.
The journalists at the Plain Dealer understand that real reporting requires real reporters. It’s time for students to understand that real academic work requires real thinking—either your own, or modeled by genuine experts who can teach you what quality looks like.
The Cleveland experiment proves that institutional resources, professional editing, and careful oversight cannot make AI writing work at an acceptable level of quality. If a 184-year-old newspaper can't solve this problem, individual students definitely can't.
The solution isn’t better AI tools or cleverer editing. The solution is genuine human expertise. Choose accordingly.