The Washington Post Says Schools Are Teaching AI Wrong—But They’re Missing What Students Actually Need

On March 10, 2026, The Washington Post published an opinion piece with a provocative claim: “Schools are teaching AI all wrong.” Authors Jenny Anderson and Rebecca Winthrop argue that educational institutions focus on teaching students how to use AI tools rather than helping them develop agency over the technology itself.

They’re absolutely right about the problem. But they’re proposing an institutional solution—curriculum reform, better frameworks, teacher training—that will take years to implement. Meanwhile, students facing AI-related decisions today are left without practical guidance.

This is the gap Unemployed Professors exists to fill. While educators debate optimal AI pedagogy, students need actionable answers right now: How should I actually use AI for academic work? What’s responsible versus irresponsible? Where’s the line?

Let’s break down what the Washington Post got right, what they missed, and what students actually need to navigate AI in education today.

What the Washington Post Gets Right

Anderson and Winthrop correctly identify a fundamental flaw in current AI education: schools teach students to be AI users, not AI critics. They want students to develop “agency over the technology,” not just learn how to use it.

This is a valid concern. When schools teach AI literacy, they often focus on:

  • How to prompt ChatGPT effectively
  • Which AI tools exist for different tasks
  • Basic understanding of how large language models work
  • Tips for detecting AI-generated content

What they rarely teach:

  • Critical evaluation of AI limitations
  • Ethical frameworks for AI decision-making
  • Understanding when AI use undermines learning
  • Developing judgment about appropriate AI deployment

The authors are right that this creates a problem. Students learn technical skills for using AI without developing the critical thinking needed to make wise decisions about when and how to use it.

A student who knows how to write effective prompts but can’t recognize when AI use sabotages their own education is set up for failure, both academically and professionally.

Where the Washington Post Analysis Falls Short

The op-ed’s diagnosis is correct, but the implied solution is institutional: schools need to teach AI differently. Develop better curricula. Train teachers in AI literacy. Create frameworks for critical AI engagement.

All worthy goals. All requiring years of implementation. All leaving current students without guidance.

Consider the timeline:

  • Curriculum committees convene: 6-12 months
  • New frameworks developed: 1-2 years
  • Teacher professional development: Ongoing
  • Full implementation across schools: 3-5 years minimum

Meanwhile, students face AI-related decisions every single day:

  • Should I use ChatGPT to draft this essay?
  • Can I use AI for research organization?
  • Is it okay to have AI check my grammar?
  • Where’s the line between AI assistance and AI replacement?

The Washington Post addresses educators and policymakers. But students can’t wait for institutions to catch up. They need practical frameworks for responsible AI use today, not theoretical models for future curricula.

The Reality Students Face Right Now

Let’s ground this in the actual student experience. A sophomore writing a research paper on climate policy faces these decisions:

Scenario 1: AI as Research Assistant
Uses ChatGPT to understand complex climate models, then reads actual scientific papers, develops her own argument, and writes the essay herself.

Scenario 2: AI as Ghostwriter
Prompts ChatGPT to “write a 2000-word essay on climate policy challenges,” makes minor edits, submits it as her own work.

Most students intuitively understand that these scenarios represent different ethical territories. But the space between them contains an enormous gray area:

  • Using AI to generate an outline?
  • Having AI suggest thesis statements to evaluate?
  • Asking AI to improve sentence clarity in your own writing?
  • Using AI to explain source material you’ll then engage with yourself?

Schools teaching “AI agency” might eventually help students navigate these questions. But that future curriculum doesn’t help the sophomore with a paper due Friday.

What Students Actually Need: A Practical Framework

Rather than waiting for institutional AI education reform, students need a practical decision-making framework they can apply immediately. Here’s what that looks like:

The Core Principle: AI Should Enhance Your Thinking, Not Replace It

Every AI use decision should be evaluated against this standard: Does this AI use help me think better, or does it prevent me from thinking at all?

AI Enhancing Thinking:

  • Explaining complex concepts so you can study them more effectively
  • Organizing research notes so you can see patterns and connections
  • Providing feedback on drafts you’ve written so you can improve
  • Suggesting approaches to problems you then evaluate critically

AI Replacing Thinking:

  • Generating arguments you didn’t develop
  • Writing analysis you haven’t done
  • Creating content you don’t understand
  • Producing work that bypasses the learning process

The Agency Question: Who’s Making Decisions?

The Washington Post authors emphasize student agency. Here’s how to apply that practically:

Ask yourself: In this AI interaction, who is making the actual intellectual decisions?

You Have Agency:

  • You decide which AI suggestions to accept or reject
  • You evaluate AI output against your own understanding
  • You use AI as one input among many in your thinking process
  • You maintain ownership of your final work’s arguments and analysis

AI Has Agency (Problem):

  • You accept AI output without critical evaluation
  • You can’t explain or defend the work AI produced
  • You’re outsourcing judgment to the algorithm
  • You don’t understand the content well enough to improve it

The Learning Test: Can You Reproduce This?

Here’s a practical question that cuts through ambiguity: If you used AI to help with this assignment, could you produce similar quality work without AI next time?

If yes, the AI helped you learn. If no, the AI prevented learning.

Passes the Learning Test:

  • Using AI to understand difficult theories, then applying those theories yourself
  • Having AI organize your research notes, then writing your own analysis
  • Getting AI feedback on your draft, then implementing improvements you understand
  • Using AI as a tutor to strengthen weak areas so you’re better equipped to work independently

Fails the Learning Test:

  • Having AI write your essay because you can’t write at that level yourself
  • Using AI to complete assignments in topics you don’t understand
  • Relying on AI to produce work you couldn’t evaluate as correct or incorrect
  • Creating dependency on AI rather than building your own capabilities

Why Institutional Solutions Are Necessary But Insufficient

The Washington Post is right that schools need better AI education. The current approach—teaching students to prompt ChatGPT without teaching them to evaluate when they should—is inadequate.

But institutional reform has limitations even when it happens:

Institutions Move Slowly
By the time comprehensive AI literacy curricula are widely implemented, the AI landscape will have changed dramatically. The frameworks being developed now address GPT-3.5 and GPT-4. What about GPT-7 or whatever comes next?

Institutions Focus on Policies, Not Judgment
Schools can establish rules: “AI allowed for X, prohibited for Y.” But education requires judgment that rules can’t fully capture. Students need to develop ethical reasoning, not just rule-following.

Institutions Can’t Address Individual Contexts
A school-wide AI policy applies to all students equally. But a student who struggles with writing needs different AI boundaries than a student who’s a confident writer. Blanket policies can’t account for individual learning needs.

Institutions Can’t Monitor Everything
Even with clear AI policies, much student work happens outside institutional oversight. Students make dozens of AI-related decisions that teachers never see. They need internalized ethical frameworks, not just external rules.

This is where expert human guidance becomes essential—and it’s precisely what Unemployed Professors provides.

The Human Expertise Alternative

While institutions develop better AI curricula, students need access to genuine human expertise that demonstrates what AI fundamentally cannot provide: actual thinking, real understanding, and authentic engagement with ideas.

This is why our service model matters. We don’t help students use AI better. We provide what AI cannot: work from genuine scholars who actually understand their subjects.

When you work with Unemployed Professors:

  • You receive essays written by scholars with real expertise in your field
  • You see how genuine scholars approach topics, construct arguments, and engage sources
  • You study model work that demonstrates authentic intellectual engagement
  • You learn what quality actually looks like—something AI cannot teach you

Why This Complements Responsible AI Use

Students using AI responsibly still face a challenge: they lack models of genuine expertise to aspire to. AI can explain concepts, but it can’t demonstrate how experts think.

Studying work from actual scholars shows you:

  • How genuine understanding shapes writing
  • What authentic engagement with sources looks like
  • How original arguments develop from deep knowledge
  • The difference between AI-assembled text and expert-crafted analysis

This educational function is what AI literacy curricula aim for but can’t fully achieve. You can learn about critical thinking through instruction, but you learn critical thinking by seeing it modeled.

What 85% of Students Are Already Doing

The Washington Post’s concerns aren’t hypothetical. According to recent surveys, approximately 85% of undergraduates are using AI for coursework—for brainstorming, outlining papers, and studying for exams.

This widespread adoption happened without institutional guidance. Students are making AI decisions on their own, often without clear frameworks for responsible use.

This creates several problems:

Problem 1: Students Think They’re Using AI Responsibly When They’re Not

Many students using AI for “brainstorming” are actually having AI generate their core arguments, which they then adopt uncritically. They believe this is responsible use because they’re not copying AI text verbatim, but they’re still outsourcing the crucial intellectual work.

Problem 2: Students Don’t Recognize Long-Term Consequences

Short-term AI use might earn passing grades while creating knowledge gaps that compound. A student using AI to complete organic chemistry problem sets might pass the class but fail spectacularly when that knowledge is prerequisite for advanced courses.

Problem 3: Students Lack Access to Better Alternatives

Students turn to AI because they don’t know what else to do. They struggle with assignments, and AI is immediately available. They don’t consider that genuine human expertise, whether developing their own capabilities or consulting expert models, would serve them better.

The Policy Vacuum Problem

The Washington Post article exists within a broader context: schools are largely operating without clear AI policies. Recent research shows that while many states provide guidance, most leave actual policy to individual districts or schools.

This creates enormous variation in how AI is addressed:

  • Some districts ban AI entirely
  • Others embrace it fully without guardrails
  • Most are somewhere in the confused middle
  • Individual teachers create their own conflicting policies

Students navigating this inconsistency need personal ethical frameworks that work regardless of institutional policy. They need to make good decisions about AI use whether their school has clear rules or not.

What Students Can Do Right Now

While waiting for better AI education from institutions, students can take these practical steps:

1. Develop Your Own AI Use Policy

Rather than waiting for your school to tell you what’s allowed, create personal guidelines based on what serves your learning:

  • I will use AI to enhance my understanding, never to replace it
  • I will only accept AI suggestions I can explain and defend
  • I will ensure any AI assistance strengthens rather than weakens my capabilities
  • I will maintain ownership of my intellectual work

2. Test Your AI Use Against Learning Outcomes

After using AI for an assignment, ask:

  • Do I understand the material better than before?
  • Could I complete similar work without AI next time?
  • Can I explain everything in what I submitted?
  • Did this process develop my capabilities?

If the answer to any question is “no,” your AI use was problematic.

3. Seek Human Expertise When You Need Models

When you need to understand what quality work looks like, consult actual human experts rather than relying on AI:

  • Study example essays from genuine scholars
  • Seek feedback from professors on your work
  • Consult professional writing services that employ real experts
  • Learn from humans who actually understand your field

4. Practice Transparent AI Use

Be honest with yourself and others about what AI did:

  • If AI helped you understand something, that’s legitimate—acknowledge it
  • If AI generated content you adopted, that’s problematic—don’t pretend it’s yours
  • If you’re unsure whether your AI use was appropriate, err on the side of disclosure

5. Invest in Developing Actual Capabilities

The best defense against AI dependency is genuine capability:

  • Do the reading even when AI could summarize it
  • Write the draft yourself even when AI could generate it
  • Develop understanding even when AI could explain it
  • Build skills even when AI could compensate for their absence

The Long-Term Perspective

The Washington Post authors are right that better AI education is essential. Students do need to develop agency over technology rather than just technical skills for using it.

But they’re wrong to frame this primarily as an institutional challenge. While schools work on better curricula, students need practical guidance for navigating AI decisions today.

More importantly, the solution isn’t just better AI education—it’s ensuring students have access to genuine human expertise that demonstrates what AI cannot provide.

This is why services like Unemployed Professors matter. We’re not teaching you to use AI better. We’re showing you what genuine expertise looks like, what authentic thinking produces, and why human understanding matters.

When you study work from our scholars, you’re not learning about AI. You’re learning what AI fundamentally cannot do: think, understand, and engage with ideas in ways that create actual knowledge.

The Bottom Line

The Washington Post says schools are teaching AI wrong. They are. Schools focus on teaching AI use rather than critical AI literacy. This needs to change.

But students can’t wait years for institutional reform. You need frameworks for responsible AI decisions today.

Here’s what that means practically:

Use AI to enhance thinking, never replace it. When AI helps you understand material you’ll then engage with yourself, that’s responsible use. When AI does your thinking for you, that’s problematic.

Maintain agency in all AI interactions. You should be making the actual intellectual decisions. AI can provide input, but you must evaluate, judge, and ultimately own your work.

Apply the learning test. If AI use leaves you more capable afterward, it’s probably appropriate. If it creates dependency or prevents skill development, it’s not.

Seek human expertise when you need models. AI can explain concepts, but genuine experts demonstrate how real understanding works. Study their work to learn what quality looks like.

Develop your capabilities even when AI offers shortcuts. The goal isn’t just completing assignments—it’s becoming educated. That requires doing the work that builds understanding.

The Washington Post is right that AI education needs reform. But you don’t need to wait for institutions to figure this out. You can make responsible AI decisions right now using clear ethical frameworks.

And when you need to see what genuine expertise looks like—what authentic thinking produces, what real understanding enables—that’s where human experts become essential.

AI is a tool. Human expertise is irreplaceable. Know the difference, and you’ll navigate this landscape successfully regardless of what your school teaches.

Choose Real Work! Choose Unemployed Professors!
