Professors Don’t Trust Your Writing Anymore. Now What?

Something is happening in college classrooms right now that should get every student’s attention.

Professors are giving back perfect papers — and then asking students to explain what they wrote. And what they are finding, at universities from Cornell to NYU to Penn, is the same thing over and over: students who submitted flawless work cannot explain a single sentence of it. They stare. They stumble. They go blank.

The Associated Press published an investigation today documenting this phenomenon. The headline says everything: “Perfect homework, blank stares.” Professors across the country are watching their students hand in work that reads like a McKinsey memo that went through three rounds of editing — and then sit across from them unable to articulate a single argument from the essay they supposedly wrote.

Panos Ipeirotis, a professor at NYU’s Stern School of Business, put it plainly in the piece: “I don’t trust written assignments anymore to be the result of actual thinking.”

Read that again. A professor at one of the most prestigious business schools in the country does not trust written assignments anymore.

He is not alone. Not by a long shot.

[Infographic: "The AI Academic Integrity Crisis Has Reached a Tipping Point" (Unemployed Professors, March 2026, by the numbers). Headline statistics: 92 percent of faculty are concerned about AI academic dishonesty (College Board); Georgia Tech AI integrity referrals have tripled since 2023-24; at least 12 major universities, including Yale, Johns Hopkins, and Vanderbilt, have disabled Turnitin's AI detection. Further faculty survey findings: 90 percent believe AI will diminish critical thinking; 78 percent say cheating has increased since AI became available; 74 percent believe AI will diminish the value of degrees; 51 percent of University of Georgia dishonesty cases now involve alleged AI use; blue book sales at Georgia universities jumped 125 percent. Pull quote from Georgia Tech professor Amy Bruckman: "I don't think the general public understands how desperate the situation is."]

The Oral Exam Is Back — And It’s Coming for Every Campus

Universities are responding to the AI flood in the only way that actually works: they are going back to basics. Oral exams. Face-to-face defenses. Professors looking students in the eye and asking: do you actually know this?

At Cornell, biomedical engineering professor Chris Schaffer now requires 20-minute oral defense sessions after every written problem set. He no longer grades the written work at all. The grade comes entirely from the oral defense. His reasoning was blunt: “You won’t be able to AI your way through an oral exam.” Cornell has launched a formal Oral Assessment Workshop through its Center for Teaching Innovation. Another Cornell engineering professor is conducting four-minute mock interviews with each student in a class of 180. The math alone tells you how seriously they are taking this.

At the University of Pennsylvania, a growing number of faculty now pair oral exams with written papers. Bruce Lenthall, executive director of Penn’s Center for Teaching and Learning, confirmed that the institution has seen a massive shift toward in-person assessments, with faculty workshops on oral exams becoming standard.

At NYU, Professor Ipeirotis went even further. He built an AI-powered oral exam system — a cloned professor’s voice that questions students about their submissions with adaptive follow-up questions. He calls it fighting fire with fire. In pilot testing with 36 students, the results were striking. The written work looked suspiciously good. The oral follow-ups revealed that many students who submitted apparently thoughtful work could not explain basic decisions in their own submissions after just two follow-up questions.

This is not a fringe movement. The AP story was syndicated to Fortune, ABC News, the Philadelphia Inquirer, the Boston Globe, NBC stations coast to coast, and dozens of regional outlets. The oral exam comeback is now mainstream news.

The question for every student reading this is: what does that mean for you?

The Numbers Behind the Crisis

The oral exam movement did not emerge from nowhere. It is the direct response to a crisis that has been building since November 2022 and has now reached a point where professors across disciplines are telling researchers, journalists, and each other that the system is breaking down.

A College Board survey released in February 2026 found that 92 percent of faculty are concerned about AI-facilitated academic dishonesty. A separate national survey by the American Association of Colleges and Universities found that 90 percent of instructors believe AI will diminish students’ critical thinking skills, and 74 percent believe it will diminish the value of academic degrees.

At Georgia Tech, AI-related academic integrity referrals have more than tripled since the 2023-24 academic year. At the University of Georgia, 51 percent of all academic dishonesty cases now involve alleged AI use. Emory University professor Catherine Nickerson gave blue book exams for the first time since 1999. Blue book sales at Georgia university bookstores jumped 125 percent.

Georgia Tech professor Amy Bruckman told the Atlanta Journal-Constitution: “I don’t think the general public understands how desperate the situation is.”

Meanwhile the detection arms race is producing casualties on both sides. Turnitin has identified more than 150 dedicated “humanizer” tools students use to make AI output look human. At the same time, the same detection technology is generating false accusations that are destroying innocent students’ records. At least 12 major universities — including Yale, Johns Hopkins, and Vanderbilt — have disabled Turnitin’s AI detection entirely because the false positive rate is simply too high to be trusted for academic integrity decisions.

The entire detection-and-evasion framework that universities spent three years building is collapsing under its own weight. The oral exam is the response. And it reveals something important about where this is all heading.

What the “Gen Z Stare” Actually Tells Us

The phenomenon that professors are describing — the blank look from a student who submitted polished work but cannot engage with it verbally — has a name now. The AP story called it the “Gen Z stare.” It is not a generational failure. It is a structural one.

When a student uses AI to generate an essay, they do not receive an education. They receive a document. The document may be grammatically sophisticated, may use the right vocabulary, may cite the right sources in the right format. But the student who submitted it has not done the thinking that the assignment was designed to develop. There is no argument in their head that they can defend, because they did not develop the argument. There is no analysis they can explain, because they did not conduct the analysis.

The Gen Z stare is what happens when a student has a document but not an education.

And here is what the oral exam movement is really telling us: professors are not primarily worried about cheating. They are worried about what cheating is doing to students. University of Pennsylvania associate professor Emily Hammer said it directly in the AP story: “It comes across as if we’re trying to prevent cheating. That’s not why we’re doing this. We’re doing this because students are actually losing skills, losing cognitive capacity and creativity.”

She is right. The AI flood is not just an academic integrity problem. It is a learning problem. Students who outsource their thinking to a language model are not developing the intellectual capabilities that a university education is supposed to produce. They arrive at the oral exam — or the job interview, or the graduate school discussion, or the professional conversation — with nothing to say.

These are the real stakes. Not the grade. The development.

 
[Infographic: "AI Help vs. Human Expert Help" (Unemployed Professors). AI help: no genuine understanding; cannot answer follow-up questions because there is no understanding behind its output; nothing to learn from; produces the Gen Z stare, a perfect paper but a blank face when asked to explain paragraph one. Human expert help (Unemployed Professors, since 2010): work produced by someone with real disciplinary knowledge; gives the student work they can engage with and ask questions about; models how an expert thinks about the subject; survives the oral exam because it is grounded in real understanding. Bottom panel: professors at Cornell, NYU, and Penn are moving to oral exams not to catch cheaters but because students are losing skills, cognitive capacity, and creativity, and that gap only closes with genuine human expertise.]

The Problem Is Not Getting Help. It’s Getting the Wrong Kind of Help.

Here is where we need to make a distinction that the current conversation around AI and academic integrity almost always misses.

The problem is not that students get help with their academic work. Students have always gotten help. Writing centers, tutors, study groups, editors, mentors — the history of academic support is as long as the history of academic work. Ghostwriting and professional academic writing services have existed for decades. There is nothing new about a student working with a more knowledgeable person to produce academic work.

The problem is that AI is not a more knowledgeable person. It is a pattern-matching system that generates statistically probable sequences of words without understanding a single one of them. When a student gets help from a human expert — a professor, a tutor, a professional academic writer — that expert produces work that reflects genuine understanding of the subject. The student can read it, engage with it, learn from it, ask questions about it. It is a model of actual expertise.

When a student gets “help” from ChatGPT, they get text that sounds like expertise without containing any. The AI does not know what it is talking about. It cannot answer follow-up questions because there is no understanding behind its answers. It cannot help the student develop because it has no understanding to share. The Gen Z stare is the inevitable result: polished output, zero understanding, nothing to defend.

This distinction matters enormously for what students actually do next.

If you are a student who needs help with academic work — and plenty of students genuinely do, for entirely legitimate reasons — the question is not whether to get help. It is what kind of help actually serves your interests.

AI help produces documents you cannot defend and from which you learn nothing. It creates exactly the situation the oral exam movement is designed to expose.

Human expert help — from someone who actually understands your subject — produces work you can engage with, learn from, and use as a model for developing your own capabilities. It gives you something you can actually talk about.

Where Unemployed Professors Fits In

Unemployed Professors has been employing genuine human experts to produce academic work since 2010. That was twelve years before ChatGPT existed and sixteen years before professors at Cornell and NYU started standing in front of students and asking them to defend their work.

Our model has never changed: real academics, verified credentials, subject-matter expertise, authentic human scholarship. Every piece of work produced through Unemployed Professors is written by a human being who actually knows the subject — someone with the disciplinary formation, the genuine understanding, and the authentic scholarly voice that no AI can produce.

The oral exam movement does not threaten that model. It validates it.

When an Unemployed Professors writer produces an essay on postcolonial theory, they produce it because they understand postcolonial theory. When they write about behavioral economics, they are applying genuine knowledge of behavioral economics. The work reflects authentic intellectual engagement with the subject — the kind of engagement that a student can read, study, and draw from in their own development.

That is fundamentally different from what ChatGPT produces. And it is the difference that matters most now that professors are sitting across from students and asking: do you actually know this material?

Work produced by a genuine expert gives you something to know. Work produced by a language model gives you something that looked like something to know.

The choice between those two things has never been more consequential.

What Students Should Do Right Now

The oral exam movement is accelerating. It is now at Cornell, NYU, Penn, and dozens of other institutions, and the AP wire story that went out today means it will be at more institutions by the time the next semester begins. The Chronicle of Higher Education, Inside Higher Ed, and every major academic outlet in the country are covering this shift. Faculty workshops on oral assessment are filling up. The direction of travel is unmistakable.

Students who have been relying on AI to generate their written work are going to find themselves in an increasingly exposed position. The oral exam is not the only pressure point: professors are also simply getting better at recognizing AI-generated prose on sight, without any software. The characteristic patterns, the generic register, the absence of genuine argumentative development: experienced readers spot these the way they spot bad writing.

The students who are going to navigate this environment successfully are the ones who either genuinely develop their own academic capabilities or work with genuine human experts who produce work that reflects real intellectual engagement — work they can learn from, work they can discuss, work that helps them develop rather than leaving them staring blankly at a professor who is waiting for an answer.

That is what Unemployed Professors offers. And it has never been more relevant than it is today.

[Infographic: "The Gen Z Stare: What It Is and Why It Happens" (Unemployed Professors). A four-step chain: (1) a student prompts AI for a complete essay and receives polished text that reads like a McKinsey memo; (2) the student submits it without conducting research or developing genuine understanding; (3) the professor asks them to explain their work in an oral exam or follow-up discussion; (4) the Gen Z stare: the student cannot explain paragraph one because there was no genuine thinking to draw from. "What the Oral Exam Movement Proves": detection software is not the answer, because one follow-up question exposes the gap; the real loss is the learning, not just the grade; elite universities including Cornell, NYU, and Penn are leading a rapid shift toward oral assessment; and a document is not the same as an education. Pull quote from NYU Stern professor Panos Ipeirotis: "I don't trust written assignments anymore to be the result of actual thinking."]

The Bottom Line

Professors at elite universities are telling the Associated Press — and by extension the entire country — that they do not trust written assignments anymore.

That is a significant statement. It reflects three years of watching AI flood their classrooms with hollow, polished, machine-generated prose that looks like thinking and contains none of it. It reflects the complete failure of detection tools to reliably distinguish AI output from human work. It reflects the discovery, repeated in office after office and classroom after classroom, that students who submit perfect papers cannot explain what they wrote.

The answer is not to abandon written work. The answer is to ensure that written work actually reflects genuine human thinking — either the student’s own, or that of a human expert who actually knows the subject and produces work the student can genuinely engage with and learn from.

That is the model Unemployed Professors has operated on since 2010. It is the model that the oral exam movement is now — inadvertently but unmistakably — validating.

The “Gen Z stare” is the consequence of getting help from a machine. Working with Unemployed Professors gives you something better than a document you cannot defend. It gives you access to genuine expertise from someone who actually understands your subject.

POST YOUR PROJECT today and work with a verified human expert who actually knows what they are talking about.