Accused of Cheating by AI: One Student’s Fight for Justice Exposes a Growing Campus Controversy
In a case that has sent shockwaves through higher education, a student at Adelphi University has prevailed in a legal battle over what he called a baseless accusation of using artificial intelligence to write an essay. The ruling not only clears his name but also raises critical questions about how colleges handle AI in academia: Are universities relying too heavily on flawed AI detection tools, potentially ruining students’ academic careers in the process?
Orion Newby, a 20-year-old history major at Adelphi, found himself at the center of this debate when the university accused him of using AI to write a paper on Christianity and Islam. The accusation, which Newby vehemently denied, led to disciplinary action that could have jeopardized his academic standing. Determined to prove his innocence, Newby took the university to court, and won. A Nassau County Supreme Court judge ruled that Adelphi’s findings were ‘without valid basis and devoid of reason,’ ordering the school to expunge Newby’s record. ‘It feels incredible to finally have my name cleared,’ Newby said. ‘Winning this case is a huge weight off my shoulders.’
The Accusation and the Aftermath
The saga began in fall 2024, during Newby’s first semester at Adelphi. A third-degree black belt in tae kwon do and an ocean lifeguard, Newby has long worked to overcome learning and neurological disabilities, including language and auditory processing disorders and ADHD. He spent hours refining his essay with the help of tutors, including those from the university’s Bridges program, which supports students with disabilities. Yet, Assistant Professor Micah Oelze gave the paper a zero, suspecting it was AI-generated. Oelze later filed a violation report, citing the AI detection tool Turnitin, which flagged the essay as 100% AI-written.
Newby was stunned. ‘I thought at first I was going to get arrested,’ he recalled. He submitted his paper to two other AI detection tools, both of which confirmed it was human-written. But Adelphi’s academic integrity officer upheld the accusation, leaving Newby and his family with no choice but to sue. ‘The same thing could have happened again and resulted in expulsion,’ said Candace Newby, Orion’s mother. The family spent six figures on legal fees, underscoring the high stakes of such accusations.
The Bigger Picture: AI on Campus
Newby’s case is just one example of a broader issue. As AI use explodes on campuses, colleges are grappling with how to address its role in academic integrity. According to the 2025 AI in Education Trends Report, nearly 90% of college students admit to using AI in their academic work. Meanwhile, professors are increasingly relying on AI detection tools like Turnitin, which claims 96% accuracy. But is that enough?
A growing number of educators argue that these tools are not reliable enough to carry such weight, especially when students’ academic futures hang in the balance. ‘In most cases, it’s difficult to say with a sufficiently high degree of certainty that a person has used AI,’ said Jim Samuel, an AI researcher at Rutgers University. Yet doing nothing, he warned, could normalize cheating.
The Human Cost of AI Policies
Newby’s attorney, Mark Lesko, believes the ruling should prompt universities to overhaul their AI policies. ‘This is a bellwether example of why universities need to be very careful and protective of their students,’ Lesko said. He’s not alone in his concerns. A recent survey found that 90% of college professors believe AI will harm students’ critical thinking skills, while 73% have dealt with academic integrity issues involving AI.
Some schools are taking a more collaborative approach. The University of Virginia, for instance, is forming a student council to advise on AI policies. ‘Students must be at the table,’ said Mona Sloane, a professor at the university. Others suggest that professors should focus on teaching students the value of critical thinking over AI reliance. ‘We don’t need to think about our students as adversaries,’ said Britt Paris, chair of the American Association of University Professors’ AI committee.
Where Do We Go From Here?
As universities navigate this uncharted territory, one thing is clear: The current approach isn’t working. ‘We can go back to the way things used to be, or we can go forward to the way things could be,’ said James Brusseau, a philosophy professor at Pace University. ‘But the way things are right now obviously doesn’t work.’
Newby’s victory is a wake-up call for higher education. It forces a hard question: Are universities sacrificing fairness for convenience, and at what cost? Whether human judgment or automated detection should decide accusations of cheating is a question every campus now has to answer.