A senior at Northeastern University has stirred debate over academic transparency after filing a formal complaint and demanding a tuition refund upon discovering her professor was using AI tools—including OpenAI’s ChatGPT—to create lecture materials. The incident highlights a shifting dynamic in higher education, where concerns over AI use are no longer limited to student misconduct but now extend to faculty behavior as well.

Ella Stapleton, a recent business graduate, grew suspicious of her professor’s lecture notes when she noticed telltale signs: typos characteristic of AI output, AI-generated images with distorted features, and even a direct “ChatGPT” citation. “He’s telling us not to use it, and then he’s using it himself,” Stapleton told The New York Times.

Her formal complaint to the business school included a refund request of over $8,000 for the course. However, Northeastern University rejected the claim after internal discussions. The professor in question, Rick Arrowood, admitted to using several AI platforms—ChatGPT, Perplexity AI, and Gamma—to prepare class content. He acknowledged his oversight, stating, “In hindsight…I wish I would have looked at it more closely,” and emphasized the importance of transparency when integrating AI into academia.

While some students embrace AI tools for efficiency, others, like Stapleton, argue that high tuition should guarantee human-led instruction. Universities, including Northeastern, are still navigating the ethics of AI in education; Northeastern’s policy mandates proper attribution and accuracy checks for AI-generated content.

The incident reflects a growing concern among students that AI, once seen as their shortcut, may instead diminish the value of their education when used without disclosure by those meant to teach them.