AI in Academia: Students Challenge Rules as Professors Use ChatGPT
Students are turning to ChatGPT to write their homework—and professors are using it to grade and create it.
In a growing number of classrooms, no one may actually be reading—or writing—anything themselves.
Within two months of ChatGPT’s launch, surveys found 90% of college students were already using AI for academic support.
Now, instructor usage is accelerating too, with adoption nearly doubling in just a year.
But as both sides increasingly rely on generative AI, tensions are surfacing over fairness, transparency, and educational value.
In February, Ella Stapleton, then a senior at Northeastern University, noticed something unusual while reviewing notes from her Organisational Behaviour class.
Midway through a document her professor had shared was a prompt: “expand on all areas. Be more detailed and specific”—a direct instruction to ChatGPT.
The lesson included generic definitions of leadership traits, poorly formatted bullet points, and distorted stock photos—classic signs of AI-generated content.
She noted:
“He’s telling us not to use it, and then he’s using it himself.”
Frustrated by what she saw as a decline in quality—and a double standard—Stapleton filed a formal complaint with the business school.
The professors are using ChatGPT, and some students aren’t happy about it https://t.co/LPcX3duf0L
— Boston.com (@BostonDotCom) May 14, 2025
Citing the undisclosed use of AI and inconsistencies in teaching, she requested a tuition refund for the class, which accounted for over $8,000 of her semester fees.
The course’s own syllabus explicitly banned the unauthorised use of chatbots.
What began as a panic over students using AI to cheat has flipped.
Now, students are calling out professors for relying too heavily on the same tools.
On forums like Rate My Professors, complaints include AI-generated lectures filled with generic language and overused buzzwords like “crucial” and “delve.”
Some argue that if they are paying high tuition, they deserve human-led instruction—not content they could generate themselves for free.
Professors, however, say AI helps manage demanding workloads, offering efficiencies as automated teaching assistants.
«Now students are complaining on sites like Rate My Professors about their instructors’ overreliance on A.I. and scrutinizing course materials for words ChatGPT tends to overuse, like “crucial” and “delve.”» pic.twitter.com/o676jOcWgD
— João Cancela (@joaoc) May 14, 2025
A growing number are using these tools: in a national survey of more than 1,800 higher education instructors, frequent use of generative AI jumped from 18% to nearly 35% in just one year, according to consulting firm Tyton Partners.
Tech firms are eager to capitalise on this shift.
OpenAI and Anthropic have launched enterprise versions of their chatbots tailored to universities, signalling that AI is becoming a fixture of modern education.
Still, schools are struggling to adapt.
As institutions rush to define the role of AI in the classroom, it is now the educators who are navigating a learning curve—often under the critical gaze of their students.
Educators Advocate Responsible AI Use
At Virginia Commonwealth University, business professor Shingirai Christopher Kwaramba sees ChatGPT not as a threat, but as a partner.
What once took days—such as building lesson plans—now takes hours.
He uses the tool to generate fictional datasets for mock retail chains, allowing students to explore statistical concepts through hands-on exercises.
Kwaramba stated:
“I see it as the age of the calculator on steroids.”
The time saved, he notes, has allowed him to expand student office hours and provide more direct support.
i am taking a class right now with a professor who openly uses chatgpt for fucking everything. he encourages us to use it and he uses it to grade our work. like you put my assignment in chatgpt and had it spit out some evaluation. exactly wtf are you even getting paid for
— 020276.103174 (@020276_10387141) May 16, 2025
Others are embracing similar efficiencies.
At Harvard University, computer science professor David Malan has integrated a custom-built AI chatbot into his introductory programming course.
Designed to assist with coding assignments, the chatbot offers guided help—without giving away answers.
Malan continues to fine-tune its responses to ensure pedagogical value.
According to a 2023 survey, the majority of the course’s 500 students found the tool helpful.
Freed from routine troubleshooting during office hours, Malan and his teaching team now focus on deeper engagement through activities like weekly lunches and student-led hackathons—what he describes as “more memorable moments and experiences.”
Meanwhile, at the University of Washington, communication professor Katy Pearce has trained an AI chatbot on her past grading feedback.
The result: a tool that mirrors her tone and comments, offering students personalised writing critiques at any hour.
For many, especially those reluctant to seek help directly, it has opened new doors for learning.
She quipped:
“Is there going to be a point in the foreseeable future that much of what graduate student teaching assistants do can be done by AI? Yeah, absolutely.”
Together, these examples suggest a shift in how educators are reimagining their roles: leveraging AI not to replace themselves, but to enhance how, when, and where students learn.
Guiding Innovation or Enforcing Contradiction?
After filing her complaint, Stapleton met multiple times with Northeastern’s business school officials.
However, the day after her graduation in May, she was informed that her tuition reimbursement request had been denied.
Rick Arrowood, her professor, expressed regret over the situation.
An adjunct faculty member with nearly two decades of experience, Arrowood explained that he had uploaded his course materials to AI tools—including ChatGPT, the AI search engine Perplexity, and the presentation generator Gamma—to refresh and enhance them.
At first glance, the AI-generated content appeared polished.
He said:
“In hindsight, I wish I would have looked at it more closely.”
The worst thing about ChatGPT for my teaching: ChatGPT, in giving my students an alternative to skill-building, hurts their ability to learn, but more than that, it kills the trust that any teaching relationship depends on. https://t.co/w7Xue5AK4D via @slate
— anthropologyworks (@anthroworks) May 15, 2025
He made these materials available online for students’ reference but clarified that he did not use them during class, preferring a discussion-based teaching style.
It was only after the school questioned the documents that he recognised their shortcomings.
The episode prompted Arrowood to acknowledge the need for greater caution when integrating AI into teaching and the importance of transparency about its use.
Northeastern recently introduced a formal AI policy requiring faculty to attribute AI-generated content and verify its accuracy and suitability.
A university spokesperson affirmed that Northeastern supports using AI to advance teaching, research, and operations—while emphasizing responsible implementation.
Arrowood reflected:
“I’m all about teaching. If my experience can be something people can learn from, then, OK, that’s my happy spot.”