While it might not make the dean’s list, the chatbot ChatGPT has been put through a battery of academic tests. So far, it has proven a competent writer, test taker, and study tool. But educators appear conflicted about the artificially intelligent chatbot, and for good reason.
ChatGPT has passed law exams at the University of Minnesota and a business exam at the University of Pennsylvania’s Wharton School of Business, according to CNN. On the Minnesota exams, it earned a C+ after answering 95 multiple-choice questions and 12 essay questions.
At Wharton, professor Christian Terwiesch tested the technology with questions from his operations management final exam, previously required coursework for all MBA candidates, and published the results. The chatbot earned a grade in the B to B- range. In a paper describing its performance, Terwiesch wrote that it did “an amazing job” answering basic questions on operations management and process analysis.
Terwiesch also noted that ChatGPT made surprising errors in calculations no harder than sixth-grade math, and that it struggled with more advanced questions requiring it to reason about the interrelationship of multiple inputs and outputs.
ChatGPT also passed an AP English essay assignment, according to the WSJ. One WSJ columnist returned to high school for a day to see whether the chatbot could keep up in a 12th-grade AP Literature class. After using it to write a 500- to 1,000-word essay arguing a position “that attempts to situate Ferris Bueller’s Day Off as an existentialist text,” she received a grade in the B-to-C range.
On another occasion, a philosophy professor at Furman University caught a student submitting an essay he described as “well-written misinformation,” according to Insider.
“Word by word, it was a well-written essay,” the professor told Insider. On closer inspection, however, he noticed that the essay made a claim about the philosopher David Hume that “made no sense” and was “just flatly wrong,” Insider reported.
In a January interview, Sam Altman, CEO of OpenAI, said that while the company will develop methods to help schools find plagiarism, he cannot promise perfect detection.
On the other hand, Bloomberg podcaster Matthew S. Schwartz tweeted that the “take home essay is dead.” He explained how ChatGPT “responded *instantly* with a solid response” after he submitted a law school essay prompt.
In another case, ChatGPT passed a clinical reasoning final at Stanford Medical School with an overall score of 72 percent, according to a YouTube video posted by Eric Strong, a clinical associate professor at Stanford.
That’s not all.
ChatGPT also passed a Google coding interview for a level three (L3) engineer, according to PCMag. Google teams fed the chatbot interview questions used to assess candidates for an L3 engineering position.
“Amazingly, ChatGPT gets hired at L3 when interviewed for a coding position,” an internal Google document reportedly stated.