Harvard University has decided to use generative AI as an official learning tool in its main coding course. Starting this fall, students taking Computer Science 50 (CS50) will be encouraged to use AI to help them with coding problems, receive feedback on their designs, and get answers to questions about error messages and unfamiliar lines of code.
“Our own hope is that, through AI, we can eventually approximate a 1:1 teacher [to] student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually,” says CS50 professor David J. Malan, as reported by The Harvard Crimson.
This is a significant change from the previous school year, when Harvard had no policy regarding AI. However, the university has decided not to use popular AI tools like ChatGPT or GitHub Copilot because they are considered “too helpful” at the moment.
Instead, Harvard has developed its own large language model, a “CS50 bot” that will be “similar in spirit” but will focus on “leading students toward an answer rather than handing it to them,” Malan says.
CS50 is also available to non-Harvard students on the online platform edX, and the new AI policy will extend to the edX version. “Even if you are not a student at Harvard, you are welcome to ‘take’ this course for free by working your way through the course’s eleven weeks of material,” says the site. Teachers at other institutions can also license the material for their own courses.
“Providing support that’s tailored to students’ specific questions has long been a challenge at scale via edX and OpenCourseWare more generally, with so many students online, so these features will benefit students both on campus and off,” Malan says.
Harvard’s decision to incorporate generative AI into its curriculum adds a new aspect to the use of tools like ChatGPT in academic settings. Since the launch of ChatGPT in November 2022, teachers and professors have faced challenges with students submitting work generated by AI.
In fact, one professor at Texas A&M University-Commerce even refused to grade work that he believed was created using ChatGPT. And although there are tools available to detect AI-generated text, comparable detectors for AI-generated code are still lacking.
“We’ll make clear to students that they should always think critically when taking in information as input, be it from humans or software,” Malan says. “But the tools will only get better through feedback from students and teachers alike. So they, too, will be very much part of the process.”