A hot potato: Once again, the problem of students using AI to cheat has reared its head. On this occasion, the University of Waterloo's Canadian Computing Competition (CCC) has decided not to publish its official results, as it usually does, because it believes some participants used AI to write their code.
Those who do well in the University of Waterloo's CCC are often accepted into the university's prestigious computing and engineering programs, or are even selected to represent Canada in international competitions, student Juan Marulanda De Los Rios told The Logic. A strong result can also help when applying for internships, jobs, or work experience programs.
The University normally releases students' CCC scores every year, but co-chairs J.P. Pretti and Troy Vasiga said in a statement that the 2025 results won't be made public. The reason for this decision is that it is "clear" many students submitted code they did not write themselves, relying instead on "forbidden external help."
The co-chairs add that because of the cheating, any ranking of students would not be equitable, fair, or accurate.
That forbidden external help includes AI tools. But it seems some students didn't adhere to the rules. University of Waterloo spokesperson David George-Cosh declined to say how many people cheated during the competition or which AI tools were used.
Teachers usually supervise screens during the test to prevent cheating, but one teacher sometimes has to monitor several students at once, and there are no strict barriers to bringing code into the competition, nor any way of restricting access to websites and applications, said one person. And with GitHub Copilot integrated directly into code editors, students can now get AI assistance without ever leaving their programming environment.
Other coding competitions are struggling to prevent generative AI from being used, especially those that allow participants to access different websites or even take the competition home.
The University of Waterloo says it will introduce additional measures to safeguard future competitions, including improved technology, closer supervision, and clearer communication with students and teachers.
Students using AI to cheat isn't new – the problem was described as endemic last August. There have also been instances of parents suing schools that punished their children for using AI in exams.