(5 Nov 2025) Recently, ABC News reported that a Murdoch University student was taking legal action over what he claims are false allegations of unauthorized AI use in an assignment. In the weeks that followed, another report from the same outlet revealed that several Australian universities had been using AI tools to detect AI-generated content in student work, sparking debate over the reliability of such tools and their role in upholding academic integrity.
We are not commenting on individual cases; rather, we see these reports as symptomatic of a broader systemic tension between existing frameworks for academic integrity and the realities of assessment in the age of generative AI (GenAI).
ChatGPT and similar tools have an unparalleled ability to quickly generate original writing of increasing accuracy and depth in response to a prompt. For all the benefit this brings, there is a dark side: students can use GenAI to generate responses to myriad assessment prompts instead of engaging with the formative and summative elements of a task that are essential to student achievement.
GenAI isn’t, however, simply another turn in the cat-and-mouse game over misconduct. It is a powerful tool that can be used for good or ill. Its increasing use forces us to confront long-standing limitations not only in academic integrity itself, but also in how assessment is designed and how policy connects to practice.
These weaknesses predate GenAI, yet the technology exposes them more starkly. Since GenAI is not going away, universities must shift from piecemeal fixes to holistic approaches in which policy, assessment design and AI literacy work together to scaffold better outcomes for students and staff.