As AI use grows at the college level, instructors must be aware of the advantages and pitfalls of interacting with this technology. One area that is often confusing and difficult to navigate is the use of AI detection tools on students’ work. Currently, when assignments are submitted through Turnitin in MyFIRE, an AI detection score is generated and made available to instructors. It is important to recognize that AI detection tools such as Turnitin’s do not detect AI-generated content with 100% accuracy and often produce false positive results.


In a recent study examining “whether existing detection tools can reliably differentiate between human-written text and ChatGPT-generated text” (Weber-Wulff et al., 2023), the researchers concluded that “the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text” (Weber-Wulff et al., 2023). Among the detection tools included in the study were Turnitin, DetectGPT, GPT Zero, ZeroGPT, and many others. The researchers examined whether these detection tools could reliably distinguish human-written from ChatGPT-generated text, whether machine translation affected the detection of human-written text, whether manual editing or machine paraphrasing affected the detection of ChatGPT-generated text, and how consistent the results of different detection tools were for AI-generated text. According to the researchers, “Our findings strongly suggest that the ‘easy solution’ for detection of AI-generated text does not (and maybe even could not) exist” (Weber-Wulff et al., 2023).


Because AI detection tools cannot be counted on to provide consistent, accurate information about whether students are using AI, it is best not to use them as the sole deciding factor in whether a student should be penalized for plagiarism. AI detection tools should instead be used, per SEU’s Academic Policies, as “a basis for discussion with the student rather than a definitive finding.”

 

Below is a statement from Annie Chechitelli, Chief Product Officer of Turnitin:

We’d like to emphasize that Turnitin does not make a determination of misconduct even in the space of text similarity; rather, we provide data for educators to make an informed decision based on their academic and institutional policies. The same is true for our AI writing detection—given that our false positive rate is not zero, you as the instructor will need to apply your professional judgment, knowledge of your students, and the specific context surrounding the assignment. Making that decision should include alignment with institutional policies, the expectations you have set for your course or assignment, and an understanding of exactly what you are seeking to evaluate through the assignment. (Chechitelli, 2023)


Because AI detection tools do produce false positives, we recommend giving students the benefit of the doubt in circumstances where AI is detected. We recommend comparing work flagged as AI-generated with the student’s previous submissions to see whether the writing styles are consistent, and then using those findings to begin a conversation with the student to determine if any further action is needed.

 

Here are some tips provided by Turnitin for addressing false positives:

  1. Know before you go—make sure you consider the possibility of a false positive upfront and have a plan for what your process and approach will be for determining the outcome. Even better, communicate that to students so that you have a shared set of expectations.
  2. Assume positive intent—in this space of so much that is new and unknown, give students the strong benefit of the doubt. If the evidence is unclear, assume students will act with integrity.
  3. Be open and honest—it is important to acknowledge that there may be false positives upfront, so both the instructor and the student should be prepared to have an open and honest dialogue. If you don’t acknowledge that a false positive may occur, it will lead to a far more defensive and confrontational interaction that could ultimately damage relationships with students. (Chechitelli, 2023)

While AI detection tools in particular, and AI technologies in general, are still novel, it is imperative that both students and instructors develop a solid understanding of them. Because of this, SEU is currently working on redefining its AI policies and providing more training opportunities for instructors on how to work with AI and AI detection tools. We believe that clear policies and thorough training will empower instructors to feel confident when interacting with AI tools in their courses.

While no AI detection tool is perfect, below are a few free AI checkers that have scored highest in testing. They can help you learn more about AI detection and begin conversations with students about AI use.

  1. QuillBot - Scan up to 1,200 words per check and perform an unlimited number of checks.

  2. Scribbr - Scan up to 500 words per check and perform an unlimited number of checks.

  3. Sapling - Scan up to 2,000 characters per check and process up to 5 million characters a month.


References:

Chechitelli, A. (2023, March 16). Understanding false positives within our AI writing detection capabilities. Turnitin. Retrieved June 24, 2024, from https://www.turnitin.com/blog/understanding-false-positives-within-our-ai-writing-detection-capabilities

Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023, July 10). Testing of detection tools for AI-generated text. arXiv. https://arxiv.org/abs/2306.15666