The Question
Occasionally, I’m asked to recommend an AI detection program. Usually, it’s our faculty asking because they’re fed up, worried, and looking for something to help them manage what they suspect is AI cheating in their courses. They ask me, I hope, because they figure I’m paying attention to the research and can tell them which programs are the best. I am, but I can’t.
Here’s why:
It’s true that I have been following these programs. That’s part of my job now, but it’s also been an interesting (and sometimes entertaining) story.
For example, I could tell you the story about the AI detection program that tested very high and then, months later, decided there was no money in all of this and became an AI company instead.
I could tell you about the AI detection company that tested well and then decided to partner with a notorious cheating company.
What can I say? It’s a heady time.
Officially, no one at CSU has told me that I can’t recommend a program. I just don’t do it.
First, I work for CSU and I take being a state employee seriously. That means I appreciate a good queue and take my sweet time filling out paperwork, but it also means I know the line between an endorsement and a professional recommendation is a thin one.
But I also know that the people asking me are humble state employees too, and, frankly, none of them should be:
- Using their take-home pay to purchase a subscription to a service just so they can do their jobs.
- Expending more labor than is necessary to identify cheating.
At the same time, we’re trying to prevent a dysfunctional situation on our campus: thousands of faculty members using a dozen or so different programs to identify generative AI usage in student work. Then consider the consequences of sending all of those students through a conduct hearing. If you can think of a better illustration of inequity, I’m all ears.
What can faculty do instead?
In the absence of a definitive report or detection tool, I have been coaching faculty to describe what they see and how they think generative AI was used.
For example:
This student’s response describes a process that I did not teach and that was not discussed in the course reading. When we met to discuss this assignment, I asked the student to explain the process, and they were unable to share any information that would lead me to believe they knew what it was.
Is it perfect? No, but it’s a start.
This approach can look a hundred different ways, but it usually just involves (1) a conversation with the student that (2) emphasizes or (ahem, cough) reminds them (a) that you’re paying attention and (b) how work should be completed in your class.
Doesn’t that feel better than handing your hard-earned money to the world’s lamest streaming service?