
University staff common room

This board is for university-based professionals. Find discussions about A Levels and universities on our Further education forum.

Turnitin AI Detection?

23 replies

TurnitinDilemma · 29/02/2024 21:52

I am posting here for some advice about a group project I am due to submit next week. I am currently a final year undergraduate student studying for a science degree and the module in question is a group project with three other students. We all have to provide a chapter of the report, while the abstract, introduction, analysis, and conclusion sections are meant to be produced together.

I have suspicions that two of the other students are using AI to generate their work. I have read that Turnitin has an AI detector and am concerned whether this is going to impact my grade. Can any university staff advise how this works? Will the report be given an overall percentage score, or will it highlight the work that is suspected to be AI-generated? I am worried that if it doesn’t specify which parts are AI-generated then my grade may be affected.

I did manage to find an online AI detector (Scribbr) and I know this may not be particularly accurate, but I put my chapter through (500 words at a time) and the most a single section got was just under 10%. I put the chapters of the other two students through and both scores were 100% for a 3000-word chapter.

Any advice would be greatly appreciated.

Hiddenvoice · 29/02/2024 21:55

It’s been a while since I was at uni but Turnitin used to highlight the sections it felt were plagiarised and then give an overall score.
If you’re working as a group and all putting your names against the report, will the university separate your work to give individual marks, or will it be marked as a group?

Personally I would speak to the other group participants and highlight the issue. Make them aware that it impacts all of you.

TheSmallAssassin · 29/02/2024 22:02

I think you need to bring this up with the other members of the group, part of the point of having to do group assignments is having to learn how to deal with difficult situations like this. You will have a great answer for job interview questions in the future!

dimples76 · 29/02/2024 22:03

The AI detector report on Turnitin highlights the sections it thinks are generated by AI in blue and then gives an overall percentage. My colleagues and I are treating it with caution as I have seen both false positives and false negatives. We also look at other issues, e.g. this year I have seen made-up legislation and case law and wildly irrelevant references (which could not be the result of human error), as well as the use of language which feels inauthentic.

This must be stressful for you. It might be worth chatting to someone at the Students' Union or Student Support about it.

TurnitinDilemma · 29/02/2024 22:21

Thank you all so much for your advice.

@Hiddenvoice We are marked individually on our chapters and given a group mark for the rest of the work. The other two students have also written sections in the group work which also shows as 100% AI-generated. I will definitely speak to them all at our next meeting.

@TheSmallAssassin I think you are right. I wanted to have my facts straight about Turnitin before mentioning things to the rest of the team. Overall this project has been very challenging and I can't wait for it to be over.

@dimples76 Thank you, it is such a relief to know that the relevant sections will be highlighted. What would usually happen if a student's work was completely flagged as being AI-generated? (I know different universities will have different rules, but it might help me get my point across to the other students when I next speak to them.)

NCnora · 29/02/2024 22:58

Turnitin have a good FAQ section on their website re their AI detection tool. It's totally separate from the plagiarism-checker (and less robust/reliable). Your institution should have guidance for students by now.

dimples76 · 01/03/2024 10:56

@TurnitinDilemma the policy at my uni is that academic misconduct proceedings cannot be based purely on the Turnitin AI report. As mentioned in my previous post, there are often other signs that something is amiss. If there were other signs I would report it as suspected academic misconduct - the student would be sent the evidence, then asked if they wanted to admit AM or deny it. If they denied it, they would normally be asked about their research and writing process, for drafts, etc. If I did not think I had any other evidence, I would send the student a copy of the AI report and invite them to discuss it with me. To be honest, at my institution most academics I speak to would still like more guidance on how to handle these things.

I would also say that even where AI use has not been proven, the quality of the submission has generally not been good.

Acinonyx2 · 01/03/2024 20:39

In practice, we all tend to know AI-written text when we see it - especially if we've also talked to the student - but actually proving it is a nightmare. I don't know how we can really deal with this without more oral examination. Not sure how you are able to get a 100% AI-generated report - can't see how that is possible.

Flockameanie · 01/03/2024 20:43

Like PP we can’t use AI detectors because they aren’t accurate. It’s normally pretty obvious when something is AI generated. If we have suspicions we invite the student in question to a meeting to discuss this. We then ask the student to talk us through their research process, selection of sources, etc (this is in the humanities). Your uni should have a published academic misconduct policy which explains their approach as different places have different policies.

damekindness · 01/03/2024 21:43

We don't rely on AI detectors because, as PP have said, they're notoriously unreliable. However it's generally easy to spot, and if we do spot it we can ask the student for a viva presentation.

Consistently also seeing students using spinbots to evade Turnitin - but again, easy to spot.

Acinonyx2 · 02/03/2024 16:54

@damekindness what do you actually do if the viva doesn't reflect the submitted work? We are really struggling with this as there is no precedent or ruling about putting someone down (as opposed to up) on a viva. And students who claim anxiety and other needs make this hard to apply too.

damekindness · 02/03/2024 17:41

@Acinonyx2 If the student clearly can't describe the process they used to write the assignment (that is, what their evidence base was, how they located it, etc.) or demonstrate their understanding of the content, and on the balance of probabilities we believe they have used AI, we can apply a mark of zero with an opportunity to resubmit in year for a capped mark.

marrieleefawn · 22/08/2025 10:08

This reply has been deleted

This has been deleted by MNHQ for breaking our Talk Guidelines.

ParmaVioletTea · 22/08/2025 10:34

So basically, you cheat, @marrieleefawn. Go you!

Yesitisred · 22/08/2025 21:12

I've called people in for a discussion, with notice that it can become a disciplinary, and advised people to seek advice from the SU. If they can't tell me how they researched it, what they've read, show me the various drafts they've worked on, etc, it's probably going to AM. One thing I have noticed a lot of: fictitious references, or weird references that don't relate to the topic.

Aldilidl · 22/08/2025 21:18

This reply has been deleted

This has been deleted by MNHQ for breaking our Talk Guidelines.

Reported.

CleverKnot · 23/08/2025 18:23

Friend is a (prestigious) uni lecturer & told her students to knock themselves out using AI to write their assignments (humanities). Friend is convinced the actual quality suffers; she'll simply end up marking them down because of the shallow generality of the content and lack of inherent understanding or logic.

Maybe just telling the students she's not bothered if they want a lower grade is enough to make them sweat over their write up after all!

ParmaVioletTea · 24/08/2025 02:53

That's sort-of my attitude as well @CleverKnot - if they want to devalue their own degrees and learning, it's their loss. And AI written essays are generally pretty bland and voiceless.

The problem (as my colleagues remind me!) is the impact on other students who are honest and don't cheat with AI.

And I tend to see any use of AI as cheating. I know that's an old-fashioned view, but why would anyone farm out their abilities to think & reflect, developed over millions of years of evolution? I despise students who do that, frankly. I think they're stupid.

They're just cheating themselves.

Friendlygingercat · 24/08/2025 03:29

I was once in a situation at uni where I felt the work of the group was potentially pulling my grade down. The group had decided to abandon a whole section of the data. I processed the abandoned data, which gave my analysis extra range and depth. I also had some expertise in interviewing, which allowed me to write a long and detailed analysis of the interview process. As a result I opted out of the group report and submitted my own. The result was a full grade's difference between my mark for the course and theirs. Not relevant to AI, but sometimes you have to have the guts to make a decision which will make you unpopular. I was working for a 1st, not a 2:1.

PlantDoctor · 24/08/2025 03:36

I work in scientific publishing, and we get a lot of reviews clearly written by AI. The style and formatting are the same, the comments are quite vague, and they really don't add much.

It's also completely immoral. Peer review should be an assessment of a study by people in the same field, not a random computer program. Also the reviewer is uploading confidential and unpublished work. It creates a tonne of work for me as I have to inform the authors of the confidentiality breach and ban the person from reviewing again.

Sorry to jump on the thread. An automated AI detection tool does sound useful but I expect essay assessors will be able to spot it as low-quality writing using the same flowery language anyway.

ParmaVioletTea · 24/08/2025 13:44

@PlantDoctor I'd never thought of peer reviewers using AI. That’s completely unprofessional!

And yes, I can usually spot undergrad use of AI, but it’s almost impossible to prove. You can viva them but the worst of them just flat out lie. IME, that is. It’s despicable but I remind myself they’re only cheating themselves. At some point, their lack of thinking will trip them up - and they won’t progress in their careers. Karma

Flockameanie · 24/08/2025 14:43

@PlantDoctor Do you pay academics to peer review?

Does your publication profit off the research of academics who are not paid any share of the publication subscription costs?

I mean, I'm not disagreeing that using AI to peer review is immoral. But the whole academic publishing set-up is deeply flawed and, essentially, relies on the unpaid labour and intellectual property of academics (who are already time-squeezed and therefore struggling to fit peer reviewing in).

PlantDoctor · 24/08/2025 14:52

This reply has been withdrawn

Withdrawn at author's request

CleverKnot · 24/08/2025 19:21

Increasingly I suspect AI has (sometimes) 'reviewed' my articles. Which infuriates me.
I can fit my efforts to review articles within the time I am paid for. My time to review is not unpaid. There is never decent justification to use LLMs to act as article reviewers.

I don't mind a model where referees are paid, but in the meantime, if I want to publish I am relying on someone to referee for free, and thus I am willing to referee "for free" in kind for others. It's the system I rely on too.

If someone can't make time to engage their brain for a few hours to read & comment, then just decline. Don't send back useless comments, which is all an AI can contribute. And then I have to argue with uselessly unqualified 'editors' who have no idea what a real human-written review should be like, or how to judge if revisions are adequate, much less whether the 'reviewer' comments made any sense. So infuriating...
