
Higher education

Talk to other parents whose children are preparing for university on our Higher Education forum.

UCAS Personal Statements and AI

98 replies

Ceramiq · 06/04/2026 17:45

I don't really understand how universities are going to be able to use Personal Statements in future: it is so easy to write and/or improve a fantastic PS in minutes using Claude. Any thoughts?

OP posts:
Kiminki · 08/04/2026 21:20

In terms of universities just wanting to attract numbers, the situation is different for Scottish pupils in Scotland. The number of places is capped, and universities have no incentive to lobby for more places because fees (£1,820, paid on the student's behalf for most) and money from government don't cover costs. Hence universities may have spaces for students from the rest of the UK (at UK fee levels) but not for Scottish students. Political pressure means that their oversubscription criteria mostly prioritise contextual students. This is principally measured by postcode, and thus housing density, which favours SNP-supporting areas. A couple of years ago this went to the extent of no non-contextual students even being considered for nine courses at Edinburgh University, including Law and Economics.

Ceramiq · 08/04/2026 21:24

Kiminki · 08/04/2026 21:12

Though course work is vulnerable to use of AI - not necessarily to write it directly but to neatly present all the information required to write it without any particular skill or knowledge.

If the information is available online. My DC fed all coursework done over the course of their degree into Claude and asked Claude to grade it according to (a) the university's grading criteria and (b) the university's grading criteria AND the AI-proofness of the research. This led to a very productive discussion about grading criteria and whether they were still pertinent in the age of AI or should be revised.

OP posts:
Kiminki · 08/04/2026 21:28

Ceramiq · 08/04/2026 21:24

If the information is available online. My DC fed all coursework done over the course of their degree into Claude and asked Claude to grade it according to (a) the university's grading criteria and (b) the university's grading criteria AND the AI-proofness of the research. This led to a very productive discussion about grading criteria and whether they were still pertinent in the age of AI or should be revised.

How do you get around AI's inherent bias towards agreeing with you?

poetryandwine · 08/04/2026 21:38

An excellent question.

Yes, the very explicit mark scheme, the tightly prescriptive syllabus, and the way these reinforce each other, as you, @WW3 and @fairyring25 have said, seem to me the heart of the matter. I agree this increases grades without increasing attainment.

There may also be other reasons. Teachers are under pressure to get ever-improving results, which isn’t sensible. Trust has broken down. I agree with @fairyring25 that varying the exam content more could help a lot, but there would be an uproar if the level of difficulty perceptibly varied.

I see this as a reason to support somewhat more difficult exams, so that grades would spread naturally and norm-referenced grades could be used without inducing tight banding. I don't see that YP are any happier than they were when grade C was useful, B was good, and a 2.2 degree was a good achievement. Thinking that everyone needs to be validated with higher levels of awards hasn't helped individuals or society.

Basically we seem on a knife edge and people have no tolerance for feeling disadvantaged. (Often they really are, which is awful, but grade creep is something else entirely).

poetryandwine · 08/04/2026 21:52

Kiminki · 08/04/2026 21:12

Though course work is vulnerable to use of AI - not necessarily to write it directly but to neatly present all the information required to write it without any particular skill or knowledge.

You are correct; this is a big problem.

Though I agree with @Ceramiq that CW is potentially a much deeper and more useful assessment tool than examination, our guidance on AI usage (basically that it is an acceptable editing tool/critical friend) is subject to so much abuse that we cannot assess CW as heavily as we would like.

Ceramiq · 08/04/2026 22:15

Kiminki · 08/04/2026 21:28

How do you get around AI's inherent bias towards agreeing with you?

By framing the requests neutrally. "Grade these essays according to the university's criteria" doesn't ask AI to agree with anyone.

OP posts:
Kiminki · 08/04/2026 22:19

Ceramiq · 08/04/2026 22:15

By framing the requests neutrally. "Grade these essays according to the university's criteria" doesn't ask AI to agree with anyone.

There is a pp who stated they did this: the AI responded that a couple of sentences were weak, but once they explained why they had used them, the AI agreed they were strong. This is exactly the sort of 'agreement bias' I am referring to.

Ceramiq · 09/04/2026 08:22

Kiminki · 08/04/2026 22:19

There is a pp who stated they did this: the AI responded that a couple of sentences were weak, but once they explained why they had used them, the AI agreed they were strong. This is exactly the sort of 'agreement bias' I am referring to.

That's me but it really wasn't as straightforward as that - the AI conversation took several rounds of questions and long answers for AI to understand the position from which the PS had been written. My point was that Claude challenged the PS in a generic way to start with (and this is what's so dangerous for young people) and a conversation enabled the AI to understand the underlying assumptions.

Similarly with essays: Claude graded them and commented on them according to the university's stated criteria. When Claude was given the grade actually achieved by the student, it offered hypotheses as to why the actual grades differed so much from its grading against the stated criteria. The conclusion was that TFs and TAs were overvaluing primary research that is AI-proof relative to the stated grading criteria, which had been published in 2021 and not updated since, ie before AI.

FWIW I disagree strongly that Claude, in particular, tends to agree with the user. My DC says that you need to be really robust to be able to cope with the criticism and I rather agree with that!

OP posts:
Kiminki · 09/04/2026 08:42

a conversation enabled the AI to understand the underlying assumptions

This is a big misunderstanding about AI; AI doesn’t understand anything. There is no ‘intelligence’ to AI. All data is considered valid. So in both your examples Claude was using your additional prompts to construct a response that fitted with them. That is very different to having an understanding.

Ceramiq · 09/04/2026 08:53

Kiminki · 09/04/2026 08:42

a conversation enabled the AI to understand the underlying assumptions

This is a big misunderstanding about AI; AI doesn’t understand anything. There is no ‘intelligence’ to AI. All data is considered valid. So in both your examples Claude was using your additional prompts to construct a response that fitted with them. That is very different to having an understanding.

Sure, I know what you mean and my wording was poor. But the point is that Claude is critical and a long way from agreeing with the user.

OP posts:
Kiminki · 09/04/2026 09:34

Ceramiq · 09/04/2026 08:53

Sure, I know what you mean and my wording was poor. But the point is that Claude is critical and a long way from agreeing with the user.

Yet in your example you say it did, twice. I don't mean it is sycophantic; I mean it will incorporate your responses, which will always tend to support your position. If your position is that something needs improving, it will incorporate this into its response.

Ceramiq · 09/04/2026 09:48

Kiminki · 09/04/2026 09:34

Yet in your example you say it did, twice. I don't mean it is sycophantic; I mean it will incorporate your responses, which will always tend to support your position. If your position is that something needs improving, it will incorporate this into its response.

If you ask Claude completely neutral questions (grade and rank these essays vs institutional criteria, then give Claude the grades the essays actually received and ask Claude to hypothesize on the discrepancy) you aren't giving a position of your own to agree with. You can also ask Claude to rank the essays on how AI proof they are.
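For anyone wanting to try this, here is a minimal sketch of what I mean by neutral framing. The helper names and prompt wordings are purely my own illustration, not any official tool; the discipline is entirely in the wording, which states no opinion of ours for the model to agree with:

```python
# Illustrative sketch only: helper names and prompt wording are invented.
# Neither prompt reveals our own view of the essay, so there is no
# position for the model to echo back.

def grading_prompt(essay: str, criteria: str) -> str:
    """Round 1: ask for a grade strictly against the stated criteria,
    giving no hint of what grade we expect or want."""
    return (
        "Grade and rank the essay below against the grading criteria. "
        "Justify the grade criterion by criterion.\n\n"
        f"CRITERIA:\n{criteria}\n\nESSAY:\n{essay}"
    )

def discrepancy_prompt(model_grade: str, actual_grade: str) -> str:
    """Round 2: only after the model has committed to its own grade,
    reveal the grade actually awarded and ask for hypotheses about the
    gap, without endorsing either grade ourselves."""
    return (
        f"You graded the essay {model_grade}; the institution awarded "
        f"{actual_grade}. Offer hypotheses for the discrepancy, without "
        "assuming either grade is the correct one."
    )
```

The resulting strings can be pasted into whichever chatbot you use; the point is the two-round structure, not the tooling.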

OP posts:
Kiminki · 09/04/2026 09:55

Ceramiq · 09/04/2026 09:48

If you ask Claude completely neutral questions (grade and rank these essays vs institutional criteria, then give Claude the grades the essays actually received and ask Claude to hypothesize on the discrepancy) you aren't giving a position of your own to agree with. You can also ask Claude to rank the essays on how AI proof they are.

I am not saying it is not useful but it is something you need to bear in mind.

Ceramiq · 09/04/2026 10:10

Kiminki · 09/04/2026 09:55

I am not saying it is not useful but it is something you need to bear in mind.

And we most definitely did: what my DC and I were attempting to establish was whether the discrepancies between the grade actually received, the grade AI gave an essay when asked to mark in accordance with the stated criteria, and the grade AI gave when additional criteria were included could reasonably be explained by hypotheses generated in a back-and-forth with AI.

The largest discrepancy was the much higher value placed on primary, AI proof research and original insights that challenge established theory by institutional graders (TFs and TAs) than the official grading criteria allowed for.

We also fed in some essays written by fellow students of my DC for alternative perspectives. AI was less sensitive to quality of expression than were institutional graders (TFs and TAs).

OP posts:
Ceramiq · 09/04/2026 10:16

Follow on: AI picked up on typos very effectively on the first round of grading but failed to pick up on overly laborious expression and filler wording until this was flagged (as it had been by institutional graders). This is useful feedback for humanities students wanting to polish their writing and find their own voice.

OP posts:
StormySea23 · 09/04/2026 12:09

We're all clear at this point that the OP has an investment in Claude, yes? Because this is looking more and more like an advert.

poetryandwine · 09/04/2026 13:08

StormySea23 · 09/04/2026 12:09

We're all clear at this point that the OP has an investment in Claude, yes? Because this is looking more and more like an advert.

I think OP is fascinated, and sharing her experiments.

Anthropic doesn't need her help, especially with the paradigm-shifting announcements around Claude Mythos Preview since Tuesday. It is also privately held, although there are rumours of an IPO later this year.

I developed a soft spot for Anthropic when the Pentagon objected to their installation of guard rails on US DoD use of Claude, and had the firm barred from all US government contracts. This was generally thought to be a huge overreaction, even amongst those who accepted the debatable premise.

Probably there are ongoing legal discussions or hearings which I am not following.

InsertUsernameHere · 09/04/2026 13:12

Ceramiq · 08/04/2026 09:16

I find the idea of universities not using Personal Statements to assess applications because they cannot gauge how much help students receive absolutely crazy. Either you have a PS and you use it or you don't have a PS. Bocconi University in Milan discarded its PS a couple of years ago and now recruits solely on grades (a combination of a standardized test, either the SAT or a proprietary test, and a student's grades in Years 11 and 12). I'm not sure Bocconi is entirely transparent about its calculation but, be that as it may, abandoning the PS was the moral move. If you require students to provide a PS you must use it and if you don't want to use it you must not require it.

It is useful to consider what the purpose of the PS is. There is much an applicant can learn from the process of drafting it - interrogating their reasons for choosing the course and going to uni overall. In my recent recruitment rounds there has been a noticeable decrease in performance at interview. I believe this is partly due to the use of AI in application forms: it means that application forms (which contain PS-like sections) have less discriminatory value, and it robs the applicant of the benefit of the struggle of writing the form, which is excellent interview prep.

Sometimes the process is the point of the exercise - and the AI shortcut is detrimental to the person.

Kiminki · 11/04/2026 10:28

Kiminki · 09/04/2026 09:34

Yet in your example you say it did, twice. I don't mean it is sycophantic; I mean it will incorporate your responses, which will always tend to support your position. If your position is that something needs improving, it will incorporate this into its response.

Seems I was wrong about the sycophancy; in December, 42 US State Attorneys wrote to AI developers demanding they include safeguards to stop sycophancy.

Ceramiq · 13/04/2026 15:39

InsertUsernameHere · 09/04/2026 13:12

It is useful to consider what the purpose of the PS is. There is much an applicant can learn from the process of drafting it - interrogating their reasons for choosing the course and going to uni overall. In my recent recruitment rounds there has been a noticeable decrease in performance at interview. I believe this is partly due to the use of AI in application forms: it means that application forms (which contain PS-like sections) have less discriminatory value, and it robs the applicant of the benefit of the struggle of writing the form, which is excellent interview prep.

Sometimes the process is the point of the exercise - and the AI shortcut is detrimental to the person.

I agree with this: the process of writing the PS is more important than the outcome. But it can be very hard to get young people to understand this.

OP posts:
poetryandwine · 19/04/2026 10:53

Hi, everyone -

Apropos of this topic there is an interesting article in the Observer magazine today, which should be freely available at the Guardian online.

It is ‘Come Dine with ChatGPT’ by Patricia Clarke. She and the wonderful cookery author Georgina Hayden worked with ChatGPT on both refining some of GH’s recipes-in-progress, and creating others from scratch.

Consistent with @Ceramiq ’s experience of Claude, ChatGPT was ‘annoyingly’ good at taking Hayden’s criticisms of recipes she’d not yet got right and finding a really good fix.

When asked to develop recipes 'in the style of Georgina Hayden', it was a different story. At first glance, ChatGPT got her voice wonderfully right, but a deeper look reveals an element almost of parody: the AI adopts her phrasing etc., but the chat around the recipes is very superficial (of course we see limited examples). If you read much of it, I think you could distinguish it from Hayden's writing.

What's worse is that the recipes are not very good - the article analyses only one example, but it is interesting. (Although ChatGPT did give Hayden a recipe she thought would be very good for supper from the food she photographed in her fridge, that is a very different task.)

Cooking is a generalist subject many of us can appreciate. What this article explains is a good analogy for how AI is being misunderstood and misused throughout HE, including potentially with UCAS applications.

The article also reinforces @Ceramiq's point that AI is generally good at responding to well-posed questions (and much weaker without human guidance).

Whilst Claude is a better AI tool than ChatGPT, the plusses and minuses are similar across all platforms.

Ceramiq · 20/04/2026 11:12

@poetryandwine Thank you for that fascinating feedback! What AI is and isn't good for is THE issue that needs deep investigation. People creating their own experiments in subject areas they master is a great conversation starter.

I wonder whether LLMs aren't going to expose our logocentric culture for its propensity to create meaninglessness? I'm already noticing some academics I admire (both in the humanities and in science) moving away from text based exploration to object based exploration.

OP posts: