

Are this year's GCSE and A level grades going to be fair?

40 replies

dennishsherwood · 21/06/2020 19:41

This year's grades will be determined NOT by teachers, but by a big-machine-in-the-sky - a process called "statistical standardisation". This could disadvantage bright kids in otherwise dull schools. See, for example, this article in The Guardian on Saturday 20 June - the headline is "Against natural justice - father to sue exams regulator over A level grades system": www.theguardian.com/education/2020/jun/20/against-natural-justice-father-to-sue-exams-regulator-over-a-level-grades-system

An enterprising 18-year-old has also set up a petition, www.change.org/p/ofqual-review-the-standardisation-of-grades-in-summer-2020?fbclid=IwAR2toogfGtYUtSawgMBhqyQNYf_sQ8MrWEtvVKo9Gy_myolfQLE-lFEbVsc

and there is a new facebook group too

www.facebook.com/groups/272512373867127/

And if you're a teacher, you might like to look at www.hepi.ac.uk/2020/06/18/have-teachers-been-set-up-to-fail/

OP posts:
Ellmau · 21/06/2020 20:43

I think they're as fair as they can be in the circumstances. Obviously some will lose out.

7ofNine · 21/06/2020 20:46

I think they'll be unfair on everyone not in these cohorts that had to revise and sit exams the usual way.

titchy · 21/06/2020 20:47

Bit late to the party OP! This topic was debated, discussed and petitioned to within an inch of its life weeks and weeks ago.

Teacher assessment plus standardisation (really - machine in the sky - clearly you didn't do much Maths Hmm) means, I suspect, that grades overall will probably be much fairer than if they'd been taken as exams, where any number of things can and do go wrong on the day(s).

dennishsherwood · 21/06/2020 21:23

Ah! Sorry I'm behind the curve... and I must confess I don't even have O level maths (I'm that old...). Do you happen to know how statistical standardisation will work, exactly? I've been trying to find out...

dennishsherwood · 21/06/2020 21:33

I did say 'exactly'. If I had wanted to replicate the process before the grades were submitted, to check for compliance, how could I have done that?

titchy · 21/06/2020 21:39

You'd need to know the results of the centre for the last three years, the profile of teacher predictions, teacher predictions nationally, the results nationally for the last three years and the key stage 2 results for this cohort.

Which you won't be able to obtain. That doesn't mean the process won't be robust, though.

titchy · 21/06/2020 21:40

What do you mean by replicate the process before grades were predicted? That document tells you what the process is.

dennishsherwood · 21/06/2020 21:42

Thank you. And another question if I may: why were teachers asked to submit centre assessment grades? Especially since, as you have so clearly stated, they can't be validated before submission.

Ellmau · 21/06/2020 21:43

You can't, unless you have access to all the data from all the schools who have candidates.

titchy · 21/06/2020 21:46

What do you mean? The centre is the school. Teachers know their students - who else should submit grades? Confused

Ellmau · 21/06/2020 21:46

Teachers were asked to give grades for their students, and rank students within each grade in order, because they were familiar with said students and their work.

The standardisation is there to adjust for optimistic/generous over-prediction, based on schools' previous performance, and the specific cohort's previous performance. Or under-prediction of course, but that seems less probable.

dennishsherwood · 21/06/2020 21:55

Yes, thank you again. As you have described it, the standardisation will over-rule a submission that is 'optimistic'. The standardisation process can only do that if it already knows what a 'non-optimistic' submission looks like. So it must know the 'answer' before the submission is made.

I'm confused. The school need only submit the rank order for this process to work. So why were schools asked to submit centre assessment grades - especially in the light of the recent FFT report that suggests many of them - perhaps millions - will be ignored? ffteducationdatalab.org.uk/2020/06/gcse-results-2020-a-look-at-the-grades-proposed-by-schools/ and inews.co.uk/news/education/gcse-a-level-exams-2020-millions-proposed-grades-cut-generous-predictions-england-450236

titchy · 21/06/2020 22:18

I can see where you're coming from, but initial expectations will be at national cohort level. These are based on SATs and previous cohort data. If the entire cohort is graded as expected then the majority of centres' grades will stand, as long as they're within an expected statistical deviation. So a centre could award 6 grade 8s or 8 grade 8s for History. As long as that's within what's expected for that centre, neither will be queried.
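The check described here - a centre's submission standing so long as it sits within an expected deviation of the centre's history - might look something like this toy sketch. The tolerance, the data and the function are all invented for illustration; Ofqual's actual model had not been published at the time of this thread.

```python
# Toy illustration only (NOT Ofqual's model): flag a centre's submission
# only if the mean of its submitted grades drifts too far from the
# centre's historical average.

def within_expectation(submitted, historical_avg, tolerance=0.2):
    """Return True if the submitted grades' mean is within `tolerance`
    of the centre's historical average grade (tolerance is invented)."""
    avg = sum(submitted) / len(submitted)
    return abs(avg - historical_avg) <= tolerance

# A centre whose History results have averaged 7.0: six grade 8s or
# eight grade 8s among 20 entries can both pass, depending on the
# rest of the distribution.
cohort_a = [8] * 6 + [7] * 10 + [6] * 4   # mean 7.1
cohort_b = [8] * 8 + [7] * 6 + [6] * 6    # mean 7.1
print(within_expectation(cohort_a, 7.0))  # True
print(within_expectation(cohort_b, 7.0))  # True
```

Both cohorts pass because only the overall average is checked here, not the count at each individual grade.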

dennishsherwood · 21/06/2020 22:34

Thank you once again. How do you know that it's OK to submit any number between 6 and 8? Where does it explicitly state this? In your example, the average is 7. Suppose the school submits 8. Suppose further that a large number of schools do that, submitting 1 more than the average, with the remainder submitting 7, and just a few 6. As I said, I don't even have O level maths, so maybe I'm wrong in guessing that the result would be to blow grade inflation sky high. Just as the FFT report shows might actually have happened. So please prove me wrong - or perhaps give me a worked example.

Grade inflation might not matter, and Ofqual might be OK with all those overbid grades. Do we know? But given that 'no grade inflation' was a policy introduced when Michael Gove was Education Secretary, with Dominic Cummings as his special adviser, what do you think the current policy might be? To stop grade inflation, the boards will have to intervene, and maybe that 8 is pushed down to a 7. As I said in my last post, the boards already know that. So why ask the schools?
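The scenario in the post above - many schools each submitting one grade above their own historical average - can be sketched with a toy simulation. All the numbers (school count, averages, optimism weights) are hypothetical:

```python
import random

# Hypothetical simulation: each school's submission is its historical
# average grade plus a small "optimism" nudge. Individually each school
# looks roughly right; in aggregate the national mean drifts upward.
random.seed(0)
n_schools = 1000
historical = [random.uniform(4.0, 6.0) for _ in range(n_schools)]

# Most schools nudge up by one grade, some submit their average,
# a few go one under (the weights are invented).
submitted = [avg + random.choices([1, 0, -1], weights=[70, 25, 5])[0]
             for avg in historical]

before = sum(historical) / n_schools
after = sum(submitted) / n_schools
print(f"historical national mean: {before:.2f}")
print(f"submitted national mean:  {after:.2f}")
# The aggregate drifts up by roughly 0.65 of a grade (the expected
# nudge: 0.70*1 + 0.25*0 + 0.05*(-1)), even though no single school
# looks wildly out of line.
```

This is the OP's point: if the regulator holds the national distribution fixed, that aggregate drift has to be clawed back from somewhere.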

titchy · 21/06/2020 22:43

Well if every centre over-bid then the cohort results will be higher than expected, so they'd have to adjust everyone's results!

This happens every year by the way - that's why grade boundaries change each year - they're set once all exam marks are in, then statistically adjusted according to KS2 SATS.

Don't forget they look at the overall results - if a centre has got 5 x grade 5, 5 x grade 6 and 5 x grade 7 for the last three years, then a repeat of those results would be within expected. Similarly if, instead, two students were reported a grade higher and two a grade lower, that would sit on the same curve of expectation. The average grade for the centre hasn't changed.
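The arithmetic in the last paragraph, worked through (counts hypothetical, following the post's example of two students moving up a grade and two moving down):

```python
# Two grade distributions for the same centre: the usual 5/5/5 pattern,
# and a version where two students moved up a grade and two moved down.
usual = [5] * 5 + [6] * 5 + [7] * 5    # last three years' pattern
spread = [5] * 7 + [6] * 1 + [7] * 7   # two up, two down: same size, new shape

def mean(grades):
    return sum(grades) / len(grades)

print(mean(usual))   # 6.0
print(mean(spread))  # 6.0 - the centre average is unchanged
```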

dennishsherwood · 21/06/2020 23:04

Indeed - they would have to adjust everyone's results. So I come back to my fundamental question: given that possibility, why ask for grades in the first place? Would it not have been far simpler to go the other way around, and for the boards to say to each school "our model predicts you will have [this many] grades [x] - please fill in the box with that number of names"?

And once again, I seek the explicit evidence that there will be 'flexibility' within any single grade. To control grade inflation, they need to ensure not just that the % 9 - 4 is OK, but that each individual grade is OK too. If they didn't do that, for my cohort of 100, I'd submit 65 9s and 35 3s, which keeps the % 9 - 4 at the 'right number', 65% (or whatever).

Yes, I'm sure you're right that the odd one or two here and there doesn't matter. But I fear that the actuality will not be just one or two here or there - rather, it will be a systematic drift up, as everyone 'asks for more', as FFT indicate. But in doing this, no one has been 'greedy' (or only a very few). The problem is that each school can be sensibly within its local variation, but if each school is just a little 'optimistic', submitting say 1 more than the average, the overall result gives a variation far greater than that of the aggregate, whole-cohort, data held by the boards: the variation of this aggregate is far, far smaller than the sum of the individual variations of each compliant school.

What Ofqual failed to do is to make it quite clear that the benchmark is the actual average of specific years, and to state, quite explicitly, that any submission greater than [this much] from that average is likely to be over-ruled. That's what I meant by 'exactly'. And there are lots of other anomalies too. Ho hum. Anyway, thank you once again. By the way, there's an example of some of this on www.hepi.ac.uk/2020/05/18/two-and-a-half-cheers-for-ofquals-standardisation-model-just-so-long-as-schools-comply/.

titchy · 21/06/2020 23:08

Because the model doesn't know at centre level how many of each grade to award. It knows the average grade for History at Bogsworth High is 5.7, and the model predicts an average between 5.6 and 5.8, but it doesn't know how that average was arrived at. Might be 10 grade 8s and 12 grade 1s. Could be 10 grade 6s and 9 grade 5s (examples to illustrate the point only).

titchy · 21/06/2020 23:16

I am aware of that HEPI report! But in all honesty, given that grades are manipulated according to expectation each year anyway, you could ask why Ofqual don't send schools a grid every year!

Realistically, what alternative is there? Ofqual have tweaked their model following the consultation. There are winners and losers each year - there's always someone who unexpectedly bombs, and someone who pulled it out of the bag. This won't be any different. But I do think results will probably be fairer - particularly given the large proportion of grades that end up changing after a remark.

Nat6999 · 21/06/2020 23:40

I'm praying they will be - ds is getting his GCSE grades in August and, knowing our luck, they won't be. I really hoped they would do the exams, because I'm sure he would have done better than he will be assessed by school.

dennishsherwood · 22/06/2020 07:58

Hi titchy - four thoughts if I may.

  1. I don't share your optimism about flexibility. To take your example: the average grade for History for a school might indeed be 5.7. But the boards know more than that: they also know the grade-by-grade distribution. So they can, in principle, check that each grade, individually, is 'right' - and they can then identify that 10 x 8s and 12 x 1s is taking them for a ride, even though the average-over-all-grades is the required number. So, are the boards going to standardise on the all-grade average, or grade-by-grade? I don't know. The 'Guidance' document does not specify, exactly, one way or the other. For small cohorts, that raises problems. But what is 'small'? Once again, I don't know. And I don't think Ofqual have told me.
  2. Why not ask schools to fill out a grid each year? In principle, they could, but with exams run by boards, they haven't needed to. The overarching principle underlying the whole system is to control grade inflation, not to give students the grades they deserve. Sometimes those might be the same thing, sometimes not. But it's 'no grade inflation' that has been the key driver since about 2010, and there's no reason to believe otherwise this year (although that is not explicitly declared). So Ofqual sets a policy - say, "only the top 5% of candidates can be awarded grade 9". To make that work, you need a rank order of all students, so you can slice it at 5%. In the past, for each subject, the rank order has been determined within each board according to the (single) mark given to each script. The scripts are marked, and then lined up, from top to bottom, with [so many] candidates given 65, [so many] 64, [so many] 63. At each board, the cohort is large, mingled across the country, and easily 'sliced'. The variability in the % of each grade, as seen year-on-year, is primarily attributable to the large number of students marked '63' (or whatever): drawing the grade boundary at 64/63 as compared to 63/62 makes a measurable difference.

This year, the ranking can't be done at the boards using exam marks. It has to be done at the schools using judgement - giving many more much smaller cohorts, each of which needs to be 'sliced' separately and independently. And to control grade inflation, each school must 'comply' of its own accord. That's what (I think) "statistical standardisation" is doing, and that's why all the rank orders must have only one student on each 'rung' - if they didn't, the boards would not know which of the 2 students on any one rung would fall which side of the grade boundary they need to draw to separate them. Which itself is a clue that the process will operate on each grade separately.

  3. And I agree (very much) that this year's process will be fairer than the past. Because the teacher rank order must (my opinion!!!) be much more reliable than the rank order determined by necessarily 'fuzzy' (but still professionally sound) marking. That's good.
  4. My disappointment, though, is that this year's process could have been really good, if only two things had been done differently. Firstly, Ofqual could have published the 'rules' of statistical standardisation so that the process was transparent, rather than opaque. Secondly, there could have been some way in which schools could justify 'outliers', rather than have the boards over-rule the centre assessment grades unilaterally and without consultation. Yes, I know that's fraught with difficulties, and gives fraudsters a gift. Of course. But there are ways to counter that - I can think of many. But that's not going to happen, with three results. Some students will be treated unfairly (but fewer than in previous years, so that is 'better'). And - I fear - anger that teachers have been ignored. Worst, the possibility that teachers will be discredited: "You had the chance to show you could be trusted, and to submit the 'right' grades. We've had to over-rule [x%]. So you failed." To me that's bad news - and my benchmark of danger is more than 25% over-ruled, for that is the average unreliability of exam grades (about 1 in every 4 is 'wrong').
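The rank-order 'slicing' the OP describes (a policy such as "only the top 5% get grade 9" applied to a national rank order) can be sketched as follows. The cutoff percentages and student names are invented for illustration:

```python
def award_grades(ranked_students, cutoffs):
    """Award grades by slicing a rank order (best student first).
    cutoffs: (grade, cumulative_fraction) pairs, best grade first."""
    n = len(ranked_students)
    awards = {}
    start = 0
    for grade, cum_frac in cutoffs:
        end = round(n * cum_frac)        # boundary index for this grade
        for student in ranked_students[start:end]:
            awards[student] = grade
        start = end
    return awards

students = [f"s{i:03d}" for i in range(100)]            # already in rank order
cutoffs = [(9, 0.05), (8, 0.15), (7, 0.30), (6, 1.00)]  # invented cutoffs
awards = award_grades(students, cutoffs)
print(awards["s000"], awards["s004"], awards["s005"])   # 9 9 8
```

This also illustrates why each 'rung' of the rank order must hold exactly one student: if two students tied while straddling a cutoff, the slice would be ambiguous.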
titchy · 22/06/2020 11:01

The statistical methodology was published, I'm pretty sure, in the consultation. You'll have to do some searching to find it, but it was there.

You've omitted one very important aspect: every bit of extra fiddling you do, adding in rules around outliers and other anomalies, lessens confidence in the results. It doesn't matter that the fiddling is designed to make things even better; the class of 2020 already potentially suffers in the eyes of employers and colleges. Hopefully this will only be short term.

I'm glad you agree this year's results will be more accurate than they usually are. I'm not really sure, given that you think this, that you have much to berate Ofqual about tbh.

I disagree that the overarching principle is to avoid grade inflation. They could use a normal-distribution method of awarding grades if that were the case. They don't. And every year the headlines say 'results up by 0.3%' or whatever!

Finally, I'd be staggered if one in four results was revised. That really would undermine confidence in teachers. But I think that possibility is astonishingly low. I'd stick my neck out and say 5% will be adjusted.

ButteryPuffin · 22/06/2020 11:05

What alternative is there? Other than waiting to actually sit the exams in the autumn, which is an option, isn't it?

dennishsherwood · 22/06/2020 12:10

Hi once more titchy - it seems as if we've been reading different documents. I can't find the details in anything on the Ofqual website: please point me to where that oh-so-vague word "consider" (as in "For AS/A levels, the standardisation will consider historical data from 2017, 2018 and 2019. For GCSEs, it will consider data from 2018 and 2019, except where there is only a single year of data from the reformed specifications." - page 9 of the 'Guidance') is defined. If I were writing a computer program to do this, "consider" just isn't good enough. You might also like to read of the experience of Huy Duong (see www.hepi.ac.uk/2020/05/18/two-and-a-half-cheers-for-ofquals-standardisation-model-just-so-long-as-schools-comply/), who wrote to Ofqual to ask specific questions of this type, but received only woolly answers, such as "We are still refining the detail of the statistical model and will publish more detail in due course" (that's dated 27th May).

You're right that 'grade inflation' used to be welcomed as a sign of better education, better teachers, brighter students (www.independent.co.uk/news/education/education-news/pupils-notch-up-record-gcse-results-5334652.html is just one example). But all that changed in 2010/2011, when Michael Gove was Education Secretary and Dominic Cummings his SpAd. That regime is still in place today. And if you look at the data on Stubbs or JCQ and plot the graph, you'll see that 'no grade inflation' has underpinned policy ever since Ofqual replaced the QCA.

And as regards what I think: I really do believe this year's results will be fairer - see, for example, www.hepi.ac.uk/2020/04/18/weekend-reading-this-years-school-grades-ofquals-consultation/.

And ButteryPuffin: two things - publish the full details of "statistical standardisation" and allow for schools to explain their 'outliers'. Those would have made a huge difference.

dennishsherwood · 22/06/2020 12:16

Ah... I've just seen this:

Michael Bell: And to add one more thing, Ofqual have confirmed in an email to me that the workings of the statistical standardisation model will NOT be released until the results are out, so there is no way of looking at what they are proposing in detail. (see www.facebook.com/groups/Alevel2020exams/)

Michael Bell is the chap, with his daughter Lexie, featured in The Guardian this weekend, www.theguardian.com/education/2020/jun/20/against-natural-justice-father-to-sue-exams-regulator-over-a-level-grades-system, and has been running this crowdjustice campaign www.crowdjustice.com/case/challenge-ofqual/.
