There has to be ‘tolerance’. So it is right that if the first examiner marked an answer in the correct band (and for non-teachers, that’s not a grade, but a band for that particular question - the more marks an individual Q is worth, the more bands it has. Long essays for example tend to have 5 bands) then that should stand. Essentially the band is the tolerance.
However, what sometimes emerges is that someone hasn’t chosen the right band and the answer is out of tolerance. When this happens, that question is essentially remarked. In subjects where there are lots of low mark Qs, there’s more scope to be out of tolerance and therefore more questions where the mark might go up. In an A Level essay subject with perhaps just 3 Qs on a paper, each Q might be worth a large number of marks, so the bands are broader and there’s more scope for differences of view, which remain within tolerance and won’t result in a change of mark. It could be said that there’s an inequity between subjects because of this: humanities have less scope than subjects with lots of shorter answers for papers to see a change in mark or grade following review. That doesn’t seem right, but I can see why the system exists as it does. A level of ‘tolerance’ has always existed when examiners are being moderated, at every stage, and it is necessary.
What remains worrying is that lots of papers do still need to go up, with marks and possibly grades changed, due to marking which is out of tolerance and sometimes significantly so. As is reported here, one person saw 19 students rise in one centre…and that’s before we have even reached the deadline for requesting remarks. It’s not uncommon for a school to have at least 1 subject at A Level or GCSE experiencing significant numbers of upgrades following requests for scripts where marks have been anomalous between papers, significantly out of kilter with expectations, or where the rank order of a cohort is significantly different to expectation. It’s not an infrequent experience for schools.
At what point is an examiner’s entire set of marking looked at? Is there a level of review which finds their marking incorrect that prompts this? Because when a centre has multiple reviews resulting in significant grade changes in a subject, surely alarm bells should ring. Or is it that the Board will only look at papers if the centre has requested it…and those that remain silent just have to put up with marks that the Board knows are in all likelihood incorrect, given the amount that have already emerged as incorrect and out of tolerance? That is where the system is wrong, encouraging and supporting inequality.
It’s not good enough to say schools need to look at the breakdown of results, get scripts and talk to parents. Even with this, some students are disadvantaged by the system because their parents and school cannot afford to put in for a review, or simply don’t understand or value the possible outcomes. If the Board puts all the onus onto schools and families, rather than actively reviewing all of the marking of examiners who have come to light in the review process as inaccurate markers, it is essentially turning a blind eye and allowing the inequalities in education to carry on.
If we were to look at the proportion of students on FSM having their papers reviewed or grades changed, it would be lower than the average. If we looked at those in fee paying schools, it would be above average. The system is doing nothing to improve that situation; its very nature allows this to continue and even grow.