prh47bridge How is statistical modelling based on previous exam results at particular schools giving any information about the ability of staff in that school to accurately predict their students’ exam grades?
If teachers in those lower-achieving schools accurately predicted the previous exam results, then their current CAGs are likely to be equally reliable, even if the current cohort is generally higher-achieving overall. Perhaps pupil premium, new and improved teaching and a higher-ability cohort have come together in a perfect storm to exceed expectations and outpace a bell-curve rate of improvement?
In the current system the current CAGs will be downgraded based on previous poor pupil exam performance, regardless of the teachers' ability to give accurate CAGs.
The statistical model, if it was designed to identify overestimated or underestimated grades, doesn't actually do what it was designed to do.
And although predicted grades may not be published, they are on student and school records. Both my DDs got predicted grades regularly on their reports throughout their GCSE and A Level years.
Predicted grades vs actual achievement are regularly analysed after results day in school departments. This information is readily available and easy to access. It is blatantly obvious how well correlated (or not) they are.
Both my kids are now anxiously awaiting the results of the current lottery - never before has there been such a perceived disconnect between the actual work and skills achieved and the letter grade that guides the next steps in their academic and vocational journey.
Obviously no system is fully fair or perfect (linear exams, for example!), but the government's approach to trying to maintain the credibility of this year's results has been shambolic and deeply insulting to the professionalism of teachers (which shouldn't be surprising - it's just been made even more obvious by this situation).