I read the first half and skimmed the second with glazed-over eyes.
I understand how they have applied statistics to two different sets of data (one 12x the size of the other).
There is always a risk of being the guinea pig at the start of a new qualification, although this time, the AQA students lucked out. It went the other way for Edexcel Science students at the first sitting in November.
These modular exam systems take a couple of years for teachers to get their heads around, along with the strategies for playing them. Add to that the implication that the January and June cohorts will not be statistically similar, for whatever reason. Another confounding variable is that students who were presumed to be more able deserted the new qualification in favour of IGCSE.
One of the descriptions of how they determined grade boundaries was by matching skills from one year to the next. It's not just about numbers and forcing a statistical distribution.
It seems that the conclusion is that the anomalous set of results was the January one. June 2012 was broadly similar to June 2011 at assessing student knowledge and skills, according to the report. Yet no one is suggesting regrading the January results, which would be the logical thing to do.
Bring on linear!