The standardised scores are meant to tell you where your child's mark sits within a cohort. So the most important thing is knowing which cohort was standardised - was it just the 45 who sat, or a larger group, eg all the pupils who had ever sat that mock, or a random selection of say 1,000 Year 5 pupils who were asked to sit the paper in order to standardise it?
Secondly, standardisation takes the mean (average) mark for the cohort and then measures how far each mark sits statistically from that mean - usually in standard deviations, rescaled so that the mean comes out at 100. So in a small sample it is very likely that the top mark wasn't 140, but the mean mark will always be 100.
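To make that concrete, here is a minimal sketch of the arithmetic. I'm assuming the common convention of mean 100 and standard deviation 15 - real test providers may use different scaling and usually age-adjust the raw marks first, so treat the numbers as illustrative only:

```python
# Hedged sketch of standardisation: map raw marks to scores with mean 100.
# Assumes the conventional SD of 15; the hypothetical cohort below is made up.
from statistics import mean, pstdev

def standardise(raw_marks):
    """Rescale raw marks so the cohort mean maps to 100 (SD 15 by convention)."""
    mu = mean(raw_marks)
    sigma = pstdev(raw_marks)  # spread of this particular cohort
    return [round(100 + 15 * (m - mu) / sigma) for m in raw_marks]

# A small, tightly bunched cohort: the mean mark (80 here) maps to 100
# whatever the cohort looks like, and the top mark need not reach 140.
marks = [70, 75, 78, 80, 82, 85, 90]
print(standardise(marks))  # [75, 88, 95, 100, 105, 112, 125]
```

Notice that because the cohort is bunched, a few raw marks either side of the mean translate into quite large moves in the standardised score.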
So in the two scenarios above: if the standardisation was done across the cohort of 45, there may not have been enough different birth months to produce meaningful differentials by birth month, but it is likely that the average mark was just a few points below your DC's, and that many children scored in a small range - eg of the 45, most scored between say 70 and 90%.
If a larger cohort was standardised, then it suggests that most children scored highly on this paper, and that the 45 who sat weren't at the high end of the scores.
Of the two, I suspect the former is more likely, and all the data really tells you is that he did well in this cohort of fairly well-prepared children. It would be hard to extrapolate to an actual standardised score, but usually you are trying to score within the top 20/10/5% of candidates, depending on which school you're aiming for, so I would view it as a good outcome. It should also warn you that dropping just a couple of marks could pull them a lot further down the ranked list - a lot of children will be on very similar marks.
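That last point about rank sensitivity is easy to show with made-up numbers. Assuming a hypothetical cohort where marks bunch closely (the list below is invented, not real data), losing a couple of raw marks costs several places:

```python
# Sketch of rank sensitivity in a bunched cohort (hypothetical marks).
cohort = [88, 87, 87, 86, 86, 86, 85, 85, 84, 80]

def rank(mark, cohort):
    """1-based rank: count of cohort marks strictly above, plus one."""
    return sum(m > mark for m in cohort) + 1

print(rank(88, cohort))  # top of this cohort: rank 1
print(rank(86, cohort))  # just two marks lower: rank 4
```

So in a cohort like this, two raw marks are the difference between 1st and 4th - which is why a "good" mock score still leaves little margin.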
Does that make any sense?!