## Anyone good with Percentiles?

I understand the basics of Standardised Age Scores and Percentiles, but this example has me confused (brain tired with other matters, perhaps). Anyhow, thoughts welcome:

(working on basis of 69/70 to 140/141 with 100 as average)

SAS for English 115

SAS for Maths 122

Combined SAS 237 (maximum possible 282)

I would assume individual percentiles to be 84th percentile for the 115 and 93rd percentile for the 122 (using www.nfer.ac.uk/nfer/research/assessment/eleven-plus/standardised-scores.cfm ).
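As a quick sanity check (a sketch assuming the SAS scale follows a normal distribution with mean 100 and standard deviation 15, which is what the NFER table approximates), the individual percentiles can be computed from the normal CDF:

```python
from statistics import NormalDist

# SAS scale: mean 100, standard deviation 15
sas = NormalDist(mu=100, sigma=15)

for score in (115, 122):
    pct = sas.cdf(score) * 100  # percentage of the cohort scoring below
    print(f"SAS {score} -> {pct:.2f}th percentile")
# SAS 115 -> 84.13th percentile
# SAS 122 -> 92.88th percentile
```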

How is percentile worked out on the Combined SAS though?

See, if the combined percentile were much higher than the individual ones, I could understand that, on the logic that the probability of having a child who is excellent at both Maths **and** English is lower than the probability of being excellent at one or the other.

e.g. if 1/3 are good at English and 1/3 are good at Maths (independently), then 5/9 are good at English **or** Maths, but only 1/9 are good at English **and** Maths.

But for the percentile to be much **lower**....doesn't add up.

Their apparent answer was the one I posted a few posts back, lougle, and which I think was 'fudging' the issue a bit. They did not, it would seem, back it up with chart/tables to justify the 61.

Can the people who gave you the example not explain the fact that the combined is 61?

butterflymum, if the English and Maths scores are identical for any given randomly selected individual, then yes the standard deviation would double. If the two scores vary for a given individual, the standard deviation would increase by less than double, in which case obviously a score of 237 is even beyond the 89.13th percentile.

We know:

15*sqrt(2) <= sd <= 30

and therefore

89.13 <= %ile <= 95.94
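Those bounds can be checked numerically (same normality assumption; sd = 30 corresponds to perfectly correlated papers, sd = 15*sqrt(2) to uncorrelated ones):

```python
from math import sqrt
from statistics import NormalDist

mean = 200  # sum of the two means of 100

for sd in (30, 15 * sqrt(2)):  # perfectly correlated vs. uncorrelated papers
    pct = NormalDist(mean, sd).cdf(237) * 100
    print(f"sd = {sd:.2f}: 237 is at the {pct:.2f}th percentile")
# sd = 30.00: 237 is at the 89.13th percentile
# sd = 21.21: 237 is at the 95.94th percentile
```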

(ps... I would be the first to admit I could very well be missing something obvious, as my head is buzzing with other issues at the moment. I am only continuing with this query re the 61 because it has bugged me since I became aware of it and my curiosity is getting the better of me, so thanks again to all who have given input thus far... and yes, I know, maybe I shouldn't be so curious and should just get on with the issues that really need my attention.)

_There is no one-to-one relation between the percentile points calculated individually and together.

The percentile is the cumulative percentile from the frequency distribution of the score, so the percentile for the English scores will be based on the English frequency distribution, the percentile for the Maths scores on the Maths frequency distribution, and the percentile for the Combined Score on the Combined Score frequency distribution._

Above is apparently the response given by those responsible for the tests that produced the results in my original post. I can confirm again: the tests were English and Maths (no CAT or NRIT); there was no third test. They were sat on the same day, by the same cohort, as individual papers. It was stated that as the test scores would be standardised, the scale would run from 69 to 141 with 100 as the mean.

The percentile table linked to in my first post allows for 69/70 to 140/141 with a mean of 100, hence why I used it, and my thinking was along the lines of Joan's: that **the standard deviation of the joint distribution doubles, i.e. it is 30. A joint score of 237 in that case would be on the 89.13th percentile**.

So, am I still missing something? Yes, I appreciate certain children will have performed better in English than in Maths and vice versa, but I keep falling back on the thought that as the raw scores have been standardised to the same frequency distribution, then surely percentile should likewise follow suit.

I take it we are talking about Verbal Reasoning and Quantitative Reasoning here, rather than English and Maths?

The mean of each distribution is 100 and the standard deviation is 15.

115 is at 84.13th percentile

122 is at 92.88th percentile

The mean of the joint probability distribution of the two is clearly 200. However, the standard deviation of the joint probability distribution is determined by the covariance of the two distributions. This is unknown to us; however, if the VR and Quant scores are identical, then clearly the standard deviation of the joint distribution doubles, i.e. it is 30.

A joint score of 237 in that case would be on the 89.13th percentile.

On the other hand if the covariance were zero, i.e. there was no correlation between the two scores whatsoever, then if you got 115 on one test, the most likely score on the other would be 100, and likewise, if you got say 60 on one test, then the most likely score on the other test would be 100.

Since there is no correlation between the two, the **variance** of the JPD in this case is equal to the sum of the respective variances, and since they both have the same variance, this would imply that we double the variance, and therefore the standard deviation of JPD is 15 * sqrt(2), i.e. 21.21.

If the mean is 200 and the sd is 21.21, then a score of 237 is on the 95.94th percentile.

Accordingly the combined percentile must be between 89.13 and 95.94.

Probably closer to 89.13 than 95.94 I might add, as I would suggest that the two are highly correlated, BUT 61 is clearly nonsense.
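For intermediate cases, the standard deviation of the sum of two N(100, 15) scores with correlation rho is 15 * sqrt(2 * (1 + rho)), so the combined percentile slides between the two extremes as the correlation rises (a sketch; the real rho for a cohort is unknown):

```python
from math import sqrt
from statistics import NormalDist

def combined_percentile(score, rho, mean=100, sd=15):
    """Percentile of the sum of two N(mean, sd) scores with correlation rho."""
    sd_sum = sd * sqrt(2 * (1 + rho))
    return NormalDist(2 * mean, sd_sum).cdf(score) * 100

for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho}: 237 -> {combined_percentile(237, rho):.2f}th percentile")
```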

The only way you could get 61 is if there were a third score (non-VR?). Given the scores in the OP, a population mean of 300, and a standard deviation somewhere between sqrt(3) * 15 and 3 * 15, you would need a non-VR score of less than 76 (if, while CAT scores are randomly distributed among individuals, each component score of a given individual is 100% correlated with that individual's other component scores) or less than 70 (if there is no correlation at all).
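Those two cut-offs can be back-calculated from the claimed 61st percentile (a sketch under the same normality assumptions; 237 is the English + Maths total from the OP):

```python
from math import sqrt
from statistics import NormalDist

have = 237    # English + Maths SAS from the OP
mean3 = 300   # mean of a three-test total

for sd3 in (3 * 15, sqrt(3) * 15):  # fully correlated vs. uncorrelated components
    # three-test total sitting exactly at the 61st percentile
    cutoff = NormalDist(mean3, sd3).inv_cdf(0.61)
    print(f"sd = {sd3:.2f}: the third score would need to be below {cutoff - have:.1f}")
# sd = 45.00: the third score would need to be below 75.6
# sd = 25.98: the third score would need to be below 70.3
```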

So one or more of the numbers quoted in the OP is wrong, but most likely the overall percentile.

I'm with titchy.

If the child is better than 84% of the cohort for Maths and gets an *even better* result for English, then how can the combined score for both subjects be *less* than an 84 centile?

The centiles aren't percentages. They are the point at which a certain percentage of children taking the test scored less than the given score.

So, an SAS of 115 would be the 84th centile, which means that 84% of children scored below 115 and 16% scored 115 or above.

But how can the centile on the combined be less than both the Maths and English centiles? How can just 16% of kids do better in English, and a mere 7% do better in Maths, but 39% do better in English AND Maths combined?

And I assume, similarly, that an SAS from a combined English/Maths exam, converted to a percentile, is also not the best way to view the scores, because by combining English and Maths in one test and giving a total score, differences between performance on each aspect of the test are then obscured.

Aha, so given that the English and Maths exams were separate and not a combined English/Maths exam, then the composite scores converted to cohort percentile isn't the best way to view the SAS scores because important differences between the performances on each section will be obscured.

Well...I couldn't resist, so I phoned NFER and spoke to someone on the enquiry line. She said that you do need a different percentile ranking chart for the combined test.

That makes two of us .....will check in this evening to see if anyone else has any more thoughts for us. Thanks meantime.

"But is it not also true that the SAS is also standardising the raw scores in such a way as they fall within a range where 100 is average and then each +/- deviation of 15, thereby range of 70- to 140+, so needing only one percentile table?"

I don't know

"Mmmmm.... I can see what you are saying to an extent, lougle, but surely that would only be the case if the tests were sat by different cohorts? These tests would have been sat on same day, by same cohorts, and both English and Maths SAS arrived at using same range, therefore would conversion not be from same table?"

No, because the standardisation process looks at the raw scores as follows:

1) Break down the cohort into sub-groups according to age in years and months.

2) For **each sub-group** calculate the mean score.

3) For **each sub-group** calculate the Standard Deviation.

4) For each **pupil** calculate the Standardised score using:

S = 15(b - a)/sd + 100 (where S = SAS, b = raw score, a = mean score, sd = standard deviation)

So, to get the Standardised score, you calculate:

15 x (pupil's raw score - mean score) / standard deviation + 100

To be meaningful, this would have to be done separately for Maths and English, because children are not necessarily as good at one subject as at the other, and the Maths and English tests aren't necessarily of the same difficulty.
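That formula is small enough to sketch directly (the function name and the example numbers here are illustrative, not NFER's):

```python
def standardised_score(raw, group_mean, group_sd):
    """S = 15 * (b - a) / sd + 100, computed within an age sub-group."""
    return 15 * (raw - group_mean) / group_sd + 100

# a pupil scoring one standard deviation above their sub-group's mean:
print(standardised_score(60, 50, 10))  # -> 115.0
```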

But is it not also true that the SAS is also standardising the raw scores in such a way as they fall within a range where 100 is average and then each +/- deviation of 15, thereby range of 70- to 140+, so needing only one percentile table?

It's confusing, isn't it?

The Standardisation relates to the differences in age between different members of a cohort. So, children can get the same raw score in a test but have different Standardised scores.

I can see that this means that children's Standardised scores can be combined to give an overall Standardised score that can be compared.

However, the percentile ranking is different.

The way I think it should work for percentiles (but I don't know!) is that you would need 3 tables:

Maths percentile ranking

English percentile ranking

Combined ranking

Because the percentiles are simply a chart showing what percentage of children scored below the defined score in the given subject.

If children scored badly in general, on one subject, then the percentile for a given SAS will be quite high. Conversely, if children scored quite well in general on that one subject, the percentile for the SAS will be quite low.

Once you combine them, you'd have to have a correlating combined percentile chart, to show what percentage of children scored less than the given combined SAS.
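Such a combined chart is just the empirical cumulative distribution of the combined totals. A toy sketch (the cohort data here is invented purely to show the mechanism):

```python
from bisect import bisect_left

def percentile_chart(scores):
    """Map each distinct score to the % of the cohort scoring below it."""
    ordered = sorted(scores)
    n = len(ordered)
    return {s: 100 * bisect_left(ordered, s) / n for s in set(scores)}

# invented combined (English + Maths) SAS totals for a ten-pupil cohort
cohort = [190, 200, 210, 215, 220, 225, 230, 237, 250, 260]
chart = percentile_chart(cohort)
print(chart[237])  # 70.0 -- 7 of the 10 pupils scored below 237
```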

Mmmmm.... I can see what you are saying to an extent, lougle, but surely that would only be the case if the tests were sat by different cohorts? These tests would have been sat on same day, by same cohorts, and both English and Maths SAS arrived at using same range, therefore would conversion not be from same table?

**it isn't a simple average**

I realise that it may not be that simple, but 61 isn't even in the same ballpark.

Oops, lots of replies since I posted previous comment....will have a quick read through.

But surely the whole point of SAS is that they can be combined, as they have already been standardised? And given that both English and maths were standardised to same range, surely percentile should work on same basis for both?

From the link, lougle:

"scores from more than one test can be meaningfully compared or added together

Standardised scores from most educational tests cover the same range from 70 to 140. Hence a pupil's standing in, say, mathematics and English can be compared directly using standardised scores. Similarly, should a teacher wish to add together scores from more than one test, for example in order to obtain a simple overall measure of attainment, they can be meaningfully combined if standardised scores are used, whereas it is not meaningful to add together raw scores from tests of different length or difficulty."

Senua, because it isn't a simple average: children won't necessarily perform equally well in both Maths and English.

Each chart is based on the actual scores of the cohort. The percentiles relate to the percentage of children in the cohort who scored below that SAS. So, if a particular cohort were stronger in Maths than English, a child would need a much higher Maths SAS to ensure that 90% of children scored lower than them; equally, for English, they'd need a much lower score to ensure that 90% of children scored lower than them.
