People need to be really careful with the data on questions like this, as you need to decide what is meant by "good" and "better than". There are umpteen metrics in the school performance data and to be honest, there are scripts for multiple episodes of More Or Less in there.
If you look at a metric related to getting into top universities, the proportion of kids getting good grades in facilitating subjects might catch your eye. Teddies (T) has been consistently better than Cokethorpe (C) over the last five reported years, averaging about twice the proportion. In fact by that metric, C is worse than Wood Green, Henry Box, Bartholomew and many local comps in most recent years. If you look at average grades and points, T is still slightly ahead of C in the 2017 results.
But these numbers can just reflect entry criteria and subject choices, and C has a strong history of trying to get more kids through full-time education till 18. So is it any surprise that a metric which is unfavourable to less academic kids looks bad?
In fact, in 2017 C was better than T on the Value Added metric, so it is not wrong to make the claim - it's just a bit useless without explaining what is being measured. I haven't gone through the last few years of VA scores, but I recall seeing some good numbers for C in the past, so I wouldn't be surprised if the website claim holds up to scrutiny. I'm less sure about some of the other headline claims on that web page. C's marketing clearly thinks it is OK to take a record of all Bs in Latin and ClasCiv and headline it as "100% A*-B" in Classics - and it's not the first year they've done it either. So you need to fact-check this stuff.
So if you think that what you really want to know about is the quality of the teaching, perhaps as expressed by the VA metric, it might be reasonable to claim that C is good and better than Teddies. I've moved my kids away from C recently for several other reasons which do include concerns about academic focus and performance, but they do get some things right.
Apologies for the long pedantic post. Ranking anything by multi-dimensional data is fundamentally impossible unless you decide first how to boil it down to a one-dimensional thing, and there are many ways of doing that.
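To make that last point concrete, here's a toy sketch (with invented numbers, not real data for either school): the same two schools swap rank depending purely on how you weight the metrics into a single score.

```python
# Toy illustration: invented scores on two metrics for two hypothetical
# schools. Which school "wins" depends entirely on the weights chosen
# to collapse the two dimensions into one number.
schools = {
    "School A": {"top_grades": 0.60, "value_added": 0.10},
    "School B": {"top_grades": 0.30, "value_added": 0.40},
}

def rank(weights):
    """Boil multi-dimensional scores down to one number, then sort best-first."""
    def score(metrics):
        return sum(weights[k] * metrics[k] for k in weights)
    return sorted(schools, key=lambda s: score(schools[s]), reverse=True)

# Weight raw grades heavily: School A comes out on top.
print(rank({"top_grades": 0.8, "value_added": 0.2}))  # → ['School A', 'School B']

# Weight value-added heavily: School B wins instead.
print(rank({"top_grades": 0.2, "value_added": 0.8}))  # → ['School B', 'School A']
```

Neither ranking is "the" right one; each just encodes a different opinion about what matters.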