But they did it without looking at any coursework or other evidence to ensure that those they chose to downgrade were the right students.
The underlying assumptions are that a school's performance this year would have been broadly in line with previous years, and that teachers know their students well enough to place them in the correct order. If a school that historically had a pass rate of around 65% was genuinely on course for a pass rate of 85% this year, it will have been unfair to those students.
Equally, if a school predicted that 10 students would achieve the highest grade but historically you would only have expected 5 to do so, the students the teacher believed would have the 6th to 10th highest marks will have been pushed down a grade. And if the teacher got the order wrong and, say, the student ranked 9th would actually have finished 2nd, it will have been unfair to that student.
But if a school's historic performance was a good indicator of this year's likely performance (which, in general, it is) and teachers placed students in the correct order, they will have downgraded the right students.
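A minimal sketch of how that kind of adjustment works, assuming a simplified model in which a school's historical grade distribution is applied to this year's cohort in the teacher's rank order. The grade labels, shares, and the `standardise` function are illustrative only, not the actual Ofqual algorithm (which also factored in prior attainment and small-cohort exceptions):

```python
from collections import Counter

def standardise(ranked_students, historical_shares):
    """Assign grades by applying a school's historical grade distribution
    to this year's cohort, taken in the teacher's rank order.

    ranked_students   -- student names, best predicted performer first
    historical_shares -- dict mapping grade -> historical share of the cohort,
                         best grade first (shares should sum to ~1.0)
    """
    n = len(ranked_students)
    # Turn historical shares into a number of places per grade this year.
    places = {grade: round(share * n) for grade, share in historical_shares.items()}

    awarded = {}
    i = 0
    for grade, count in places.items():
        for student in ranked_students[i:i + count]:
            awarded[student] = grade
        i += count
    # Any students left over through rounding get the lowest grade.
    lowest = list(historical_shares)[-1]
    for student in ranked_students[i:]:
        awarded[student] = lowest
    return awarded

# Illustrative numbers from the example above: the teacher predicts the top
# grade for 10 students, but historically only ~5 of a 20-student cohort got it.
ranked = [f"student_{k:02d}" for k in range(1, 21)]   # teacher's rank order
history = {"A": 0.25, "B": 0.40, "C": 0.35}           # ~5 As per 20 students
grades = standardise(ranked, history)
print(Counter(grades.values()))  # students ranked 6th-10th drop out of the top grade
```

Under this model the only inputs that matter are the historical distribution and the rank order, which is why a wrong ranking, or a genuinely improving school, produces the unfairness described above.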
Sounds to me like teachers massively overpredicted.
It does indeed. I note that some students have been awarded a higher grade than predicted, so some teachers underpredicted, but on the evidence it seems most were far too optimistic. If the teachers' unadjusted predictions had been used, the results would have lacked credibility.
Given the things I've heard some teachers in England say about the approach being taken for GCSEs, I suspect some schools believed they could game the system to improve their results. It may be that we will hear something similar about GCSE and A-level results when they come out later this month.