We need to distinguish between surveillance studies of populations, where almost all will NOT be infected,
vs
tests of people with symptoms, who still probably don't have COVID but do have a significantly increased chance.
I tried out 2 different scenarios to get a very rough idea of the consequences of false negatives and positives
Using a flow chart from Spiegelhalter, in which he calculated the false negatives & positives that tests would produce assuming 6% infected,
I copied his basis and kept 6% infected for the situation of people tested because of symptoms
For ease of calculating both scenarios, I multiplied by 10 to get a sample of 10,000 (and crossed out his figures for 1,000)
==> For the 6% infected (with symptoms) this gives 510 positives in total instead of the 600 truly infected, i.e. a net 90 too few positives
and then in red I added my figures assuming a general population with only 0.1% infected (so no symptoms)
==> which gives 106 positives instead of 10, i.e. 96 too many positives from a sample of 10,000 people
This illustrates why population surveys need to use serology tests, not just swabs. Hence look at the weekly surveillance reports for absolute numbers; the daily numbers are more useful for trends
and also why
the ONS COVID-19 Infection Survey will be expanded from regularly testing 28,000 people per fortnight in England to 150,000 by October,
and eventually to 400,000 people, with the other 3 UK nations also being added.
This will greatly improve confidence levels, i.e. narrower intervals around the estimated prevalence
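A rough sense of scale: the sketch below uses the normal approximation to the binomial with an ASSUMED illustrative prevalence of 0.1%, and ignores test error and survey design effects (clustering, repeat testing), so it understates the true uncertainty.

```python
import math

def margin_of_error_95(prevalence, n):
    """Approximate 95% margin of error for an estimated prevalence,
    via the normal approximation to the binomial (illustrative only:
    ignores test error and survey design effects)."""
    se = math.sqrt(prevalence * (1 - prevalence) / n)
    return 1.96 * se

for n in (28_000, 150_000, 400_000):
    moe = margin_of_error_95(0.001, n)   # assumed 0.1% prevalence
    print(f"n={n:>7,}: 0.1% +/- {moe:.3%}")
```

Going from 28,000 to 150,000 people cuts the margin of error by a factor of sqrt(150/28), a bit more than 2x.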
However, an individual person with symptoms is probably much more likely to get a false negative than a false positive
- saliva or blood tests will hopefully improve accuracy