^The additional complication here is of course that those numbers are based on random sampling - if we took a random 1,000 of the population and sampled them. We aren't doing that; we are testing people who feel the need for a test/have been told to test, so our (real) positivity rate should be higher than that of the general population.
That would make the false positives a smaller proportion of the whole positives.^
Yes, it makes a real difference whether the sampling is independent or not. That's why I was considering the ONS survey results, which are supposedly a random sample. If you assume that all of the survey's positives are false positives, then at the current rate of testing you would expect 80-90 false positives a day, which is about 10% of the total positive tests. So for the daily tests it shouldn't be making a massive difference. But it could still be making a significant difference to the prevalence survey; we just don't know how much of an effect it is having.
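The arithmetic behind that upper bound can be sketched as below. All of the numbers are illustrative placeholders chosen to roughly match the figures quoted above (they are assumptions, not the actual UK data):

```python
# Fermi estimate: upper bound on daily false positives, taking the
# worst case where the ONS survey positivity rate is made up ENTIRELY
# of false positives. All numbers are illustrative assumptions.

ons_positivity = 0.0005   # ~0.05% of random-sample swabs positive (assumed)
daily_tests = 170_000     # tests processed per day (assumed)
daily_positives = 850     # positive results reported per day (assumed)

# If every survey positive were false, the false positive rate of the
# test itself could be at most the survey positivity rate.
max_false_positives = ons_positivity * daily_tests
share = max_false_positives / daily_positives

print(f"upper bound on false positives/day: {max_false_positives:.0f}")
print(f"as a share of reported positives: {share:.0%}")
```

With these placeholder inputs the bound comes out at 85 false positives a day, about 10% of the reported positives, which is the shape of the estimate above.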
The other big assumption I am making is that all the tests being carried out have the same false positive rate. It could well be that the testing done for the ONS survey has a different rate (and that pillar 1 and pillar 2 tests also have different rates from each other).