As above, plus whether the results are repeatable and reflect actual causation, or whether there was just a correlation that happened by chance, or an unrelated factor that was not fully accounted for in the study.
Eg the 'blood clots after vaccination' question.
The first question any scientist/medic would ask of the above is 'how many blood clots would you expect to see in a population of X million people in any Y-week period anyway?' It's not a case of saying 'Person A had the vaccine last week and had a blood clot this week, therefore it was due to the vaccine', because blood clots are a common health issue in the general population.
I think one of the regular BBC doctors said that they saw more blood clots in a year amongst their practice list of around ten thousand patients than the 37 cases, reported among however many million people, that caused the pause in the vaccine programme in Europe.
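That base-rate question is just arithmetic, and can be sketched like this. Every number below (population size, background incidence, time window) is an invented illustration, not the actual figures from the European review:

```python
# Back-of-the-envelope base-rate check: how many blood clots would we
# expect to see in a population over a given window anyway, before
# attributing anything to the vaccine? All numbers are assumptions.

def expected_background_cases(population, annual_rate_per_100k, weeks):
    """Expected cases in `population` over `weeks`, given a background
    annual incidence of `annual_rate_per_100k` per 100,000 people."""
    annual_cases = population * annual_rate_per_100k / 100_000
    return annual_cases * weeks / 52

# Suppose (purely for illustration) 17 million people, a background clot
# incidence of 100 per 100,000 per year, and a 2-week observation window.
expected = expected_background_cases(17_000_000, 100, 2)
print(f"Expected background cases: {expected:.0f}")  # prints: Expected background cases: 654
```

If the expected background count dwarfs the reported count, the reports alone can't distinguish a vaccine effect from ordinary incidence.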
Plus if you're making a decision like that, you also have to weigh up the impact of not vaccinating people, which is likely to lead to more COVID cases and hence additional hospitalisations and deaths.
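That weighing-up can also be sketched as a toy expected-value comparison. Every figure here is an assumption picked purely to show the shape of the calculation, not real data about COVID or any vaccine:

```python
# Toy risk-benefit comparison for pausing a vaccination programme.
# All inputs are invented assumptions, chosen only for illustration.

paused_people = 1_000_000           # people whose doses are delayed (assumed)
covid_infection_risk = 0.01         # chance of infection during the delay (assumed)
covid_fatality_rate = 0.005         # infection fatality rate (assumed)
clot_risk_per_person = 2e-6         # assumed order of magnitude for the clot signal

expected_covid_deaths = paused_people * covid_infection_risk * covid_fatality_rate
expected_clots = paused_people * clot_risk_per_person

print(f"Expected extra COVID deaths from pausing: {expected_covid_deaths:.0f}")
print(f"Expected clot cases avoided by pausing: {expected_clots:.1f}")
```

With these invented inputs, the expected harm from pausing is far larger than the harm avoided, which is the kind of comparison regulators have to make with real numbers.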
It's very easy to design a bad study that gives nonsensical results and it's actually quite hard to run a scientifically robust study where the only variable is the one that you're testing, especially when looking for the effect on the population as a whole. Hence the importance of peer review.
A good illustration would be to ask Mumsnetters how much they earn and compare the results with the population as a whole: a self-selected sample like that is unlikely to be representative.
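The self-selection problem can be simulated. Here the income distribution and the selection rule (higher earners being more likely to respond) are both invented for illustration; the point is only that the self-selected sample's average drifts away from the population's:

```python
import random

random.seed(42)

# Hypothetical population of incomes (lognormal, invented parameters).
population = [random.lognormvariate(10, 0.5) for _ in range(100_000)]

# Self-selected sample: only above-median earners respond, and each of
# them with 50% probability. This selection rule is a pure assumption.
threshold = sorted(population)[len(population) // 2]  # median income
sample = [x for x in population if x > threshold and random.random() < 0.5]

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"Population mean: {pop_mean:.0f}")
print(f"Self-selected sample mean: {sample_mean:.0f}")
```

The sample mean comes out well above the population mean, even though nobody lied: the survey design, not the answers, produced the nonsensical result.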