The problem with original research studies is that you can easily find two studies on the same topic that contradict each other. Here’s an example I found after 5 minutes of googling: sitting is associated with increased mortality vs sitting has no effect on mortality. This can be extremely frustrating for a person who just wants one clear answer. As a result, some people may abandon science as a source of knowledge and deem it untrustworthy and unreliable.
Apparently contradictory outcomes of research studies may be partly responsible for the recent growth of the antivaccination movement. When hundreds of people in an orthodox Jewish community in New York contracted mumps between 2009 and 2010, it was grist to the antivaxx movement’s mill – how can a highly vaccinated population contract a disease they’d been vaccinated against, right? Well, they would have found out had they read the research study that described the outbreak. In it, the authors not only described the possible scenarios that led to the outbreak (densely packed households; waning vaccine immunity; a vaccine less effective against this particular genotype, etc.), but also emphasized the importance of vaccination – the outbreak remained confined and did not spread to the general population, and the symptoms were milder than in an unvaccinated population.
So how to go about finding a trustworthy scientific study?
I’ll say it as bluntly as I can: I advise lay people to avoid original research papers altogether and focus instead on reviews. Non-scientists may not spot the fine differences between two seemingly contradictory studies that scientists spend years learning to identify, like different materials and methods, sample sizes, drug concentrations, age and sex of participants, you name it. And sometimes they simply can’t access the journals and have to rely on the article’s abstract at best, or on a PR piece at worst. But if a topic has been researched long enough, you can find reviews summarizing what is known about it. Non-scientists would in this case do best to rely on such a professional digest of the researched topic.
My advice is in keeping with what I said previously – that you can’t be an expert in all possible fields. In fact, it is next to impossible to keep up with all new developments even in one single field. That might have been feasible some 50 years ago, before the boom in the life sciences, but not today (gone are the times when you could call yourself an expert in neuroscience – now you can rule only in a sub-domain of this gigantic field). But if a person has been working for years in one particular area – let’s say Parkinson’s – then this person is best suited to write a review of what is known about Parkinson’s disease. Simples! And exactly these knowledgeable people are the ones scientific journals seek out to provide their expert opinion and review the recent developments in the field.
OK, reviews. Got it. But which ones are the best?
This is where Cochrane reviews come in. They gather top-tier evidence and, based on pre-selected criteria, pool all the data together to answer a specific question. Basically, they aim to collate and evaluate data in an unbiased manner, which makes them a highly reliable source of information (recently recognized through a collaboration with the WHO). Why pre-selected criteria? To avoid the bias just mentioned: if you pool hundreds of studies together and four of them are complete outliers, far from the results of the other studies, you might feel the urge to make the resulting graph ‘neater’ by removing those four points. If those studies fulfilled the criteria, however, they have to be included in the resulting graph – there is no other way, no questions asked. And everyone can check this, because as a Cochrane reviewer you want to be transparent, so you publish the criteria together with the final review.
Here’s what it looks like in a specific example. Imagine you want to find out whether medical professionals’ recommendations help people quit smoking (text at the top in Fig. 1). So you state at the outset that you will gather studies where doctors, but not nurses, gave advice; that you will include only studies comparing advice versus no advice, without additional treatment like nicotine patches; and that you will accept studies done on both sexes, but not on, say, pregnant women (this would be the plan, or protocol, for the review, as shown in Fig. 1). You add all the other criteria and select what you want to measure. Then you assign two people to independently identify all the eligible studies and extract data from them (search, sift and extract in Fig. 1). Next comes the assessment of the quality of the selected studies: sometimes you simply can’t find out the number of participants at the follow-up visit, or the recruitment criteria are unclear. If such studies are otherwise sound, you can still use them, but you’ll give them less weight than the more robust evidence. Then the appropriate statistics are applied and the results are visualised, discussed and written up. Here’s what the take-home message for our smoking example looks like:
And here’s the link to the whole Cochrane review titled: Physician advice for smoking cessation.
Considering that these reviews are transparent, that they actively avoid any kind of bias, and that they are based on the best evidence we have, you can rely on them without worrying about misinterpreting the main findings.
Have I just told people to avoid science?
No. As a science enthusiast, I encourage everyone to delve into science as much as possible, so we can all share the knowledge that’s been gathered by now and the processes that led to it. But as a person who has watched non-scientists say in frustration, “sugar was good, now it’s not; cholesterol was bad, now it’s good; adult brains didn’t generate new nerve cells, now they do”, I offer one piece of advice – go and seek out a recent review on the matter, or ask a scientist. Of course you can look up original research articles and draw conclusions from them yourself, but be aware of the risk of misunderstanding, misinterpreting or over-interpreting the data.