Do you know someone with cancer whose treatment isn’t working? Do you know someone on psychiatric meds who is experiencing weird side effects? Do you know a business owner who’s struggling with decisions like whether to invest in high-tech equipment? Doctors are overworked and MBAs are poorly trained. If you want answers you’ve got to read the research yourself. The problem is, half of it is bullshit, and it’s really hard to tell which half.
When a paper is submitted to a good journal, a lot of smart people have at it with the academic equivalent of a howitzer. If it survives to be published, you’d think all the bugs would be worked out. Unfortunately, not all research is peer-reviewed, and reviewers tend to miss or tolerate certain weaknesses. While this is often understood by the scientific community, it can confuse the bejesus out of Joe Average.
If you’re an average person, and you need to make sense of scientific papers, this guide is for you. If you are a science journalist and you have fewer than six graduate-level research methods courses, maybe you ought to read this as well. Hell, if you have a PhD and didn’t take methods for some reason, here’s hoping this helps.
1. Is the paper in a high-quality, peer-reviewed journal?
If the “research” you have is not in a journal, be very, very wary. Sometimes good research is published in books, but be careful. The kind of research I’m talking about is done by scientists. Journalists and government officials are not scientists. Reports commissioned by government departments are (usually) not scientific.
Coming back to research published in journals, go to the journal’s webpage. Don’t worry if it’s a crappy webpage. Look for the words “peer-reviewed” and “acceptance rate.” If the journal is not peer-reviewed or has an acceptance rate over 20%, that’s bad. You can also google the journal’s “impact factor.” Higher is better. If you think it’s a bad journal, don’t even read the paper. If the journal is good, it means most of your work has been done for you; however, a journal’s reviewers tend to miss or tolerate certain kinds of errors that you still have to watch out for.
The same line of reasoning applies to conference proceedings. Since some fields, such as human-computer interaction, publish much of their best work in conferences, these can be excellent sources of research. However, you should only look at good, peer-reviewed conferences with low acceptance rates.
2. Who financed the study?
If whoever financed the study had something to gain from the results, don’t trust it! This is especially important in drug trials because these are often done by the drug companies who are explicitly trying to show that the drug is safe and effective. A single independent study to the contrary should be given just as much weight as all the Big Pharma studies promoting the drug combined.
Sometimes this is more subtle. A lot of research on security and drugs, for instance, is politically motivated. If a government funds a study to show that marijuana is dangerous, and the results show the opposite, things can get hairy.
3. What kind of study is it?
You have to evaluate different kinds of studies differently. The kinds you’re likely to encounter are experiments, surveys, mathematical models, meta-analyses, and qualitative studies.
If the article talks about treatment groups and control groups, it’s probably an experiment. Reviewers are very good at checking that the experiment is correctly designed and the results well-interpreted, so you don’t have to worry about that. What you have to really watch out for is who participated in the study. If a drug trial was done on 100 white women, and you are a black man, the results might not apply to you. If you are a professional programmer with 20 years’ experience, the results of a study on 2nd year undergrad computer science students might not apply to you.
If the article talks about a large number of people filling out a questionnaire online, on paper, by telephone or in person, it’s probably a survey. Reviewers are very good at making sure that the questionnaire is correctly designed and the analysis is done right, but look out for causality! Usually questionnaires argue that X causes Y, but only show that X is correlated with Y.
For instance, suppose a study claims that, for corporations, acting ethically (X) causes increased profits (Y). The study then gives evidence that a random sample of very profitable companies act more ethically than a random sample of unprofitable companies. That’s nice and all, but how do you know that it’s not the other way around? That being profitable (Y) causes the firm to act more ethically (X) because more people are watching? How about having really smart managers (Z) causing both X and Y?
When evaluating a survey that claims X causes Y, ask yourself if there are alternative explanations that the authors did not rule out.
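The confounder scenario above can be sketched in a quick simulation. The variable names here are hypothetical stand-ins for the example: Z (smart managers) drives both X (ethical behavior) and Y (profit), X never influences Y, and yet X and Y come out correlated:

```python
import random

random.seed(0)

# Hypothetical illustration of confounding: Z causes both X and Y.
# X has no causal effect on Y, but the two are still correlated.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # "ethics" depends on Z only
y = [zi + random.gauss(0, 1) for zi in z]  # "profit" depends on Z only

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(f"correlation of X and Y: {corr(x, y):.2f}")  # ~0.5, with zero causation
```

A survey measuring only X and Y would see that correlation and could easily be written up as “X causes Y” — which is exactly the alternative-explanation trap to check for.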
If a paper starts with a set of assumptions and logically (usually with symbols rather than words) or mathematically derives a conclusion, I call it a mathematical model study. The good news is, you don’t have to worry much about the math or logic because the reviewers will be studying that quite closely. What you really have to watch out for are the assumptions, especially hidden assumptions.
Just read over the assumptions and think about them. Do they make sense? My favorite example is rationality. We have enormous evidence that while people may be capable of rationality, they don’t use that capability most of the time. If the paper includes assumptions that don’t hold in your case, there’s no reason to believe the paper’s results will apply to you either.
A meta-analysis summarizes the results of many studies. Usually these are a great place to start when you’re new to a field. Unfortunately, they have one serious danger: they’re only as good as the studies they summarize. In a field with a diversity of good and bad studies, the author of a meta-analysis will usually sort out the mess for you. However, some fields, like economics and computer science, suffer from systematic methodological problems. In other words, if the whole field is screwed up, the meta-analysis probably will be as well. My only advice is, don’t read just a meta-analysis.
Qualitative research comes in many shapes and sizes. Some is presented in a highly structured way; some is written like a story. A qualitative paper describes a study within a particular context, coming to conclusions about that context, not your context. As you read the study, ask yourself how your context differs from the study’s context. After you’ve read the conclusions, ask yourself if any of the differences matter. For example, if the study is about the decision making process of a clothing retailer, and you’re in the office supplies business, the change of product may not matter.
4. Is it theory building or theory testing?
The last thing you have to ask of a paper is, did it test a theory or merely propose one? This is usually obvious, but sometimes theory-building papers masquerade as theory-testing papers. As a general heuristic, put more faith in theory testing, less faith in “exploratory studies,” and very little faith in papers that propose a theory but give no empirical evidence. While this last type is an important step in the scientific process, it’s like an experimental drug: not yet certified for human consumption.
One last thing that confuses many readers (not to mention science journalists) is the difference between “finding no evidence of a relationship” and “finding evidence of no relationship.” Journalists often write things like ‘so-and-so concludes that drug X is not effective.’ This is almost never correct. Very few scientific papers ever conclude that two things are unrelated (e.g., a drug doesn’t work). Experiments and surveys just aren’t set up that way.
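One way to see why studies rarely conclude “no relationship” is statistical power. The numbers below are assumptions chosen for illustration: a drug with a real but small effect (0.2 standard deviations) tested in small trials (20 patients per arm). Most such trials find no significant effect even though the effect exists:

```python
import random

random.seed(1)

def one_trial(n=20, effect=0.2):
    """Simulate one small two-arm trial of a drug with a REAL small effect.

    Returns True if the trial finds a 'significant' difference at the
    5% level (simple z-test, known standard deviation of 1).
    """
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = (2 / n) ** 0.5            # standard error of the difference
    return abs(diff / se) > 1.96   # crosses the usual significance bar?

# Run many trials of the same (genuinely effective) drug.
trials = 10_000
power = sum(one_trial() for _ in range(trials)) / trials
print(f"share of trials detecting the real effect: {power:.0%}")
```

With these numbers, only around one trial in ten detects the effect. The other nine report “no evidence of a relationship” about a drug that actually works — which is why a single null result is not evidence of no relationship.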
I hope this guide helps you make sense of scientific papers.