A 12-Step Program to Tell Good Science from Bad
Sharon Camp is the President and CEO of the Guttmacher Institute.
The very complexity of scientific studies can make them their own worst enemy. Valuable research is too often communicated in technical language and rigid formats that make it difficult for non-experts to interpret and evaluate the findings. Worse, some groups deliberately use outdated, incomplete, misleading and outright false information to further an ideological or religious agenda. This creates an environment in which it is increasingly difficult for the public and legislators to distinguish scientifically sound studies from agenda-driven junk science.
It needn't be that way. Social science research, with its focus on human behaviors, relationships and social institutions, can be a rich source of material for journalists, policymakers and program administrators. Indeed, social science findings have their greatest impact when they are useful to—and used by—groups that channel research into practice to improve people's lives.
The questions below (drawn from Guttmacher's "Interpreting Research Studies") are intended to help demystify social science research for those who could make use of the findings but lack specialized training in research methods. They identify the key questions to ask when evaluating a research report and explain why the answers matter.
- What makes the study important?
A study's importance or newsworthiness depends on how it contributes to what we already know. Does the study answer a previously unaddressed question? Does it address an old question in a new way or with surprising results? Reading through the abstract or executive summary with these questions in mind can help you evaluate the study's relevance even before you review the full publication.
- Do the findings make sense?
Do the study's key "findings" or "results" make sense, given what you already know about the subject? And are they rooted in the existing body of research? A scientific report should be properly referenced, with original sources for all factual statements and data from other research clearly cited.
- Who conducted the research and wrote the report?
It is important to consider whether the study results could be influenced by a researcher's conflict of interest. Are the authors well regarded in the scientific community? What are their professional credentials? Could their work have been influenced by those who employed or funded them? Any potential conflict of interest should be identified up front. That said, good researchers committed to a political or social agenda can still conduct unbiased, trustworthy studies that can withstand independent evaluation, provided they follow practices designed to protect the quality and integrity of research.
- Who published the report?
An article published in a peer-reviewed journal has been evaluated by experts in the field to help ensure that it meets high scientific standards. The prestige of the journal is one indication of a study's quality. While studies from sources other than journals (including reports that research institutions publish themselves) may also contain solid, useful information, if an external review process is not mentioned, you should be more cautious about accepting the study's conclusions.
- Did the researcher select an appropriate group for study?
A social scientist's work is about people. In practical terms, a study often focuses on a subset, or sample, of the larger population. This sample must be selected carefully to ensure that the study results are applicable to the relevant general population. Using a representative sample is the best way to ensure that findings can be generalized to all members of the target population. Other common approaches are acceptable and—with appropriate statistical adjustments for weighting—can produce valid and representative results. Sometimes, however, a researcher may have good reasons to select the target population in a different way. When they do, they should explain their reasons, and you should consider the extent to which their findings are applicable to other groups.
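To see what a weighting adjustment looks like in practice, here is a minimal sketch in Python using entirely invented numbers: a hypothetical survey that oversampled 18–24-year-olds, whose responses are then re-weighted to match that group's actual share of the population.

```python
# Illustrative only: a hypothetical survey that oversampled 18-24-year-olds.
# Each tuple holds (group, share of the sample, share of the population,
# mean response in that group) -- all numbers are invented.
groups = [
    ("18-24", 0.40, 0.20, 0.62),   # oversampled relative to the population
    ("25-44", 0.60, 0.80, 0.48),
]

# Unweighted estimate: lets the oversampled group count for too much.
unweighted = sum(sample_share * mean for _, sample_share, _, mean in groups)

# Weighted estimate: each group counts in proportion to its population share.
weighted = sum(pop_share * mean for _, _, pop_share, mean in groups)

print(f"Unweighted estimate: {unweighted:.2f}")  # 0.54 -- skewed toward the oversampled group
print(f"Weighted estimate:   {weighted:.2f}")    # 0.51 -- adjusted to the population's makeup
```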
- If comparison groups are used, how similar are they?
If a study compares two or more groups (to evaluate the effects of an intervention, for example), the results will be valid only if the groups are similar in all ways other than their exposure to the intervention being studied. Any preexisting differences between the groups could account for different outcomes. In the best study designs, participants are randomly assigned to the study groups. But when differences do exist between the groups, researchers can use statistical techniques to control for those differences. Experience and common sense can help determine whether the differences between the groups are important for the study.
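To make random assignment concrete, here is a small illustrative simulation in Python; the 200 participants and their ages are invented. Because each person lands in the program or control group purely by chance, a preexisting characteristic such as age ends up roughly balanced between the two groups.

```python
import random

# Illustrative only: 200 invented participants randomly split into a program
# group and a control group. Random assignment tends to balance preexisting
# characteristics (here, age) across the two groups.
random.seed(1)
participants = [{"id": i, "age": random.randint(15, 49)} for i in range(200)]

random.shuffle(participants)
program, control = participants[:100], participants[100:]

def mean_age(group):
    return sum(person["age"] for person in group) / len(group)

print(f"Program group mean age: {mean_age(program):.1f}")
print(f"Control group mean age: {mean_age(control):.1f}")  # close, by chance alone
```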
- What has changed since the information was collected?
Ideally, the data used in a study will have been collected recently so that the information reflects the current situation. However, because national-level surveys can be quite expensive and time consuming, data may not become public for several years and special analyses may require additional time. For example, data from the large National Survey of Family Growth, which was conducted in 2002, became public only in late 2004, and analyses are still ongoing. It is important to consider how any changes that have occurred in the intervening period, such as new policies, could affect the outcomes today.
- Are the methods appropriate to the research purpose?
Social science studies can rely on either qualitative or quantitative methods or a combination of the two. As a rule, quantitative techniques (collecting and analyzing measurements such as whether a person is currently using a contraceptive method) are best for answering questions such as "How much?" "How many?" "How often?" or "When?" Quantitative studies can also indicate important relationships, such as whether poor women are more likely than better-off women to have more children than they want. Qualitative research (recording and analyzing interactions with people through techniques such as in-depth interviews or focus groups) may be more useful in obtaining a better understanding of complex contextual, attitudinal or behavioral issues or documenting a process.
- Does the study establish causation?
Often, the goal of a study is to determine the effect of something: for example, a program, medication or policy. However, it is usually difficult to isolate the effects of one discrete factor from all the other things going on in people's lives. Even if the study shows that a particular outcome occurred after a program got under way, it can be difficult to prove that this intervention caused the outcome. In general, studies can prove only that an outcome is "associated with" or "correlated with" (rather than "caused by") an intervention. Be alert to researchers who make claims about cause and effect that seem dubious or who ignore other possible explanations for their findings.
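One way to see how an association can appear without any causal link is to simulate it. The sketch below, in Python with entirely invented quantities, builds a hidden third factor ("resources") that drives both program attendance and the outcome; attenders end up with better outcomes even though attendance, by construction, has no effect at all.

```python
import random

# Illustrative only: a hidden factor drives both attendance and the outcome,
# so the two are correlated even though attendance causes nothing here.
random.seed(1)
records = []
for _ in range(5000):
    resources = random.random()                  # hidden confounder, 0 to 1
    attends = random.random() < resources        # more resources -> more likely to attend
    good_outcome = random.random() < resources   # more resources -> better outcome
    records.append((attends, good_outcome))

attenders = [outcome for attended, outcome in records if attended]
others = [outcome for attended, outcome in records if not attended]

print(f"Good outcomes among attenders:     {sum(attenders) / len(attenders):.0%}")
print(f"Good outcomes among non-attenders: {sum(others) / len(others):.0%}")
# Attenders look much better off, yet the program did nothing; the hidden
# factor explains both patterns.
```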
- Is the time frame long enough to identify an impact?
Studies can either follow their subjects over time, checking in with them at various intervals (a longitudinal study), or take a "snapshot" of subjects at a single moment in time (a cross-sectional study). A cross-sectional study is good for comparing groups, while a series of cross-sectional studies conducted within the same general population (but selecting a different group of people each time) can also provide information on trends over time, as long as the groups sampled are truly comparable. Because a longitudinal study follows the same group of individuals over time, it can be better for examining the effects of a particular intervention, as long as it allows enough time for adequate follow-up and is able to retain a sufficient number of participants.
- Could the data be biased as a result of poor research design?
The wording and order of questions in a poll or survey can affect the answers participants provide. In addition, a low response rate (say, fewer than 70% of those selected) suggests that the results may be biased because the people who participated may not be representative of the target group as a whole. Studies of sexual and reproductive behavior face another hurdle: participants do not always answer sensitive questions truthfully. For example, adolescent boys tend to overreport sexual activity, while adolescent girls tend to underreport it.
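The response-rate check itself is simple arithmetic: the share of selected people who actually participated. A quick worked example with invented counts:

```python
# Illustrative only: invented survey counts, compared against the rough 70%
# response-rate benchmark mentioned above.
selected = 1000      # people invited to take the survey
responded = 620      # people who completed it

response_rate = responded / selected
print(f"Response rate: {response_rate:.0%}")                    # 62%
print("Falls below the 70% benchmark:", response_rate < 0.70)   # True -> read with caution
```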
- Are the results statistically significant?
When a quantitative study uses a sample, it is important to determine mathematically that there is little probability the result could have occurred by chance—that is, that a different sample could have produced other results. In the social sciences, a study finding is generally considered statistically significant if there is no more than a 5% probability that it could have occurred by chance (often expressed as a "p-value" of 0.05 or less). Statistical significance alone is not enough to prove cause and effect, but it lends credibility to an argument.
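For readers curious about where a p-value comes from, here is an illustrative calculation in Python using made-up numbers: a two-proportion z-test comparing an outcome reported by two groups of 400 survey respondents each. The counts and the choice of test are assumptions made for the sake of the example.

```python
from math import sqrt, erf

# Illustrative only: a two-proportion z-test on invented counts, checked
# against the 5% (p <= 0.05) threshold described above.
n1, x1 = 400, 120    # group 1: 400 respondents, 120 report the outcome (30%)
n2, x2 = 400, 88     # group 2: 400 respondents, 88 report the outcome (22%)

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
standard_error = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / standard_error

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"p-value: {p_value:.3f}")                        # about 0.010
print("Significant at the 5% level:", p_value <= 0.05)  # True
```

A p-value this small would clear the conventional 5% bar, but as noted above, statistical significance on its own does not establish cause and effect.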
The answers to these 12 questions should help you evaluate and interpret reports of research findings. Of course, a study may be flawlessly designed, conducted without bias, appropriately analyzed and statistically significant, yet convey nothing important to you. But if the findings are something that you care about, and you believe that the research is sound, you are in a position to play a critical role in social science research—interpreting the findings and transmitting them to the wider world to have a greater impact.
"Interpreting Research Studies," on which this blog post is based, was written by Jennifer Nadeau and Sharon Camp and shaped by the valuable input of many Guttmacher colleagues and partners.