Making the scientific method better
Lesson summary
Hi there everyone, I’m Jeff and you are listening to Plain English, where JR and I help you upgrade your English with stories about current events and trending topics. Today’s story is a continuation of last Thursday’s story. Today, we’ll be talking about the ways bias can creep into scientific studies…and how new techniques can correct that bias.
In the second half of the episode, I’ll show you how to use the phrasal verb “give in.” This is a really, really good one, so I’m looking forward to sharing that with you. This is lesson 650 of Plain English, so JR has uploaded the full lesson to PlainEnglish.com/650.
This is a long one, so let’s get right into it.
Correcting the bias in science
Did you know that striking a more assertive pose can make you more successful in a negotiation? Or that women are more likely to wear red when they’re ovulating?
Both of those conclusions grabbed headlines when they were first published in prestigious academic journals. But both have been questioned by subsequent research. And they’re just two examples of many, many behavioral science studies that subsequent researchers have been unable to recreate.
This is a polite way of saying: the effects might not be real. Or, they might not be as real as they seemed. That doesn’t mean the original studies were flawed. But if you can’t reproduce the finding in future experiments, then the original finding might just be a fluke.
The problem is, there have been a lot of flukes in behavioral science lately. In Lesson 649, we talked about fraud and questionable research practices in academic studies. You learned that there is overwhelming pressure for academics to publish studies that have positive findings. And some people give in and either falsify data or engage in questionable research practices, all to get the career rewards of publishing a paper in a prestigious journal. Publish, the saying goes, or perish.
And at the end of the lesson, I said there are new ideas that will help correct the biases and temptations in scientific research. Let’s start by identifying a few problems and then we’ll talk about new ideas to correct them.
One major problem is called “publication bias.” Academic journals publish positive findings. They publish articles about hypotheses that have been accepted—about effects that have been found. They don’t publish a lot of articles about hypotheses that have been rejected, about guesses that proved incorrect.
You can see how this would be a problem. Imagine two researchers doing two different projects. They’re both well-designed studies. They both test scientifically valid hypotheses. One study finds a positive effect: the hypothesis is accepted. The other study does not find a positive effect: the hypothesis is rejected. They’re both equally valid scientific endeavors, but only one gets published.
The temptation, then, is for a researcher to rerun analyses at random or tweak the data until they stumble on a positive effect. This is scientifically questionable. But the incentives reward positive findings, so it happens.
Not only does this lead scientists into questionable ethical practices, but it’s also a disservice to science and scientists overall. The British Journal of Psychology is more direct. It says: “The non-reporting of negative studies results in a scientific record that is incomplete, one-sided and misleading.” In other words, by never publishing negative results, academic journals deprive the world of valuable scientific knowledge.
Here’s another problem. Researchers are humans and humans have opinions. They often have beliefs and opinions before they start an experiment. And as researchers do their analyses, they can make choices that reflect the biases they held at the beginning of the study.
Here’s what I mean by that. A researcher might test a hypothesis with four experiments. Three might not support the hypothesis. But the researcher would be tempted to throw those three out and write only about the one that did support the hypothesis. The resulting paper would be incomplete or misleading because it didn’t disclose the experiments that didn’t work the way the researcher thought or wanted them to.
We’re all human and we all have biases. So how can this type of bias be corrected?
Simine Vazire has some thoughts on this. She’s the new editor of “Psychological Science,” one of the prestigious journals in psychology. And she has promised to use her influential position to adjust the incentives in her field. She’s using behavioral science techniques on…the study of behavioral science.
As the editor of the journal, she says she wants to hear from research teams at the beginning, not the end, of the research process. Researchers should share their methodology, data collection plans, and hypotheses in advance—before the experiment is run and before the paper is written. This reduces the chance that researchers would cherry-pick the results.
Vazire also wants the data and the computer scripts to be saved in a standardized format so that future researchers can easily confirm the results published in her journal.
There’s an even bolder idea: a journal can accept a paper for publication before it has been written. This is called a “registered report.” And what this means is that a journal like “Psychological Science” will agree to publish the findings of a project before the experiment has been performed, whether the results are positive or negative.
This can have several benefits. First, it reduces the incentive for researchers to hunt for any type of positive result. They’ll still get the career rewards of publishing a paper, even if the original hypothesis is not supported. Here’s the way Chris Chambers, a neuroscience journal editor, put it. He said, “Because the study is accepted in advance, the incentives for authors change from producing the most beautiful story to [producing] the most accurate one.”
Another benefit to this format is that it allows the journal and its expert reviewers to spend more time working with the academics on a strong research design and data collection strategy. They can give guidance and advice before the study is performed, rather than just criticism after it has been completed.
And finally, this is a service to the scientific community. Negative results should be published. That way, other researchers won’t waste their time testing a hypothesis that has already been rejected. Instead, they could spend their time collecting better data or designing a stronger methodology.
Not every academic will like this new, more transparent approach. Many will be offended by the idea that they’re subject to human biases. Others will want to keep their data private. So far, only about 300 journals accept the registered report format—that’s out of a total of over 40,000.
But the industry is changing. Three hundred journals may not sound like many, but the practice is gaining momentum.
Jeff’s take
I think this is really fascinating. I’m not in the academic world, so a lot of this was completely new to me. But it makes a lot of sense that journals should publish studies that have negative findings—I mean, just out of common courtesy to other people in the field who might develop the same hypothesis in the future!
Another point is that researchers don’t want to waste their time writing a whole long report about a study that didn’t work out. So maybe the journals will develop a shorter reporting standard for negative findings.
Great stories make learning English fun