
Warning: Results from research might be less accurate than they seem.
by Kate Everson
September 10, 2015
Don’t believe everything you read, right? That’s what The New York Times learned last week when it covered a group of researchers re-evaluating some of the most-cited psychological studies, some of which the newspaper had covered when they were originally released.
The Reproducibility Project at the Center for Open Science in Charlottesville, Virginia, has spent a year testing 100 psychological studies published in three of the field’s top journals — Psychological Science, the Journal of Personality and Social Psychology and the Journal of Experimental Psychology: Learning, Memory and Cognition.
Get ready to cringe: More than 60 of the studies have not held up when reproduced. These are studies that have shaped decisions made by therapists, researchers and educators — including learning leaders.
I talked to Frank Bosco, a member of the team and professor of management at Virginia Commonwealth University School of Business, who gave multiple reasons for the lack of replication. Sometimes it has to do with the publication process.
Researchers are often professors who, to earn tenure, need to publish in top journals. But those journals tend to favor studies with positive results — for example, a finding that temperature affects whether someone prefers Coke over Pepsi — over null results showing there’s no correlation between being sweaty and wanting one brown soda over the other. This so-called publication bias leaves many null findings out of the public eye and produces an unrepresentative body of published evidence.
Bosco said some journals are taking steps to counter this bias by reviewing studies in two stages: first they evaluate a study’s design and merit without seeing the results, and only then do they review a version with the results included. However, this does little for the body of research already published.
He also said that most of the studies the Reproducibility Project examined were experimental. In contrast, most research in human resources is correlational. The difference matters because research in the correlational tradition has a strong track record of reporting findings efficiently, allowing others to locate often dozens of existing studies on the same topic, a built-in feature that provides a rough “smudge measure” of reproducibility. In experimental research, however, the key finding in a given study is often unique to that particular study, and replications are frequently unavailable.
He gave six tips for learning leaders who want to apply organizational research to their work:
1. Never trust a single study. Learn to look at meta-analyses, or research summaries, that compile multiple sets of data and contextualize them. (A small worked example of how a meta-analysis pools results appears after this list.)
2. Be reluctant to accept the “newest” or “hottest” studies. Many consultants who conduct studies can’t make money using public-domain questions that measure factors like engagement, retention or turnover. Instead, they create their own surveys, which might not be reliable, and they give their measures new, catchy names to increase marketability. With that marketability often comes distance from a rigorous scientific foundation and a needless increase in the complexity of the terminology landscape.
3. Conduct your own studies. If your organization offers a large enough sample — 1,000 employees or more — use it to your advantage: leverage your existing survey, performance and turnover records. Conduct analyses to answer targeted questions such as “Which factors are driving performance?” and “Which factors are driving turnover?” (A sketch of what such an analysis can look like appears after this list.)
“Everyone talks about Google,” Bosco said. “They hire social scientists who know how to run things, and they conduct their research in-house.”
4. Network with HR professors. Consider it a mutually beneficial relationship. Professors want to conduct research using non-student subjects, and learning leaders can help them access their organization’s employees. Then CLOs can use the results to their advantage.
5. Know your research publications. You don’t have to read every journal out there, but look for those specifically designed to translate scientific findings into practices.
6. Leverage research tools. Look for those that locate measurements and existing findings, such as the Inter-Nomological Network and metaBUS.org.
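To make the first tip concrete, here is a minimal sketch of the arithmetic behind a simple fixed-effect meta-analysis: each study’s correlation is converted to a Fisher z score and weighted by its sample size before being averaged. The study values below are made-up placeholders, not results from any real paper.

```python
import math

# Hypothetical (r, n) pairs: correlation and sample size from three studies
# of the same relationship. These numbers are placeholders, not real findings.
studies = [(0.30, 120), (0.12, 450), (0.25, 80)]

# Fisher z-transform each correlation; its sampling variance is roughly
# 1 / (n - 3), so each study gets an inverse-variance weight of n - 3.
weighted_z = sum((n - 3) * math.atanh(r) for r, n in studies)
total_weight = sum(n - 3 for _, n in studies)

pooled_r = math.tanh(weighted_z / total_weight)
print(f"Pooled correlation across {len(studies)} studies: {pooled_r:.2f}")
```

The point is not the specific formula but the habit it encodes: larger, better-powered studies count for more, and no single result dominates the conclusion.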
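For the third tip, this is one possible sketch of what an in-house “what drives turnover?” analysis might look like, using an ordinary logistic regression. The file name and column names (engagement_score, manager_rating, tenure_years, left_company) are hypothetical placeholders for whatever your own HR records actually contain.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical export of HR records; replace with your organization's data.
records = pd.read_csv("employee_records.csv")

predictors = ["engagement_score", "manager_rating", "tenure_years"]
X = StandardScaler().fit_transform(records[predictors])
y = records["left_company"]  # 1 = left within the year, 0 = stayed

model = LogisticRegression().fit(X, y)

# Because predictors are standardized, the relative size of each coefficient
# gives a rough, first-pass sense of which factor is most strongly associated
# with turnover in this sample.
for name, coef in zip(predictors, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```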
Bosco said the Reproducibility Project, like any solid experiment, didn’t aim for any particular outcome regarding the validity of these 100 studies. The goal was to provide an honest, open assessment. The team has posted all of the collected data on its website so that others can critique and verify the results.
“Open science in general — the goal is to show the power of it,” he said. “Instead of hiding behind the data, which is sort of the standard right now, there’s a power that comes from providing the data to the community.”
Editor's note: Changes were made to clarify previously used information from the source.