The Wisdom Of The Crowdsourced

In 2013, IESE's Raphael Silberzahn and INSEAD's Eric L. Uhlmann published an academic article that garnered interest from media around the world. Their research found that people in Germany with "noble" surnames, such as kingly Kaiser or princely Fürst, were more likely to hold managerial positions than people with less exalted names.

So, should ambitious people with humble-sounding last names rush to change those names for a career boost?

Not so fast. Along with public interest in the research came critical views. And Silberzahn and Uhlmann greeted critics with open arms, in the interest of science.

Most notably, the authors shared their study's dataset with Uri Simonsohn, a professor at the University of Pennsylvania. Simonsohn then approached the same research question with a larger dataset and a different analytic strategy, which pointed to an alternative explanation for the previously found name-profession connection. The three academics compared their methods, and Silberzahn and Uhlmann acknowledged that, in hindsight, their colleague's approach was superior.

In an unusual move, the three professors then published a joint commentary explaining lessons learned for management science. Silberzahn and Uhlmann have used the experience to propel their research to the next level: They set their sights on crowdsourcing.

A World of Devil’s Advocates

Crowdsourcing in research refers to recruiting researchers from across academia to analyze the same data independently and then reach a collective conclusion.

In traditional research, three main roles are all fulfilled by one research team:

  • Inventors who create ideas and hypotheses
  • Optimistic analysts who scrutinize the data in search of confirmation
  • Devil’s advocates who try different approaches to reveal flaws in the findings

This means that the same people are often undertaking all three roles at once, raising the risk of oversights, unconscious bias, or simple unawareness of more suitable alternative approaches.

Silberzahn and Uhlmann’s open project suggests that instead of trying to do everything within one team, other teams should take on the role of devil’s advocate.

To test this, they recruited 29 research teams to tackle a single research question: Are soccer referees more likely to give red cards to players with dark skin than to players with light skin? All teams worked with the same extensive dataset, which had been collected by a sports-statistics firm across four major soccer leagues.

Each team independently designed its own analytical approach, deciding which statistical techniques to apply and which variables to include.
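To make this concrete, here is a minimal sketch, in Python, of what one team's specification might have looked like. The file name, the column names (red_cards, games, skin_tone) and the choice of a Poisson model are all illustrative assumptions, not the teams' actual code:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Illustrative schema (an assumption): each row is one player-referee
    # pairing, with the games they shared, the red cards shown, and an
    # average skin-tone rating between 0 (light) and 1 (dark).
    df = pd.read_csv("player_referee_dyads.csv")

    # One plausible specification: treat red cards as a Poisson count,
    # with games played together as the exposure term.
    model = smf.poisson(
        "red_cards ~ skin_tone",
        data=df,
        exposure=df["games"],
    ).fit()

    # A positive, significant skin_tone coefficient would suggest that
    # darker-skinned players receive more red cards per game.
    print(model.summary())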

What happened next, though, was a game changer. The groups shared their methodologies and gave one another feedback before any results were revealed. Each group then had the chance to amend its framework, and together they published a joint report documenting their findings.

Red Card Racism? Sourcing the Crowd for Answers

With the same dataset and the same question, how much could the conclusions of the study be expected to vary?

Substantially, as it turned out. Twenty of the 29 studies found a significant correlation between dark skin and the likelihood of receiving a red card — meaning that nine research teams did not. Some of these researchers found a statistically insignificant tendency to penalize light-skinned players more frequently, while others found no bias at all.
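Much of that divergence comes down to defensible modeling choices, such as which confounders to adjust for. Continuing the hypothetical sketch above (the position column, like the rest of the schema, is assumed), two reasonable specifications can return different coefficients, and even different significance verdicts, on the very same data:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("player_referee_dyads.csv")  # assumed schema, as above

    # Specification A: skin tone alone.
    spec_a = smf.poisson("red_cards ~ skin_tone",
                         data=df, exposure=df["games"]).fit()

    # Specification B: also adjust for playing position, a defensible
    # choice (defenders commit more card-worthy fouls than goalkeepers)
    # that can shrink or inflate the skin-tone coefficient.
    spec_b = smf.poisson("red_cards ~ skin_tone + C(position)",
                         data=df, exposure=df["games"]).fit()

    for label, fit in (("A", spec_a), ("B", spec_b)):
        print(label, fit.params["skin_tone"], fit.pvalues["skin_tone"])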

At this point, the true value of crowdsourcing became evident. The teams were able to examine one another's analyses, suggest improvements and discuss what the numbers truly meant.

In their final analyses, the research teams concluded that dark-skinned players were statistically more likely to be carded than light-skinned players. Some might not be surprised that a correlation between referee decisions and player skin tone was found. Yet the true novelty of this investigation stems from its approach, which allowed all researchers to benefit from one another's expertise and hone their analytical skills.

A further conclusion: we should be wary of trusting any single analysis of an issue. Yet, the authors explain, this is precisely what the current system of scientific publishing encourages.

Not All Plain Sailing: The Pros and Cons of Crowdsourcing

Naturally, there are both benefits and drawbacks to crowdsourcing. The benefits include:

  • A second opinion. Researchers are in a safe space to vet analytical approaches and explore doubts.
  • Less incentive for flashy results. A single-team project may only be published if it shows significant or surprising results; crowdsourced projects reveal a range of possibilities and may be more credible.
  • Mutual learning. A chance to tweak and improve methodologies.
  • Rigor. A coordinated effort incentivizes multiple perspectives.
  • Vital for policy issues. Studies of issues with drastic consequences for policies and populations, such as austerity measures, merit multi-team scrutiny.
  • Dissent. Crowdsourcing can chip away at the current system, which prioritizes strong storylines and entrenched ideas.
  • Cost-effective. The authors of this study had no budget, but were able to attract scores of fellow scientists with two tweets and a Facebook post.

Despite the many benefits, there are practical considerations that hold crowdsourcing back:

  • Resources. Crowdsourcing can commandeer significant resources in the study of just one research question.
  • Some questions don’t benefit. Simple datasets, in particular, likely won’t benefit as much from multi-team study as those that are larger and more complex.
  • Doesn’t eliminate bias. Ultimately, any choice reflects bias, whether it’s the choice of which information to include or which hypotheses to test. Crowdsourcing may reduce bias, but it cannot eliminate it.
  • Possibility of no consensus. In research, you can’t just select the median results. Different teams will reach different conclusions, and in some cases they may choose not to let others’ findings influence their final decisions.

There are imperfections to the crowdsourcing model, as with any model. Yet perhaps the most compelling argument in its favor concerns policy: when decisions are made that affect a large number of us, surely it's better to have a large number of researchers studying the implications beforehand?

As the world continues to become more interconnected, it's likely that the power of crowdsourcing will be harnessed still further in the future.

