In our Study of the Day feature series, we highlight a research publication related to a John Templeton Foundation-supported project, connecting the fascinating and unique research we fund to important conversations happening around the world.
The eccentric Victorian polymath Francis Galton gave us many things—he developed the statistical concepts of correlation and regression towards the mean, popularized the phrase “nature versus nurture,” was (unfortunately) the originator and tireless promoter of eugenics, and proposed a novel method for cutting a round cake on scientific principles. One of his other great contributions was also culinary-adjacent: in 1906, a contest was held at the annual West of England Fat Stock and Poultry Exhibition, in which participants guessed how much an ox would weigh after it was slaughtered and dressed. Galton got ahold of the entries—around 800 guesses in all—and, discovering that the average of the estimates was a near-exact match for the correct result, wrote it up for Nature.
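Stated as arithmetic, Galton's finding is that the average of many independent, noisy guesses can land far closer to the truth than most of the guesses themselves. Here is a minimal sketch of that effect; the true weight, the number of guessers, and the size of each guesser's error are assumptions chosen for illustration, not Galton's actual data.

```python
# A toy illustration of the wisdom-of-crowds effect, not Galton's actual data.
import random

random.seed(42)

TRUE_WEIGHT = 1200   # hypothetical dressed weight of the ox, in pounds
NUM_GUESSERS = 800   # roughly the size of Galton's crowd

# Each guesser is individually noisy: their estimate is the true weight
# plus or minus up to about 15 percent of random error.
guesses = [TRUE_WEIGHT * random.uniform(0.85, 1.15) for _ in range(NUM_GUESSERS)]

crowd_average = sum(guesses) / len(guesses)
print(f"Crowd average: {crowd_average:.0f} lb (true value: {TRUE_WEIGHT} lb)")
# Individual guesses can be off by well over 100 pounds, but the average
# typically lands within a few pounds of the true value.
```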
He called the phenomenon the vox populi, but later adopters named it “the wisdom of crowds” (itself a reversal of “the Madness of Crowds,” from the title of Charles Mackay’s 1841 study of mob mentality). Studies over the past century have demonstrated the wisdom-of-crowds effect for problems ranging from quantitative estimates like the ox’s weight, to voting outcomes, to arbitrary social conventions like agreeing on what words should mean.
In a recent set of experiments published in the journal Cognition, Jan Pfänder, Benoît de Courson, and Hugo Mercier of the École Normale Supérieure in Paris investigated their own reversal of the wisdom-of-crowds concept: if all we know is whether a person’s answer to one numerical-estimate or multiple-choice question was similar to those of other respondents, can we predict whether that person will be competent at answering other questions on the same topic?
The scenario might seem cut adrift from reality: we know nothing about the respondents except how common their answers were, and we don’t even know whether those common answers were correct. But the scenario models the kind of imperfectly informed judgments people have to make every day when guessing whether claims, and the people who make them, are credible.
To investigate whether such inferences are rational, Pfänder et al. created synthetic data from sets of around 990,000 agents with different competence distributions in order to model the correlations between common answers, accuracy, and competence. They then ran human studies to look at how people evaluate answer-givers when their only information is whether an answer was common.
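To give a rough sense of how such a simulation can work (this is a much-simplified sketch under my own assumptions, not the authors’ actual model or parameters), one can draw agents with randomly assigned competence, have them answer multiple-choice questions, and check whether giving the correct, and therefore most common, answer on one question predicts accuracy on the next.

```python
# A much-simplified sketch of an agent simulation of this kind. The agent
# count, competence distribution, and question format are assumptions for
# illustration, not the parameters used by Pfänder et al.
import random

random.seed(0)

NUM_AGENTS = 10_000   # far fewer than the paper's ~990,000, for speed
NUM_OPTIONS = 4       # four-option multiple-choice questions

def answers_correctly(competence: float) -> bool:
    """Simulate one answer: with probability `competence` the agent knows
    the answer; otherwise it guesses uniformly among the options."""
    if random.random() < competence:
        return True
    return random.random() < 1 / NUM_OPTIONS

# Each agent's competence is drawn from a uniform distribution (an assumption).
competences = [random.random() for _ in range(NUM_AGENTS)]

# Question 1: since competent agents mostly pick the correct option, the
# correct answer is also the most common one, so "answered correctly" serves
# as a proxy for "gave the convergent answer."
q1 = [answers_correctly(c) for c in competences]

# Question 2: measure accuracy again, split by what happened on question 1.
q2 = [answers_correctly(c) for c in competences]

def q2_accuracy(convergent_on_q1: bool) -> float:
    hits = [b for a, b in zip(q1, q2) if a == convergent_on_q1]
    return sum(hits) / len(hits)

print(f"Q2 accuracy for agents convergent on Q1:     {q2_accuracy(True):.2f}")
print(f"Q2 accuracy for agents non-convergent on Q1: {q2_accuracy(False):.2f}")
# Agents who gave the common answer once are, on average, more competent,
# so they are also more accurate on the next question.
```

Even in this stripped-down version, conditioning on a convergent first answer raises expected accuracy on the second, which is the basic inference the researchers then tested people on.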
The results of the two-pronged approach showed that we do tend to assume that common answers are likely accurate and that the people who give them are likely competent on the question topic. In both the simulations and the human studies, convergent answers strongly predicted accuracy (the standard wisdom-of-crowds finding) and the competence we assign to complete strangers.
What happens, though, when we have a little more information about the person giving the answer? Maybe they have a conflict of interest, or simply had a chance to discuss possible responses with others before making their decision. When Pfänder and colleagues asked people to gauge accuracy and competence in light of these factors, the results were mixed: when people knew about a conflict of interest, they were less likely to ascribe competence. The researchers had predicted a similar effect when people knew that the respondents had been given a chance to discuss the question (and possibly collude) before answering. Instead, they found that when people knew the respondents weren’t answering independently, they actually slightly increased their estimates of the respondents’ competence.
The findings fit into the larger study of epistemic vigilance—the set of abilities that help us quickly assess new information and the people who provide it. Such vigilance gives us a tool for evaluating people’s expertise even on subjects we know nothing about. The authors suggest this may be what’s behind the broad societal respect accorded to scientists—it comes not because most people are able to directly assess the science itself, but because they take the existence of scientific consensus as an indicator of competence.
A few weeks after his “Vox Populi” piece was published, Galton wrote to the editors of Nature: “I regret to be unable to learn the proportion of the competitors who were farmers, butchers, or non-experts. It would be well in future competitions to have a line on the cards for ‘occupation.’” Such additional information might have satisfied Galton’s curiosity, but even the raw data would have provided enough to guess at who the experts were, well before the ox had met its demise.
Still Curious?
- Read “How Wise Is the Crowd: Can We Infer People Are Accurate and Competent Merely Because They Agree With Each Other?” in the journal Cognition
Nate Barksdale writes about the intersection of science, history, philosophy, faith, and popular culture. He was editor of the magazine re:generation quarterly and is a frequent contributor to History.com.