
Blog
Of Cows and Companies: using a prediction survey to estimate the effectiveness of business support
17 April 2025
Calling all oracles: the prediction survey for our evaluation of public support for businesses in the UK is now open to all. Interested? Participation will take only 10-15 minutes and you will be asked to estimate how effective business support programmes were at increasing turnover and employment.
Click HERE to complete the prediction survey!
Read on to find out more about the history of crowd forecasting, its existing and potential applications, and why your estimates are so important for our analysis.
As the story goes, it all started with a cow. In 1906, at a crowded country fair in Plymouth in south-west England, 800 people gathered to participate in a curious contest: whoever could most accurately guess the weight of an ox would win (the unfortunate ox, having already been slaughtered, would not meet the victor). When he considered the estimates from this contest, statistician Francis Galton noted with surprise that the median guess was extremely accurate, within 1% of the true weight.
Galton’s bovine revelation remains influential to this day, in large part because the “wisdom of crowds” is a general statistical principle that can be applied to a wide range of contexts. When individual estimates are unbiased and their errors are independent, aggregating them causes those errors to partially cancel: the variance of the average shrinks as the sample grows, so the group estimate is far less noisy than any individual one. The larger the sample, the stronger this effect becomes.
The “wisdom of crowds” has been leveraged as a powerful predictive tool in domains ranging from marketing and consumer feedback to political campaigns. Online crowd forecasting platforms gather estimates for many topics, rewarding the most accurate guesses just as they did in Plymouth over a century ago.
In addition to forecasting, collective intelligence has become an established academic discipline that extends the principle of “wisdom of crowds” to other forms of group intelligence and collaboration, recognising the enhanced capacity that emerges when people work together. Many collective intelligence groups have emerged in academic and policy institutions, including the Centre for Collective Intelligence Design at Nesta.
In practice, estimates are typically collected with a prediction survey that asks participants to provide the expected true value (point estimate) and, sometimes, the distribution of this value (e.g. confidence intervals). Prediction surveys are increasingly used in research to strengthen analysis and deepen insight. Researchers benefit from both the predictive power of crowds and their ability to illuminate human psychology, i.e. how people think about the question at hand. Often, inaccuracy is as valuable as accuracy in these contexts, highlighting areas where expectations diverge sharply from reality.
Today we are launching a new survey, asking for predictions about the impact of three support programmes for small and medium-sized businesses in the UK. We are collecting data from you: collaborators, policymakers, researchers, and all those interested in science and experimentation.
The prediction survey is motivated by four main goals: accuracy, analysis, surprisingness, and sentiment.
- We will measure accuracy by comparing the crowd’s predicted outcomes for the business support programmes to the estimates produced by our statistical analysis. The more responses we have, the more accurate the “wisdom of the crowd” will be. We will also find out how wide-ranging the estimates are, i.e. how much consensus there is.
- To deepen our statistical analysis, we will leverage the fact that a crowd’s individual judgments can be modelled as a probability distribution. This distribution will be used as an informative prior in a Bayesian analysis of the outcomes of interest. Having such a prior is advantageous as it allows us to incorporate domain expertise and collective wisdom into our model.
- The findings of the prediction survey will also tell us how surprising our results are, i.e. the amount of “news” in our analysis. As mentioned, inaccuracy is informative: identifying the outcomes for which predictions are least accurate will point us towards the most interesting and informative results.
- Finally, the findings will tell us about general sentiment: Do people expect business programmes to have worked? If so, what are their expectations about the scale and duration? Do they think the method of evaluation will make a difference? Understanding how optimistic or pessimistic respondents are will uncover attitudes towards public support for businesses and towards the traditional approaches used to evaluate these interventions.
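The second goal above, using the crowd’s judgments as an informative prior, can be sketched with a minimal conjugate normal-normal update. The crowd predictions and the analysis estimate below are entirely hypothetical, and the real evaluation may use a richer model; the sketch only shows the mechanics of precision-weighting a crowd-derived prior against the data.

```python
import statistics

# Hypothetical crowd predictions of a programme's effect on turnover (% change)
crowd_predictions = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0, 2.0, 5.0, 1.5, 3.5]

# Summarise the crowd as a normal prior
prior_mean = statistics.mean(crowd_predictions)
prior_var = statistics.variance(crowd_predictions)

# Hypothetical estimate from the statistical analysis
# (point estimate and squared standard error)
data_mean, data_var = 1.2, 0.5 ** 2

# Conjugate normal-normal update: a precision-weighted average of
# the crowd's prior and the data's estimate
post_precision = 1 / prior_var + 1 / data_var
post_mean = (prior_mean / prior_var + data_mean / data_var) / post_precision
post_var = 1 / post_precision

print(f"prior:     {prior_mean:.2f} (var {prior_var:.2f})")
print(f"data:      {data_mean:.2f} (var {data_var:.2f})")
print(f"posterior: {post_mean:.2f} (var {post_var:.2f})")
```

The posterior mean sits between the crowd’s expectation and the data estimate, pulled towards whichever is more precise, and the posterior variance is smaller than either input’s. The gap between the prior and the posterior is also a direct measure of the third goal, surprisingness: the further the data drags the posterior away from the crowd, the more “news” the analysis contains.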
We encourage academics, policymakers and practitioners not currently using prediction surveys to consider incorporating them into their own analyses. With tools such as the Social Science Prediction Platform (SSPP), setting up such a survey is relatively straightforward, and data collection is free. For those only familiar with prediction surveys in their commercial applications, we hope this post prompts you to think about other contexts, such as informative priors in Bayesian analyses or as a tool to identify the most newsworthy results.
We also believe there is untapped potential for prediction surveys to inform policymaking by illuminating attitudes towards both specific interventions and broader policy approaches. Our findings from this prediction survey, for instance, will have direct policy relevance as discussions continue about potential future business support programmes.
The larger the crowd, the wiser! Please consider sparing 10-15 minutes to participate in our prediction survey by clicking HERE, even if you are not familiar with business support programmes.