
Blog
Seeding success? Evaluating the effectiveness of public support for high-potential businesses
12 March 2025
What do you picture when you hear the word “business”? A building made of steel and glass, with hundreds of employees passing through the revolving doors? A colossal factory, with thousands of products moving along conveyor belts? These images may readily spring to mind, but most businesses are not large corporations – they are small or medium-sized, with few (or no) employees and modest profits. Small individually, but mighty together: SMEs are the backbone of the UK economy, accounting for 60% of employment and 53% of private sector sales. SMEs also play a vital role in their local communities, driving innovation, economic growth, and the creation of opportunities.
SMEs are essential – but starting, sustaining, and scaling a business is difficult. Smaller companies face challenges such as limited access to finance, resource constraints and knowledge gaps, which often impede innovation and the adoption of new technologies. Given the economic importance of SMEs, and the unique challenges they face, governments have a strong incentive to support these businesses. But although a range of programmes have supported businesses, little is known about their impacts over the long term.
This blog post discusses a current IGL research project about the effectiveness of past government support for SMEs, particularly programmes targeting firms with high-growth potential. Using experimental and quasi-experimental statistical approaches, we will evaluate the long-term impacts of support, the distribution of these impacts over time and region, the accuracy of different measurement approaches, and the ability of programmes to identify and reach their intended beneficiaries.
_________________________________________________________________________
Starting in the mid-2000s, research about SMEs highlighted the idea that hidden among the masses of smaller companies were prodigies: a small number of high-growth businesses responsible for a substantial proportion of job creation and prosperity. Policymakers became keenly interested in high-growth firms, and in whether it was possible to identify, and provide targeted support for, businesses that could rise to these heights. Against this backdrop, large-scale business support programmes were delivered with a focus on high-growth firms, as demonstrated by the closure of Business Link and the launch of GrowthAccelerator.
Since then, academic understanding of the drivers and inhibitors of business growth has advanced, but researchers acknowledge that substantial knowledge gaps remain. Early research identified common “myths” about SMEs – such as the beliefs that high-growth firms are typically young, technology-driven and located in London, and that high growth is a permanent state – and this evidence was factored into policy development. However, the focus on high-growth companies has since been challenged, with doubt cast on the assumed overlap between firms that achieve high growth and those that drive productivity, on the connection to innovation, on whether it is possible to identify firms with potential in advance, and on the lack of robust evidence that any support has actually caused an improvement in outcomes. This research raises questions about the impacts of business support programmes: their ability to reach target firms, the relationship between the support provided and business outcomes, and the costs and benefits to the public. More research on the effectiveness of business support programmes is needed.
What does a business support programme involve, and how do we know if it has achieved its goals? Many types of support are available, with a common focus on business development – for instance one-to-one business coaching or mentoring – and funding for specific projects. But despite the attention and investment that goes into these business support programmes, we know relatively little about their outcomes, because very few programmes have been rigorously evaluated. When they are assessed, the methods used are often not credible: of 690 impact evaluations of business support policies across the OECD, reviewed by the What Works Centre for Local Growth, only 23 were found to have a credible approach to assessing the impacts on employment or turnover.
We are left with more questions than answers: Do business support programmes work? If so, which ones? When can we observe the effects of support, and how long are they sustained? Which types of businesses benefit most? And were the programmes targeted at high-potential businesses able to identify them successfully?
IGL is being funded by UKRI to explore these key questions by evaluating the long-term impacts of business support programmes in the UK. Specifically, we will evaluate three programmes: GrowthAccelerator, the Growth Impact Pilot, and an Innovation Vouchers Programme. GrowthAccelerator was a large-scale programme run between 2012 and 2016 by what is now the Department for Business and Trade which supported over 20,000 businesses. Companies that signed up to the programme were assessed and, if deemed eligible, were provided with access to leadership and management training and one-to-one coaching (among other forms of support which we do not consider in our current analysis). The Growth Impact Pilot was an RCT run within GrowthAccelerator in 2014-15, with 546 participating businesses. The treatment group received both leadership and management training and one-to-one coaching, while the control group received leadership and management training only. The Innovation Vouchers Programme was an RCT run by Innovate UK in 2015 with 1,463 participating businesses, in which the treatment group was allocated a voucher for innovation projects with an external knowledge provider, and the control group received nothing.
Our goal in this analysis is to generate robust, actionable evidence and insights, with methods convincing enough for academic researchers and findings informative enough for policymakers. First, we ask the most crucial question: do business support programmes work? Two of the programmes we will evaluate are RCTs, so we will use experimental methods to establish the causal effect of treatment on total (cumulative) turnover and employment over at least a four-year period, alongside other outcomes of interest. Although RCTs are considered the “gold standard” for establishing causal relationships, it is often not possible, practical or ethical to randomise access to support programmes. However, there are other well-established methods for causal inference, which we will leverage in our evaluation of GrowthAccelerator: quasi-experimental comparisons of programme participants with a matched group of companies with similar characteristics, and difference-in-differences and regression discontinuity comparisons of groups within the programme.
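To make the quasi-experimental logic concrete, the core of a difference-in-differences comparison can be sketched as below. This is a minimal, stylised illustration with made-up figures – not our actual analysis code or programme data – showing how the comparison group's change over time is netted out from the supported group's change.

```python
# Stylised difference-in-differences sketch (hypothetical figures, not
# actual programme data): the effect estimate is the supported firms'
# change in mean turnover minus the comparison firms' change.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: (post - pre for treated) - (post - pre for control)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical mean turnover (in £k) before and after support
treated_pre  = [100, 120, 90]
treated_post = [130, 150, 125]
control_pre  = [95, 110, 100]
control_post = [105, 118, 112]

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(f"Estimated effect on turnover: £{effect:.1f}k")
```

In a real evaluation this comparison would be run within a regression framework with controls and standard errors; the sketch only captures the identifying idea of subtracting out the common time trend.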
We go one step further by applying the same quasi-experimental approach to the two RCTs. This will make a substantive methodological contribution: by comparing the estimates from the experimental and quasi-experimental analyses of the RCTs, we can draw initial conclusions about the extent to which non-experimental approaches can provide accurate evaluations of the effectiveness of government interventions. Such comparisons are particularly relevant as past work has highlighted the difficulty of identifying and measuring the drivers of growth and more recently, research has confirmed concerns about the ability to control for selection effects with business advice programmes.
Most assessments of business support programmes take place relatively soon after the programme has concluded. This short timespan between delivery and assessment leaves open several possibilities: that the benefits have not yet been realised because it is too soon to observe them, that they will be only short-term, fading away over time, or that a small impact will be sustained for many years, delivering unexpectedly high value for money. In our analysis, we consider impacts on a longer timescale, measuring outcomes for at least four years after programme participation in our primary analyses, and even further into the future in additional analyses. This approach will illuminate the longevity of programme effects, and therefore the value for money of the public investments made. We will also measure outcomes after each individual year, to test when effects become visible, thereby providing valuable information about when benefits can be expected and about the ideal timing of assessments.
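The distinction between year-by-year and cumulative effects can be sketched in a few lines. The numbers below are entirely hypothetical, for illustration only:

```python
# Hypothetical mean turnover (£k) in each of the four years after support,
# for supported and comparison firms (illustrative, not programme data).
treated = {1: 120, 2: 135, 3: 150, 4: 160}
control = {1: 118, 2: 125, 3: 132, 4: 138}

# Year-by-year effects show *when* any impact becomes visible...
yearly_effects = {year: treated[year] - control[year] for year in treated}

# ...while the cumulative effect over the full window speaks to value for money.
cumulative_effect = sum(yearly_effects.values())

print("effect in each year:", yearly_effects)
print("cumulative effect over four years:", cumulative_effect)
```

In this toy example the impact grows over time, so an assessment conducted after only one year would dramatically understate the cumulative benefit.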
When evaluating which businesses benefit from support programmes, the location of these businesses may be important, given the significant regional disparities within the UK. In line with the focus of our UKRI funding, we will explore whether impacts vary based on region, through a sub-group analysis of outcomes.
Part of designing a good support programme is to successfully target the intended businesses – in the case of GrowthAccelerator, the goal was to reach high-growth SMEs. But was the programme able to identify them? Using data from the initial eligibility assessments for GrowthAccelerator, we will measure whether the score given to the companies predicts their long-term outcomes. We will also compare human assessment to data-driven approaches: are people better at identifying high-growth companies than a statistical model with additional data?
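One simple way to frame this comparison, sketched below with entirely hypothetical scores and outcomes, is to ask how strongly the human assessors' scores and a statistical model's predictions each correlate with realised growth. Our actual analysis will use more formal predictive methods; this is only a minimal illustration of the idea.

```python
# Sketch (hypothetical data, not GrowthAccelerator records): compare how
# well assessor scores and a model's predictions correlate with growth.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

human_score   = [3, 5, 2, 4, 1]            # hypothetical 1-5 eligibility rating
model_score   = [0.2, 0.9, 0.3, 0.6, 0.1]  # hypothetical predicted growth probability
actual_growth = [10, 45, 12, 30, 5]        # hypothetical realised turnover growth (%)

r_human = pearson_r(human_score, actual_growth)
r_model = pearson_r(model_score, actual_growth)
print(f"assessor score vs growth:  r = {r_human:.2f}")
print(f"model prediction vs growth: r = {r_model:.2f}")
```

Whichever predictor tracks realised outcomes more closely is, on this simple measure, the better identifier of high-growth potential.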
Academic researchers and policymakers alike are interested in the questions discussed above, but have different perspectives: academic researchers ask “how does this advance our understanding of business growth?” while policymakers ask “how should we invest public funds to drive business growth?” In order for the evidence we generate through our analysis to be both rigorous and actionable, we must consider both perspectives. We will accomplish this through our choice of outcome measures (e.g. cumulative turnover approximating Gross Value Added) and our statistical approaches (e.g. considering effect sizes of policy interest, not just minimum detectable effects).
Researchers widely acknowledge that perverse incentives and opaque methods can negatively influence scientific knowledge – the “replication crisis” has uncovered a wide range of published research, particularly in psychology and social sciences, that cannot be reliably reproduced by other scientists. In response, the “open science” movement has pushed to make scientific research, data, and publications more transparent and accessible. Our goal in this analysis is to be as clear-eyed, transparent and accurate as possible. To this end, we will employ good practices from open science, including the drafting of a statistical analysis plan and pre-registration of our evaluations on the AEA registry. To test how surprising and informative our findings are, we will also run a prediction survey to measure the anticipated outcomes of our analysis, which will be compared with the actual estimates.
We look forward to sharing results in the coming months. For me this is a starting point; for other members of the IGL team, who were involved with the programmes being studied from their very early stages, it will represent the end of a very long journey.