In a guest blog, Dan Hodges of Innovate UK explores how experiments are improving the way we work
In our evaluation framework we discuss the value of using experiments to understand what works in supporting business innovation, and how to make our processes more effective.
In this blog I take a closer look at experimentation as an evaluation tool, how Innovate UK is using experiments to evaluate impact, and our work with the Innovation Growth Lab (IGL), based at Nesta, to increase our understanding and use of experiments.
The scientific approach
I still sometimes find the word experiment a bit of an odd one to use in my field of work. It conjures up images of lab coats and bubbling test tubes – something decidedly more scientific than my office job. But the phrasing is actually very appropriate. An experiment is defined (by the most definitive of sources) as:
A procedure carried out to support, refute, or validate a hypothesis.
More specifically, for us, an experiment is a well-defined test of the impact or effectiveness of an activity, designed to demonstrate a cause-and-effect relationship between activity and outcome.
A key element of a well-designed experiment is the ability to randomise allocation of some type of activity or treatment within a population of people (or, in our case, businesses).
Think of clinical trials: doctors randomly provide a population of eligible patients with either a new treatment or a placebo. By allocating treatment randomly within a similar population, it is possible to determine whether the treatment had an effect on those who received it by comparing their health outcomes to those in the placebo group.
We essentially want to apply the same practice as clinical trials but instead of looking at drugs, we’re interested in innovation support.
An experiment to test the impact of a programme
Let’s use an example. From 2012 to 2016, Innovate UK ran an Innovation Vouchers programme, which provided up to £5,000 to an SME towards the cost of expert advice.
The programme’s objective was to encourage new engagement with expert advice and knowledge, with the aim of improving innovation capabilities, knowledge exchange, and business growth. The programme was designed to enable an experiment to test the impact of the vouchers on the companies which received them.
Vouchers were allocated through a lottery – following an eligibility and scope check on all applicants, applications which passed this were randomly selected to receive a voucher until the available budget was fully allocated.
The lottery aspect of the programme was known to applicants, and the application process was deliberately light-touch to reduce the burden on businesses. With a random allocation mechanism in place, we were able to deploy the same type of randomised experiment that is typically used in clinical trials – albeit without a placebo for unsuccessful applicants.
The key point is that, by randomising allocation with a large enough sample size, we can assume differences between applicants average out, and any change in outcomes can reasonably be attributed to the effect of the voucher – i.e. the only difference between the two groups is the outcome of a random event that would, by itself, have no bearing on the outcomes we later measure.
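The logic of random allocation can be sketched in a few lines of code. This is a toy simulation, not real programme data: the population of firms, the outcome measure, and the assumed voucher effect of +5 are all illustrative. The point is that, once allocation is random, a simple difference in mean outcomes between the two groups recovers the treatment effect, because the underlying differences between firms fall equally on both sides of the lottery.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of eligible applicants. "latent" stands in for
# all the underlying differences between firms (size, sector, ambition...).
applicants = [{"latent": random.gauss(100, 15)} for _ in range(2000)]

# The lottery: shuffle, then give the first half a voucher.
random.shuffle(applicants)
treated, control = applicants[:1000], applicants[1000:]

# Simulate a later outcome (e.g. a turnover index), assuming - purely for
# illustration - that the voucher adds 5 points on average.
VOUCHER_EFFECT = 5
for firm in treated:
    firm["outcome"] = firm["latent"] + VOUCHER_EFFECT + random.gauss(0, 5)
for firm in control:
    firm["outcome"] = firm["latent"] + random.gauss(0, 5)

# Because allocation was random, the raw difference in mean outcomes is an
# unbiased estimate of the voucher's effect - no modelling of firm
# characteristics is needed.
effect = (statistics.mean(f["outcome"] for f in treated)
          - statistics.mean(f["outcome"] for f in control))
print(round(effect, 1))  # close to the assumed effect of 5
```

Without the lottery – say, if vouchers went to the most ambitious applicants – the same comparison would mix the voucher's effect with pre-existing differences between the groups, which is exactly the bias randomisation removes.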
This experiment is ongoing. We’re hoping to publish the results later this year, looking at the impact of vouchers on collaboration, innovation, and turnover.
Other opportunities for experimentation
The innovation vouchers evaluation is our largest experiment to date. The fact is, most of our programmes aren’t as easily tailored for randomised allocation of treatment (such as randomly awarding grants to eligible applicants).
We want to fund the best applications we receive, as determined by a thorough, independent technical assessment, rather than fund anyone who applies at random. However, there are options around allocation and award that have been explored in other countries and could be of interest in the future. In addition, there are elements of our programmes and processes which could allow for experimentation to improve our effectiveness as an organisation.
An international partnership for experimentation
The Innovation Growth Lab (IGL) is an international initiative led by Nesta, with a mission to support organisations like ours to improve our understanding and expand our use of experiments around innovation and growth programmes.
Innovate UK were a founding partner of the IGL and have been working with them to look across our activities for opportunities to learn from experiments.
One option we’re considering is examining whether offering wider business support alongside our usual support could increase the impact of our funding. For example, whether offering business coaching alongside a grant or loan will provide businesses with wider skills which could increase their capability to make the most of the funding, increasing the overall impact of what we do.
In designing experiments like this, we need to take great care to ensure we’re being fair and open, and that we’re not disadvantaging any group of companies.
In the case of business support, there is a genuine question as to whether the potential benefits of the support outweigh the costs, and what the best delivery mechanism is to realise any benefits. As such, we would consider a randomised controlled trial (RCT) to be a means to test whether we can enhance the support we provide to innovative businesses, and to understand whether such an intervention could be justified across all the businesses we support.
We also need to ensure we design an experiment which provides us with the information we need to learn. We need to have a sufficient sample size, and well-defined research questions, and we need to ensure we run the trial for long enough to observe any impacts which might be realised.
A trial in Chile which assessed the impact of providing feedback on business plans of start-ups which participated in an accelerator programme found that those which received the feedback were more likely to survive and achieved higher fundraising levels. However, these results only fully emerged over a five-year period – longer than many evaluations run for – highlighting the importance of allowing time for impacts to materialise.
Another option would be to trial variations of support following a competition or grant award, to help companies make the connections they need to other support, partners, or investors to help them grow. In this instance, we don't know whether any single measure would have a consistent positive impact, so we could look to trial more than one potential solution and compare the outcomes in each instance.
Taking care to learn
Experimentation is not straightforward, but it can be very powerful. Trial design can be painstaking work, needed to ensure a trial is implemented in a valid and informative way. The IGL has been instrumental in expanding our understanding and capabilities around experimentation, and in the coming years we hope to make this a key means of continuously improving our effectiveness and impact as an innovation agency.
For me, this is one of the most exciting aspects of our evaluation plans. I look forward to sharing the results.