With the shape of the global economy changing before our eyes, there has never been a more pressing need for effective, innovative ways to allocate resources and support the small businesses that have been affected. Back in the autumn of 2018, the European Commission launched a Call for Proposals (INNOSUP-06-2018), inviting innovation agencies to design new or improved policy schemes supporting SME innovation and to test them using randomised controlled trials (RCTs). IGL was selected by the Executive Agency for Small and Medium-sized Enterprises (EASME) to deliver support to these projects. Almost a year since the projects kicked off, the trial designs have taken shape and many of the projects have either launched or will do so soon.
We've compiled lessons from this first phase of support in a short report. In the first part of this blog series, we summarise some of the challenges the agencies faced, sharing what does and doesn't work when evaluating business support programmes, in the hope of encouraging more robust, effective experimentation among those designing and running trials.
Designing a trial is a complex undertaking and adjustments often have to be made as plans develop. The second part of this blog series will suggest how future programmes supporting innovation policy experimentation could be designed to increase awareness of the challenges involved, and thereby increase the feasibility of experiments running smoothly.
The challenges when designing a trial for an innovation agency
What, then, are some of the more common pitfalls that agencies have come up against when designing trials of this nature? We've profiled five of the most widely faced challenges, which you can read about in more detail in our report, along with some of the other hurdles encountered along the way. Ultimately, these important take-aways boil down to:
Being specific from the outset
Specificity is key in every aspect of trial design, and establishing a clear research question from the outset is vital for shaping the trial and ensuring that everyone involved understands what they are set to learn. RCTs are very good at answering specific impact questions; however, we found that many projects were motivated by policy questions that were too broad or complex. We therefore encouraged project teams to use practical exercises and established frameworks such as PICO to help refine their research questions and provide the basis for the hypotheses they would test in their analysis.
This definitive approach should also extend to narrowing down outcome measures (i.e. the change or impact that projects are seeking to evaluate). While these are typically described in general terms, in many cases it proves a challenge to determine the actual indicators that will be used to track the outcomes and provide data for the analysis. For instance, we have worked with projects to progress from a broad initial expectation that the intervention would increase ‘levels of innovation amongst SMEs’, to the specific nature of that innovation activity and the causal pathways that would ultimately lead to the expected goal.
Designing for success
When designing a trial, it is important to do so with sufficient statistical power and a consistent approach, so that if the intervention has a true impact we can be confident of detecting it. In our experience, statistical power is often overestimated or not adequately considered, which can have serious consequences for the project. Overestimation can stem from ignoring factors such as the level of compliance among the businesses involved, attrition, and a lack of sensitivity in the outcome measures. Even where it is not possible to achieve the statistical power required to answer the intended research question, our report explains how the project can still be valuable and offer numerous lessons.
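To make this concrete, here is a minimal sketch of a power calculation for a two-arm trial using statsmodels. The effect size, significance level and attrition rate are illustrative assumptions, not figures from any INNOSUP project:

```python
# A hypothetical power calculation for a simple two-arm trial.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assume a small-to-moderate effect (Cohen's d = 0.3) and aim for the
# conventional 80% power at a 5% significance level.
n_per_arm = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Required sample size per arm: {n_per_arm:.0f}")

# Attrition and non-compliance shrink the effective sample, so the
# recruitment target should be inflated accordingly.
attrition_rate = 0.2  # assume 20% of firms drop out before follow-up
recruit_per_arm = n_per_arm / (1 - attrition_rate)
print(f"Recruitment target per arm (20% attrition): {recruit_per_arm:.0f}")
```

Running calculations like this before recruitment starts makes it clear how sensitive the required sample is to optimistic assumptions about effect size and drop-out.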
Considering all potential outcomes at the design stage, and anticipating how to respond to them, is crucial: we have found that once trials kick off there are often further issues with recruitment, compliance and survey response rates. When several partners are involved in designing and implementing the trial, it is also important to standardise delivery across locations, ensuring that all delivery partners follow the same approach to recruitment, intervention delivery and data collection. Doing so makes the results less noisy, can boost the effectiveness of the intervention, and makes it easier to interpret the results and understand their source.
Keeping it random
Randomisation is the cornerstone of an RCT. The theory is simple, but in practice many factors need to be considered to ensure it is implemented in a way that yields unbiased results. Where possible, the best time to randomise is after both eligibility checks and the collection of baseline data; otherwise this information risks biasing the results. It is also important to consider the method of randomisation and how the random allocation sequence will be generated (see IGL's guide to RCTs for a discussion of the main approaches). Which is most appropriate depends on several factors, such as the size of the population, the number of trial arms, and the type of participants and how they are recruited and assigned. Determining the right approach can be complex, especially when projects lack existing data that could inform these decisions.
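As one example of what generating an allocation sequence can look like, here is a minimal sketch of stratified block randomisation for a two-arm trial. The strata, participant IDs and block size are illustrative assumptions, not details of any INNOSUP project, and a real trial should pre-specify its scheme:

```python
# Hypothetical stratified block randomisation for a two-arm trial.
import random

def block_randomise(participant_ids, block_size=4, seed=0):
    """Assign IDs to 'treatment'/'control' in balanced blocks of block_size."""
    rng = random.Random(seed)  # a fixed seed keeps the sequence reproducible and auditable
    template = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
    assignments = {}
    for start in range(0, len(participant_ids), block_size):
        block = participant_ids[start:start + block_size]
        arms = template.copy()
        rng.shuffle(arms)  # random order within the block, equal arms overall
        assignments.update(zip(block, arms))  # zip truncates a partial final block
    return assignments

# Randomising within each stratum keeps the arms balanced on that characteristic.
strata = {
    "manufacturing": ["sme01", "sme02", "sme03", "sme04"],
    "services": ["sme05", "sme06", "sme07", "sme08"],
}
allocation = {}
for i, (stratum, ids) in enumerate(strata.items()):
    allocation.update(block_randomise(ids, block_size=4, seed=i))
```

Blocking guarantees the arms stay balanced in size as recruitment proceeds, while stratifying first keeps them comparable on characteristics (here, sector) that are expected to matter for the outcome.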
Recruitment is typically one of the toughest challenges in a trial involving business owners, and it always helps to consider how best to respond if recruitment numbers differ from what was initially expected, e.g. by reducing the number of comparisons the project team intends to make. It is also important that projects recruit the right type of participant. For example, if the intervention aims to encourage SMEs to adopt a new innovation method, it may be better to exclude those already doing so, given the limited scope for them to benefit further.
In addition, a project may want to consider using baseline knowledge of a specific innovation activity in its selection criteria. With a relatively small sample, it is better to recruit the SMEs with the most potential to benefit from the intervention. If it doesn't work for the 'ideal candidates' it is unlikely to work for others, while a positive impact for those best placed to benefit would support the assumption that it can also work for more marginal cases.
Expecting the unexpected
If there is one overarching lesson that 2020 has delivered in the field of experimentation (and perhaps more generally), it is that there will always be something that cannot be planned for. The most noteworthy such factor, which is having profound impacts on the smooth delivery of project plans, is the COVID-19 pandemic. Health concerns and social restrictions have made it impossible to deliver many planned interventions, while business needs and the priorities of innovation agencies continue to change, rather dramatically at that.
In the wake of this development, project teams were asked to consider the feasibility of continuing as planned; in an extraordinary situation such as this, we anticipated unforeseen change across several areas. First, projects in the early stages of recruitment, or whose interventions could not currently be carried out safely, have had to accept and plan for delays to their activities. Even with such delays, projects may still find a drop or change in demand for innovation support, as many small businesses are focusing all their resources on immediate survival and therefore have a reduced appetite for innovation.
Projects that do decide to continue as planned may have to shift their research focus and adapt their interventions (e.g. switching from in-person support to a virtual setting). Expectations about when and how benefits are realised by businesses could change, and projects have had to accept that even the best-laid plans are subject to change.
You can read about these challenges in greater detail, along with others the agencies have faced, in our report. Look out for the second part of our series, to be published shortly, in which we will discuss the lessons and insights gained so far from our support of projects taking part in the INNOSUP programme, and how best to overcome some of the challenges mentioned above.