How do we learn and improve on what we do in the rugged landscape of programmes and policies?

By Triin Edovald on Monday, 4 April 2016.


The movement towards evidence-based policy has led many organisations, from government agencies to philanthropic foundations, to require that interventions be ‘evidence-based’. In this context, ‘evidence’ refers to the best available scientific evidence that enables researchers, practitioners and policy makers to decide whether or not a programme, practice or policy is actually achieving the outcomes it aims to achieve. Simply put, we ask whether a programme, practice or policy ‘works’ or not.

Currently, a randomised controlled trial (RCT) is considered the ‘gold standard’ for establishing a causal link between an intervention and a change in outcomes. Suppose you are testing a set of services for SMEs that are new and inexperienced exporters. You randomly assign participants either to an intervention group that receives the services being tested or to a control group that receives ‘business as usual’ (BAU). You then compare the results after a period of time. The randomly assigned control group enables you to compare the effectiveness of the set of services against what would have happened if you had changed nothing.
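To make the mechanics concrete, here is a minimal sketch of a two-group comparison. Everything in it is hypothetical: the SMEs, the assumed effect size and the ‘readiness score’ are simulated purely to illustrate random assignment and a simple difference-in-means estimate, not to reproduce any real trial.

```python
# Minimal illustrative sketch of an RCT-style comparison (all data simulated).
import random

random.seed(42)

# Hypothetical SMEs enrolled in the trial
smes = [f"sme_{i}" for i in range(200)]

# Randomly assign each SME to the intervention or the control ('BAU') group
assignments = {sme: random.choice(["intervention", "control"]) for sme in smes}

# Hypothetical outcome: an export-readiness score measured after the trial period.
# In a real trial these would be observed data; here we simulate them.
def observed_outcome(sme):
    base = random.gauss(50, 10)                              # underlying readiness
    lift = 3 if assignments[sme] == "intervention" else 0    # assumed true effect
    return base + lift

outcomes = {sme: observed_outcome(sme) for sme in smes}

# Compare average outcomes between the two groups
def group_mean(group):
    scores = [outcomes[s] for s in smes if assignments[s] == group]
    return sum(scores) / len(scores)

effect = group_mean("intervention") - group_mean("control")
print(f"Estimated effect of the services: {effect:.2f} points")
```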

Suppose the trial shows that the intervention under study, comprising an in-depth capability assessment and a face-to-face skills-based programme, did not improve SMEs’ readiness for international business and did not help them build international trade capacity. We could wrongly conclude that support services for SMEs do not improve their exporting readiness and capacity. What the study actually shows is that this particular set of services, delivered in that particular manner, over that specific time period, teaching SMEs those particular concepts, did not make a difference.

That leads us to the question: are RCTs the best way to tell us how support services should be used to maximise SMEs’ exporting readiness and capacity? Or are there other ways to improve our ability to learn and to do things better in order to achieve the desired results? These are some of the questions that will be explored at the IGL Global Conference and its side events, held on 24-26 May 2016 in London, UK.

While RCTs certainly provide the best means of determining whether an intervention is achieving its aims and intended impact, and there have by no means been enough of them in most policy domains, including innovation, entrepreneurship and growth, other methods can be hugely beneficial in maximising learning.

RCTs often allow us to test no more than two or three designs at a time, and the pace at which testing can be done may not always fit within existing time and budgetary limits. Furthermore, in the example above, testing the effectiveness of support services for SMEs, the design space is significantly larger than the specific bundle of services tested. In fact, most outcomes depend on complex combinations of various design options.
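A toy example helps show how quickly that design space grows. The design dimensions and options below are invented for illustration, but even four choices with three options each yield 81 distinct designs, far more than any single trial could compare:

```python
# Illustrative sketch: counting combinations of hypothetical design options.
from itertools import product

design_options = {
    "capability_assessment": ["in-depth", "light-touch", "none"],
    "delivery_format": ["face-to-face", "online", "blended"],
    "programme_length": ["4 weeks", "8 weeks", "12 weeks"],
    "follow_up_support": ["mentoring", "peer network", "none"],
}

all_designs = list(product(*design_options.values()))
print(f"Number of distinct intervention designs: {len(all_designs)}")  # prints 81

# An RCT comparing two or three of these designs covers only a small corner
# of the full space of possible combinations.
```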

One way to facilitate quick learning and to get the right combination of components and parameters in place is to be more experimental, in a wider sense than ‘just doing RCTs’. We could build mechanisms into the intervention that inform practitioners in real time, for example, about how well SMEs participating in the face-to-face skills-based programme are absorbing the concepts and material presented to them. Practitioners could also have more freedom to experiment with different tools and strategies for delivering the material. Such rapid feedback loops would allow practitioners to adjust how they deliver the intervention to maximise the SMEs’ performance. As a result, we may end up with an intervention that includes a more light-touch capability assessment, or an online programme with some additional components, that makes a greater impact on SMEs’ understanding of the stages of exporting in relation to their own business.
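As a rough illustration of what such a feedback loop might look like in practice, the sketch below tracks a hypothetical ‘comprehension score’ for two delivery strategies and gradually favours whichever appears to be working better. The strategy names, scores and numbers are all invented; the point is only the shape of the loop: deliver, measure, adjust.

```python
# Illustrative sketch of a rapid feedback loop over delivery strategies
# (epsilon-greedy style; all names and scores are hypothetical).
import random

random.seed(7)

strategies = ["in-depth assessment + face-to-face", "light-touch assessment + online"]
scores = {s: [] for s in strategies}

def pick_strategy():
    # Mostly reuse the strategy with the best average score so far,
    # but keep trying the alternatives occasionally.
    untried = [s for s in strategies if not scores[s]]
    if untried or random.random() < 0.2:
        return random.choice(untried or strategies)
    return max(strategies, key=lambda s: sum(scores[s]) / len(scores[s]))

def run_session(strategy):
    # Stand-in for a real measurement of how well SMEs absorbed the material
    assumed_quality = {"in-depth assessment + face-to-face": 0.6,
                       "light-touch assessment + online": 0.7}
    return random.gauss(assumed_quality[strategy], 0.1)

for _ in range(50):                 # fifty delivery sessions, with feedback after each
    s = pick_strategy()
    scores[s].append(run_session(s))

for s in strategies:
    avg = sum(scores[s]) / len(scores[s])
    print(f"{s}: {len(scores[s])} sessions, average comprehension {avg:.2f}")
```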

Obviously, with such feedback loops one could easily confuse correlation with causation, and such an approach would not allow us to answer the type of questions that RCTs are designed to address. However, not only would such experimentation facilitate experiential learning, but it would also encourage positive deviance in decision-making and avoid promoting only the top-down diffusion of innovation and support initiatives.

Photo credit: Mike Kniec on flickr.com