I joined IGL last year after spending most of my career working on impact evaluations in developing countries. I expected that focusing on evaluations of business-support programmes in the UK and other European countries would be quite a shift for me. But I quickly found that the challenges confronting SMEs in Europe are not so different to those faced by the kinds of agricultural business in sub-Saharan Africa that I was working with previously. Finding ways to make business operations more efficient, improve management practices and adopt digital technologies is apparently a key challenge for small businesses everywhere.
When it comes to evaluating the impact of programmes, there certainly are some differences when working in Europe. At IGL I spend a lot of my time thinking about:
Sample sizes: Budgets for programme delivery and for evaluation tend to go further in developing countries than they do in Europe. In my previous roles it wasn’t unusual for a programme I was evaluating to be implemented in dozens of locations with thousands or tens of thousands of participants. Business-support programmes in Europe tend to be tested at a smaller scale, meaning that the sample sizes available for quantitative analysis are more limited. As a result, impacts have to be reasonably large for us to see them above the statistical noise (the short power-calculation sketch after this list makes the point concrete).
Survey response rates: Figuring out how to encourage programme participants to respond to surveys is a big challenge in our work at IGL. Simply sending round an email with a link to an online survey certainly isn’t enough to prompt a response from most busy SME managers. This is a big contrast to most places I’ve worked, where people rarely said no when approached for a survey. (Though I understand that response rates are now becoming more of a challenge around the world, as the pandemic has led to a shift from in-person to telephone interviewing.)
Survey length: One way to maximise survey response rates is to make it as quick and easy as possible for people to respond. That means that the content of the questionnaires has to be very tightly focused on the key outcomes of interest. Unlike in my previous work, I can’t expect survey respondents to be willing to spend an hour or more answering a long series of questions designed to measure multi-dimensional concepts in a variety of different ways.
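To give a feel for the sample-size point, here is a minimal sketch of a standard power calculation for a two-arm trial. The sample sizes are hypothetical, and statsmodels’ TTestIndPower is just one convenient tool for this; nothing here reflects a specific IGL evaluation.

```python
# Rough illustration of why small samples can only reveal large impacts:
# solve for the minimum detectable effect (MDE) of a two-arm trial at
# 80% power and a 5% significance level. Sample sizes are hypothetical.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

for n_per_arm in (50, 200, 5000):
    # With effect_size left unspecified, solve_power returns the
    # standardised effect size (in standard deviations) that this
    # design can detect with the stated power.
    mde = power_analysis.solve_power(
        nobs1=n_per_arm, alpha=0.05, power=0.8, ratio=1.0
    )
    print(f"{n_per_arm:>5} firms per arm -> MDE ~ {mde:.2f} standard deviations")
```

Running this gives a minimum detectable effect of roughly 0.57 standard deviations with 50 firms per arm, versus about 0.06 with 5,000 per arm. That gap is essentially the difference between a small European pilot and the large multi-site trials I worked on previously.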
On the other hand, some of the key challenges in our work at IGL are very familiar to me. Here are a few I’ve noticed:
Lots to learn about what works: Many people have the impression that the use of experiments and quantitative evaluation is thoroughly embedded in the international development sector, so there must be huge amounts of evidence about which types of programmes work, how they work, and who they work for. It’s true that some development interventions (such as microcredit and cash transfers) have been the focus of intensive research efforts over the past 10–15 years. But these represent only a small proportion of the wide range of programmes being carried out in the developing world – there’s still a huge amount left to learn. And I’ve found that this is similar in the business-support world in Europe: for all the efforts going into business-support programmes, we don’t have much high-quality evidence about their effectiveness and how best to target the resources available. Most programmes in the past haven’t been set up in a way that would generate evidence about their impact – though this is starting to change with experimentation funds like the UK’s Business Basics Programme and the European Commission’s INNOSUP-06 programme.
Implementation matters: Although our aim is to learn about the ultimate impacts of the interventions we’re testing, a programme won’t have much impact at all unless it is implemented well. Very often a promising-sounding programme doesn’t produce results because of some seemingly small details in the delivery: training sessions are held at a time or place that is not convenient for the intended users, people don’t engage with an app or online service because of bad user experience or limited internet bandwidth, and so on. For this reason, much of my time supporting a programme evaluation is spent thinking through these details with the implementers.
Making evaluation results available at the right time to influence policy: Implementing a programme and evaluating its impact takes time. Something I’ve seen several times is that evaluations are commissioned to shed light on an important policy question, but by the time the results are available, thinking has moved on and decision-makers have different questions. For evaluation results to be useful, they have to be available promptly. At IGL we’re trying to address this by looking for short-term measures of impact so that evaluation teams can get the results out as quickly as possible – but it’s an ongoing challenge.
I’m looking forward to learning more about all these issues as the IGL team works on reviewing the learning coming out of the Business Basics Programme and INNOSUP-06 over the next several months. We’ll be posting updates about emerging insights on this blog as we go along – do stay tuned for more.