How to boost survey response rates from businesses

By Rob Fuller on Wednesday, 24 April 2024.

Have you ever sent out a survey to businesses and heard… nothing? You're not alone. Low response rates plague researchers, particularly those targeting busy executives. A review carried out in 2010 found that response rates are lower for managers than for non-managers, and particularly low among top management/executives. In the Business Basics Programme, the response rate to surveys of SMEs averaged 43%, and was as low as 19% in one project. On the other hand, some projects have been able to achieve good response rates – e.g. 76% in one of the Business Basics projects and 89% in the EU-funded DepoSIt project.

Low response rates pose multiple threats to research. They shrink your sample size, weakening the statistical power of your study. Worse, they can introduce bias into your results if the missing responses aren't random – e.g. if your control group is less likely to respond than the treatment group. They also limit the generalisability of your results, making it hard to know whether your findings hold for the wider population you care about.
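To see how the bias can arise, here is a minimal simulation (our illustration, not from any IGL project; all numbers are made up) in which the intervention genuinely does nothing, but struggling control-group firms are less likely to reply:

```python
# Minimal simulation of non-response bias; every number here is illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 1000                      # firms per arm
# Latent outcome (e.g. standardised revenue growth); the true effect is zero.
control = rng.normal(0.0, 1.0, n)
treatment = rng.normal(0.0, 1.0, n)

# Treated firms respond at 60% regardless of how they are doing, while
# struggling control firms (below-average outcomes) respond at only 30%.
respond_t = rng.random(n) < 0.60
respond_c = rng.random(n) < np.where(control < 0, 0.30, 0.60)

estimate = treatment[respond_t].mean() - control[respond_c].mean()
print(f"Estimated 'effect' among respondents: {estimate:+.2f}")
# Prints a clearly negative estimate (around -0.3), despite a true effect of zero.
```

Because the control-group respondents are disproportionately the firms that were doing well anyway, the intervention looks harmful when in fact it did nothing; flip which arm under-responds and the bias flips sign.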

IGL’s experience of supporting dozens of RCTs with SMEs over the years has given us some indications of what does and doesn’t work to improve response rates. Some of the most effective approaches are:

  • Personal contact: Response rates are generally higher in projects in which key implementation staff have been in close contact with the participants and/or are well known to them, probably because the participants feel an obligation to an individual rather than to an anonymous institution. Of course, establishing a personal relationship like this is easier in a small pilot project than when implementing something at scale. But even in a large-scale project, there may be benefits to having the same individual responsible for all points of contact with a business. For example, the member of the marketing team who recruited an SME to participate in a trial, or the person who provided training, could also be given the responsibility of following up to request survey responses.
  • Persistence: Research and evaluation teams have told us they needed to remind SME participants several times about surveys and, in particular, to call them by phone rather than (or in addition to) sending emails.
  • Providing some benefit to all participants: People tend to act out of reciprocity: they are more likely to respond to surveys if they feel that they have benefited from participation in a project. One consequence of this is that it’s important to make sure that the interventions are implemented and promoted well, so that people want to take part. But there’s a particular challenge when you’re carrying out a trial with a control group who are not receiving any support. One option may be to provide the control group with some type of support that’s unrelated to the intervention and to the outcomes being measured – although it can be difficult to find something that will be valued by the control group but which cannot possibly have an effect on the relevant outcomes. Alternatively, if it is already known that the intervention will be scaled up later, you could commit to providing the control group with the support at that stage.
  • Providing ongoing benefits: Participants are also more likely to respond if they feel that they are still benefiting from a project. A nice example comes from the ‘scientific entrepreneurship’ trial run by City, University of London. The team wanted to conduct monthly interviews for 8 months after the end of the intervention. To ensure that participants would still feel involved in the project throughout this period, they were invited to monthly information and networking events, as well as being offered mentoring sessions. These activities represented a significant investment on the part of the research team, but the approach appears to have paid off, with 55% of the original participants still responding to the surveys in month 8.
  • Reminding participants of their commitment: In an experiment conducted as part of the Growth Vouchers trial in the UK, reminding participants that they had agreed at the start of the project to respond to surveys was found to be the most effective way of increasing survey response rates.
  • Providing comparative feedback: Showing a business how its responses compare with those of the other companies surveyed. We’ve only seen this approach used in one case, but it seems to have a lot of potential.

Only a couple of IGL-supported projects have tried providing direct incentives in return for responding to surveys (e.g. a gift voucher or entry into a prize draw), and it’s not clear what impact these have had. But there is some evidence that such financial incentives can increase response rates from businesses. A review in 2009 found several studies in which financial incentives resulted in marked increases in response rates over a control group. Most of the cases it examined involved prepaid incentives, rather than a payment promised later. Only one study compared prepaid with promised incentives, and it found no clear evidence of a difference in response rates between the two.

We certainly don’t have all the answers, and there’s lots of potential for experimenting with the best approaches. We’d particularly like to see evidence about the effectiveness of:

  • Providing feedback to respondents after the survey has been completed – e.g. a link to a brief report on responses
  • Providing feedback to each respondent immediately upon completion of the survey, showing how their responses compare to other respondents
  • Replication of the Growth Vouchers result that reminding participants about their previous commitment is a strong driver of response rates (a sketch of one such experiment follows this list)
  • Optimal timing of phone calls to boost responses to a survey – for example, whether to follow up by phone on the same day or some time later
  • Providing incentives for survey participation, in the form of vouchers, a prize draw, or a donation to a charity
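To give a flavour of how small such an experiment can be, here is a minimal sketch (ours, not from the Growth Vouchers team; the firm list, the 1:1 split, and the response counts are all hypothetical) of testing a commitment-style reminder against a plain one:

```python
# Hypothetical A/B test of two reminder styles; all counts are made up.
import random
from statsmodels.stats.proportion import proportions_ztest

firms = [f"firm_{i}" for i in range(400)]     # hypothetical participant list
random.seed(1)
random.shuffle(firms)
plain, commitment = firms[:200], firms[200:]  # 1:1 random assignment

# ... send the two reminder variants, then count completed surveys ...
responses = [74, 98]                          # hypothetical: plain vs commitment
sizes = [len(plain), len(commitment)]

stat, p_value = proportions_ztest(responses, sizes)
print(f"Response rates: {responses[0] / sizes[0]:.0%} vs "
      f"{responses[1] / sizes[1]:.0%} (p = {p_value:.3f})")
```

The marginal cost of a design like this is low, since it piggybacks on reminders you would be sending anyway.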

Let us know (at [email protected]) if you’re interested in experimenting with these approaches – or if you’ve already done so and have results to share.

In any case, it seems clear that survey response rates will always be a challenge, unless the project is very small and you can dedicate a lot of resources to following up. Even a relatively good response rate (say 60 to 70%) leaves room for doubt about the findings of a study. For that reason, we encourage those designing experiments to consider using alternative sources of outcome data that do not rely on surveys. This might involve using administrative data (data collected in the course of delivering your services, such as the number of requests for additional support), existing datasets (such as tax data that is routinely collected from businesses), or generating data from publicly available sources (such as businesses’ websites).

In the near future IGL is planning to launch a project that returns to some older experiments and uses tax data to examine longer-term outcomes. We’re also currently working to grow IGL’s in-house data science capability, and will have more to share soon on how that will complement our ongoing work on experimental research – including the scope to find more timely outcome measures with lower attrition. Watch this space to learn more!
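As a closing illustration of the ‘publicly available sources’ option mentioned above, here is a minimal sketch (ours; the URLs are placeholders and the liveness criterion is deliberately crude) that treats whether a firm’s website still responds as a rough survival proxy, with no survey needed:

```python
# Minimal sketch of a web-based outcome proxy: does the firm's site respond?
# URLs are placeholders; a real project would add rate limiting, respect
# robots.txt, and use a more careful definition of "alive".
import requests

def site_alive(url: str, timeout: float = 10.0) -> bool:
    """Return True if the site answers with a non-error HTTP status."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False  # DNS failure, timeout, connection refused, ...

firms = ["https://example-firm-a.com", "https://example-firm-b.com"]
print({url: site_alive(url) for url in firms})
```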