In 2019, coincidentally just about the time the Nobel Memorial Prize in Economic Sciences was awarded to Abhijit Banerjee, Esther Duflo and Michael Kremer for their groundbreaking application of randomised controlled trials (RCTs) in development economics, the Austrian Research Promotion Agency (FFG) also set sail on an expedition in search of better evidence, with three RCTs in innovation policy. The goal of the journey was quite clear: find robust evidence of our impact that goes beyond classic evaluation methods. Do we as an agency actually make a difference when we offer new services?
Like all innovation agencies, FFG is in the process of becoming a more active agency that provides more than just funding. The vision of a service-oriented agency that can achieve more impact was already present four years ago, but it has gained momentum in the current discussions about transformative innovation policy. Generating evidence was our initial aim when we started the three RCTs; now we are talking about transformation. That's quite a development, right? Of course, as we all know by now, the path to the goal is not always a straight line, but that's the sweet spot of learning!
When the INNOSUP Call was launched, we did not want to reinvent the wheel; instead, we looked for partners, best practices and advice. We put effort into designing the RCTs, planned rigorous methods for measuring the outcome variables and felt quite smart when discussing covariates. It seemed like a great and victorious expedition for THE evidence. The experimental design was our focus because it was new to us; the conventional agency work, designing the services themselves, was standard procedure.
THE CASES
With our first RCT, SimCrowd, we investigated the impact of pairing FFG grants with crowdfunding campaigns for social innovations via a three-arm messaging trial. 22,000 individuals were randomly allocated to one of three treatment groups and received emails announcing the crowdfunding campaign. Depending on the treatment group, FFG was either not mentioned, mentioned in the form of seed funding (funded if you invested) or mentioned in the form of Challenge Match Funding (invest and we match the funding). The outcome we measured was recipients' interest in the crowdfunding campaigns, tracked via click rates and other indicators. The results showed that mentioning the financial support from FFG had a positive effect on the campaigns. Women in particular showed a statistically significant positive reaction to the seed funding offer. This indicates that pairing crowdfunding campaigns with public grant money is advantageous, at least for specific target groups in the audience, which can serve as a valuable learning for future campaigns.
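For readers curious about the mechanics, the sketch below shows, in very simplified form, how a three-arm messaging trial of this kind can be randomised and how click rates in two of the arms can be compared afterwards. It is not FFG's actual analysis code; the group labels, click counts and recipient numbers are purely illustrative assumptions.

```python
# A minimal sketch of a three-arm messaging trial (illustrative only).
# Group labels, click counts and recipient numbers are assumptions,
# not figures from the SimCrowd study.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)

# 1. Randomly allocate all recipients to one of three message variants.
recipients = np.arange(22_000)
arms = rng.choice(["no_mention", "seed_funding", "match_funding"],
                  size=recipients.size)

# 2. After the campaign, compare click rates between two arms
#    (hypothetical clicks and recipients per arm).
clicks = np.array([310, 395])                 # no_mention vs. seed_funding
recipients_per_arm = np.array([7_330, 7_340])
z_stat, p_value = proportions_ztest(clicks, recipients_per_arm)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```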
In SimCrowd we were able to run a robust quantitative analysis, whereas in our second RCT, InnoCAP, we faced more unforeseen challenges. In this RCT we wanted to investigate the efficacy of two interventions designed to improve firms' knowledge of non-technical innovation processes. In our experience, SMEs often have shortcomings in planning and undertaking non-technical innovation projects, so we wanted to test whether peer learning within a workshop, or an infosheet plus a voucher for a digital expert platform (Clarity.fm), had the greater positive effect on firms' innovation processes. However, despite a well-planned and well-thought-out experimental design, things did not go as planned. First, we were faced with a very limited response when we opened our call for interest: only 61 of 446 eligible SMEs completed a survey to register their interest. Second, after assigning these 61 participants to one of the treatment groups, uptake of the support was very low despite FFG's various efforts to encourage businesses to use it. Around 15 firms participated in the workshop and only two firms used the voucher for the expert platform. Given these low numbers, no meaningful quantitative analysis was possible and follow-up qualitative interviews had to be conducted instead. The interviews showed that the workshop, and especially the peer learning, was evaluated positively, whereas the infosheet went largely unnoticed and the expert platform was simply not used. Furthermore, participants' expectations seemed to differ from FFG's: they expected help and advice on how to write proposals, not on how to run innovation projects.
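To illustrate why numbers this small rule out a quantitative analysis, here is a rough, hypothetical power calculation: even under the generous assumption of a large effect (Cohen's d = 0.8), roughly 15 firms per arm leave a trial well short of the conventional 80% power threshold. The effect size and significance level are our illustrative assumptions, not values from the InnoCAP design.

```python
# A rough sketch of the sample-size problem (illustrative assumptions only).
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# Power to detect a large effect (Cohen's d = 0.8) with ~15 firms per arm:
power_small_n = power_analysis.power(effect_size=0.8, nobs1=15, alpha=0.05)

# Firms per arm that would be needed for 80% power at the same effect size:
required_n = power_analysis.solve_power(effect_size=0.8, power=0.8, alpha=0.05)

print(f"Power with 15 firms per arm: {power_small_n:.2f}")
print(f"Firms per arm needed for 80% power: {required_n:.0f}")
```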
In a third RCT, FeedSFirst, we sought to find out whether adding benchmark scores to the feedback we already provide when awarding funding has an effect on firms' project implementation. Project success was measured via an expert evaluation of the projects' final reports. Although the original sample comprised 164 firms, not all projects could be included in the analysis due to various project delays. However, what we can observe so far indicates that the type of feedback has had no detectable influence on project success.
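As a rough illustration of what "no detectable influence" means in practice, the sketch below runs the kind of simple two-group comparison one might apply to expert-rated project success scores. The ratings, scale and group sizes are invented for the example and do not come from the FeedSFirst data.

```python
# Illustrative two-group comparison of expert ratings (made-up data,
# hypothetical 1-10 scale; not the FeedSFirst dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
scores_standard  = rng.normal(loc=6.8, scale=1.2, size=70)  # standard feedback
scores_benchmark = rng.normal(loc=6.9, scale=1.2, size=70)  # feedback + benchmarks

t_stat, p_value = stats.ttest_ind(scores_standard, scores_benchmark)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a large p-value means no detectable effect
```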
This process showed how complicated it can be, despite the best intentions and preparation, to obtain robust results from experiments undertaken in a changing policy setting. Despite the sometimes frustrating difficulties, it was a highly valuable path for our agency, and the IGL team did great work helping us gather learnings and recommendations. In particular, we can emphasise the need to test and prototype both the services and the RCTs themselves before running the full-fledged trial. Equally important, in our experience, is including experts with experience (not only knowledge) in the design process from the very beginning. But what happens now? What else can we learn from the twists and turns of the last three to four years? Look out for the next blog in the series detailing the next steps!