The “pilot to scale-up” model is a widely used approach when introducing social interventions or new policies. The premise of the model is logical: given the abundance of unknown variables that cannot be accounted for during the design of a policy or social intervention, running a pilot allows decision-makers to learn what works and what does not at a relatively small cost compared to implementing the programme in question at scale. In certain contexts, however, focusing on stronger uptake and adoption, rather than expansion or replication, is the more appropriate follow-up to a pilot.
When Pulse Lab Jakarta (PLJ) was established in 2012, the concept of leveraging big data, artificial intelligence and human-centred design for development and humanitarian work was still novel. As such, PLJ needed to focus on showcasing potential solutions that these new resources could provide. By 2017, PLJ’s portfolio covered more than two dozen projects every year, each a demonstration of “what is possible” with data innovation. This strategy was effective in building awareness of, and subsequently demand for, data innovation in the development and humanitarian sectors.
Going from awareness to strong adoption of data innovation, however, requires strategic systems thinking, capacity building and trust building. For the most part, the solutions that data innovation offers are intended to provide continuous insights, which then feed into policy-making or social intervention design processes. Haze Gazer, for instance, was developed to provide the Indonesian Government with a viable tool to monitor potential forest fires and haze-related issues in real time throughout the year. While the potential benefits were immense, sustaining the tool required maintenance, data management and regular ground-truthing. More importantly, there needs to be adequate capacity to translate the information generated by such tools into policy and decision-making insights. In the absence of that capacity, uptake and effective adoption are unlikely to occur.
By the end of 2019, PLJ shifted its strategy from demonstrating what is possible to providing support for government and development counterparts in adopting and integrating solutions based on data innovation. A year-long transition led to PLJ’s repositioning from a “data innovation facility” to an “analytic partnerships accelerator.” One key feature of this strategy is reducing the size of the Lab’s portfolio, allowing resources to be reallocated towards working with counterparts to develop capacity, set up systems, build coalitions, and ensure ownership of data innovation-driven solutions.
PLJ’s journey is reminiscent of the traditional development “pilot to scale-up” model, but with an interesting twist. PLJ’s version turned the model on its head: the Lab’s “pilot” relied on a large number of projects to demonstrate potential, while “scaling up” shrinks the portfolio to ensure uptake and adoption as the path to scaling overall impact.
This variant of scaling up, it turns out, is not the exception it might appear to be. From October to December 2020, Saraswati and The Asia Foundation (TAF) Myanmar developed tools to help organisations conducting pilots or small-scale programmes identify ways to scale up. Through workshops and discussions in which these tools were used, several organisations discovered that what they really wanted and needed to do, in the context of their programmes, was not to expand but to ensure that changes became more deeply embedded within their counterparts’ systems. PLJ, along with Instellar, was invited to help design and facilitate these workshops for two Myanmar organisations, Thibi and Renaissance Institute (RI), as well as Kopernik, which is based in Bali, Indonesia.
The workshops were insightful not only for the participating organisations, but also in highlighting the need to expand the definition of scaling up. Thibi decided to focus on its current subnational counterparts and direct more resources towards ensuring adoption and uptake. In a similar vein, RI discovered that it could enhance its overall impact by reducing the number of township-level data-policy experiments and instead channelling the learning from these to higher-level policy makers. Kopernik’s Solution Lab, on the other hand, concluded that for a lab, replicating the solutions it pilots in other areas is the most suitable path to scaling its impact.
PLJ’s and Saraswati’s experiences further highlight that while there is merit in trialling a programme before scaling it up, the definition of scaling up needs to be broadened. Thibi and RI learned, in much the same way PLJ did, that their scaling-up efforts should focus on stronger adoption — through policy, guidance, sharing and learning — while leaving others to focus on expanding coverage. There are, then, contexts in which the rational next step after a successful pilot is to focus on uptake and system embeddedness rather than on expansion. This is certainly the case for most programmes that involve the introduction of data innovation, where time and resources need to be allocated to ensure that government systems and capacity are sufficient to make optimal use of the proposed solutions.
The traditional model, however, remains relevant under certain circumstances. When there is clear evidence that a programme is effective at pilot scale and that the government’s current capacity is sufficient to carry it out at a larger scale, then the logical next step is to expand its coverage. Vaccination programmes or student deworming at schools, for instance, fall into this category. There is already a robust body of scholarship showing the efficacy of vaccination in reducing life-threatening diseases and the effectiveness of deworming programmes in improving learning outcomes. Pilots can be used to confirm that vaccination or deworming programmes achieve the same level of effectiveness in a given country, and to check whether the national and local administrations have the capacity to carry out such programmes at a larger scale. Once these are verified, expansion or replication of the programme can be confidently carried out.
To sum up, going big can be good, but it isn’t always necessarily better. John Gargani and Robert McLean put it well in their book, Scaling for Impact: Innovation for the Public Good: “Big, fast and flawed or small, slow, and beautiful — both have their place.”
Understanding and accepting that scaling up can mean either expanding coverage or ensuring adoption and uptake, depending on the context, is important for both implementing partners and funders. If you are interested in learning more about the tools, check out our online board and explore our learnings from the workshops here.