Preparing for scale will almost always require some optimization or adaptation. New users will bring new needs. New partners will bring new capabilities. New funders will bring new performance targets. Typically, some trade-offs are required to balance the integrity of your human-centered solution with a model that is efficient enough for scale. Continued iterations of research, prototyping, and optimization are the key to successfully fine-tuning your solution.
Now that your solution has been running in the market for some time, you may be seeing opportunities, or feeling some pressure, to grow or scale. Growing your solution means investing additional resources to achieve a proportional increase in customer reach or outcomes. Scaling, on the other hand, implies exponential gains: scalable solutions are those that can achieve a steep increase in reach or outcomes from only incremental additional investment. If your goal is to scale, then it’s important that you and your stakeholders are aligned on what success looks like and what it will take to get there. Scaling an intervention is a complex and long-term effort, but tools like this can help you start the process and quickly identify the first obstacles to tackle.
- Start by mapping out what you know about your potential conditions for scale. Think about the market you plan to scale into and its users, the channels or partners through which you might distribute or implement, and the investors or funders you might need to bring on board. This will require some Secondary Research, consultations with key stakeholders, and most likely a round of field research with new users.
- Building on the optimization needs you identified during the Exploring Scalability activity, define what iterations you need to make and test. In addition to your core users, give attention to new distribution partners and frontline implementers. The solution has to be desirable, feasible, and viable for every stakeholder if they are to stay engaged.
- Then choose your learning approach. If you expect to make only small tweaks, you could run a quick Live Prototyping activity to validate those changes. If you anticipate significant adaptations, Rapid Prototyping would be better: run diverse, quick, and scrappy tests on individual components before deciding which adaptations to take forward.
- Your final iterations will likely involve amplifying the parts of your solution that are most effective and trimming or replacing aspects that are not working as well. You’ll need to ensure that these adaptations do not compromise the design elements that made your solution effective in the first place. Make a list of any risks and plan to track them through the Monitor and Evaluate activity.
- Finally, invest time in packaging up any guidance that new implementation or distribution partners will need to effectively roll out your solution. Good training is paramount. Establish a new Roadmap for Success together so everyone is aligned on scale goals and milestones.
Monitor and Evaluate
There are lots of ways to Monitor and Evaluate (M&E) your solution; the key is to understand what measurement is right for you. Sometimes it’s easy: either your solution makes money or it doesn’t. But if you’re trying to change a community’s behavior or increase the adoption of a service, you may need a more nuanced approach. For this reason, measurement is an area where you’ll benefit from the support of specialized team members. These are a few things to consider in your planning.
- Start by thinking about what data you really need for the stage of implementation you are in right now. During Live Prototyping you might want data that helps you learn what works and continue to iterate, but in a larger Pilot you might need more rigorous evidence to secure funding for scale.
- Be sure to bring key partners and stakeholders into this conversation. They will have perspectives and expertise that can shape a great M&E plan.
- Now that your objectives are clear, you’re ready to develop your ideal set of measures using the Define Your Indicators activity. Try to find a balance between quantitative and qualitative measures of effectiveness.
- While quantitative metrics will enable you to track how you’re doing against targets, qualitative feedback from users can offer much richer insights into how and why your solution is working or not working.
- Update and refine your M&E Framework as you iterate on your solution or as your data and evidence needs change.