
IMPROVE MIND SPORTS FOR GLOBAL SOCIAL GOOD


Social good embodies ideals that are at the heart of the social work profession and promotes its values and goals. Several trends have converged in recent years to create a sense of urgency around social good and to bring together grassroots organizations, global leaders, businesses, and individual social entrepreneurs interested in finding creative solutions to the greatest challenges of our society. These trends include mass immigration, an uncertain economic future, human rights abuses, food shortages, and inadequate responses to natural and human-caused disasters. They have brought fresh energy to the focus on the need for innovative solutions to large-scale, or macro, social causes that have traditionally been important to the social work profession and are now part of the Grand Challenges for the profession. Social good refers to services or products that promote human well-being on a large scale, including health care, education, and clean water, as well as causes such as equality and women’s rights. The quest to promote social good around the world can bring together physical and virtual communities that unite around a cause or an idea, communicating globally and instantaneously and translating ideas into coordinated actions such as protests or petition drives. Social good is a term that coalesces many movements around the world, is featured on corporate websites, and unites different sectors of society: government, nonprofit, grassroots, and business.

Although the term social good is widely used across a variety of disciplines, it is currently underdeveloped both in research and in practice. The social work profession is uniquely positioned to lead the development of a scientific agenda, evidence-based practices, and educational programs aimed at promoting social good. Social good unites ideas that are rooted in the profession’s longstanding social justice tradition and are particularly relevant to today’s turbulent and divisive political and economic climate. Social good has the potential to reconnect the social work profession to its roots in social change and innovation, and to its future ambitions as represented by the Grand Challenges. Social good is ripe for further refinement, especially in terms of construct operationalization, study design, and measurement.

STAGES OF IMPACT ASSESSMENT

[Figure: Stages of impact assessment]

Outcomes measurement is most effective the earlier you start to think about and plan for it; ideally this means from the program design stage, but it is better late than never! To understand what you need to measure, you need to recognize what problem your program is trying to solve, how it will resolve that problem, and with what resources. This guide provides an approach to understanding the problem, including its causes and effects. This is the first step in identifying the outcomes your program seeks to achieve.

THE LANGUAGE OF IMPACT ASSESSMENT

Evaluation. An objective process of understanding how a program, policy, or other intervention was implemented, what effects it had, for whom, how, and why. In an evaluation, social research procedures are systematically applied to assess the conceptualization, design, implementation, and utility of programs or interventions.

Outcomes evaluation. The assessment of the changes resulting from the implementation of a program, policy, or other intervention. It includes both intended and unintended outcomes for a range of stakeholders engaging in a program or intervention.

Process evaluation. The investigation of the extent to which a program or intervention was implemented as planned. It helps understand why changes occurred.

Economic evaluation. The assessment of the efficiency of a program by comparing outcomes achieved against the costs of the program. Techniques include cost-benefit analysis and cost-effectiveness analysis.

Outcomes measurement. A systematic way to assess the extent to which a program has achieved its intended results.

Social impact. The intended and unintended social consequences, positive and negative, of programs (interventions, policies, plans, projects) and any social change processes invoked by these.

Social impact assessment. The processes of analyzing, monitoring, and managing social impact.

Impact evaluation. The assessment of the extent to which long-term, sustained changes resulted from the program activities. This type of evaluation is more likely to influence policy.

THE SYSTEM IN WHICH IT EXISTS

Social problems are often complex, or ‘wicked’: they have a range of causes and effects and often need the effort of multiple programs to be resolved. Problem analysis helps you understand the entrenched nature of social issues, identify the root causes, and map potential interventions. This can help you identify the extent of the cause your program is addressing, and which effects it might be reducing. It can also help you identify potential partners or alternative programs that (should) work alongside your program to address the complex problem. While your colleagues and stakeholders are an invaluable source of knowledge for developing this analysis, both causes and effects should be evidence-based, meaning they should be grounded in existing research, literature, knowledge, and expertise on the topic. For Sport, our fictional example, the problem that the program is addressing is insufficient physical activity among school-aged children. This is only one element contributing to a broader problem, an unhealthy lifestyle, alongside an inappropriate diet, insufficient sleep, and extended screen time. While insufficient physical activity is part of that larger problem, Sport is only looking to address the issue of insufficient physical activity among school-aged children.

Your program or organization may be looking to address some of the causes and may have the capacity to alleviate some of the effects of the problem you identified. Analyzing the full problem holistically will help you understand the space where you operate; the part of the problem you are addressing gives you the first indication of the outcomes you can expect (which effects will be alleviated); and the analysis may reveal the need for partnerships to address elements of the problem you cannot address on your own. It is not always easy to clearly map all causes and effects, so looking at the big picture, the whole system in which your program and the problem exist, through systems thinking will help. For example, one cause of not participating in outside-school sports activities may be a lack of interest in such activities, but other causes may relate to the wider system: a lack of sports venues near home, no transport options to travel to available venues, or parents being engaged in work at the time when sports activities are available.

PROBLEM ANALYSIS – SPORT

[Figure: Problem analysis for the Sport program]

Social problems are not isolated; they exist within systems. At this point, you should think about the wider system in which the problem exists. You will need to consider the various groups of stakeholders in the system and how they relate to the problem, your program, and each other. Start to think of the problem from the perspective of the beneficiary and understand how the various layers of the system affect them. For example, some elements of the system for our case study are the student, their family and home environment, the school, the services available, accessibility, and past experiences. These elements interact and reinforce each other, giving rise to the causes and effects of the problem. You should also consider the elements that interact with your program, such as supporting partners, other agencies, and the various groups you engage with: direct and indirect clients, funders, and volunteers. All of these can inform the causes and effects of the problem and help you identify how your program can contribute to resolving it.

SYSTEMS AND SYSTEMS THINKING

A system, human-made or natural, is an interconnected set of elements coherently organized in a way that achieves something. Systems thinking is a holistic method for understanding positive and negative influences on a problem and identifying the ‘big levers’ for creating change. It identifies problem influencers at the individual, household, community, infrastructural, political, and societal levels, and the stakeholders behind these influencers. Working with these stakeholders to take action will create change in the system, sometimes in unexpected ways, as the system adapts to change. A strong systems approach identifies potential intended and unintended consequences and the groups that engage and interact with the program and each other, informing you about which levers should, or should not, be pulled. Thinking about the system will help you understand the context of your program and the problem you seek to resolve. When one element of a system changes, the other parts will be affected and, ultimately, so will the stability of the whole system. Systems thinkers share a few guiding habits:

  • Seek to understand the big picture
  • See patterns in the system
  • Recognize how a system’s structure causes its behavior
  • Identify cause and effect relationships
  • Surface and test assumptions
  • Find where unintended consequences might arise
  • Find leverage points to change the system
  • Resist making quick conclusions

Systems include feedback loops (a feedback loop links at least two factors in a circular fashion: when one factor changes, how does it affect the related factor?) and can be described with a causal map showing the multiple relationships within the system between actions and effects.
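If it helps to make this concrete, a causal map can be sketched in code as a simple adjacency structure. The sketch below (in Python) is a minimal, hypothetical illustration loosely based on the fictional Sport example; the factors, links, and the two-factor loop detector are invented for demonstration, not part of any established tool.

```python
# A minimal sketch of a causal map as an adjacency structure.
# Factors and links are illustrative only (loosely based on the
# fictional Sport example), not drawn from real program data.

causal_map = {
    "after-school sport offered": ["physical activity"],
    "physical activity": ["fitness", "enjoyment of sport"],
    "enjoyment of sport": ["physical activity"],  # feedback loop
    "fitness": ["health"],
}

def find_feedback_loops(graph):
    """Return simple two-factor feedback loops (A -> B and B -> A)."""
    loops = []
    for factor, effects in graph.items():
        for effect in effects:
            if factor in graph.get(effect, []) and (effect, factor) not in loops:
                loops.append((factor, effect))
    return loops

print(find_feedback_loops(causal_map))
# [('physical activity', 'enjoyment of sport')]
```

Even a rough map like this helps surface the reinforcing loops (here, activity breeds enjoyment, which breeds more activity) that systems thinkers look for as leverage points.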

IMPACT ASSESSMENT SYSTEM AND CONTEXT, SPORT EXAMPLE

[Figure: Impact assessment system and context, Sport example]

Supporting activity. Start with a mapping of the system to understand the problem in a holistic way, and engage with evidence (literature and practice), stakeholders, and practitioners to map the potential causes and effects of the problem. Imagine your beneficiary at the center of the system and the elements they interact with at the micro, meso, and macro levels: family and friends, community, infrastructure, societal infrastructure and opportunities, policy, and the natural environment (e.g. the young person, their family and friends, their school and teachers, the services available to them, access to programs and support, the wider community). Map how these elements connect and reinforce each other, and what can hinder your beneficiary. What are your levers for change?

It is important to align your program and program objectives to the vision, mission, purpose, and goals of your organization. The vision is an organization’s statement of its overall ideal and the ultimate goal of its operation; it describes what the future should look like. The mission describes ‘the business’ of the organization or that of a program and is more action-oriented than the vision. It describes how that future will be achieved, and while it can be formulated at both the organization and program level, it is often articulated at the program level, as an organization would seek to achieve its vision and serve its purpose through several programs or interventions. The vision will provide strategic direction and facilitate decision-making, while the mission will ensure your activities align with the overall purpose of the organization. The purpose is why an organization exists; some organizations have shifted in recent years to formulating a purpose statement rather than a vision. In a nutshell, purpose, vision, and mission answer the following questions:

  • Why do you exist? (Purpose)
  • What do you seek to achieve? What is your ‘perfect world’? (Vision)
  • How will you achieve that? (Mission)

The goals are longer-term aspirations your organization has for the future and indicate where your organization’s efforts are directed. Your program’s objectives are more tangible, specific, and measurable aspirations. Your vision, purpose, mission, goals, and objectives should be well aligned with the problem you are looking to resolve.

SPORT VISION, MISSION, VALUES, AND GOALS

Vision. Healthy children, healthy adolescents, healthy adults.

Purpose. Ensure school-aged children maintain healthy levels of physical activity.

Mission. Provide children with opportunities to be active two to three times per week.

Goals. Reduce lifestyle-induced illnesses in children.

Objectives. Familiarize parents, teachers, and students with healthy habits; increase student, teacher, and parent awareness of the benefits of sport; instill an active lifestyle; engage students in after-school sports activities two to three times per week.

SHOULD YOU MEASURE OUTCOMES?

Measurement for the sake of measurement can be harmful to programs and progress. It may be that what you intend to measure is not yet measurable (e.g. the outcome has not been achieved yet), or that measurement interferes with program delivery (e.g. data collection may interfere with how participants engage in the program). Evaluability assessment is ‘the extent to which an activity or project can be evaluated in a reliable and credible fashion’. Evaluability assessment tests:

  • whether a program is ready for outcomes measurement (and evaluation)
  • when outcomes measurement and evaluation would help improve the program

Outcome measurement is the first step toward evaluation. Once data to measure outcomes have been collected, it is the role of an evaluator to analyze this data and complete an evaluation of the program. The evaluator can be internal to the program (e.g. a manager or internal researcher) or external. There are advantages and disadvantages to having an internal or external evaluator, relating to cost, knowledge, flexibility, objectivity, accountability, willingness to criticize, ethics, and utilization of results. The evaluator will give recommendations on when outcomes measurement and evaluation are achievable, the tools necessary, or if evaluation is possible at all. Your organization needs to consider evaluation from the beginning and build in data collection time to ensure the evaluation is reliable and achievable. Your program may not be ready to be evaluated but having an outcomes measurement plan will ensure evaluation is achievable down the track.

Evaluability assessment involves a six-step process:

  • Involve key stakeholders (e.g. policymakers, managers, staff) – to ensure the program theory conforms with their expectations (stakeholder analysis).
  • Clarify program design – ensure the relationship between inputs, activities, outputs, and outcomes is as expected from the points of view of key policymakers, managers, and interest groups (logic model).
  • Clarify program reality – whether the program was/is implemented according to the program design (logic model).
  • Assess the likelihood that the program activities will lead to the intended outputs and outcomes (logic model).
  • Agree on required changes to the program design (implementation).
  • Agree about the intended use and value of future evaluation activity (communication).

PLAN FOR MEASUREMENT

Overall, outcomes measurement is beneficial to organizations for several reasons. Before diving into measurement, you need to ensure that your organization’s strategy, culture, engagement, and human resources are set up (or build them!) to support outcomes measurement. Your organization should have an established culture of measurement and understand the importance and use of outcomes measurement for all stakeholders.

FOSTERING A CULTURE OF MEASUREMENT

Outcome measurement does not happen in a vacuum; it requires an organization that is ready, willing, and able. An organization with a strong measurement culture engages in self-evaluation, self-reflection, and self-examination. It considers the impact it is seeking to achieve, takes responsibility for it, and acts on results that challenge or support its activities. It values candor, challenge, and genuine dialogue, with staff able to use the language of measurement. A strong measurement culture supports experimentation, risk-taking, and learning from mistakes and weak performance. Outcomes and impact measurement are visible on meeting agendas, in annual reports, on the website, and in performance reviews. The leadership team leads by example, building capacity for, and investing in, measurement while being held accountable for results and measurement culture.

HOW TO BUILD A MEASUREMENT CULTURE

Used across the organization, such self-assessment programs can be a conversation starter and an early process of engagement.

Leadership. A guiding coalition of champions, participants, influencers, change agents, and communicators leads a strong measurement culture. The Board, CEO, and Executive should be champions and provide structure, including incentive systems, clear roles and responsibilities, performance review, and reporting mechanisms. Their own reporting and accountability should be results-led. Assess the skill set of your Board: ensure there is someone with measurement expertise who will inform demand for results-based information and ask critical questions.

Systems. Assess your current policy, procedures, data management systems, and accountability plans to see if they align with and support outcomes measurement. Does infrastructure (such as IT platforms) need to be developed? Is program documentation in order? Are there ways to integrate with existing data collection and reporting systems? What resources will be required?

Capacity, capability, and connection. What capability exists, where, and in whom? What are the professional development needs? Assess and offer training or access to new knowledge. Consider connections, including networks (such as Social Impact Measurement Network Australia, professional associations, service networks, or peak bodies), partnerships, mentors, and universities (academics, students).

Learning orientation. Outcome measurement is ultimately about learning and action. Your organization should build opportunities for learning through communication loops, regular discussion (such as at team meetings), training, mentoring, and conferences. Results need to be mined for what they reveal about what is and is not working. This learning needs to be acted on by stopping, growing, or embedding particular approaches. Who will decide which action is to be taken? How, and by whom, will this action be monitored? The outcomes and impact measurement loop is cyclical and ongoing! Understand what merit and quality look like for your outcomes measurement system. Quality means the outcomes measurement system connects with your organization’s mission and values and will include integrity, respect, responsiveness (adaptation based on results), stakeholder involvement, transparency in communication, and being culturally responsive.

Merit means:

  • Applying established and appropriate methods
  • Focusing on all the types of impact created (positive, negative, un/intended)
  • Attribution (claiming only the difference you know you’ve made)
  • Utility (application).

Measuring outcomes provides:

  • Accurate judgment about the value of a program
  • The evidence base on program effectiveness
  • Accountability and efficiency: a critical tool for resource allocation decisions
  • The basis for learning and responsible policy development within organizations
  • The key ingredient for evaluation, strategic planning, and good governance
  • Staff engagement and motivation
  • Data required by, and to attract, funders

KNOW YOUR PEOPLE: STAKEHOLDER ANALYSIS

Outcomes measurement and impact assessment are more likely to be relevant, thorough, actioned, participated in, of good quality, and successful if your stakeholders are engaged with the process. The first step in achieving this is understanding who your stakeholders are, their current and potential level of engagement with the program, and their attitudes and aptitudes for measurement.

IDENTIFYING AND ANALYZING YOUR STAKEHOLDERS

STAKEHOLDER GROUPS RELATIVE TO THE ORGANIZATION OR PROGRAM

[Figure: Stakeholder groups relative to the organization or program]

Alongside their roles in the program, the priorities, interests, and needs of your stakeholders for measurement need to be understood. Consider what they bring to measurement, how important their perspective is, and what may motivate them to participate. Some stakeholders may seem peripheral yet be important to engage. Identify which stakeholders might resist what you are trying to achieve (and how to address their concerns), how to increase engagement (and how to sustain it), and who might be champions (and how to empower them). Outcomes measurement may be met with resistance due to a lack of internal capacity to measure outcomes (especially within smaller organizations), lack of funding, a perceived feeling of knowing ‘I am doing good’ and hence no need to measure, or a belief that clients would not care if outcomes were measured. For example, in the Sport case study, there might be resistance to measurement from staff implementing the program because they have low skills in data collection and find it a burden. Yet they may become champions if they are engaged in measurement early on, trained and provided with the tools to measure, and if they understand the benefits of measuring the impact of their work and how it may help them improve outcomes for young people.

From your stakeholder analysis, it is important to think about the level of engagement appropriate for each stakeholder. This can be:

Passive. No engagement, no communication, no relationship.

Monitoring. One-way communication, no relationship.

Informing. One-way communication, short- or long-term relationships.

Transacting. Working together in a contractual relationship.

Consulting. Information is gathered from stakeholders for decision-making.

Co-design. Working directly with stakeholders to ensure their concerns are considered in decision-making.

Collaborating. Mutually agreed solutions and a joint plan of action are delivered in partnership with stakeholders.

Empowering. Decision-making is delegated to stakeholders.

Think about the parts of the measurement process in which your stakeholders will be involved: planning, design, question development, data collection, review, and action plans. Think also about the control they have over these processes. Social impact takes place in a political context. The political context is especially important to understand in social impact assessment, as this work often focuses on the reallocation of resources, serves vulnerable groups, and engages a range of stakeholders with complex relationships. Your stakeholder analysis should support your understanding of the relationships and politics surrounding your program. Issues of budget, geographic location, ensuring diverse perspectives, decision-making processes, exit strategies, stakeholder capacity and measurement capability, and organizational capacity for stakeholder engagement all need to be considered as part of your stakeholder engagement strategy.

PUTTING USERS AT THE CENTER

As program beneficiaries usually represent a high-priority stakeholder group, it is good practice to consider them as central to your measurement process and decision-making. This reminds us that measurement is about ensuring best practices and improved outcomes for the community’s benefit. It often links to the mission and values of supporting voice and citizenship and respecting human dignity and worth. It fosters a sense of inclusion, agency, and contribution, and it improves your measurement process by ensuring meaningful measures, completeness, acceptability of tools, and broader dissemination. Mechanisms for engaging your community in measurement include reference and advisory group membership, champions, expert review and development of tools, and co-design of methods and communications. While there are some challenges to engaging users, there are assumptions about involvement that deserve to be disrupted. Challenges might include unequal power relationships, representation, resourcing, thinking ‘it’s too hard’, and assumptions about whether consumers are willing and able. Organizations need to be willing to change their structures and communications, as well as provide support and training to consumers, to facilitate meaningful participation. Understanding who the stakeholders are, how they interact with each other and the program, and their attitudes toward and needs for measurement will support not only the delivery of the program but also data collection, outcomes measurement, and evaluation.

UNLOCK YOUR RESOURCES

At the beginning of a program, it can be hard to know what resources you will need to measure your outcomes. For this reason, you might need to come back to this step after you have developed a good understanding of the type of evaluation you need to implement and the data you need to collect and analyze. You first need to understand what data you will need to collect, the frequency of data collection, the number of stakeholders you will collect data from, and the type of evaluation you want to complete. You also need to decide whether outcomes measurement and evaluation will be an in-house or external activity. If you do not have much control over the budget allocated to outcomes measurement and evaluation, you will need to decide on the suite of approaches you can afford to help measure your outcomes. Consider:

  • Whether you need one, or more, data sources (e.g. survey and in-depth interviews).
  • Alternative methods of data collection: face-to-face (more expensive), telephone, mail, online (this will also depend on the characteristics of your potential respondents).
  • Alternative sources: administrative and secondary data, other organizational data readily available.
  • Who could collect data and when (Could some additional information be collected at intake?).
  • Should data monitoring and analysis be done internally or externally (Would training be cost-saving in the long term?).

You also need to plan for resources: allow time for staff to train in data collection and monitoring, time for the actual data collection, and time and skills for data analysis (which will vary with the type of analysis and evaluation methods). You might need to employ additional staff to support your evaluation needs. Money is an important resource. Cost planning is speculative, and it is essential to allow for contingencies. You should base your cost estimate on previous experience, expert advice, and thorough planning. The risks of underbudgeting for outcomes measurement are high, and include an inability to capture all outcomes and misrepresentation of program achievements. Not allocating sufficient resources (staff, time, and money) to communicating findings can make outcomes measurement redundant by missing the opportunity to engage relevant stakeholders and implement change. There is a range of free resources to support organizations looking to complete outcomes measurement.
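As a simple illustration of allowing for contingencies, the sketch below adds a percentage buffer to an invented measurement budget. All line items, figures, and the 15% contingency rate are assumptions chosen for demonstration only.

```python
# A minimal cost-planning sketch with a contingency allowance.
# All line items and figures are invented for illustration.

line_items = {
    "staff time for data collection": 12_000,
    "survey platform licence": 1_500,
    "data analysis": 8_000,
    "reporting and dissemination": 4_000,
}

CONTINGENCY = 0.15  # e.g. a 15% buffer; choose a rate that fits your risk

subtotal = sum(line_items.values())
budget = subtotal * (1 + CONTINGENCY)
print(f"Subtotal: ${subtotal:,}  Budget incl. contingency: ${budget:,.0f}")
```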

Some types of measurement are more expensive than others and may need expert advice. Considering the need for resources from the beginning will help ensure you are setting realistic goals for data collection and analysis. If, given your available funding, outcomes measurement and evaluation are restricted, you might need to look for funding alternatives.

REFLECTION POINTS

As you proceed through the next steps of this guide, consider the resources you will need for:

  • Program planning
  • Outcomes measurement planning
  • Data collection
  • Data analysis and evaluation
  • Report writing
  • Dissemination of findings

PROGRAM DESIGN

WHAT WILL CHANGE: THEORY OF CHANGE

A theory of change is an explicit theory or model of how a program will achieve the intended or observed outcomes. It articulates the hypothesized causal relationships between a program’s activities and its intended outcomes and identifies how and why changes are expected to occur. In doing so, the theory of change comprises a change model (the changes the program intends to achieve) and an action model (the activities that will lead to those changes). A theory of change must be plausible, doable, and testable. It should also articulate the assumptions and enablers that explain why activities will lead to the outcomes outlined. While a theory of change is often represented as a diagram or chart, a narrative can also be used. A theory of change will help your organization to understand how your program will achieve its goals.

Strategy. Helps teams work together to achieve a shared understanding of a program and its aims; ensures all activities align with the purpose of the program; encourages in-depth thinking about the program and its assumptions.

Measurement. Helps to formulate and prioritize evaluation questions and plan evaluations; encourages the use of existing evidence.

Communication. Informs stakeholders about the program’s aims in an ‘elevator pitch’ type of approach.

Working in partnership. When programs are delivered in collaboration, developing a theory of change will help clarify roles and responsibilities.  

To formulate your theory of change, start by defining the main activity for your program and its long-term outcomes. These represent the ‘start’ and ‘end’ of your theory of change (what you do and for what purpose). Clearly outline the change model (the changes that will result from your program). You can then articulate the main processes or activities (the action model) through which you engage with your target group, population, or community to achieve those outcomes. Your theory of change should be informed by knowledge of ‘what works’ to address the problem you are seeking to solve (e.g. similar programs or approaches in different circumstances), or evidence that an innovative approach (e.g. engaging with groups at different times, in different circumstances) is likely to work and why. You should also consider the enablers that support you to deliver your program and achieve your goals. Internal enablers are conditions or factors that need to be in place for your program to work and are mostly within your control (e.g. relationships, quality of services). External enablers are factors outside your immediate control and describe the environment in which your program operates (e.g. social, cultural, political, economic factors).

DEVELOPING A THEORY OF CHANGE

Explain the components of a theory of change (activities, long-term outcomes/goals, enablers). Use different colored post-it notes for each category. Start with one category, usually the goals, as most people will have a good idea of what they want to achieve; ask everyone to write the program goals or long-term outcomes on a post-it note. Place those at the bottom of the flip-chart paper. Ask everyone to discuss what outcomes (intermediate or longer-term) need to be achieved to reach this goal and what activities will support the achievement of those outcomes. Place the activities at the top of the paper and any intermediate outcomes in the middle. Take time to discuss, remove duplicate ideas and concepts, rearrange for timeline and relevance, and add enablers. Outcomes will be based on assumptions (what participants think will be achieved, based on experience or current evidence). Make sure you take note of these so you can include them in the logic model when you expand on the theory of change. When you are confident with the draft theory of change, circulate it to other stakeholders and ask for feedback. Remember: people relate more, and have a greater sense of commitment and ownership, to things they helped to create!

MAP YOUR PROGRAM: LOGIC MODEL

A logic model is a visual representation of how your program will achieve its goals, including the short-, medium-, and long-term outcomes. Like your theory of change, your logic model is best developed at the design or planning stage of a program, but if this has not happened, it can be developed, modified, and enhanced as the program evolves. Use evidence to link activities to outputs and outcomes, and remember that outcomes are based on assumptions (e.g. we assume that if students are offered the opportunity to participate in organized sports activities after school, they will participate 2-3 times per week and their physical health will improve). Assumptions and risks will accompany your logic model: these are external conditions that could affect the program’s progress but are not under the direct control of the people implementing, managing, or planning the program. An assumption is a positive statement of a condition that must be met for the program’s objectives to be achieved. A risk is a negative statement of a condition that might prevent the program’s objectives from being achieved. You should use evidence (information about other programs, data, and experience) to foresee these risks and prepare mitigation strategies. In the Sport example, we assume that making after-school sports activities freely available to students will result in higher participation in physical activity. Some factors may interfere with this assumption: for example, parents’ inability to delay the school pickup, or their aversion to a particular sport, may interfere with students’ uptake of the program and realization of outcomes (risks). Or the risks might be at the school level, for example, a lack of infrastructure to support the proposed sports activities.

LOGIC MODEL TERMS

  • Inputs are the resources necessary for a program to run, e.g. staff, volunteers, funding, buildings, technology, machinery.
  • Activities are what the program is doing and how, e.g. online information, webinars.
  • Outputs are numbers or counts of things that result from the program, e.g. number of online webinars, number of participants.
  • Outcomes are the changes that your program produces in the short, medium, and long term.
  • Impact is the lasting, systemic change to which your program or organization contributes (see the sketch below for one way to capture these elements).
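To make these terms concrete, the sketch below captures a condensed version of the fictional Sport logic model as a plain data structure. The entries are illustrative, drawn loosely from the Sport example in this guide, and are not a complete logic model.

```python
# A sketch of the fictional Sport logic model as a simple data structure.
# Entries are condensed and illustrative, not a complete logic model.

logic_model = {
    "inputs": ["coaching staff", "school venues", "funding"],
    "activities": ["after-school sports sessions",
                   "information sessions for parents"],
    "outputs": ["number of sessions delivered",
                "number of students participating"],
    "outcomes": {
        "short_term": ["improved knowledge of the benefits of exercise"],
        "medium_term": ["students active 2-3 times per week"],
        "long_term": ["improved physical health"],
    },
    "impact": "healthy children, adolescents and adults",
    "assumptions": ["students offered free activities will participate"],
    "risks": ["parents unable to delay the school pickup"],
}

for stage in ("inputs", "activities", "outputs"):
    print(f"{stage}: {', '.join(logic_model[stage])}")
```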

LOGIC MODEL TEMPLATE

Developing the first half of your logic model – identifying inputs, activities, and outputs – relies on your understanding or planning of the program. You should include here the resources necessary for your program to run, from internal support to funding, infrastructure, and external partnerships (inputs); the range of activities your program will deliver (activities); and how you will keep track of your delivery of these activities (outputs: how much of these activities the program will deliver, how many clients it will engage with).

LOGIC MODEL SPORT

[Figure: Logic model for the Sport program]

The second part of the logic model, mapping the outcomes of the program, can be more challenging due to difficulty in identifying outcomes or confusion between outputs and outcomes.

OUTCOMES

Outcomes – what a program achieves – can be measured at different points in time and at different levels. Short-term outcomes capture changes in knowledge (e.g. improved knowledge about the benefits of regular exercise), medium-term outcomes capture changes in behavior (e.g. engagement in regular exercise), and long-term outcomes capture changes in conditions (e.g. reduced rates of obesity among school-aged children, adolescents, and adults). There are no definitive guidelines on the timeline for measuring different outcomes. For example, while medium-term outcomes can sometimes be measurable within a few weeks, in other programs they might only be measurable several months or years into the program. Precisely when outcomes can be measured depends on the type of problem the program is addressing, its purpose and scope, and the target population. Outcomes can be achieved at the individual or program (micro) level (e.g. improved quality of sleep), the community or organization (meso) level (e.g. reduced crime rate), or the population, industry, or sector (macro) level (e.g. reduced hospitalization rates among young adults). While there is no direct link between the timing of an outcome and the level at which it occurs, changes at the macro and meso levels are often more complex and require more time to achieve.

Some evaluation techniques, such as Social Return on Investment (SROI), rank outcomes in terms of their importance to stakeholders, but this is not common practice in non-financial valuation techniques such as logic models or outcomes evaluation. In the context of impact investment, identifying a single ‘primary’ outcome will guide the size calculation and basis of payments, accompanied by secondary outcomes that complement the primary outcome. Terminology should not interfere with the value of the full suite of outcomes in any program; that is, a secondary outcome should not be considered less important than a primary outcome. It may be difficult for some programs to measure their long-term outcomes, due to the timeline and complexity of the primary outcome or longer-term impact. For example, it may be years before Sport can measure its long-term impact (improved health outcomes in adolescence and adulthood), but it can measure the change in levels of physical activity (medium-term or intermediate outcomes), which serves as a proxy and may predict the ultimate outcomes. The extent to which intermediate or medium-term outcomes can serve as proxies is not straightforward, and establishing it requires a thorough literature investigation and consultation of organizational data and experienced practitioners. Additional activities may be necessary to facilitate the longer-term outcomes. In some circumstances, it may be helpful to also set targets for outcomes – the extent of change expected. Targets should be based on evidence and be realistic.

You should also consider whether your program yields financial, social, and/or environmental outcomes, to ensure you map and measure all potential outcomes. The triple account of outcomes (social, environmental, economic) and targets are often used in accounting techniques (e.g. Triple Bottom Line or Corporate Social Responsibility reporting). Remember, not all outcomes are predictable. It is often hard to project unintended outcomes (positive, negative, or neutral), but these may become obvious as the program matures, and it is important to allow for them to be measured. Collecting qualitative data from a range of stakeholders is a good approach to identifying what else is being achieved in addition to what your model predicted. In mapping potential unintended consequences, you should also think about who else might be affected by your program and the external factors that may influence it (e.g. people, circumstances, the environment). Impact is the systemic-level change your program intends to achieve; it relates to the vision of your organization.

OUTCOME TYPES

[Figures: Outcome types]

Developing your logic model is a good opportunity to engage diverse internal and external stakeholders including the evaluation team, people implementing the program, client representatives, leaders, funders, etc. Use flip-chart paper and post-it notes (or an online document everyone can edit). Split the paper (or the online document) into six columns: inputs, activities, outputs, short-, medium- and long-term outcomes. Everyone should write one item on each post-it note (e.g. one input, one output, etc.), then place their post-it notes in the relevant column. You may notice some items that you might have thought of as short-term outcomes may actually fit under ‘outputs’, or that some stakeholders start discussing whether an outcome is medium- or long-term. Shuffle the post-it notes and discuss any points of disagreement or confusion until you have agreed on a logic model that suits your theory of change and program.

When you map the short-, medium- and long-term outcomes it helps to look back at your problem tree and theory of change for a comprehensive picture of the changes that your program seeks to achieve, and your stakeholder map, to ensure you have considered outcomes for all stakeholders (whether engaged in this exercise or not). It helps to begin filling in the short-term outcomes (changes in knowledge), before mapping medium-term outcomes (changes in behavior) that result from this. The changes in behaviors should point towards changes in conditions (long-term outcomes). Remember that some short-term outcomes may be proxies for long-term outcomes; consider the dimensions at which outcomes occur (micro, meso, macro) with various groups of stakeholders; and social, financial, and environmental outcomes.

THEORY OF CHANGE VS. LOGIC MODEL

Theory of change: an explicit theory or model of how a program will achieve the intended or observed outcomes. It articulates the hypothesized causal relationships between a program’s activities and its intended outcomes and identifies how and why changes are expected to occur. In doing so, the theory of change comprises a change model (the changes the program intends to achieve) and an action model (the activities that will lead to those changes). A theory of change must be plausible, doable, and testable. Logic model: a visual representation of how a program will achieve its goals, including the short-, medium- and long-term outcomes. It comprises a detailed representation of inputs, activities, outputs, outcomes, and impact.

UNDERSTAND WHAT TO MEASURE

Measuring all outcomes may not be feasible due to a range of constraints (resources, time, access to respondents). This is a good time to prioritize the outcomes you will measure. You need to consider your evaluation questions – the questions you want answered. Think again about your stakeholders (Whose outcomes will you measure?), time (What is your timeline for data collection?), skills (Do you have staff to collect quantitative and qualitative data?), and funding (Can you afford it?) available for outcomes measurement. As a few concepts need clarifying before you can develop the outcomes framework, we discuss below the main types of evaluation and evaluation questions.

TYPES OF EVALUATION AND EVALUATION QUESTIONS

Evaluation is an objective process of understanding how a policy or intervention was implemented, what effects it had, for whom, how, and why. Well-planned and well-executed evaluation provides evidence for improved design, delivery, and outcomes, and supports decision-making. Depending on its timing, your evaluation may be:

Formative evaluation. Evaluation with the purpose of improving a model. It takes place during a program’s implementation with the aim of improving its design and performance.

Summative evaluation. Evaluation with the purpose of judging a model, assessing the extent to which it achieved its intended (and unintended) goals. This type of evaluation happens at the end of a program, or well after a program has ended. ‘When the cook tastes the soup, that’s formative evaluation; when the guest tastes it, that’s summative evaluation.’

Depending on its purpose, your evaluation may be:

Outcomes evaluation. Explores the changes occurring as a result of a program.

Process evaluation. Investigates how a program was established and implemented or delivered.

Economic evaluation. Studies whether a program generates value for money.
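As a back-of-the-envelope illustration of the economic techniques mentioned earlier (cost-benefit and cost-effectiveness analysis), the sketch below computes a benefit-cost ratio and a cost-per-outcome figure. All figures are invented, and real economic evaluation involves careful monetization and discounting that this sketch ignores.

```python
# A back-of-the-envelope sketch of two economic evaluation techniques.
# All figures are invented for illustration.

program_cost = 50_000          # total cost of delivering the program
monetised_benefits = 80_000    # e.g. estimated avoided health costs
outcome_units = 200            # e.g. students newly active 2-3x/week

benefit_cost_ratio = monetised_benefits / program_cost   # cost-benefit
cost_per_outcome = program_cost / outcome_units          # cost-effectiveness

print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")                  # 1.60
print(f"Cost per additional active student: ${cost_per_outcome:.0f}")   # $250
```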

It is important to consider process as well as outcomes evaluation. The former may explain why certain outcomes were or were not achieved. It helps to identify whether some outcomes were not achieved due to program failure (i.e. the program failed to achieve a set of outcomes for its beneficiaries) or implementation failure (i.e. the program was not implemented as intended, hence the outcomes could not have been achieved). An example of implementation failure in the Sport case study would be if the information sessions and resource packages for parents were not delivered twice a year (i.e. the program did not deliver one of its intended activities).

PROCESS AND OUTCOMES EVALUATION

[Figure: Process and outcomes evaluation]

The six evaluation criteria below can also serve as guidelines for selecting evaluation questions.

Relevance. Is the intervention doing the right things? The extent to which the intervention objectives and design respond to beneficiaries’, global, country, and partner/institution needs, policies, and priorities, and continue to do so if circumstances change.

Coherence. How well does the intervention fit? The compatibility of the intervention with other interventions in a country, sector, or institution.

Effectiveness. Is the intervention achieving its objectives? The extent to which the intervention achieves, or is expected to achieve, its objectives, and its results, including any differential results across groups.

Efficiency. How well are resources being used? The extent to which the intervention delivers, or is likely to deliver, results in an economic and timely way.

Impact. What difference does the intervention make? The extent to which the intervention has generated or is expected to generate significant positive or negative, intended or unintended, higher-level effects.

Sustainability. Will the benefits last? The extent to which the net benefits of the intervention continue or are likely to continue.

EVALUATION QUESTIONS

[Figure: Evaluation questions]

Impact evaluation is the assessment of the extent to which long-term, sustained changes resulted from the program activities. This type of evaluation is more likely to influence policy. It can be conducted at some point during the delivery of the program (for ongoing programs), when, according to the theory of change, the impact should have been achieved at least for a group of program participants. The key element of impact evaluation is the counterfactual, or what would have happened had the program not been implemented. Being able to compare the ‘do nothing’ scenario with the outcomes achieved by the program provides evidence for the changes produced by the program.
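To illustrate the counterfactual idea with invented numbers, the sketch below contrasts the change observed among program participants with the change in a similar comparison group, a simple difference-in-differences. It is a minimal sketch, assuming a credible comparison group exists; it is not a substitute for a properly designed impact evaluation.

```python
# A minimal sketch of using a comparison group as the counterfactual
# (a simple difference-in-differences). All figures are invented.

# Average days per week of physical activity, before and after:
program_before, program_after = 1.2, 2.6        # program participants
comparison_before, comparison_after = 1.3, 1.5  # similar non-participants

program_change = program_after - program_before              # 1.4
counterfactual_change = comparison_after - comparison_before  # 0.2

# The comparison group approximates the 'do nothing' scenario, so the
# estimated effect is the program change minus the counterfactual change.
estimated_effect = program_change - counterfactual_change
print(f"Estimated program effect: {estimated_effect:.1f} days/week")  # 1.2
```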

DEVELOP AN OUTCOMES FRAMEWORK

WHAT IS AN OUTCOMES FRAMEWORK?

Developing the evaluation questions gives you an indication of the outcomes you want to measure. You can now prioritize the outcomes: look at the logic model you developed, your stakeholder groups, and your evaluation questions, and flag what you need to measure for the type of evaluation you want to conduct. Selecting the outcomes you need to measure is the first step in developing an outcomes framework. An outcomes framework (also referred to as an ‘outcomes hierarchy’) is a collection of the outcomes you intend to measure, the indicators or measures for those outcomes, the data sources you will use to quantify the indicators, and the timing for data collection.
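One lightweight way to hold an outcomes framework while you develop it is as a simple list of records, one per outcome. The sketch below uses rows condensed from the fictional Sport example; the specific outcomes, indicators, and timings are illustrative assumptions.

```python
# A sketch of an outcomes framework as a list of records, one per
# outcome. Content is condensed from the fictional Sport example.

outcomes_framework = [
    {
        "outcome": "students active 2-3 times per week",
        "indicator": "proportion of students attending sessions 2-3x/week",
        "data_source": "attendance records",
        "population": "participating students",
        "timing": ["baseline", "6 months", "12 months"],
    },
    {
        "outcome": "improved awareness of the benefits of sport",
        "indicator": "% of parents agreeing with awareness statements",
        "data_source": "parent survey",
        "population": "parents",
        "timing": ["baseline", "end of program"],
    },
]

for row in outcomes_framework:
    print(f"{row['outcome']} -> {row['indicator']} ({row['data_source']})")
```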

INDICATORS

Indicators are the measurable markers that show whether change has occurred in an underlying condition or circumstance. Indicators can be expressed as percentages, proportions, numbers, or ratios, or capture perceptions, behaviors, satisfaction, and quality. An indicator can be a single measure capturing a condition at a certain point in time, such as the proportion of participants living with a mental health condition, or a composite made up of several measures, for example one that measures ten aspects of psychological distress but reports this as one value between 1 and 50. Technical criteria refer to the extent to which the indicator is a good measure of your outcome: for example, whether the indicator is validated (is there evidence to support that the indicator measures what it intends to measure?) or reliable (does the indicator produce consistent results over time?). Contextual criteria look at surrounding characteristics that can help you decide whether the indicator is a good fit for the outcome, given your program context: for example, is the indicator acceptable (will clients be comfortable answering certain questions?) and feasible (is it practical to collect the respective data?). Regardless of our efforts to select or develop ‘good’ indicators, as the terminology suggests, an indicator is only indicative of the outcome it seeks to measure. Two or more indicators may be necessary to measure an outcome. For example, improved youth mental health can be measured through the proportion of young people reporting a mental illness in the past 12 months, together with a second indicator capturing its intensity; the two indicators capture the frequency and intensity of mental illness in youth, both needed to assess change.
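As a minimal illustration of quantifying a single-measure indicator from raw records, the sketch below computes the proportion of participants meeting an activity threshold. The records, field names, and threshold are invented for demonstration.

```python
# A minimal sketch of quantifying an indicator from raw records.
# The records, field names, and threshold are invented for illustration.

responses = [
    {"id": 1, "active_days_per_week": 3},
    {"id": 2, "active_days_per_week": 1},
    {"id": 3, "active_days_per_week": 2},
    {"id": 4, "active_days_per_week": 4},
]

# Indicator: proportion of participants active at least 2 days a week.
meeting_threshold = sum(r["active_days_per_week"] >= 2 for r in responses)
indicator = meeting_threshold / len(responses)
print(f"Proportion active >= 2 days/week: {indicator:.0%}")  # 75%
```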

STEPS TO DEVELOP INDICATORS

  • Allow for time and resources to review indicators. Consider: how broad is the review, how long do you have, can you engage stakeholders, and do you have resources in place?
  • Search for existing indicators used by industry, academic, and government sources, and by national and international indicator banks. Drawing on existing indicators will often ensure your indicators respect all technical criteria.
  • Assess indicators against the technical and contextual criteria. This will be teamwork – engaging stakeholders helps to understand if indicators are appropriate and acceptable.
  • Select indicators with consideration as to whether some were prioritized by stakeholders and whether gaps were identified (i.e. need to develop new indicators).
  • Consider new indicators if your existing indicators are not a good fit for your program.
  • Choose only those indicators that are useful, not all that can be measured.

TYPES OF INDICATORS

It is a good idea to assess an outcome through both objective and subjective measures. Indicators are often objective and imply quantifiable concepts measuring how much, how many, or how often. They can also capture subjective responses, such as attitudes and feelings (e.g. changes in quality of life; feelings of anxiety). ‘Qualitative indicators’ are, however, a vexed topic, because qualitative data is inherently different from established ‘indicator standards’ such as validity checks, replicability, and standardization. It is recommended to collect qualitative data alongside quantitative data to give a sense of what the outcome looks like ‘on the ground’ when a quantitative indicator improves for a person, community, or population. This data enables a program to ‘tell the story’ of impact – what it looks or feels like in people’s lives. Qualitative work is also useful for hearing from people in their own words, which may be especially useful when measuring the impact of programs on people who may not respond well to structured questions or may not have a high literacy level.

OUTCOMES AND INDICATORS, SPORT

[Figure: Outcomes and indicators, Sport]

You have now developed indicators to measure the outcomes of your program. To complete the outcomes framework, you must decide on the most appropriate data collection tools to quantify the indicators. Here we explore different ways to collect and monitor data. Both quantitative and qualitative data can and should be collected for outcomes measurement. It is important to collect data at intervals relevant to the outcome (e.g. pre-program, halfway through the program, at the end of the program, and/or a few weeks or months after) to monitor the change in indicators and be able to assess the extent to which outcomes are achieved.

BASELINE DATA AND BENCHMARKING

Data collected prior to the program is baseline data. This data can help you compare program participants to the general population (e.g. by comparing with national statistics). It also serves as the reference point, helping you draw conclusions about change by tracking how an indicator moves as the program progresses. Think about whether baseline data is readily available for your program, or how you could collect it. In addition, data from secondary sources, such as population data or evaluations of programs similar to yours, can serve for benchmarking. Benchmarking investigates how the target population compares to larger populations, or the extent to which outcomes were achieved compared to other programs.
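A tiny illustration of reading an indicator against its baseline and an external benchmark follows; all values, including the benchmark, are invented for demonstration.

```python
# A minimal sketch of tracking one indicator against its baseline
# and a benchmark. All values are invented for illustration.

baseline = 0.40    # proportion active >= 2 days/week at intake
midpoint = 0.55    # halfway through the program
end = 0.68         # end of program
benchmark = 0.60   # e.g. a figure from national statistics

for label, value in [("baseline", baseline), ("midpoint", midpoint), ("end", end)]:
    vs_benchmark = "above" if value > benchmark else "below"
    print(f"{label}: {value:.0%} ({vs_benchmark} benchmark of {benchmark:.0%})")

print(f"Change from baseline: {end - baseline:+.0%}")  # +28%
```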

QUANTITATIVE DATA DESIGNS

Surveys, administrative data, and secondary data are quantitative data sources most frequently used in outcomes measurement.

SURVEYS

Surveys are standardized data collection instruments, usually administered face-to-face, online, by phone, or by post, that generate quantitative data. They may also collect qualitative data, often regarding people’s experiences and attitudes. Surveys can be an efficient way of collecting data, as they reach large numbers of people at a relatively low cost and can be repeated to track behavior changes. Response rates, however, can be low, which can jeopardize the validity of the data collected. Surveys can be administered at the program, organization, sector, or national level. When deciding what type of survey to administer, consider:

  • Your target population (e.g. are they more likely to respond online or face-to-face? Consider their demographics, skills, and likelihood to respond)
  • Budget (online surveys are cheaper to administer than post, phone, or face-to-face)
  • Type of questions (some questions might need visual support; complex questions may be easier to design in online formats)
  • Will respondents be more likely to share accurate information if the interviewer is present or absent?

ADMINISTRATIVE DATA

Administrative data is program data collected for all participants; for example, the data a caseworker might record about a client after each encounter, or the headline data an organization might use in annual reporting (e.g. the proportion of female clients). While the primary use of this data is administrative rather than research, it is helpful for capturing populations who may not respond to a survey, providing rich information about the same individual, providing information on potential comparison groups for your evaluation, and conducting complex statistical analyses thanks to large sample sizes. Program data is often collected on participant intake forms, which can serve as baseline data as the program matures. Unlike primary data (data collected by you, for your program), secondary data is collected by someone external to your program (e.g. national data sets, or administrative or survey data collected by a different organization).

SOURCES OF QUANTITATIVE DATA

[Figure: Sources of quantitative data]

Interviews, focus groups, and case studies are the most commonly used methods to collect qualitative data.

INTERVIEWS

Interviews typically involve a one-on-one conversation between one person collecting data and one person talking about their experience, either face-to-face, over the phone, or online. Interviews allow people to talk in their own words and explore topics in depth. They range from highly structured (standardized questions), through semi-structured (a topic guide outlines broad areas to be covered), to unstructured (narrative-style interviews).

FOCUS GROUPS

Focus groups are a conversation among a small group of people, facilitated by a researcher or data collector. They aim to generate discussion and debate, providing a holistic view from the group or showing a variety of opinions. Focus groups are sometimes called ‘workshops’ if they involve participants working on an activity together.

CASE STUDIES

Case studies are often used to illustrate good practice, provide contextual data and allow thorough profiling of a particular outcome. They can involve multiple methods of data collection and an in-depth investigation of one or a few individuals involved in the program and the people with whom they engage. The purpose is to provide particularly rich data to understand a novel situation.

QUALITATIVE DATA COLLECTION TOOLS

[Figure: Qualitative data collection tools]

Employing mixed or multiple methods for data collection (e.g. different types of quantitative and qualitative data collection techniques together) helps increase the accuracy of your measurement. Mixed methods can be used concurrently (e.g. open-ended interviews conducted to affirm the validity of a survey) or sequentially (e.g. a focus group investigates topics that will later be explored in a survey, or a survey reveals matters that will later be explored through in-depth interviews, focus groups, or case studies). Here are some questions to help you decide what type of data to collect and how:

  • Who will you collect data about? From whom? This is a good time to consult (again!) the stakeholder analysis and your outcomes. It is important to understand who the information is about and whom you will ask (e.g. you may ask the individual who achieved the outcome, but also their peers or family).
  • What is the best instrument to collect the data? Considering the characteristics of the participants/respondents and the type of information you need, assess whether a survey (face-to-face, online, mail), interview, or focus group is more appropriate.
  • Are there any established, pre-tested instruments (e.g. scales for measuring certain conditions and attitudes)? If so, make sure you collect the data according to their recommendations (e.g. face-to-face, pen and paper).
  • Are the methods culturally appropriate? This may include thinking about language, norms, and values. It is a good idea to consult with community representatives when developing the data collection tools.

And in the context of your program and resources:

  • Consider an appropriate sample size, the timing of data collection given your context (e.g. school holidays), and reimbursement of participants for their time.
  • Assess whether your staff have the skills to collect the data, whether training is needed, or whether data collection should be outsourced.
  • Considering the range of data sources and resources (staff, skills, funding, respondents), select the most appropriate for your program.

OUTCOMES FRAMEWORK – OUTCOMES, INDICATORS, DATA SOURCES, AND DATA COLLECTION TIMING


Once again, use flip chart paper and post-it notes (or, if working online, a document that can be shared and edited by all participants; you may want one participant to act as a scribe who leads the note-taking while the rest of the group brainstorms). Invite your team and key stakeholders if possible. Split the paper (or the document you work on) into four columns: outcomes; indicators; data sources; and target population and timing for data collection. Write an outcome on a post-it note and place it in the outcomes column. Move to the next column and add the indicator for this outcome (on a separate post-it note). Continue with the data source (a question to include in a survey, administrative data, an interview, etc.) and the target population and timing for collection (e.g. young people, pre-program participation, and 6 months into the program). You might find yourself organizing the outcomes into short-, medium-, and long-term, or you might start by developing outcomes for the main beneficiaries, then other stakeholders. Make sure to discuss your logic model and evaluation questions to agree on which outcomes should be measured. This will provide you with insights from a range of people and agreement on outcomes and indicators, as well as data sources and timing for collection.

RESPONSIBILITY FOR DATA COLLECTION AND MONITORING

While data collection may be seen by some stakeholders as essential to program activities and achievements, it may be met with resistance by others who see it as consuming resources that could otherwise be allocated to ‘doing good’. Ensure you consider the following:

Responsibility for data collection – ensure the people responsible understand the task and have the skills and time allocated to collect the data.

Availability of participants – ensure your program participants are willing and available to provide the information needed to measure your outcomes.

Accuracy of data reported – ensure your tools are designed to capture the intended outcomes.

Relevance of data collected – tools for data collection can change over time and should be revised if they prove not to collect information as planned.

Timing and frequency – set clear expectations about when data should be collected; this can have a significant impact on the measurement of outcomes.

Sample size – consider not only the number of participants to collect data from but also their characteristics. Look to collect data from individuals with the same characteristics (e.g. socio-economic and demographic characteristics) as the group with which your program is engaging, so that the individuals surveyed are representative of their population (see the sketch after this list).

Confidentiality – data should be kept in de-identified form on secure servers or in locations accessible only to the research team, ensuring that individuals cannot be linked to their answers and that their answers cannot influence their relationship with the program.
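
As a minimal sketch of the representativeness check above, the following Python snippet compares the make-up of survey respondents against the program population using a chi-square goodness-of-fit test; the age bands and counts are invented for illustration:

    from scipy import stats

    # Hypothetical counts of survey respondents per age band.
    respondents = [30, 50, 20]

    # Hypothetical shares of each age band in the full program population.
    population_share = [0.25, 0.50, 0.25]
    expected = [sum(respondents) * p for p in population_share]

    # Chi-square goodness-of-fit: does the sample mirror the population?
    chi2, p = stats.chisquare(respondents, f_exp=expected)
    print(f"chi2={chi2:.2f}, p={p:.3f}")  # a small p (e.g. < 0.05) suggests a skewed sample

A skewed sample does not invalidate the data, but it signals that findings should be weighted or interpreted with caution.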

ETHICS OF DATA COLLECTION AND OUTCOMES MEASUREMENT

All research data collection requires ethical approval from a recognized committee. Different contexts have different formal requirements for ethics approval, and it is important to know and understand these in order to act on the ethical principles that apply to human research and measurement.

Key principles include:

Integrity. Professionalism, excellence (using known, appropriate, and proportionate methods), honesty, reliability, stewardship.

Respect for persons and beneficence. Doing no harm, protecting people from harm, and managing the burden of participation – again linked to using appropriate and proportionate methods.

Justice. Consider the meaning of participation, avoid compounding disadvantage, and be transparent about how participants are selected. Participation – or not – in measurement activity should be independent of a person’s service delivery experience, and this needs to be clearly communicated to people.

Consent. People need to understand what participation will mean and how their data will be used. If measurement occurs over a long period or at various points, consent may need to be obtained on an ongoing basis. There are particular considerations for obtaining consent from children and young people.

Confidentiality. Keep data safe, define who can access it and why, note any exceptions (e.g. disclosure of threats of harm), and ensure people are not identified in any reporting.

Research merit and safety. Use sound and known methods, with quality assurance built in. Quality assurance might include peer review, expert reference groups, or public communication.

Consider the particular needs of the population you serve. You may work with people who are vulnerable, over-researched, subject to statutory involvement, or fearful of saying no, or with communities where there are particular cultural considerations. Consider the impact of participation on people, and remember that research takes place in a political context. It is also important to consider the ethical requirements for evaluators:

Systematic inquiry. Assessment should be rigorous, include a discussion of limitations, and avoid overclaiming.

Respect for people. Respect the rights, privacy, confidentiality, and dignity of all involved.

Competence. Adhere to research standards and rigor; reporting should be comprehensive and accessible.

Integrity/honesty. Disclose conflicts of interest; report fairly and accurately.

PILOTING: WHY, WHEN, HOW MANY?

A pilot program is a small-scale, short-term trial that helps an organization understand how a program might work in practice. The pilot precedes the implementation of the full-scale program, and its purpose is to identify shortfalls and opportunities to improve delivery so the desired outcomes are attained for the target population. It may generate preliminary information on the extent to which intended outcomes may be achieved, although findings from a pilot evaluation do not translate directly into findings from the full program evaluation. Pilots are also a good opportunity to test processes and learn how to better operationalize and implement the program in the future.

There is little consensus on the sample size necessary for a pilot study, as this often depends on the purpose (to validate scales, or to test program implementation or validity), target population, funding, and time. Recommended sample sizes are 10-15 participants per group for feasibility studies, 25-40 participants for instrument development, or 30-40 participants per group for pilot studies comparing groups – a sample that is “representative of the population and sufficiently large, respectively”. It is essential to evaluate the results of pilot studies, including outcome and process evaluations, to assess the extent to which intended and unintended outcomes were achieved and whether the processes need further revision. This may be a good time to rework your planned program using your theory of change and logic model.

The pilot should:

  • be implemented according to the theory of change and logic model underpinning the program
  • engage a sample that is representative of the population targeted by the program
  • be evaluated to understand the potential for improvement and scaling

ANALYSIS OF IMPACT

Outcomes measurement and evaluation empower organizations to understand the change their activities cause for the people they support, or the extent to which a program contributes to resolving a social problem. Distinguishing between attribution and contribution is essential. Change, especially long-term change, is often difficult to attribute to a single intervention – for example, improved concentration among participants may reflect your program but also changes at school or home occurring at the same time – hence discussing contribution rather than attribution is often preferred.

Steps for contribution analysis:

  • Develop the theory of change and logic model.
  • Assess the existing evidence on your program’s results (Evidence that the program’s activities produced the expected outputs and the expected, and unexpected, outcomes).
  • Assess the alternative explanations (The extent to which external factors may have influenced the same outcomes).
  • Assemble the narrative (Why is it reasonable to assume that the actions of the program have contributed to the observed outcomes? Clarify the credibility of, and weaknesses in, this rationale).

Working through these steps involves collecting data from a range of stakeholders, both internal (e.g. direct beneficiaries) and external – people knowledgeable about the program (e.g. local community members). There are also some established techniques to isolate the impact of a program. They rely on measuring change compared to what would have happened had the program not been implemented, as sketched below.
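
One common such technique is a difference-in-differences comparison between participants and a similar group of non-participants. The sketch below is a minimal, hypothetical illustration in Python; all of the scores are invented for the example:

    # Hypothetical mean outcome scores (e.g. well-being on a 0-100 scale).
    program_before, program_after = 50.0, 62.0        # program participants
    comparison_before, comparison_after = 51.0, 55.0  # similar non-participants

    # Change in each group over the same period.
    program_change = program_after - program_before            # 12.0
    comparison_change = comparison_after - comparison_before   # 4.0

    # Difference-in-differences: the change beyond the comparison group's trend,
    # i.e. beyond what would likely have happened without the program.
    estimated_effect = program_change - comparison_change
    print(f"Estimated program effect: {estimated_effect:.1f} points")  # 8.0

The credibility of such an estimate rests on the comparison group genuinely resembling the participants; in practice a statistical model with controls would be used rather than raw means.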

DATA ANALYSIS

Data analysis will depend on the type of data and the timing of its collection. Qualitative data is often collected at a single point in time, although ‘repeat interviews’ may be part of an outcomes measurement plan. Quantitative data may be collected at a single point in time across one, two, or more groups, requiring cross-sectional analysis to identify differences between sub-groups of participants. Quantitative data collected at two or more points in time, from two or more groups, requires more sophisticated statistical analysis.

QUALITATIVE DATA ANALYSIS

Qualitative data is often audio recorded, and you will need to transcribe it (transfer it into written format). There are specialized services that can do this for a fee.

Code, analyze, and write up the data

Coding data means dividing the data into the common topics or categories mentioned within it, almost as if you were creating your own database. Sometimes the topics or categories are those mentioned by the participants themselves, whereas at other times they might be pre-set and informed by the needs of the research (e.g. by the research questions, evaluation terms of reference, or outcomes framework). Analyzing the data then means organizing the basic topics or categories from the coding into a more sophisticated conceptual model that expresses the ideas contained within the whole dataset. Sometimes this process is informed by social theory; it often means refining the names and framing of the topics and categories. Coding and analysis can be done in Word or with pen and paper, but are more commonly done using computer software. A process of thematic coding and analysis is often used and cited as best practice – it involves the steps below (a small illustrative sketch follows the list):

  • familiarizing oneself with the data by reading and re-reading transcripts
  • generating initial codes from participants’ responses
  • searching for themes within the initial codes
  • reviewing and refining the themes
  • defining and naming the themes
  • producing a write-up of the findings
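
To make the ‘generating initial codes’ step concrete, here is a minimal, hypothetical sketch in Python that tallies how often candidate codes appear across interview excerpts. The excerpts, code names, and keyword matching are invented for illustration – real thematic coding is interpretive reading, not keyword search:

    from collections import Counter

    # Hypothetical interview excerpts (in practice, full transcripts).
    transcripts = [
        "I felt more confident after the chess club and made new friends.",
        "The club gave me confidence, though travel to sessions was hard.",
        "Making friends was the best part; I want more sessions.",
    ]

    # Candidate initial codes with illustrative keywords (an assumption for
    # this sketch; a researcher would assign codes by reading each excerpt).
    codes = {
        "confidence": ["confident", "confidence"],
        "social connection": ["friend", "friends"],
        "access barriers": ["travel", "hard"],
    }

    # Count how many excerpts mention each code.
    tally = Counter()
    for text in transcripts:
        lowered = text.lower()
        for code, keywords in codes.items():
            if any(k in lowered for k in keywords):
                tally[code] += 1

    for code, count in tally.most_common():
        print(f"{code}: mentioned in {count} of {len(transcripts)} excerpts")

A tally like this can help you spot candidate themes to review and refine in the later steps.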

QUANTITATIVE DATA ANALYSIS

  • Data collected in hard copy (i.e. pen and paper) should be digitized using available tools or a spreadsheet.
  • Conduct simple analyses, such as descriptive statistics – these will give you a first impression about how respondents answered a question, what proportion agreed with a certain statement or how many people completed your survey.
  • Conduct more complex analyses to assess change across two or more periods of time, or differences between groups – for example (see the sketch after this list):
    • Test whether a measure reported by one group (e.g. satisfaction with health among all respondents) has increased since the beginning of the program.
    • Check whether two groups are statistically different from each other (e.g. whether women’s satisfaction with health is significantly lower or higher than men’s) at one point in time (e.g. at the start or the end of the program).
    • Check whether the difference between two groups (e.g. men and women) has narrowed by the end of the program, compared to when the program started.
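
Here is a minimal, hypothetical sketch of the first two tests in Python using scipy; the ‘satisfaction with health’ scores and sample sizes are simulated for illustration:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Simulated satisfaction-with-health scores (0-100) for the same 40
    # respondents at program start and at program end (paired data).
    start = rng.normal(55, 10, size=40)
    end = start + rng.normal(5, 8, size=40)  # built-in average improvement

    # 1. Has satisfaction increased since the start? (paired t-test)
    t_paired, p_paired = stats.ttest_rel(end, start)
    print(f"Pre/post change: t={t_paired:.2f}, p={p_paired:.3f}")

    # 2. Do two groups differ at one point in time? (independent-samples t-test)
    women = rng.normal(52, 10, size=20)
    men = rng.normal(58, 10, size=20)
    t_ind, p_ind = stats.ttest_ind(women, men)
    print(f"Women vs men: t={t_ind:.2f}, p={p_ind:.3f}")

    # The third test (has the gap between groups narrowed over time?) would
    # typically use a difference-in-differences or interaction model.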

These tests, and many more, can be conducted by loading the data into statistical packages such as Stata or SPSS. Some tests can also be conducted in Excel. For example, error bars can give a first indication of whether observed differences (e.g. in the level of satisfaction with health) reflect real change since the start of the program or are due to chance: as a rule of thumb, if two values’ 95% confidence-interval error bars do not overlap, the difference is likely statistically significant, while overlapping bars are inconclusive and call for a formal test (such as the t-tests sketched above).

Evaluation and outcomes measurement can be conducted externally, by engaging a qualified researcher or evaluator, or internally, by your skilled staff. There are advantages and disadvantages to using internal evaluators, including questions about the credibility of the evaluation and bias. Ensuring you have the right skills within your organization is essential for rigorous and reliable measurement. This includes considering the skills and competencies of various people across the organization, not only those who will be undertaking outcome measurement.

SKILLS AND COMPETENCIES FOR OUTCOMES MEASUREMENT

[Figure: skills and competencies for outcomes measurement]

How findings are communicated and used is as important as the outcomes measurement itself. Effective communication supports accountability and learning through communicating about results and communicating for results. Communicating about results is what is generally understood as the communication of findings: it informs stakeholders about the findings of your evaluation. Communicating for results is also known as ‘communication for development’ or ‘program communication’ and is used as a management tool for internal learning and stakeholder engagement; this type of communication focuses on internal learning, clarity across stakeholders, and coordinated action. The most effective communication techniques capture attention and interest and allow audiences to interact with the findings. Tailor findings to the audience and consider:

  • Accuracy, balance, and fairness
  • Level of detail
  • Technical writing style
  • The appearance of the publication

Communicating negative or sensitive findings is an important aspect of communication and learning. Negative findings should be used for internal learning – to redesign an intervention, improve approaches to interacting with clients, or change how an activity is delivered. Results can point out the groups of the target population for which the intervention worked as well as those for which it didn’t, helping to identify ‘pockets of disadvantage’: groups or communities that are falling behind. This can inform tailored interventions to achieve better outcomes.

An implementation plan provides a summary of the process, roles, responsibilities, and longer-term strategy for implementing and administering your program’s outcome measurement approach. An implementation plan has three key aspects:

Integration. Establish outcome measurement processes within day-to-day activities and strategy, aligned with existing frameworks, systems, and tools.

Adjustment. Continually refine and iterate the outcome measurement approach, processes, tools, and methods.

Leadership and culture. Support a measurement culture of performance and continual learning.

Evaluation is an activity that may take weeks, months, or even years to complete. It requires a good understanding of the problem that a program seeks to resolve and of the stakeholders involved. It needs resources (people and time) and internal and/or external skills and expertise. While it may seem difficult at times, measuring outcomes is invaluable for understanding the impact of a program, the changes it makes to people’s lives, how services can be improved, who is winning, and who is missing out.


When writing an evaluation report, you must include at least the following sections:

Executive summary. A high-level summary of the evaluation – what it did and its key findings.

Introduction. Introduce the reader to the issue that is addressed in the evaluation, its importance, as well as the program, policy, or intervention that is evaluated. The description of the project may be a separate section.

Evaluation framework. Includes evaluation questions, scope, purpose, and method. Describe the parameters of the evaluation – what questions you intend to answer, what is within the scope of the evaluation, the evaluation methods, and limitations.

Evaluation findings. Use your evaluation questions to structure how you report the findings. You will use findings from across your data sources to answer these evaluation questions.

Conclusions and recommendations. A high-level summary of the successes and lessons learned, as well as how findings should be used.

References. The sources you consulted throughout your evaluation.

Appendices. Additional information, tables, or figures that the reader can refer to for further information or clarification. It may include the evaluation plan, questionnaires that were used for the data collection, and more detailed results (for example further disaggregated by gender, or age groups).

