
Framework for Monitoring and Evaluating Inclusive Technologies in Social Change Projects

For non-SIMLab people: Welcome, friends!

Thank you for visiting. We’re delighted to be sharing this document with you and we’d love your feedback. You’re also welcome to use it in your own work - let us know if you do! And feel free to remix, reproduce and use the text any way you like, as long as you give us the credit and share your work under the same license. Read more at the bottom of the page.

Our M&E Framework is pretty long, because it’s everything we want our staff to know about how we do M&E at SIMLab.

Key pieces of this document that might interest you:

As with all our learning and best practice resources, the Framework is shared publicly

  1. So that our partners can refer to it, and potentially adapt elements of it in their own work;
  2. So that others can comment on and improve the Framework, and
  3. As a contribution to the thinking of the wider sector about challenges and approaches to teasing out the contribution that inclusive technologies make to social change work

Shortlink: http://simlab.org/resources/mandeoftech/

How SIMLab understands monitoring and evaluation

About this Framework

What this Framework Doesn’t Cover

About SIMLab

Our organizational principles

Commit to learning from our work and operationalizing what we learn; tolerate risk; and acknowledge failures

Encourage ecosystems of collaboration and openness; and as far as possible use existing tools, platforms and resources rather than creating our own

How SIMLab works

Advisory capacity, or consortium partner without direct responsibility for implementation or aspects of M&E

Lead implementer, or consortium partner with direct responsibility for aspects of M&E

Why create a Framework for M&E of Inclusive Technologies in social change projects?

SIMLab’s Evaluation Criteria

Criterion 1: Relevance

Criterion 2: Effectiveness

Criterion 3: Efficiency

Criterion 4: Impact

Criterion 5: Sustainability

Criterion 6: Coherence

The Program Cycle

Program Planning Phase

Theories of Change and Logic Models

Program-level ToCs

Logic Models

Indicators

Baseline

Implementation/Monitoring

Creating a monitoring plan

Evaluation

A typical SIMLab evaluation process

Learning and dissemination

Make recommendations

Make findings accessible and relevant

Visualize data

Share findings

Conclusions

Bibliography

Annexes

Annex 1: Evaluation concepts, methodologies and approaches that may be helpful in evaluating inclusive technology projects

Contribution Analysis

Cost-Benefit and Cost-Effectiveness Analyses

Value for Money

Impact Evaluation/Impact Assessment

Outcome Mapping

Outcome Harvesting

Complexity Aware Monitoring

Case Studies

Problem-driven Iterative Adaptation

How SIMLab understands monitoring and evaluation

A monitoring and evaluation (M&E) process is put in place for 3 main purposes:

  • As a management tool to drive change
  • As an accountability tool
  • To provide lessons and learning

M&E may also be used to inform future funding decisions, judge the performance of contractors, or gather evidence to establish whether a particular approach is useful. In SIMLab’s case, we are also interested in examining how a particular inclusive technology, or inclusive technology overall, contributes to wider programmatic goals. Our M&E findings should thoroughly test, and may serve to prove or disprove, the validity of certain approaches to using inclusive technology.

What do we mean by ‘inclusive technologies’?

We define inclusive technologies broadly: those that have broad reach and relatively low costs, are easy to use, rely on existing infrastructure, and use common data formats. Examples of inclusive technologies include SMS, radio, voice telephony, and even blackboards and megaphones. They can be knit together to extend accessible systems and services to hard-to-reach populations.

Monitoring and evaluation are two different phases of one cyclical process which influences all phases of a program and, ideally, feeds into future program design.

Monitoring refers to an on-going, periodic process of tracking implementation with the primary purpose of informing day-to-day project management decisions and tracking how an initiative is progressing. In some programs, monitoring includes “real-time” data and feedback from program participants that can inform immediate decisions.

Evaluation is a more discrete activity: the systematic and objective assessment of an ongoing or completed project or program, looking at its design, implementation and results. Evaluations may also aim to determine the worth of an activity, intervention or program.

A third concept, review, refers to the periodic or ad-hoc assessment of the performance of an intervention. Evaluations tend to be more comprehensive, and reviews often focus on operational aspects rather than wider impact. (OECD 2002)

For us, learning is also a critical element, in which we ensure that the insights we have gained from our M&E are shared within our SIMLab team, and wherever possible, with others, in easily-digestible formats. Learnings should also inform best-practice guidance like this Framework, and contribute to our understanding of what it is to do good inclusive technology work.

About this Framework

This M&E Framework aims to guide SIMLab staff in measuring our work and determining, to the degree possible, the contribution of inclusive technology to the outcomes and impact of our implementation projects.

What follows is intended as a minimum M&E standard for SIMLab staff to follow at each phase of the program lifecycle (planning, implementation and monitoring, evaluation, and dissemination of learning). However, the Framework includes guidance and tools that can be useful for projects at any stage. It should supplement and refer to existing M&E best practice resources, rather than seeking to rewrite or replace them.

The Framework can be applied across SIMLab’s continuum of program and partnership modalities, whether SIMLab is implementing or conducting M&E directly, partnering with a larger program, working with a community-based organization, or some other set-up.

As with all our learning and best practice resources, the Framework is shared publicly

  1. So that our partners can refer to it, and potentially adapt elements of it in their own work;
  2. So that others can comment on and improve the Framework, and
  3. As a contribution to the thinking of the wider sector about challenges and approaches to teasing out the contribution that inclusive technologies make to social change work

This project is a work in progress that will be publicly available under an open license, and regularly updated and improved with support from the ICT4D, aid and development communities in the hopes that it might serve as a resource for others who are working with inclusive technology. We gratefully recognize that this work was made possible by support from the UK Department for International Development and the Hewlett Foundation.

What this Framework Doesn’t Cover

This Framework is not aimed at providing guidance on program design and planning. However, a good understanding of the M&E process and of the areas that would be assessed in an evaluation is useful for informing program design, and it’s important to build learning from monitoring and evaluation into program design and implementation.

Additionally, we do not delve into the range of ways that technology can support M&E itself, e.g., using technological devices to collect, analyze and visualize data for M&E of programs, whether or not the projects themselves use technology. This is covered in depth in other resources highlighted in the bibliography (e.g. Bamberger & Raftree, 2014).

At present, this Framework focuses on M&E at the project level, and does not seek to support evaluation of the overall impact of technology on a field or a broad geographic area.

About SIMLab

Social Impact Lab (SIMLab) helps to build accessible, responsive and resilient systems using inclusive technologies, supporting people and organizations to overcome both the technological and human obstacles along the way. SIMLab believes that equitable participation of marginalized and ‘last-mile’ populations in public, economic, and social life contributes to a more just world. We believe that increasing systemic adoption and use of inclusive technologies leads to greater access to services for all populations, accountability and responsiveness of institutions, and resilience of societies.

Our organizational principles


In 2015, SIMLab developed core principles to guide us in our work and behavior, with colleagues and partners, and in our decision-making. Two in particular prioritize learning from our work and sharing our findings.

Commit to learning from our work and operationalizing what we learn; tolerate risk; and acknowledge failures


We will invest in processing, documenting, and operationalizing learning from our implementation work, and work towards doing so with our best practice and advocacy work. We proactively share learning with others as case studies, published evaluations, blog posts, and tools and guides, all under open, attributable licenses. We publicly acknowledge our successes and failures at all levels of the organization and in all our areas of work.

We will invest in creative communications strategies to make this meaningful and impactful, such as podcasts, webinars, events, long and short format written papers, and short blog posts. We will share resources in accessible and inclusive formats, using simple, clear language. We will translate resources where we can.

As part of the learning process, we will conduct rigorous monitoring and evaluation on our projects, implementing changes as needed. Monitoring will be used to make adjustments to programs as they are happening, and evaluations will be conducted at the end to ensure we are holding ourselves accountable. It is not enough to just present the data and findings. SIMLab will include a recommendations section in each evaluation that takes the findings and presents several opportunities for institutional learning and change. Making and implementing recommendations ensures that we and others are able to usefully apply our M&E efforts to our future work.

We recognize that operationalizing this principle means being willing to ask our donors, supporters and partners to structure partnerships that allow agility in our ideas and projects, challenging the power structures inherent in much of social change work. We recognize that this may narrow our ability to work with some actors.

Encourage ecosystems of collaboration and openness; and as far as possible use existing tools, platforms and resources rather than creating our own

Coming from a history of software development and close relations with platform providers, we are strong advocates for open design elements, like APIs and building on common platforms, that lead to interoperable systems and an ecosystem of mutually supportive products. Our commitment to sharing learning and support of open licensing is linked to this.

We seek always to be collaborative, rather than competitive, and to break down silos between sectors and specialisms.

How SIMLab works

SIMLab projects follow a range of modalities, loosely broken down below, with guidance on what might be required in each case. This is advisory only - this is always a judgement call and should be signed off by the Head of Programs/CEO. Where no one category perfectly fits a particular situation, be guided by those which are closest to true.

Advisory capacity, or consortium partner without direct responsibility for implementation or aspects of M&E

Here, SIMLab may be working with a partner with an existing M&E approach. Use this guide to help them supplement their lines of enquiry, and where necessary, their methodology, with inclusive technology-focused issues. We may want to consider our role in the process, e.g. collecting information directly, or influencing the terms of reference or grant agreement so that M&E on the contribution of inclusive technology is included.

If they do not plan to expend any resources on M&E, consider whether the project is likely to meet the standard set by our principles (see above), and discuss our continued involvement and ongoing strategy with the Head of Programs/CEO.

Lead implementer, or consortium partner with direct responsibility for aspects of M&E

Here we would expect to develop an M&E approach for the project, the scale of which should be informed by

  • The scale of the proposed project
  • Available time and budget for M&E
  • Partner capacity to conduct M&E, and mandate to or interest in conducting M&E on technology
  • The research or evidence aspirations of the project team
  • Partner (and donor) openness to change to the planned approach based on incoming monitoring information
  • Partner interest in learning and sharing lessons from implementation
  • Availability of existing relevant data and analyses

In some cases, the only, and often the most appropriate, way to make space for M&E on the technology aspects is to tag a few questions onto an existing program evaluation design.

Even when an external party does not require an evaluation, SIMLab should capture and document learning to improve future efforts. SIMLab should conduct its own internal evaluation process when:

  • Our portion of the project funding exceeded $50,000 (between $10,000 and $50,000, an After-Action Review meeting is advisable)
  • We are testing a new project approach, technique or way of working, and wish to document learning from it
  • The Project Steering Group feels that our implementation has experienced challenges which should be formally documented
  • There is no planned evaluation, or the evaluation is not likely to capture learning relevant to SIMLab.

Why create a Framework for M&E of Inclusive Technologies in social change projects?

The use of inclusive technologies in development and social change is maturing, and should move beyond pilots and prototypes to longer-term interventions, grounded in existing learning and best practice, with more rigorous evaluations that specifically review the contribution that inclusive technologies make to our work. To date, robust evidence of the contribution that inclusive technologies make has emanated more from research than from project-level M&E.

Pressure is growing to bring this evidence to bear, and to move beyond continually repeating the same mistakes. We have seen a new effort to consolidate best practice, establish measures of impact for programming that include the contributions of newer kinds of technology components, and focus energy on new boundaries of scale and effectiveness. This can be seen in the Principles for Digital Development, as well as sector-specific efforts like DFID’s Conflict, Crime and Violence Results Initiative, which offers a series of papers and guides on good practice in security and justice issues (see the bibliography for more information on these resources).

Accordingly, more emphasis needs to be given to: how well a particular channel, tool, or platform works in a given scenario; how it contributes to development goals in combination with other channels and tools; how the team selected and deployed it; and whether it is a better choice than not using technology or using a different sort of technology.

This doesn’t mean that M&E tools, guidelines, and systems that are used for other scenarios are not relevant to inclusive technologies. Rather, there are additional considerations, and particularly relevant approaches, that we propose might be helpful to consider in constructing an M&E plan.

How well the ideas shared in this guide work will depend on the skills and capacities of those using them. Some users of this toolkit may not be familiar with M&E processes, and for them, we recommend further reading, for example, at the Better Evaluation website (see bibliography for further suggestions). SIMLab staff should discuss training options with their line manager. Other users may be very familiar with M&E, but less familiar with inclusive technologies, in which case we recommend using this guide to improve understanding of the various nuances of monitoring and evaluating technology-enabled programming.

What’s different about M&E of inclusive technologies in social change projects?

Technology often adds an additional layer of complexity to an already complex project. A huge range of factors is in play, including the technology itself; the content or messaging passed through it; the organization managing the response to communication via the technology; the network; cultural factors; and the capacity and skills of the individual managing training on the technology. Technology tools may be used in conjunction with one another. Determining the exact contribution of technology to a wider program goal, and the wider, unintended consequences of it, is therefore quite complex.

Technology projects are frequently new operational partnerships: technologists piloting with implementing agencies, perhaps together with research organizations or an involved donor. These actors may have different priorities in terms of what to measure.

Technologist partners, such as those providing the platform or tool, may be involved only in the early stages of a project - disappearing from active involvement after development and rollout are complete. They may be accustomed to very quick cycles of prototyping, testing, and iterating, which may not be well understood or documented by traditional M&E professionals, and which may elapse before M&E can contribute new learning to the design. Technologists may also be most interested in, and may build analytics to measure, the effectiveness of the user interface or the usability of a tool, rather than the longer-term impact of a wider effort. On the other hand, implementing partners for whom technology is a new operational lens may not have a clear idea of what to look for, how to measure the success of the technology roll-out itself, or how to track and assess the ways that a technology component is (or is not) contributing to wider impact or change.

Additionally, some of the impacts most keenly anticipated by technologists - improvements to efficiency, effectiveness, and ease of communication, for example - are often indirect contributions to a larger social, economic, or political goal. These impacts represent the ‘business-case’ argument for incorporating better technology into any program’s operations, and are only indirect contributions to the development-focused goal which traditional M&E focuses on. Technology may be used for staff management and communications, helplines or incident reporting, stock tracking, and data collection, all of which are more akin to capacity-building than the kinds of innovation that attract awards and headlines. Baseline information of this type is particularly hard to come by from smaller partners, leading to a lack of hard data on changes after implementation, even if anecdotal evidence shows positive results. In addition, it is more difficult to understand technology’s contribution to impact and community-level change under these circumstances.

Finally, some aspects need to be addressed which are not familiar to traditional implementers. SIMLab’s evaluation criteria include sustainable business models, ethical data practices, and security, privacy and protection, in addition to organizational development practice and support for innovation. Data security questions are particularly difficult, as program staff may end up handling private data and personally identifiable information without a clear understanding of ethical concerns and information management practices, much less legal policies or frameworks, to guide them in the security and storage of these data. In addition, particular concerns arise around privacy and protection, especially of vulnerable populations, once new technologies are incorporated, and around the complex continuums of risk and behavior that play off one another when traditional and digital processes or activities are combined.

Accordingly, few resources are allocated to conducting rigorous M&E on the role of inclusive technologies. Smaller organizations tend to have limited capacity for conducting M&E, and larger organizations’ M&E teams often lack experience measuring the role and contribution of inclusive technologies to impact. Because the focus of the evaluation is on the wider impact rather than the role of the technology itself, there may be little motivation to work on teasing out and understanding the contributions of inclusive technologies and systems. There may also be little interest in analyzing the more systems-oriented improvements to efficiency and information management that may occur where the technology and human aspects meet, where they don’t directly impact program delivery. Platform developers, such as Ushahidi or FrontlineSMS, who might be more motivated, normally rely on partners to conduct M&E, because their platforms are part of wider programs.

In SIMLab’s experience, this leads to a shortage of concrete evidence of the impact of technology in development and aid programs, and in particular comparative data on different platforms, approaches and strategies. As the field matures, it is appropriate to try to build this evidence base where we can.

SIMLab’s Evaluation Criteria

In this section we outline SIMLab’s criteria for evaluating inclusive technology. They should be read as supplemental to the OECD-DAC criteria, rather than replacing them.

  • Relevance - The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.
  • Effectiveness - A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.
  • Efficiency - Efficiency measures the outputs -- qualitative and quantitative -- in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology possible in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools, platforms, channels and processes have been adopted.
  • Impact - The positive and negative changes produced by the introduction or change in a technology tool or platform on the overall development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the technology tool or platform on the local social, economic, environmental and other development indicators. The examination should be concerned with both intended and unintended results and must also include the positive and negative impact of external factors, such as changes in terms of trade and financial conditions and digital information and communication ecosystems.
  • Sustainability - Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.
  • Coherence - Coherence is related to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented.

These criteria were developed by adapting a set of widely used M&E principles (originally created by the OECD Development Assistance Committee - OECD-DAC) to our context and building on the work that the Active Learning Network for Accountability and Performance (ALNAP) did in 2006 to adapt these criteria to complex humanitarian settings (See box).

Below we provide an in-depth explanation of SIMLab’s criteria for M&E of inclusive technology programming by showing the original OECD-DAC criteria (in italics), the ALNAP adaptation (in italics and where relevant), and key questions that SIMLab believes should be asked in order to monitor and evaluate inclusive technology programming.

The DAC Principles for Evaluation of Development Assistance (the OECD-DAC Criteria)

In their 1991 DAC Principles for Evaluation of Development Assistance, the OECD-DAC laid out five principles of evaluation to guide DAC member states. The Principles are further defined in the Glossary of Key Terms in Evaluation and Results Based Management.

These principles were subsequently developed into five specific criteria which are today widely used in development evaluation: (i) relevance, (ii) efficiency, (iii) effectiveness, (iv) impact, and (v) sustainability.

ALNAP later adapted and expanded the criteria specifically for use in evaluating complex emergencies: (i) relevance, (ii) connectedness, (iii) coherence, (iv) coverage, (v) efficiency, (vi) effectiveness, and (vii) impact. (ALNAP, 2006)

The criteria are meant to be used together in a complementary fashion. Better Evaluation notes the following supplementary advice from ALNAP (2006):

  • Criteria often overlap, and the same data can be employed for different criteria.
  • ALNAP identifies eight cross-cutting themes which evaluators should always carefully consider when employing the DAC criteria: local context; human resources; protection; participation of primary stakeholders; coping strategies and resilience; gender equality; HIV/AIDS; and the environment. While an evaluation need not include every theme, a rationale should be given for excluding any.
  • While widely used, the DAC evaluation criteria are too often employed mechanistically. They are a valuable guide for framing questions and designing evaluation, but reliance on them should not prohibit more creative processes for evaluation.
  • Feedback has shown that many evaluators employ the DAC criteria to ask questions about results rather than processes. There is, however, much room for the five sets of criteria questions above to prompt consideration not only of ‘what’, but also of ‘why’ – for example, not only “what real difference was made to the beneficiaries as a result of the activity?”, but also “why was that difference made or not made?”

Criterion 1: Relevance

DAC Definition of Relevance: The extent to which the aid activity is suited to the priorities and policies of the target group, recipient and donor.

DAC Guidance on Relevance:

In evaluating the relevance of a programme or a project, it is useful to consider the following questions:

  • To what extent are the objectives of the programme still valid?
  • Are the activities and outputs of the programme consistent with the overall goal and the attainment of its objectives?
  • Are the activities and outputs of the programme consistent with the intended impacts and effects?

SIMLab Definition of Relevance - The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

SIMLab Guidance on Relevance

In addition to the above DAC orientation on relevance, and drawing on ALNAP’s suggestions under this principle, evaluators should consider:

  • To what extent was an analysis of context and an adequate needs assessment conducted? Did the implementer have a good grasp of the context and the communications and information needs and habits of the target population? (cf. also the SIMLab Context Analysis Framework).
  • To what extent was there sufficient institutional capacity, staffing capacity, local knowledge and experience in the country or region to implement a relevant and appropriate project? This may include organizational readiness for innovation, capacity to manage technology and infrastructure capacity, among other factors.
  • To what extent was the choice of the technology tool context-appropriate and informed by user needs and habits, device ownership, network coverage, literacy and education levels, and other context-specific aspects? Was the tool designed appropriately for the skill level of the intended users?
  • To what extent were the target population and other key stakeholders, including staff and management of the implementing organization(s), involved in the design of the communications mechanism, tool or platform? To what extent were they involved in reviewing prototypes and suggesting adjustments?
  • To what extent does the implementing organization have the necessary technological and operational capacity to take on the management of technology platforms, manage incoming information, and maintain interactive communications with communities? How does the organizational culture allow for risk tolerance and openness to innovation? 
  • To what extent does the initiative take into consideration the target population’s existing portfolio of digital communication, tools and platforms? How was content localized for the target groups?

Criterion 2: Effectiveness

DAC Definition of Effectiveness: A measure of the extent to which an aid activity attains its objectives.

DAC Guidance on Effectiveness: In evaluating the effectiveness of a programme or a project, it is useful to consider the following questions:

  • To what extent were the objectives achieved / likely to be achieved?
  • What were the major factors influencing the achievement or non-achievement of the objectives?

SIMLab Definition of Effectiveness: A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

SIMLab Guidance on Effectiveness: 

In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms may be designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc) matters in terms of cost and effectiveness.

Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.

With technology, plans often do not long survive contact with reality, and adjustments and snags are predictable occurrences. What matters is that feedback and failures were acknowledged, that there were systems and communications channels to deal with them and incorporate learning and required changes, and that the technology challenges did not throw off or undermine the effectiveness of the wider project.

The following questions can serve as a guide for evaluating the Effectiveness criterion.

  • To what extent did the selected communications channel harmonize with the information and communication habits and needs of the target population? To what extent did the technology tool(s) or platforms or combination of them meet the information and communication needs of the overall project? How comfortable were implementing partners with the tool/platform? Did users of the tool have access to high-quality support? If multiple systems or communications channels were used, how well did they work together?
  • How did the technology tool or platform perform? Is it largely free of bugs and errors? Is it available in the necessary languages or easily translated?
  • If a digital process or channel replaced a non-digital one or an existing digital process was enhanced by or replaced with a new one, how did the new digital channel compare to the previous way of doing things? What were the differences in terms of meeting the specific objectives for which the new tool was introduced?

Criterion 3: Efficiency

DAC definition of Efficiency: Efficiency measures the outputs -- qualitative and quantitative -- in relation to the inputs. It is an economic term which signifies that the aid uses the least costly resources possible in order to achieve the desired results. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been adopted.

DAC guidance on Efficiency: When evaluating the efficiency of a programme or a project, it is useful to consider the following questions:

  • Were activities cost-efficient?
  • Were objectives achieved on time?
  • Was the programme or project implemented in the most efficient way compared to alternatives?

SIMLab definition of Efficiency: Efficiency measures the outputs -- qualitative and quantitative -- in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach (including both the tech itself, and what it takes to sustain and use it) possible in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted.

SIMLab guidance on Efficiency: SIMLab looks at the interplay of efficiency and effectiveness, and at the degree to which a new tool or platform can support a reduction in cost and time, along with an increase in the quality of data and/or services and in reach/scale. A purely illustrative cost comparison follows the guiding questions below.

The following guiding questions can help an evaluator understand and gauge Efficiency:

  • Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled?
  • If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool?
  • To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?
  • To what extent was data collected by the technology in the project also used to provide data for the monitoring and evaluation stages? To what extent did this offer data that enabled improvements in roll-out, uptake of the tool/platform, or feedback that informed the overall program?
  • How much time was spent providing user support? To whom (organizational users vs. end users)? Were adjustments made based on what was learned from those asking for support? Did the need for additional support diminish over time?
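
To make the cost-comparison questions above more concrete, the short sketch below (Python, purely illustrative) compares the cost per usable report for two hypothetical channels. The helper function, the channel descriptions and every figure are invented assumptions, not SIMLab data; a real comparison would use the project’s own cost and volume figures and would also weigh quality, timeliness and reach, not just cost.

  # Purely illustrative: comparing the cost per usable report for two
  # hypothetical channels. Every figure below is an invented placeholder.

  def cost_per_report(fixed_costs, unit_cost, units_handled, reports_received):
      """Total cost divided by the number of usable reports received."""
      total_cost = fixed_costs + unit_cost * units_handled
      return total_cost / reports_received

  # Hypothetical channel A: bulk SMS reporting
  sms = cost_per_report(fixed_costs=500.0,     # assumed setup and training costs
                        unit_cost=0.02,        # assumed cost per SMS
                        units_handled=40000,   # messages sent
                        reports_received=3200)

  # Hypothetical channel B: paper forms collected by field staff
  paper = cost_per_report(fixed_costs=200.0,   # assumed printing and distribution
                          unit_cost=0.15,      # assumed handling cost per form
                          units_handled=5000,  # forms distributed
                          reports_received=2800)

  print("Cost per report, SMS:   $%.2f" % sms)    # about $0.41 with these numbers
  print("Cost per report, paper: $%.2f" % paper)  # about $0.34 with these numbers

In this invented example the paper-based channel happens to come out slightly cheaper per report, which is exactly the kind of result the efficiency questions are meant to surface before concluding that a technology choice was justified.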

Criterion 4: Impact

DAC definition of Impact: The positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental and other development indicators. The examination should be concerned with both intended and unintended results and must also include the positive and negative impact of external factors, such as changes in terms of trade and financial conditions.

DAC guidance on Impact: When evaluating the impact of a programme or a project, it is useful to consider the following questions:

  • What has happened as a result of the programme or project?
  • What real difference has the activity made to the beneficiaries?
  • How many people have been affected?

SIMLab definition of Impact: The positive and negative changes produced by the introduction or change in a technology tool or platform on the overall development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the technology tool or platform on the local social, economic, environmental and other development indicators. The examination should be concerned with both intended and unintended results and must also include the positive and negative impact of external factors, such as changes in terms of trade and financial conditions and digital information and communication ecosystems.

SIMLab guidance on Impact: Impact is distinct from effectiveness, which looks at the extent to which the project met its objectives. Impact relates to consequences of achieving or not achieving the outcomes.

ALNAP cautions that ‘because of its wider scope, assessment of impact may not be relevant for all evaluations, particularly those carried out during or immediately after an intervention. Changes in socioeconomic and political processes may take many months or even years to become apparent. Also, assessment of impact may need a level of resources and specialised skills that have not often been deployed in evaluations of humanitarian action to date. Therefore, evaluation of impact should be attempted only where: a longitudinal approach is being taken; there are data available to support longer-term analysis; the evaluation team includes specialists in socioeconomic and political analysis; and the commissioning agency is willing to invest in a more detailed evaluation.’ (2006)

Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here.

ALNAP’s complex emergencies criteria include ‘coverage’ as well as impact; ‘the need to reach major population groups wherever they are.’ They note: ‘in determining why certain groups were covered or not, a central question is: ‘What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?’

For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion.

Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Some sample areas of interest for SIMLab include:

  • If a digital process or channel replaced a non-digital one or an existing digital process was enhanced by or replaced with a new one, how did the new digital channel compare to the previous way of doing things? What unintended consequences were there due to introduction of or a change in technology?
  • To what extent does the choice of communications channel or tool(s) enable wider and/or higher quality of participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups?
  • How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data?
  • To what extent were ethical and responsible data protocols (see Responsible Data Toolkit, 2015) incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed? (A tiny illustrative sketch of one such measure follows this list.)
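
As a tiny illustration of the last question above, one common responsible-data measure is to avoid storing raw identifiers at all. The sketch below (Python; the salt, phone number and field names are invented placeholders) pseudonymizes a phone number with a salted hash before it is stored alongside a response. This is one small element of practice under stated assumptions, not a complete privacy or security protocol.

  # Illustrative only: pseudonymizing an identifier with a salted hash before
  # storage, so the raw phone number is never kept with the response data.
  # The salt, number and fields are invented placeholders.
  import hashlib

  SALT = "replace-with-a-project-specific-secret"  # hypothetical placeholder

  def pseudonymize(phone_number):
      """Return a stable pseudonym so records can be linked without storing the number."""
      return hashlib.sha256((SALT + phone_number).encode("utf-8")).hexdigest()[:12]

  record = {"respondent": pseudonymize("+254700000000"), "answer": "yes"}
  print(record)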

Criterion 5: Sustainability

DAC definition of Sustainability: Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.

DAC guidance on Sustainability: When evaluating the sustainability of a programme or a project, it is useful to consider the following questions:

  • To what extent did the benefits of a programme or project continue after donor funding ceased?
  • What were the major factors which influenced the achievement or non-achievement of sustainability of the programme or project?

SIMLab definition of Sustainability: Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.

SIMLab guidance on Sustainability: The ALNAP expanded criteria replace sustainability with ‘connectedness’, which ‘refers to the need to ensure that activities of a short-term emergency nature are carried out in a context that takes longer-term and inter-connected problems into account.’ ALNAP advises evaluators to look for ‘the existence of a sound exit strategy with timelines, allocation of responsibility and details on handover to government departments and/or development agencies, and adequate availability of funding post-response.’        

For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

  • If the project required financial or time contributions from stakeholders, are they sustainable, and for how long?
  • If the project costs were subsidized by a donor, has a comprehensive business model for the intervention after the end of the funding period been developed? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support?
  • Do implementers have the resources and capacity to continue to use the tool after the end of the project? Have local developers been supported and trained to provide support once donor funding ends? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs?
  • If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?
  • Was the technology tool or platform cost-effective as compared to other options of similar quality? Does it lend itself to a sustainable, long-term implementation? Is the total cost of ownership reasonable and accessible? (Include here both hardware costs, including maintenance and replacement of broken or old units, and any recurring costs such as airtime, charging or subscription.)

Criterion 6: Coherence

DAC does not have a sixth criterion. However, we have used the additional ALNAP criterion of Coherence.

ALNAP definition of Coherence: Coherence is related to the broader policy context (developmental, trade and military) within which humanitarian action was undertaken, and the need to take into account humanitarian and human rights considerations.

ALNAP guidance on Coherence: For ALNAP, coherence is linked to if and where different national or agency policies are working together or at odds with one another.

ALNAP considers that evaluation of ‘coherence’ may be the most difficult of the criteria to evaluate, in particular in single-agency, single-project evaluations. However, evaluating coherence is particularly important when there are a number of actors involved in a response, as they may have conflicting mandates and interests. ALNAP suggests asking key questions such as: why was coherence lacking or present; what were the particular political factors that led to coherence or its lack; and should there be coherence at all?

SIMLab definition of Coherence: Coherence is related to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented.

SIMLab guidance on Coherence:

We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader market, whether local, national or international. This includes compliance with national and international regulation and law.

  • Has the project considered interoperability of platforms (for example, ensured that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? (A small illustrative sketch of data export follows this list.)
  • Is the project team confident that the project is in compliance with existing legal and regulatory frameworks?
  • Is it working in harmony with, or against, the wider context of other actions in the area? E.g., in an emergency situation, is it linking its information system in with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government or wider development policy shifts?
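
As a small illustration of the interoperability point flagged above, the sketch below (Python; the field names and records are invented) writes a few project records to CSV, a common format that other tools can import, so that data is not locked into a single platform.

  # Hypothetical sketch: exporting project data in a common, reusable format
  # (CSV) so that other tools can import it. Field names and records are
  # invented for illustration.
  import csv

  records = [
      {"date": "2016-01-12", "district": "North", "reports_received": 42},
      {"date": "2016-01-13", "district": "South", "reports_received": 37},
  ]

  with open("export.csv", "w", newline="") as f:
      writer = csv.DictWriter(f, fieldnames=["date", "district", "reports_received"])
      writer.writeheader()
      writer.writerows(records)

The same principle applies to exposing an API: as long as data can leave the tool in a standard form, it can feed an ecosystem of other products rather than sitting in a silo.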

The Program Cycle

SIMLab builds M&E into the program cycle. We are interested in specifically monitoring and evaluating the development, adaptation or use of an inclusive technology tool or platform as part of a wider programmatic effort. Though we are interested in measuring progress towards the wider program goals (e.g., is there improved governance? do more people have access to financial services?) we are especially focused on the specific contribution of inclusive technology efforts towards achievement of those wider goals. The level of attention that our partners pay to this aspect and our own role in an effort affects how we design our own M&E processes, and how SIMLab’s M&E contributes to the wider M&E efforts.

Below we outline the M&E aspects that SIMLab considers in each phase of the overall program cycle.

Program Planning Phase

During the planning stage, SIMLab plans the roll-out of the inclusive technology platform or tool and also thinks about other aspects that will support us to conduct an assessment later. This includes reminding ourselves of SIMLab’s organizational Principles [link], developing a theory of change for the program (or contributing to one being developed by a partner), conducting a baseline, developing a program plan with aims and intended outcomes, planning check-in points along the way, identifying indicators for SIMLab’s piece of the program, and defining where and how we will gather the information/data we need to conduct monitoring and evaluation.

Theories of Change and Logic Models

A critical first step is the development of a Theory of Change and a Logic Model. These are different ways of describing how SIMLab understands that the effort (project, program, initiative, strategy, etc.) will contribute to impact or change.

Theories of Change (ToC) are useful for describing the pathway through which we assume that program outcomes will be achieved. ToCs work best when they are developed at an early stage and where we are thinking through how and why we believe a particular set of actions will create a certain kind of outcome. ToCs are helpful at the goal stage before deciding what kind of programmatic activities would be most relevant.

Logic Models are useful when the programmatic activities have been determined and we are at the stage of outlining program components, inputs, activities, outputs and outcomes. A logic model might be referred to as a ‘logframe’, program logic, program theory, results chain, causal model, or intervention logic. There are differences in how each of these is developed and drawn, but they all serve the same purpose of showing the logic chain between inputs and activities and expected outcomes or results.

At SIMLab, we require that project managers are able to draw up a clear ToC for the intervention and ensure that the project honors it, or that the ToC is reconsidered or our participation re-evaluated.

Theories of Change (ToCs)

ToCs can be developed for organizations as well as for specific programs or initiatives. A ToC can serve as a planning tool that encourages critical reflection on an approach being taken and the assumptions being made, whether at project or organizational level.

Program-level ToCs

A ToC is often also developed for an individual program in order to map out how the different inputs and activities lead to its desired outcomes and impact. The ToC provides an overall framework for the program, from planning to implementation, including the monitoring and evaluating process. The program-level ToC helps SIMLab better map out how inclusive technology will help achieve the stated program goals. ToC development might involve information and communication mapping with partners and individual users to better understand what the existing information and communication flows and bottlenecks are and how technology might help.

Developing a programmatic ToC for a SIMLab program (or for a portion of a program that corresponds to SIMLab), and/or contributing to the ToC of the program implementer will help SIMLab and partners to gain clarity about where inclusive technology plays a role. It will help to articulate the assumptions surrounding the program or initiative and to identify potential positive and negative outcomes or unintended consequences. The process will also help SIMLab and partners to think about ways to determine if and how the technology has contributed to achieving the wider outcomes.

For more detailed information on how to create a Theory of Change, see http://www.theoryofchange.org/. The website also offers online software that can be used to create a theory of change diagram.

Logic Models

Logic models are often created during the planning process and they help the development of the monitoring system, as well as evaluation and reporting. Some donors require a carefully-drafted, detailed logic model, while others are happy with something more high-level. For more background on logic models and the different ways that they can be developed and articulated, see http://betterevaluation.org/plan/define/develop_logic_model.

Even if a donor does not require a logic model, it is useful for technology-focused program implementers to develop one in a participatory way so that they are clear about where they are heading with their efforts. Organizations often skip this stage in tech-enabled programming, even though it could help them to better keep track of their efforts and better understand if and how they are moving towards their goals.

In Annex XX is an example of SIMLab’s logframe for a DFID-supported financial inclusion program.

Indicators

Indicators help to track progress and to later assess the contribution of inclusive technologies at the level of the intervention and at the organizational level.

Because it is very difficult to determine whether there is a direct link between the use of a particular inclusive technology and the achievement of a wider goal, SIMLab recommends trying to understand the contribution of inclusive technology to impact, rather than attempting to attribute impact to it.

Attribution vs Contribution

It is only possible to attribute change to a technology tool or platform if it’s possible to demonstrate a direct causal link between the technology tool/platform and the results. This is sometimes easy at output level, plausible at outcome level, but very difficult to do at impact level. Normally the best that can be said is that a particular technology tool or platform has probably contributed to the changes. Showing how the contribution occurred is also a complex undertaking. Normally evaluation of the technology component of a program will be limited to documenting or demonstrating how it contributed towards positive change at the impact level, but it should probably not aim to attribute these changes to the technology platform or tool.

In order to confidently say that a change happened as a direct result of an intervention, a direct causal link between the intervention and the results must be demonstrated (See Box). This can often be done at the level of outputs, but it’s very difficult to do when it comes to outcomes and impact levels because of the variety of reasons that a particular outcome or impact might have been achieved.

For example, a farmer might receive crop data by SMS for the first time ever. It could be demonstrated that the farmer accessed and/or even understood the information that was sent. It might be possible to demonstrate that the farmer’s knowledge improved because of the information received, but the farmer might have also heard similar information on the radio or talked to a friend during the same period, so the link between the SMS information and the farmer’s increased knowledge is a bit more difficult to demonstrate confidently. Even more difficult would be proving that the farmer’s income or crop yields increased as a direct result of receiving SMS information about farming, because in this case all kinds of other factors outside the farmer’s or the program’s control may have contributed, such as weather or national and global markets.

When developing indicators, SIMLab thinks about where and by/with whom these actions or changes will take place and at which levels we are expecting to see change. We are also careful to define indicators that can actually be measured and to lay out how we will collect the data that will help us to determine whether our indicator is being met. Some of this data may be accessible within the technology tool or platform, or alternatively, we may build into the software a way of automatically or periodically collecting it.
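
For instance, where an indicator concerns whether messages actually reach and engage the target group, the platform’s own logs can often supply the data. The sketch below is a minimal, hypothetical illustration (the event fields, values and the two indicators are assumptions, not any particular SIMLab tool) of turning a message log into a delivery rate and a response rate.

  # Hypothetical sketch: deriving two indicator values (delivery rate and
  # response rate) from a messaging platform's event log. The event fields
  # and values are invented and not tied to any particular tool.
  from collections import Counter

  events = [
      {"msg_id": 1, "status": "delivered", "replied": True},
      {"msg_id": 2, "status": "delivered", "replied": False},
      {"msg_id": 3, "status": "failed",    "replied": False},
      {"msg_id": 4, "status": "delivered", "replied": True},
  ]

  status_counts = Counter(e["status"] for e in events)
  delivered = status_counts["delivered"]

  delivery_rate = delivered / len(events)                        # indicator: % of messages delivered
  response_rate = sum(e["replied"] for e in events) / delivered  # indicator: % of delivered messages answered

  print("Delivery rate: %.0f%%" % (delivery_rate * 100))  # 75% with these sample events
  print("Response rate: %.0f%%" % (response_rate * 100))  # 67% with these sample events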

We also consider the purpose of the evaluation we are conducting:

  • Are the results going to be used as a management tool to drive change?
  • Is the evaluation mainly an accountability tool to determine if we have implemented according to plan and achieved the goals that were initially set out?
  • Is it being conducted to provide lessons and learning that will help us to shape future efforts?

Another key aspect that SIMLab examines when developing indicators is the structure of the program that we are involved in. 

  • Are we the implementer?
  • Are we a partner with a small organization that has little M&E capacity?
  • Are we a partner with a large organization with an M&E team who is uninterested in conducting M&E on our technology tool or platform?
  • Is there a wider program evaluation planned that we could tag a few questions onto?
  • Do we have resources of our own to do any data collection?
  • How do our partners currently assess effectiveness of their work and learn lessons?
  • What relevant data and analysis are already available?
  • What is the capacity of our partners to assess the contribution of the inclusive technology tool or platform to the wider effort?

SIMLab’s role in the evaluation is also important. If an evaluator is already in the frame, for example, we will play one type of role. If our partner has an in-house monitoring team and the evaluation will be conducted or led by them, our role will be different, and we need to start talking to the partner M&E team early about this framework and our approach to context assessment. Thus it is critical to be clear, early on during the planning stage, about the role we will play in M&E.

  • Are we expected to serve as a support?
  • Would we be collecting information directly?
  • Are we able to influence the terms of reference or grant agreement so that some M&E on the contribution of inclusive technology is included?
  • Do we have space to encourage a large INGO to include an assessment of the technology platform in their M&E plan?

Based on the above, during the initial design phase of a program or project, SIMLab puts thought into determining what and how to measure. This varies for each initiative that we are involved in based on local context and who our partners and donors are. We ensure that we’ve agreed with our partners what will be measured, and who will do what. We also ensure that resources have been allocated to M&E and review who they are assigned to. We ensure that time for conducting M&E and discussing results, as well as room to adapt the platform and the program, is built into our ongoing activity plan.

Three aspects to focus on when establishing indicators (OECD, 2011) are:

  • Defining feasible and measurable impacts.
  • Identifying which “dimensions of change” the program will address and deciding which are most important to measure.
  • Finding agreement among stakeholders on the changes the program is seeking

When deciding what changes to measure, useful questions to consider (in collaboration with partner organizations) are below. (These questions should inform the proposal stage as well, so that the proposal is written in a way that allows for evaluation.)[1]

  • Does the program clearly define the changes it wishes to achieve? Have key changes been identified at the levels of outputs, outcomes and impacts? Have these changes been precisely defined, and could they be stated more clearly and accessibly? Have specific changes that could be linked to the introduction or altering of a technology tool or platform been included?
  • Are these changes feasible? Are the outcomes expected by the program realistic? Does the program design demonstrate an understanding that the program can contribute to achieving long-term impacts but cannot guarantee these impacts on its own? Are the changes expected from the technology aspect of the program realistic? Have the links and interrelations between the technology and other program activities been clearly spelled out?
  • How practical is it to measure these desired changes? Have changes been expressed in a way that can realistically be measured (given the context, existing data and available resources?) Could program objectives be stated in a way that makes these changes easier to monitor and evaluate? Which changes should be measured at which result level(s)? Has the potential for the technology tools to support with ongoing data collection been considered in the design of the indicators and the M&E?
  • Which are the most important dimensions of change to measure? Which dimensions of change is the program addressing directly? Which other dimensions might also be highly relevant to the program? Has the right balance been struck between measuring the most important changes and keeping things simple? Have relevant technology-related dimensions been included?
  • Do all key stakeholders agree on what changes the program wishes to achieve? Is it feasible to agree on changes with all the main stakeholders, or does the program threaten the interests of major stakeholders? To what extent is there local ownership of the program? Have program designers consulted with a wide range of stakeholders and beneficiaries, including government, civil society, MNOs and others who may have future involvement? Has specific inclusion of sub-groups been sought out to ensure that less powerful, more vulnerable groups have been included in program design and in identifying desired changes?

Developing good indicators is critical to the feasibility and quality of the M&E effort. See Box below for additional good practice when using indicators and targets.

BOX: Good practice in using indicators and targets
(Source OECD Handbook on Security System Reform, 2011)

  • Invest time in the process of choosing indicators and targets. Reflect on all the options available to measure each result and refine targets and indicator sets over time as the program, the understanding of partners, and the availability of information change.
  • Identify appropriate indicators at outcome level. Ensure that the program does not only monitor outputs and that there is sufficient emphasis on changes at outcome level.
  • Minimize perverse incentives. Remember that “what gets measured gets done”. Choosing to measure one indicator may mean that the program de-prioritizes other important actions and results. Routine measurement of certain indicators can have perverse results. For example, measuring the time taken to process court cases can create an incentive for courts to work faster, but at the cost of due process.
  • Use multiple indicators or “baskets” of indicators to measure results at higher-level outcome and impact levels. A balanced set of indicators that measure different aspects and that may combine quantitative and qualitative measures is more likely to cancel out biases.
  • Use a mix of quantitative and qualitative methods to measure indicators. Quantitative indicators are often easier to collect and measure. However, quantitative indicators often do not give the full picture, and not every change that is important can easily be expressed in numerical format. Do not be afraid to use qualitative indicators where these are more appropriate.
  • Ensure that indicators and targets can reflect the needs and participation of various groups. Consider how to measure changes that are relevant to the poor and the vulnerable, especially by disaggregating data and checking for measurement biases for/against certain groups.
  • Make your indicators gender-sensitive. Measure whether men and women are equally participating in the program activities, and insist on sex- and age-disaggregated data whenever feasible. Think about whether you need specific indicators to address the different security and justice needs of women, men, boys and girls (for example, looking at the types of human rights violations to which each group is most vulnerable).
  • Promote partnership, inclusion and ownership in setting and using indicators and targets. Wherever possible, indicators and targets should be agreed jointly between the partner government and the international supporting organizations, and ideally with the participation of other local stakeholders and beneficiaries (this may include organizations that represent specific communities, such as women’s organizations, religious leaders, disability rights groups, etc).
  • Choose indicators that can be measured! When identifying indicators, consider whether this information is already available and, if not, how easy it will be to collect given the context and the resources that are available.
  • Test indicators to make sure they are valid and appropriate measures of the result you want to achieve.
  • Keep it simple. Try to measure what is most important and do it as simply and cheaply as possible. Wherever possible, use information that is already available and that is routinely collected. Build on existing information systems, particularly those of national institutions. Putting these principles into practice is not always straightforward.[2]

The theory of change may not include ‘business case’ elements relating to organizational efficiency, and SIMLab will need to decide whether it is justified and realistic to try to stretch the monitoring effort we ask of partners to include this data - for example, where an SMS platform is being used for behavior change messaging but may also have had an impact on operations.

A sample logframe is included in Annex XX, for projects where the ‘business case’ impact is core or compelling enough to want to examine. It covers:

  • communication with communities
  • staff and volunteer coordination and management
  • quality and accountability
  • time and cost savings

In general, a set of indicators will need to be developed for each separate initiative, based on the guidance in this section.

Baseline

A baseline helps to better understand the current situation and to document where things stand before an intervention starts. It helps an organization to determine later on if anything has changed. In SIMLab’s case, we would normally rely on a partner’s baseline data if we are not the main implementers. In situations where SIMLab is the primary implementer, we would need to conduct our own baseline study. The baseline data might come from a partner’s existing monitoring and evaluation (M&E) system. It might also be developed from rapid assessment studies, surveys commissioned at the start and end of the project, or from secondary data sources.

Baseline data is always critical when a performance evaluation will be conducted, because it is very difficult to measure change if there is no reliable data on the situation before an intervention started. However, creating a baseline is often expensive and time consuming, so a baseline may not be available. In this case, SIMLab might decide to conduct a smaller-scale baseline that is relevant to the specific work it will do on a program. The baseline data should help SIMLab to determine (during the mid-term or final evaluation) what changes happened and how inclusive technology contributed to those changes.

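As a simple illustration of why baseline data matters, the sketch below (Python, with invented numbers) compares an endline value to its baseline for a single indicator; without the baseline row, the change could not be calculated at all:

```python
# Illustrative sketch only, with made-up numbers: comparing endline values to a
# baseline for one indicator, which is only possible if baseline data exist.
baseline = {"farmers_using_improved_seed_pct": 22.0}   # measured before the intervention
endline  = {"farmers_using_improved_seed_pct": 41.0}   # measured at evaluation time

for indicator, before in baseline.items():
    after = endline[indicator]
    print(f"{indicator}: {before} -> {after} (change: {after - before:+.1f} percentage points)")

# The change alone does not prove the program caused it; contribution still has
# to be assessed against other possible explanations.
```
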
Evaluators sometimes try to reconstruct a baseline if one is not available, though we would emphasize that it’s always much better to take the time to conduct a baseline and avoid trying to reconstruct one! Baselines can help you make crucial program decisions, including which technologies are most appropriate for the context.

For more information on reconstructing baselines, see Bamberger, 2010.

Implementation/Monitoring

The indicators developed during the planning process are designed to help program implementers to check on core elements of the theory of change, and establish whether the project is unfolding as expected.

During implementation or “roll-out,” SIMLab aims to gather information periodically that reflects a) the process by which the team is adapting and changing (e.g., ‘iterating’) the inclusive technology and b) data on how program activities are going. The monitoring plan is based on the implementation plan, and SIMLab will collect information on activities, outputs, and intermediate results. Outcomes are looked at during the evaluation.

Monitoring is mainly an internal management function which measures how a program is performing so that managers and other interested parties can determine whether the program is achieving the anticipated results, and to make adjustments to design and implementation if needed.

One advantage of using technology tools is that it is possible to automatically collect information within the system that can help improve monitoring of the system itself. This information helps developers ensure that changes are made to the system to improve usability, uptake, etc. Alongside system-level data, there is in some cases the option to track elements like user visits, downloads or messages sent on a platform, to offer surveys and polls to check in with users, or to send out quizzes to test users’ knowledge of a particular topic on which the program has offered information.

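To make this concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical event names - not SIMLab’s actual software) of how a platform’s event log can be rolled up into the kind of system-level monitoring data described above:

```python
# Illustrative sketch only: a hypothetical event log for an SMS platform,
# aggregated into simple monitoring metrics. Names and fields are assumptions.
from collections import Counter
from datetime import date

# Each event the platform records: (date, event_type, user_id)
events = [
    (date(2016, 3, 1), "message_sent", "user_001"),
    (date(2016, 3, 1), "message_delivered", "user_001"),
    (date(2016, 3, 2), "quiz_response", "user_001"),
    (date(2016, 3, 2), "message_sent", "user_002"),
]

def monitoring_summary(events):
    """Count events by type and unique active users - the kind of system-level
    data that can feed a monitoring plan automatically."""
    counts = Counter(event_type for _, event_type, _ in events)
    active_users = {user for _, _, user in events}
    return {
        "messages_sent": counts["message_sent"],
        "delivery_rate": counts["message_delivered"] / max(counts["message_sent"], 1),
        "quiz_responses": counts["quiz_response"],
        "unique_active_users": len(active_users),
    }

print(monitoring_summary(events))
```
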
In addition, technology can, in some cases, support additional “feedback loops” into the project so that participants can offer information, opinions and recommendations on a regular basis. These can be included in the regular monitoring plan as touch points and data sources.

The use of this kind of “real time data” is still weak in many cases, as institutions are often unprepared to quickly change course or they have trouble making quick decisions to respond to feedback and/or to inform users of how they are responding. During program evaluation, the effectiveness of any established or ad-hoc feedback loops should be evaluated to see if user input was taken into consideration and whether changes were made based on the information that was gleaned from users.

Creating a monitoring plan

In order to ensure that the required data are collected, a monitoring plan and system is set up to specify collection methods and logistics: what information to collect, how, when, how often, and who should collect it. This system aims to balance SIMLab’s requirements (which may be subject to those of a donor) with the partners’ needs and capacities, and should be based on a principle of joint ownership (OECD, 2011). If some of the data collection can be automated within the technological system, as mentioned above, the process becomes easier.

The outline below shows the elements of a sample monitoring plan. It is a simplified and straightforward example; depending on the complexity of partners, tools and platforms, it will need to be adapted. A similar plan should be developed for each program that SIMLab is monitoring in order to better understand and schedule data collection throughout the life of the program. In some cases, a donor or program lead may be responsible for creating this plan; if so, SIMLab will want to contribute ideas in order to ensure that there are some measures of the technology. If this is not possible, SIMLab may develop its own small-scale plan for data collection to track its own implementation process and gather data that can be used later for evaluation purposes.


  • Indicator: Indicators are developed during the planning phase; the data collection plan helps determine whether they are feasible. Indicators from the different levels of the logic framework need to be tracked.
  • Information needed: What information will allow tracking of each indicator and enable an understanding of how the program is going for different stakeholders and groups (e.g., women vs men, youth)? Think about how reliable the data source is. Is there a mix of qualitative and quantitative data? Try to collect the least amount of data necessary to track the indicator; if an indicator cannot be measured, scrap it.
  • When to gather the data: Is there a time when these data are naturally produced or gathered? Think about how to streamline the process and avoid parallel data-gathering processes or additional tasks for already overburdened staff. Is there any way to collect ‘real-time’ data, and is there capacity to process, protect and actually use real-time data?
  • How to gather the data: What is the easiest way to find and collect these data? Think about where to find the data. Are they automatically generated anywhere? Can data collection be tied into software automation for any of the indicators? How can the burden of data collection be reduced?
  • How to store the data: Where and how will the data be stored so that they can be accessed when needed? Think about how sensitive the data are and where they will be stored. Ensure adequate levels of security and encryption, and consider national-level legal regulations on data storage and transfer.
  • Who is responsible: Who is going to collect these data? Think about how to manage the process to ensure that the data are produced as scheduled.
  • How and to whom will you report the data: Who needs to know how the program is going? Think about how the data will be used for ongoing program improvement and who will support with data interpretation. How will the data be visualized for easier understanding and action? Who is responsible for this step?

In considering all of the above, the elements of cost, capacity and responsibility need to be factored in, as data gathering can become quite time-consuming and expensive. Assigning cost and responsibility to the above factors can help with efforts to simplify, prioritize and focus.

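As a lightweight illustration of how such a plan can be kept manageable, the sketch below (Python; all column names and values are hypothetical, mirroring the outline above) stores the plan as structured rows and writes it to a CSV file that partners can filter, update and review as costs and responsibilities are assigned:

```python
# Illustrative sketch only: keeping the monitoring plan as structured rows so it
# can be shared, versioned and filtered (e.g. "what is due this quarter?").
# Column names mirror the plan outline above; values are hypothetical.
import csv

plan = [
    {
        "indicator": "% of registered farmers receiving weekly SMS crop updates",
        "information_needed": "Messages sent and delivered, disaggregated by sex",
        "when": "Monthly",
        "how": "Automated export from the SMS platform",
        "storage": "Encrypted project drive",
        "responsible": "SIMLab project officer",
        "reporting": "Quarterly dashboard shared with partner M&E team",
    },
]

with open("monitoring_plan.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=plan[0].keys())
    writer.writeheader()
    writer.writerows(plan)
```
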
In addition, privacy, security and safety protocols need to be considered. Some donors, for example USAID, now require that all data collected with their funding is handed over to be shared and re-used. SIMLab and partners need to pay close attention to donor data requirements and make decisions about the increased organizational liabilities and risk to vulnerable populations that arise with collecting, storing and sharing personally identifiable data. Capacity to keep data secure and to anonymize it should also factor into decisions to collect it.

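For instance, one common risk-reduction step before storing or sharing data is to drop direct identifiers and replace them with salted hashes. The sketch below is illustrative only (hypothetical field names; this is pseudonymization rather than full anonymization, so re-identification risk still needs to be assessed):

```python
# Illustrative sketch only: reducing risk before sharing data by dropping direct
# identifiers and replacing phone numbers with salted hashes. This is
# pseudonymization, not full anonymization.
import hashlib

SALT = "project-specific-secret"  # hypothetical; store securely, never publish

def pseudonymize(record):
    cleaned = dict(record)
    phone = cleaned.pop("phone_number")          # remove the raw identifier
    cleaned.pop("name", None)                    # drop other direct identifiers
    cleaned["respondent_id"] = hashlib.sha256(
        (SALT + phone).encode("utf-8")
    ).hexdigest()[:12]                           # stable ID for linking records
    return cleaned

record = {"name": "A. Farmer", "phone_number": "+254700000000", "district": "Kitui", "crop": "maize"}
print(pseudonymize(record))
```
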
Evaluation


As mentioned in the introduction to this guide, there are different types of evaluations and various reasons to conduct an evaluation, from learning to accountability to advocacy around a particular methodology. A huge range of evaluation methods exists, and normally those who commission or conduct an evaluation will decide which methodology to use.

In most cases, evaluation specialists are best placed to determine the most appropriate way of running an evaluation. However, the particular demands of inclusive technology may suggest specific methods, which evaluators may wish to talk through with SIMLab. Additionally, evaluators may not be as experienced with ICTs and inclusive technology approaches as SIMLab staff. Finally, SIMLab staff needs to be able to manage the evaluation process, draft evaluation ToRs, and ensure that final products are of good quality.

Below is a short overview of the evaluation process and some evaluation methods that could be discussed with evaluation specialists. In this case, SIMLab assumes that it (rather than a partner) is supporting the evaluator, although the same rules of thumb would apply for them.

As with monitoring, time, budget, and other constraints will affect what can be evaluated and this will force a certain amount of prioritization and negotiation with evaluators and donors, especially as the technology component is often overlooked.

A typical SIMLab evaluation process 

The Better Evaluation website’s “Rainbow Framework” provides in-depth information about the evaluation process along with helpful links and explanations at every stage.

SIMLab focuses on seven main areas, similar to theirs. How active SIMLab is with the evaluation will depend on SIMLab’s role in a program or initiative, as previously discussed (pages 8 and 33).

  1. Preparing for the evaluation: Outlining the process, decision-making framework, key areas of enquiry, timeframe, budget and other matters in a Terms of Reference (sample ToR in Annex XX).

  2. Logistical preparation: organizing travel, key informant interviews and any other known methodological steps that need prior arrangement.

  3. Information-sharing: At this stage, we would provide a description of the program and share the SIMLab Theory of Change or a program/project logical framework or ToC, along with other key originating project documents. This is a good point to highlight any potential unintended consequences that should be addressed in the evaluation, to emphasize the importance of evaluating the wider impact of the whole initiative, and to advocate for including a contribution analysis of the inclusive technology aspects.

  4. Support the evaluator to outline a work plan and methodological approach:
     a. Determine what the sample will be and what sampling strategies will be used
     b. Identify measures/indicators/metrics – are there existing ones or do new ones need to be developed?
     c. Select methods for data collection
     d. Identify valuable/useful data that already exist
     e. Plan to manage data securely and effectively
Here the SIMLab team can share existing monitoring plans and the data that have been collected during the lifetime of the program. In addition, existing indicators and already gathered information can help evaluators to understand and document the process of channel and tool selection, roll-out, and tool/platform quality, and to assess to the degree possible the contribution of the technology to the wider impact. A number of approaches to data collection and analysis are available, and knowing what those are can help the SIMLab team join in the discussion. Budget, time and the availability of quality data will be major factors in determining what sample sizes, methods and approaches are possible.

  5. Contribute to understanding the causes of outcomes:
     a. Combine qualitative and quantitative data
     b. Analyze and visualize the data
     c. Analyze data and see if they can support causal attribution or provide an indication of ‘contribution’ (e.g., did the cause actually lead to the effect or contribute to achieving the effect?)
     d. Compare results to the counterfactual (if there is one) to determine what would have happened without the intervention
     e. Investigate possible alternative explanations for the outcomes

At this stage, the evaluator(s) should try to answer questions about the contribution of the tools or platforms to the overall impact.

  6. Help synthesize data for assessment:
     a. Visualize the data
     b. Conduct additional analysis to form an overall assessment and to generalize the findings (can they be applied to future efforts or to other locations/sites?)

  7. Report and support:
     a. Use visualizations when applicable and relevant
     b. Make recommendations
     c. Use simple, friendly language
     d. Share the findings and support others to use them

Learning and dissemination

Sharing learning and experiences is critical to ongoing improvement of the sector. No one actor can work everywhere or experiment with all approaches. Further, country contexts and communications habits evolve continuously. A core part of SIMLab’s approach is to transparently share outcomes and learning from the projects we support and to incorporate learning from others into our work. We believe that without broader adoption of this approach, improved practice in using technology for social change may be immeasurably slowed.

Make recommendations

It is not enough to just present the data and findings. SIMLab should include a recommendations section in each evaluation that takes the findings and presents several opportunities for institutional learning and change. Making recommendations ensures that we and others are able to usefully apply our M&E efforts to our future work.

Make findings accessible and relevant

Monitoring and evaluation efforts are most useful when they engender change and are incorporated into the institutional learning process. They can offer learning and new insight into best practice, both to the organizations involved and to the broader field in which they work.

However, evaluation reports are often inaccessible and dense, reducing their impact with decision-makers. Using simple, clear language helps, especially if we want to involve and engage local communities in the evaluation process and we commit to sharing evaluation results with them for learning and improvement and to hear their interpretation of the findings. It may be useful to report findings in multiple formats - short blog posts, longer case studies, and full reports - to appeal to a wide audience and increase their usefulness.

Consider key audiences and align formats to their needs. Consider translating the information into relevant languages - including those of project participants - and make sure that the language avoids jargon.

Visualize data

One way to present data in a compelling way is to visualize it through graphs, charts, tables and other infographics. This does not mean that we need to create a visualization for all data, but that we should use them to enhance what we are trying to communicate and to highlight key findings.

Data visualization and infographic expert David McCandless advocates: “visualizing information, so that we can see the patterns and connections that matter and then designing that information so it makes more sense, or it tells a story, or allows us to focus only on the information that's important.”

Bear in mind what the data is telling you; what you are trying to communicate; and what the key data points are that should have a graph, table or chart. Decide which visualizations are compelling and add value to your report or presentation, and only use those ones.

One note is that while you may not use every visualization, creating one can help you in the analysis phase of M&E. For example, if gender is a key consideration in your program, you may use a pivot table to generate a quick graph disaggregating results by gender. However, if the results are unremarkable, then there is no need to include the graph in the report.

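As an illustration of the pivot-table step described above, the following sketch (assuming the pandas library is available; the data are invented) disaggregates survey results by gender:

```python
# Illustrative sketch only: a quick pivot table disaggregating survey results
# by gender, of the kind described above. Data are invented.
import pandas as pd

survey = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F"],
    "completed_training": [1, 1, 0, 1, 1],
    "knowledge_score": [78, 65, 70, 62, 81],
})

pivot = survey.pivot_table(
    index="gender",
    values=["completed_training", "knowledge_score"],
    aggfunc="mean",
)
print(pivot)
# If the differences are unremarkable, the table can stay in the analysis
# files rather than in the report.
```
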
Share findings

Once you have decided on which formats and methods are most useful for your goals, spread the word! At SIMLab, we like to be creative about how we reach people - we promote our findings on social media, our blog, through podcasts, hosting events (like brown bag lunches and webinars), and by telling our networks about them when relevant.

Conclusions

After ten years working with organizations testing inclusive technology approaches all over the world, SIMLab still finds that there isn’t enough evidence of the contribution technology makes to social change work. What evidence there is often is not shared or the analysis doesn’t get to the really knotty issues.

Technology-enabled interventions succeed or fail based on their sustainability, business models, data practices, choice of communications channel and technology platform, organizational change, risk models, and user support - among many other factors. We need to do more as an organization and as a sector to build and examine evidence that considers these issues and that tells us what has been successful, what has failed, and why.

This internal Monitoring and Evaluation Framework has been launched under an open license with the aim that others can take a look and offer their input and perspectives to help make the Framework better. We look forward to your contributions and feedback!


Bibliography

American Evaluation Association. Guiding Principles for Evaluators http://www.eval.org/p/cm/ld/fid=51

Andrews, M., Pritchett, L., and Woolcock, M. (2012) Working Paper 299. Escaping Capability Traps through Problem-Driven Iterative Adaptation (PDIA). Center for Global Development. http://www.cgdev.org/publication/escaping-capability-traps-through-problem-driven-iterative-adaptation-pdia-working-paper

Bamberger, M., (2005) Designing quality impact evaluations under budget, time, and data constraints. BBL Co-sponsored by OED and the Poverty Analysis, M&E Thematic Group

Bamberger, M., (2010), Reconstructing Baseline Data for Impact Evaluation and Results Measurement, World Bank

Banks, K., (2014) Donor’s Charter http://www.donorscharter.org/

Better Evaluation website http://betterevaluation.org

CEG (2002) Gyandoot: A Cost-Benefit Evaluation Study, Centre for Electronic Governance, Indian Institute of Management, Ahmedabad http://www.iimahd.ernet.in/egov/documents/gyandoot-evaluation.pdf

CoA (2006a), Handbook of Cost Benefit Analysis, Commonwealth of Australia, Canberra http://www.finance.gov.au/finframework/docs/Handbook_of_CB_analysis.pdf

CoA (2006b), Introduction to Cost-Benefit Analysis and Alternative Evaluation Methodologies, Commonwealth of Australia, Canberra http://www.finance.gov.au/finframework/docs/Intro_to_CB_analysis.pdf

Conflict, Crime and Violence Results Initiative: Good practice guides on security and justice issues. (n.d.). Retrieved from https://www.gov.uk/government/publications/conflict-crime-and-violence-results-initiative-good-practice-guides-on-security-and-justice-issues

DAC Principles for Evaluation of Development Assistance (1991). http://www.oecd.org/development/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm

Department for International Development (2011) DFID’s Approach to Value for Money (VfM).

DFID (1999) Sustainable Livelihood Guidance Sheet Section 2, DFID, London http://www.livelihoods.org/info/guidance_sheets_pdfs/section2.pdf

Goussal, D. (1998) Rural telecentres: impact-driven design and bottom-up feasibility criterion, paper presented at seminar on Multipurpose Community Telecentres, Budapest, 7-9 December

Greentree Principles for Digital Development (2014) http://ict4dprinciples.org

Heeks, R.B. (2006) Implementing and Management eGovernment: An International Text, Sage Publications, London

Intermedia. (2014) M&E Framework for Digital Projects in the Transparency and Accountability Space. Produced for Indigo Trust.

Kumar, R. (2004) eChoupals: a study on the financial sustainability of village Internet centers in rural Madhya Pradesh, Information Technologies and International Development, 2(1), 45-73 http://www.mitpressjournals.org/doi/pdf/10.1162/1544752043971161

Magnette, N. & Lock, D. (2005) Scaling Microfinance with the Remote Transaction System, World Resources Institute, Washington, DC http://www.digitaldividend.org/pdf/rts.pdf

Mayne, J. (2008) Contribution Analysis: An approach to exploring cause and effect, ILAC methodological brief, available at http://www.cgiar-ilac.org/files/ILAC_Brief16_Contribution_Analysis_0.pdf

Mayne, J. (2011). Addressing Cause and Effect in Simple and Complex Settings through Contribution Analysis. In Evaluating the Complex, R. Schwartz, K. Forss, and M. Marra (Eds.), Transaction Publishers.

Mayne, J. (2011) Contribution Analysis: Addressing Cause and Effect in Evaluating the Complex, K. Forss, M. Marra and R. Schwartz (Eds.), Transaction Publishers; Piscataway, New Jersey.

McCandless, D. TedTalk http://www.ted.com/talks/david_mccandless_the_beauty_of_data_visualization.html or http://dotsub.com/view/89fed31c-25bb-4bc9-9021-f118b2f4fd82/viewTranscript/eng

Mobile Alliance for Maternal Action (2012) Global Monitoring and Evaluation Framework. http://www.mobilemamaalliance.org/sites/default/files/MAMA_Global_MEPlan_FINAL_all.pdf

OECD 2002, Glossary of Key Terms in Evaluation and Results Based Management, OECD, Paris

OECD. Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. http://www.oecd.org/internet/ieconom/oecdguidlinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm

OECD. (2011) Handbook on Security System Reform: Supporting Security and Justice. Prepared for the International Network on Conflict and Fragility and written by Duncan Hiscock and Simon Rynn of Saferworld. http://www.endvawnow.org/uploads/browser/files/security_ssrtoolkit_me_oecd_2011.pdf

Potashnik, M. & Adkins, D. (1996) Cost analysis of information technology projects in education: experiences from developing countries, Education and Technology Series, 1(3) http://wbln0018.worldbank.org/HDNet/HDdocs.nsf/C11FBFF6C1B77F9985256686006DC949/167A6E81A893851B8525675500681C7E/$FILE/v1n3.pdf

Raihan, A., Hasan, M., Chowdhury, M. & Uddin, F. (2005) Pallitathya Help Line, D.Net, Dhaka http://www.dnet-bangladesh.org/Pallitathya_pcc.pdf

Schilderman, T. (2002) Strengthening the Knowledge and Information Systems of the Urban Poor, ITDG, Rugby, UK http://practicalaction.org/docs/shelter/kis_urban_poor_report_march2002.doc

Shakeel, H., Best, M., Miller, B. & Weber, S. (2001) Comparing urban and rural telecenters costs, Electronic Journal of Information Systems in Developing Countries, 4(2), 1-13 http://www.ejisdc.org/ojs2/index.php/ejisdc/article/viewFile/22/22

Sigauke, N. (2002) Knowledge and Information Systems (KIS) in Epworth, ITDG, Rugby, UK http://practicalaction.org/docs/region_southern_africa/kis.pdf

Stiglitz, J.E. (1988) Economic organisation, information, and development. In: Handbook of Development Economics, H. Chenery and T.N. Srinivasan (eds.), Elsevier Science Publishers, Amsterdam, 93-160.

Whyte, A. (1999) Understanding the role of community telecentres in development – a proposed approach to evaluation, in: Telecentre Evaluation, R. Gomez & P. Hunt (eds), IDRC, Ottawa, 271-312 http://www.idrc.ca/uploads/user-S/10244248430Farhills.pdf

Annexes

Collection of resources on Google Drive



Annex 1: Evaluation concepts, methodologies and approaches that may be helpful in evaluating inclusive technology projects


SIMLab and SIMLab staff may be involved in very different program structures and could be contributing to evaluation, conducting evaluation or being evaluated externally as part of a program that SIMLab is implementing, as a partner in a larger program, or as a tool for gathering data to contribute to M&E. SIMLab might be involved in a new project that incorporates inclusive technology, or might be managing a project where the inclusive technology itself is the core of the project. In addition, SIMLab might be part of a much broader effort where SIMLab tools and software are a part of a wider platform, or SIMLab might be participating in a multi-platform project. For this reason it is difficult to provide a rigid framework or set forth one approach to evaluation of SIMLab efforts, projects or programs.

In this section, instead, we provide a short overview of evaluation approaches that might be suitable for some of the types of programs and projects that SIMLab is a part of. The aim is that SIMLab staff can improve their knowledge of some of these approaches and provide greater input into evaluation processes, no matter what type of role SIMLab played in the program and regardless of the type of project.

The majority of these evaluation approaches can be found on the Better Evaluation website which provides a fuller description and additional resources on most of them. http://betterevaluation.org 


Contribution Analysis

Contribution analysis is an approach for assessing causal questions and inferring causality in real-life program evaluations. It was designed in the late 1990s to help managers, researchers, and policymakers understand and make plausible conclusions about the contribution that a program has made or is making to particular outcomes. Contribution analysis assesses the program logic and analyzes the results achieved to consider alternative explanations for those results. It then ‘builds a story’ about the contribution the program has made to the outcomes, and tests the story with stakeholders. The main value of contribution analysis is that the approach is designed to reduce uncertainty about the contribution the intervention is making to the observed results through an increased understanding of why those results have occurred (or not!) and the roles played by the intervention and other internal and external factors.

More information about this method is available at http://betterevaluation.org/plan/approach/contribution_analysis 


Cost-Benefit and Cost-Effectiveness Analyses

Cost-benefit analysis (CBA) is a technique that is used to compare the total cost of a program or project with its benefits, using a common metric (most commonly money). This enables the calculation of the net cost or benefit associated with the program. It is used most often at the start of a program or project when different options are being analyzed and compared. It can also be used as an evaluation method that assesses the overall impact of a program in quantifiable and monetized terms. (See http://betterevaluation.org/evaluation-options/CostBenefitAnalysis).

Cost-effectiveness analysis (CEA) is an alternative to cost-benefit analysis (CBA) that may be more relevant for SIMLab. Rather than try to quantify whether the cost associated with a program was worth the outcome (such as CBA does), CEA compares the relative costs to the outcomes (effects) of two or more courses of action. CEA is useful in cases where it is difficult to put a value on outcomes, but where outcomes themselves can be counted and compared, e.g. ‘the number of lives saved’. It could be quite useful for comparing traditional ways of achieving a particular outcome with technology-enabled ways of aiming for the same outcome. (See http://betterevaluation.org/evaluation-options/CostEffectivenessAnalysis)

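A minimal worked example of the CEA logic, using made-up numbers rather than real program data, might look like this:

```python
# Illustrative sketch only, with made-up numbers: comparing the cost per outcome
# of a traditional approach and a technology-enabled approach to the same result,
# in the spirit of cost-effectiveness analysis (CEA).
def cost_per_outcome(total_cost, outcomes_achieved):
    return total_cost / outcomes_achieved

face_to_face = cost_per_outcome(total_cost=50_000, outcomes_achieved=400)   # e.g. farmers trained in person
sms_campaign = cost_per_outcome(total_cost=20_000, outcomes_achieved=250)   # same outcome reached via SMS

print(f"Face-to-face: ${face_to_face:.2f} per farmer reaching the knowledge threshold")
print(f"SMS campaign: ${sms_campaign:.2f} per farmer reaching the knowledge threshold")
# CEA compares these ratios; it does not claim the outcome itself has a monetary value.
```
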

Value for Money

Value for money (VfM) is a term used in various ways, including as a synonym for cost-effectiveness, and as a systematic approach to considering these issues throughout planning and implementation, not only in evaluation. UKAID, for example, uses VfM as a key framework for assessing funding and considers it to be about ‘maximizing the impact of each pound spent to improve poor people’s lives’. The four key terms used by UKAID in defining VfM are:

  • Economy: Are we or our agents buying inputs of the appropriate quality at the right price?
  • Efficiency: How well do we or our agents convert inputs into outputs?
  • Effectiveness: How well are the outputs from an intervention achieving the desired outcome on poverty reduction?
  • Cost-effectiveness: How much impact on poverty reduction does an intervention achieve relative to the inputs that we or our agents invest in it?

Some other agencies include a fifth element in VfM, which is that programs should also be equitable. Combinations of evaluation methods are normally used to assess VfM. (See http://betterevaluation.org/evaluation-options/value_for_money)


Impact Evaluation/Impact Assessment

Randomized controlled trials (RCTs), or randomized impact evaluations, are a kind of impact evaluation that uses randomized access to social programs as a way of limiting bias and generating an internally valid impact estimate. RCTs can be costly and difficult to generalize; however, they are currently considered the “gold standard” in evaluation. Some criticize RCTs heavily for their economics approach and focus on quantitative data, preferring to work with more participatory and qualitative methods. Others consider it an ethical breach to provide services to one group and deny them to another for the purpose of testing and measuring development impact. Still others consider that there can be no real evidence of impact without a control group and an RCT. There are methods that can help to establish a comparison group (rather than a strict control) without requiring as much investment, though some consider these approaches less rigorous. In the case of SIMLab, it might be worthwhile to attempt an RCT in cases where a control group could be easily identified and the study set up with relative ease, as long as any ethical concerns could be alleviated. (See http://betterevaluation.org/plan/approach/rct and http://siteresources.worldbank.org/INTISPMA/Resources/Training-Events-and-Materials/Designing_quality_IE_under_constraints.pdf)

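The core logic of a randomized comparison can be sketched as below (illustrative only, with simulated data; a real RCT also needs power calculations, pre-registration and ethical review):

```python
# Illustrative sketch only: random assignment to treatment/control, then a
# simple difference in mean outcomes as the estimated effect.
import random
import statistics

random.seed(1)
participants = [f"p{i}" for i in range(200)]
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]

# Hypothetical outcome data collected at endline (e.g. knowledge scores),
# simulated here with a 5-point boost for the treatment group.
outcomes = {p: random.gauss(60, 10) + (5 if p in treatment else 0) for p in participants}

effect = (statistics.mean(outcomes[p] for p in treatment)
          - statistics.mean(outcomes[p] for p in control))
print(f"Estimated average treatment effect: {effect:.1f} points")
```
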

Outcome Mapping

Outcome mapping is an evaluation tool that focuses on outcomes as changes in the behavior of individuals, groups and institutions, and on the relationships between these individuals and groups. It approaches impact in a very different way from traditional methodologies that focus on more tangible “products” of a program. Outcome mapping tries to focus on the “black box” of results that emerge downstream from an initiative’s activities but upstream from longer-term, broader changes such as economic, political or demographic change. Outcome mapping starts with an initiative’s ToC and offers a framework for collecting data on the immediate and basic changes that lead to longer-term and more transformational change. It allows for a plausible assessment of the initiative’s contribution to results, which is an important element to highlight for SIMLab. (See http://betterevaluation.org/plan/approach/outcome_mapping)


Outcome Harvesting

Outcome Harvesting enables evaluators, grant makers, and managers to identify, formulate, verify, and make sense of outcomes. In this method, outcome is defined as a change in the behavior, relationships, actions, activities, policies, or practices of an individual, group, community, organization, or institution. Outcome harvesting allows an evaluator to glean information from reports, personal interviews, and other sources to document how a program or initiative contributed to outcomes, whether positive or negative, intended or unintended. It does insist that the connection between the initiative and the outcomes be verifiable. Outcome Harvesting collects evidence of what has been achieved, and works backward to determine whether and how the project or intervention contributed to the change. Information is collected from individuals or organizations whose actions influenced the outcome(s) to answer specific questions. The collected information is then validated or substantiated by comparing it to information collected from knowledgeable, independent sources. The substantiated information is then analyzed and interpreted at the level of outcomes that contribute to mission, goals or strategies and linked back to the questions that were initially posed. See http://www.managingforimpact.org/sites/default/files/resource/outome_harvesting_brief_final_2012-05-2-1.pdf 

Complexity Aware Monitoring

This approach is useful for situations where cause-and-effect relationships are not well understood. It is based on three key principles: a) synchronize monitoring with the pace of change - in extremely dynamic contexts, monitoring needs to happen very frequently, whereas in less dynamic situations it can happen less often; b) attend to performance monitoring’s three blind spots: unintended outcomes, alternative causes from other actors and factors, and the full range of non-linear pathways of contribution; and c) consider relationships, perspectives and boundaries and how they link with each other or overlap. Some recommended approaches to complexity-aware monitoring include Sentinel Indicators, Stakeholder Feedback, Process Monitoring of Impacts, Most Significant Change, and Outcome Harvesting. These are all explained further in the following paper and can also be investigated individually for more information on how to use them: http://usaidlearninglab.org/library/complexity-aware-monitoring-discussion-note-brief


Case Studies

Case studies focus on a particular unit (a person, site, or project, for example) and often use a combination of quantitative and qualitative data. They can help organizations and evaluators arrive at an understanding of how different elements of a program fit together. Case studies can be aimed at illustrating, exploring, examining a critical instance, investigating operations during program implementation, examining causal links between the program and observed effects, or providing a cumulative overview of the program’s history. Case studies are sometimes combined with quantitative methods in order to flesh out a narrative, or they may take the place of quantitative methods in situations where no baseline study was done, the data are too expensive or difficult to collect, or the organization has limited M&E capacity. (See http://betterevaluation.org/plan/approach/case_study)

Problem-driven Iterative Adaptation

Problem-Driven Iterative Adaptation (PDIA) is less an evaluation method than an approach to development in situations characterized by a need for “a) enormous numbers of discretionary decisions b) extensive and intensive face-to-face transactions, to be carried out by (c) implementing agents needing to resist large temptations to do something besides implement the policy that would produce the desired outcome, and yet do so by (d) deploying ‘technology’ (or instruments) to bring about the desired change that are largely unknown ex ante.” The approach focuses on local, problem-driven solutions; ‘muddling through’ with an authorization for positive deviance and a ‘purposeful crawl of the design space’; feedback loops based on the problem and on experimentation, with information loops that are tightly integrated with decision making; and, lastly, the diffusion of feasible practice across organizations and communities of practitioners. This type of approach may be suitable for integrating feedback and monitoring information directly into program adaptation and software improvements for an organization like SIMLab. It may not substitute, however, for more formal evaluations that may be required by donors. (See http://www.cgdev.org/publication/escaping-capability-traps-through-problem-driven-iterative-adaptation-pdia-working-paper)

         SIMLab M&E Framework for Inclusive Technologies in Social Change Projects 

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/.  


[1] These questions have been adapted from the OECD’s Guidelines on Evaluating Safety and Security Reforms, 2011. Inclusive Technology-focused additions to the original text are in italics.

[2] Despite the importance of indicators, baselines and targets, program designers often find it difficult to set them appropriately. Table 10.3 on pages 26-28 of the OECD’s Handbook on Security System Reform gives some practical examples of how these principles can be put into action.
