Areas of Expertise

Impact and Performance Evaluation

IDG is at the vanguard of analytical, evaluation, and research techniques and methods. Our evaluations follow the quality principles and guidelines outlined in USAID’s Evaluation Policy (revised October 2016), ADS 201 (revised December 2019), and ADS 205 (revised April 2017). IDG conducts performance and impact evaluations using rigorous quantitative and qualitative methods to generate the data and evidence needed to improve effectiveness and inform programming. Our teams combine evaluation experts with technical experts who have specialized knowledge of the relevant sectors; many are local experts who can tap into local systems and build local capacity. We target evaluation questions to each evaluation’s specific purpose, based on a clear understanding of that purpose, stakeholder needs, and the activity, project, or program theory of change (TOC). Our teams also focus early on how to address gender equality and social inclusion (GESI) in the evaluation questions.

We are ethical researchers who protect target groups by design: women, people living with disabilities, children, and members of marginalized groups are treated as equals with rights, including the right to refuse to participate. We safeguard them at every stage, from the methods and sites we select, to the way we ask questions and build rapport, to how we analyze, present, store, and secure their data, and to how we share results when feasible.

We weigh trade-offs between the statistical rigor, feasibility, and cost of alternative designs. Evaluation methods are selected to fit the purpose of the evaluation, to generate the data and evidence needed to answer the evaluation questions, and to keep costs commensurate with the value of the results. Our designs employ multiple, mixed methods to triangulate data, analyzing it rigorously from several angles to answer questions comprehensively and credibly. We quantify intervention effects and population variables with surveys, structured observations, and available secondary data. Using existing data, including to re-create baselines, is more cost-effective than new data collection, more reliable than simple recall, and less extractive of affected populations. To understand the “how and why,” we use in-depth interviews, focus group discussions, and participatory tools such as walking ethnography, network analysis, mapping, stakeholder roundtables, and case studies. We also use rigorous but novel methods to quantify and systematize subjective data, such as process tracing and contribution tracing, Q methodology, the Qualitative Impact Assessment Protocol, Qualitative Comparative Analysis, fuzzy sets, and Rasch modeling.
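
To illustrate the kind of rigor-versus-cost calculus described above, the sketch below shows a standard sample-size calculation of the sort evaluators use when comparing survey designs. It is a generic illustration, not drawn from IDG materials: the effect size, cluster size, and intra-cluster correlation are hypothetical values, and the formulas are the textbook normal approximation for a two-sample comparison and the Kish design effect.

```python
"""Illustrative sketch (hypothetical parameters): weighing statistical
rigor against fieldwork cost when choosing a survey design."""
from scipy.stats import norm


def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate respondents per arm to detect a standardized effect
    (Cohen's d) in a two-sample comparison, using the normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = norm.ppf(power)           # critical value for the target power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2


def design_effect(cluster_size: int, icc: float) -> float:
    """Kish design effect for cluster sampling: DEFF = 1 + (m - 1) * ICC.
    Clustered fieldwork is cheaper per interview but inflates the
    required sample size."""
    return 1 + (cluster_size - 1) * icc


if __name__ == "__main__":
    d = 0.25  # hypothetical minimum detectable effect (Cohen's d)
    base_n = n_per_arm(d)
    # Hypothetical field parameters: 15 interviews per village,
    # intra-cluster correlation of 0.05.
    deff = design_effect(cluster_size=15, icc=0.05)
    print(f"Simple random sample: ~{base_n:.0f} respondents per arm")
    print(f"Clustered design (DEFF = {deff:.2f}): ~{base_n * deff:.0f} per arm")
```

With these hypothetical inputs, a simple random sample needs roughly 252 respondents per arm, while a clustered design needs about 427; whether the cheaper-per-interview clustered design is worth the larger sample is exactly the kind of trade-off weighed when matching design rigor to budget.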
