Craving critiques… Nuru Evaluation is Released
Nuru’s first ever Third-Party Evaluation is ready for review!
This is a monumental step in our journey towards developing an international poverty metric system to transform the development industry. It is not perfect. It is not ideal. But, it is a wonderfully important first step.
For Nuru to have a Research team within months of starting operations is amazing. To have piloted a Holistic Poverty Metric System and have completed a Third Party Evaluation using that system in Year 1 is visionary (to be clear, I am not saying that I am visionary, but I am saying that the commitment in Nuru’s model to research, accountability, and Measurement & Evaluation is).
It has not been easy, and we have made mistakes. People have been very understanding. We have learned a tremendous amount. We will take all the lessons from this Evaluation and our metrics work over the past year, and we will strive to improve. We will build upon these lessons to design a Metrics System 2.0, which will move us even closer to the goal of ending extreme poverty. And we invite you to critique, revise and provide feedback. Seriously! The more brains invested in this goal, the better we can serve those living in extreme poverty. Join us in the fight to use data and metrics to help end extreme poverty, to use data to truly transform – we welcome your critique.
Now, let’s get to the good stuff. The Evaluation Package is a series of documents and files (more detail below) that covers a HUGE amount of information and 164 individual metrics. If you are interested in receiving the Nuru Evaluation Package, please request a copy by emailing Nuru’s Research Director, Gaby Blocher.
Components of the 2009 Nuru Evaluation Package:
1. Nuru 2009 Evaluation Final Report with Nuru Commentary: The 68-page Evaluation Report, written by third-party evaluators, includes sections on Data Gathering Methodologies, Evaluation of the Metric System by target area, Comments on selected Metric results, Qualitative Observations on results, and Qualitative Assessment of Gender and Family Structure. Appendices include recommended modifications to survey tools, a summary of quotes from the field, and a compilation of Evaluator recommendations mentioned throughout the report.
2. Evaluation 1 Score Model: A large data file in Excel, which includes a) raw data, b) two primary household survey instruments, c) data collection sheets for business visits, health facility visits, and home visits, d) formulaic calculations and descriptions of how values were derived for all 164 metrics, and e) specific explanations as to why 53% of metrics were not included in Index Score calculations.
3. Metric System and Index Score Description: A one-page summary describing the overall Poverty Index Score system and our concerns about the possible discrepancy between the intent of the system and its actual results while in a learning, pilot phase. This document does, however, communicate the results of the Nuru Metric System through the 5 Index Score values (Water Poverty Index, Agriculture Poverty Index, Education Poverty Index, Health Poverty Index, and CED Poverty Index).
4. Score Definitions and Progressions: This Excel file is the backbone of the Metric System in that it includes baseline values, weight, and score progressions defined for each metric.
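The Excel file defines the actual values; purely as an illustration of the shape of such a system, an index of this kind is often computed by scoring each metric against its progression and taking a weighted average. All metric names, weights, and thresholds below are made-up placeholders, not Nuru's actual figures or formulas:

```python
# Hypothetical sketch of a weighted poverty index calculation.
# Every number and name here is illustrative, not Nuru's data.

def score_metric(value, progression):
    """Map a raw metric value onto a score using a progression:
    a sorted list of (threshold, score) pairs. The metric earns
    the score of the highest threshold it meets."""
    score = 0
    for threshold, s in progression:
        if value >= threshold:
            score = s
    return score

def index_score(metrics):
    """Weighted average of metric scores.
    metrics: list of (value, weight, progression) tuples."""
    total_weight = sum(w for _, w, _ in metrics)
    weighted = sum(score_metric(v, p) * w for v, w, p in metrics)
    return weighted / total_weight

# Illustrative example: two water-sector metrics.
handwashing = (0.49, 2.0, [(0.0, 0), (0.25, 5), (0.75, 10)])
disinfection = (0.24, 1.0, [(0.0, 0), (0.20, 5), (0.60, 10)])
print(index_score([handwashing, disinfection]))  # 5.0
```

The key design point is that each metric carries its own baseline, weight, and progression, so sector teams can tune what "progress" means per metric without changing the index formula.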
5. The Data Collection tools: Household surveys and in-depth interview data collection questions are contained within the Evaluation 1 Score Excel Model. Data collection tools used for the Education sector are in separate files.
Given how much information and data there is to peruse, I will do my best to give you examples of some of the highlights below.
One Important Note: The findings below are only one small part of the Nuru Metric System and only one component of the Third Party Evaluation. These values feed into a numeric system that strives to calculate a Poverty Index Score for each of Nuru’s 5 target areas – Water Poverty Index, Education Poverty Index, Agriculture Poverty Index, Health Poverty Index, and Community Economic Development Poverty Index. In addition to collecting the data that result in the values below, the Third Party Evaluators also assessed and critiqued the Nuru Metric System itself. Highlights of this assessment are captured below as well.
Key Findings from December 2009 Evaluation / Data Collection:
• 63% of Schools have sufficient Water Available Onsite Year-round, up from 0% one year ago.
• 24% of Households Disinfect Drinking Water correctly. This is up only 4 percentage points from baseline, indicating that we need to invest additional resources and effort here.
• 49% of people wash hands at most or all critical times, up from 5% at baseline.
• 47% of farmers utilize agricultural extension training, up from 0% at baseline.
• 65% of farmers are intercropping, which improves soil health and agricultural yield. This is up from less than 25% of farmers at baseline.
• Student/Teacher Ratio in Primary School has increased slightly since baseline (50.6 vs. 50), requiring further exploration to determine how to improve this situation.
• Overall completion rate of Primary School has risen from 11% to 15%, suggesting that additional effort is needed.
• 33% of Households are using bednets properly, up from 25% at baseline. Nuru may wish to use this data to further explore what worked and what did not.
• 64% of population is actively involved in “saving.” This shows progress and is up from 32% at Baseline.
• 33% of Businesses are owned by women, down from 42% women-owned businesses at baseline. This suggests that further work is needed on women’s empowerment in business, and/or that there was an inconsistency between Evaluation 1 and Baseline data collection.
• 53% of the 164 Evaluation 1 Metrics could not be reliably used in calculation of the Index Scores. Reasons are varied and are specifically noted in the Evaluation 1 Score Model file as well as pp 28-38 of the Evaluation Report. This highlights the multitude of lessons we have learned about how to do this better in Evaluation 2 and in Metric System 2.0.
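Excluding unreliable metrics raises a comparability question: if roughly half the metrics are dropped, the remaining weights need renormalizing so the Index Scores stay on a consistent scale between evaluations. A minimal sketch of that idea, with entirely hypothetical metric names, weights, and a `reliable` flag that are not part of Nuru's actual files:

```python
# Hypothetical sketch: drop unreliable metrics, then renormalize
# the surviving weights so they sum to 1. All fields are illustrative.

def usable(metrics):
    """Keep only metrics flagged as reliable."""
    return [m for m in metrics if m["reliable"]]

def renormalized_weights(metrics):
    """Rescale weights of the surviving metrics to sum to 1."""
    total = sum(m["weight"] for m in metrics)
    return {m["name"]: m["weight"] / total for m in metrics}

metrics = [
    {"name": "A5 Price of Fertilizer", "weight": 1.0, "reliable": False},
    {"name": "Handwashing", "weight": 2.0, "reliable": True},
    {"name": "Bednet use", "weight": 2.0, "reliable": True},
]
kept = usable(metrics)
print(renormalized_weights(kept))  # surviving weights now sum to 1
```

Whether renormalizing (versus treating missing metrics as zero, or re-baselining) is the right call is exactly the kind of design decision a Metric System 2.0 would need to document.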
Key Metric System Recommendations from Third Party Evaluators:
The following key gaps exist in the current Metric System:
• Oversimplifies complex realities
• Does not capture qualitative changes in people’s lives
• Misses many gender-specific indicators
An Improved Metric System should:
• Ensure that indicators / datapoints are selected strategically to maximize community benefit and minimize cost of collection
• Include both quantitative rigor and qualitative flexibility in design of the next Metric system
• Integrate gender-sensitive indicators and gender-focused programming in all 5 target areas of the Nuru model
• Emphasize community participation and local leadership in metrics and evaluation, as has occurred with local Nuru strategy and programs
• Include revised metrics and indicators. See pp 8-28 of Evaluation Report for further detail. Examples include:
– Revise the output metric of “% population trained in water treatment methods” to an outcome metric of “% of population using correct water treatment methods.”
– Determine a) specific type of malnutrition Nuru wants to measure and b) most reliable method of measuring specified type of malnutrition.
– Define metrics more clearly, such as what is meant by “B21 – Average Days Required to Open a Business.”
– Remove metrics that tend to serve more as useful datapoints than indicators of poverty, such as A5 Price of Fertilizer and A6 Price of Seed.
Whew. So those are the highlights. The Evaluation Package has landed.
Just for Fun – Eval By Numbers:
164 – number of metrics used in Evaluation 1 in Kuria
19 – rainstorms Evaluation Team sought refuge from during data collection
17 – number of villages, according to local chiefs, in the Nuru catchment area
159 – approximate number of boda boda motorcycle rides used to manage data collection across the catchment area
527 – number of households surveyed
14 – approximate number of Nuru Research Volunteers who tirelessly donated their time to assist with data entry
34 – Kenyan data collectors trained
16 – Kenyan data collectors / interviewers hired
321 – number of chapattis eaten by the Evaluation Team over the course of data collection
24 – number of days of the Official Evaluation in Kenya (Nov 27-Dec 20)
405 – approximate number of person-days spent on data collection for Evaluation 1
8 – number of schools interviewed