Nuru International is a social enterprise founded with the mission of ending extreme poverty. “We work to equip the poor in remote, rural areas to end extreme poverty in their communities within five years.” We cultivate holistic empowerment in order to build sustainable, scalable solutions based on design thinking in five program areas: agriculture; health; water and sanitation; community economic development; and education.

Now you’re thinking, “That sounds great, but what does it mean?” It means that we’re about the entire community: we don’t just focus on improving access to healthcare or disbursing microloans. We do it all, because everything is intertwined and everything matters. Only by addressing all of the problems associated with extreme poverty can we facilitate a community’s movement out of it.

At Nuru, when we use the word “we,” we’re not just referring to U.S. Foundation Team members working in the community; “we” also includes the community leaders with whom we work. Since our goal includes exiting a community within five years, it is vital that the community have strong leadership to continue its progress. For that reason, we maintain a small presence of only four or five U.S. members in the field at any time; the rest are community members selected, and paid, by Nuru. These community members are mentored in service leadership so that U.S. members can stay behind the scenes while local leaders retain access to our expertise.

That being said, we recognize that we can’t know everything. That’s why we work as a general contractor of poverty solutions in our communities. Partnering with organizations that have a greater depth of knowledge in specific program areas allows us to skip some of the trial-and-error interventions and go straight to the ones that work. Some examples include working with CAWST to provide training resources, using the agriculture training expertise of One Acre Fund, and working with Mifos to track our microloans.

Together with the local community, other organizations, and the Developed World, we implement solutions to the problems of the extreme poor.

When we arrive at a community, our first task is to work with its members to define the community’s goals within the context of an exit from extreme poverty. These goals drive every decision that follows. In defining them, we also establish a built-in reason for us to leave once they have been accomplished.

Our purpose is ending extreme poverty, but what makes this goal difficult is measuring success: how do you know when you’ve alleviated extreme poverty? We’ve decided to measure success as the rate of change in the community. However, deciding which factors to measure is challenging with so many studies and evaluation resources available. For us, measuring a select number of area-specific metrics allows us to make the best program decisions and effect the greatest change in poverty levels in the communities in which we work.
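To give a sense of what “rate of change” means in practice, here is a minimal sketch in Python. The metric, the numbers, and the function are purely illustrative; they are not our actual data or tooling.

```python
# Hypothetical example: the metric, values, and time span are invented.

def rate_of_change(baseline: float, follow_up: float, months: int) -> float:
    """Average monthly change in a metric, expressed relative to its baseline."""
    if baseline == 0:
        raise ValueError("baseline must be non-zero to compute a relative change")
    return (follow_up - baseline) / abs(baseline) / months

# e.g. households harvesting enough maize to last the year: 40% -> 55% over 12 months
monthly_change = rate_of_change(baseline=0.40, follow_up=0.55, months=12)
print(f"Average monthly change: {monthly_change:.1%} of baseline")  # about 3.1%
```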

The rest of this post outlines our experience with our Metrics 1.0 system, what we learned from our first evaluation process, and how we’re going to move forward to create a better, more action-based Metrics 2.0 system.

Kuria, Kenya

Have you ever been in Africa in December?  I have. I remember exotic animals, smiling faces, overfilled trucks, buses, and bicycles, and intense thunderstorms.

What I also remember are the challenges associated with implementing a third-party, 188-metric evaluation.

As discussed above, we want to know that our work is moving the community in the right direction, and that our interventions are having a positive impact. We decided that the best way to do this was to collect lots of information, which we would then analyze to decide whether to redirect our interventions or stay the course. To that end, we created a Metrics 1.0 system that consisted of 188 different metrics.

Throughout the first half of 2009, our staff members collected data related to each of these metrics. The Research Team and the field staff worked together to establish baseline values, exit criteria, priority levels, and expected progressions for each Metric, and the list of Metrics was refined: some were added, some were removed, and some were tabled for future use at other locations.
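As a rough illustration of what went into each Metric definition, here is a sketch of the kind of information we attached to each one. The field names and the example metric below are invented for illustration; they are not the real definitions.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for one Metric definition, showing the kinds of
# attributes described above. Field names and example values are invented.
@dataclass
class MetricDefinition:
    name: str
    program_area: str              # agriculture, health, water and sanitation, CED, education
    baseline: float                # value measured before interventions began
    exit_criterion: float          # value at which we would consider the goal met
    priority: int                  # 1 = highest priority
    expected_progression: List[float] = field(default_factory=list)  # anticipated values at future evaluations

example = MetricDefinition(
    name="households with year-round access to safe drinking water (%)",
    program_area="water and sanitation",
    baseline=30.0,
    exit_criterion=85.0,
    priority=1,
    expected_progression=[40.0, 55.0, 70.0, 85.0],
)
```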

December arrived, and we wanted to be sure that the information we gathered to determine progress was accurate and that we weren’t “tooting our own horn.” So we enlisted two third-party evaluators to create the survey tools, interview forms, and observation tools. These independent evaluators led the data collection process for Evaluation 1 in Kuria.

What we found out is that more data is not necessarily better than less data. More data leads to a more complex shade of gray. Because of the complexity of the questions we asked of community members, our findings were inconclusive in many cases. In some cases, the data we gathered could have been analyzed to tell completely conflicting stories; where that happened, we could not assess the current state of a given Metric. Of the data collected through the evaluation, less than half was usable in determining the amount of change in the community poverty level. However, all of the information offered insights into what was happening in Kuria, even if it didn’t fit neatly into the assessment of an area.

The most important thing we learned is that we want information that points to an intervention that needs to happen. A lot of the data was interesting, but it didn’t tell us what we could do to improve the poverty level in the community. We wanted something more: something concrete, something on which community leaders could base their program decisions.

Sweat, Smiles, and Flow Charts

An important part of our culture includes acknowledging when things don’t work, and then finding a better solution.  When you’ve put a lot of time and effort into something, letting it go becomes difficult and you tend to go through a type of withdrawal, or stages of grieving.  On the Research Team, we’ve successfully passed the “acknowledgement phase” and are fully in the “better solution phase” for our next metrics system.

We reflected on all of the data gathered during the first evaluation and realized that it could be grouped into two main categories: Poverty Data and Program Data.

Poverty Data is information that can show changes in the poverty level of the community.  This data tracks our progress towards our purpose of ending extreme poverty. Poverty Metrics are designed to be used in any community, irrespective of country or culture.  They are meant as our universal standard for poverty assessment.  Poverty metrics can be assessed by third-party evaluators as an objective measurement of the effects of our interventions.

Some Poverty Data will be used to generate what we’re calling a Dashboard. The Dashboard is made up of significant Metrics from each target area, giving us a quick look at changes in the state of the communities where we’re working.

Program Data is used to generate Program Metrics, the tools we will use in our field programs to measure effectiveness. Program Metrics are outcome metrics that we measure to assess the interventions we are implementing in each of the five program areas. Program Data is meant to be collected by our staff in between third-party evaluations.
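To make the split concrete, here is a small sketch of how measurements tagged as Poverty Data or Program Data might be handled, with a high-priority Poverty Metric from each target area surfacing on the Dashboard. The category names come from this post; the fields, priorities, and selection rule are invented for illustration and are not the actual Metrics 2.0 design.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Poverty Data / Program Data split. The category
# names come from the post; everything else here is invented for illustration.
@dataclass
class Measurement:
    metric: str
    category: str        # "poverty" or "program"
    program_area: str
    priority: int        # 1 = highest
    value: float

def dashboard(measurements):
    """Pick the highest-priority poverty measurement for each target area."""
    best = {}
    for m in measurements:
        if m.category != "poverty":
            continue  # Program Data stays with field staff, not the Dashboard
        current = best.get(m.program_area)
        if current is None or m.priority < current.priority:
            best[m.program_area] = m
    return best

readings = [
    Measurement("childhood malnutrition rate", "poverty", "health", 1, 0.18),
    Measurement("clinic visits per month", "program", "health", 2, 340.0),
    Measurement("maize yield (90 kg bags per acre)", "poverty", "agriculture", 1, 11.0),
]
for area, m in dashboard(readings).items():
    print(f"{area}: {m.metric} = {m.value}")
```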

Below is a graphic depicting the data we expect to gather, the metrics it will generate, and the way we will use the data:

Graphic 1: Nuru Metrics System 2.0 (Metrics Flowchart)

Independent of the Poverty Data and Program Data categories is the Poverty Intelligence Network, or PIN. The PIN is meant as an all-encompassing data collection tool, along the lines of a total community database. For example, we could collect information directly related to a Poverty Metric, to a Program dataset, or information independent of either. Data collected through the PIN is meant to aid field staff in decision-making and action.
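As a sketch of how flexible a PIN entry would need to be, a single record might link to a Poverty Metric, to a Program dataset, or to neither. The field names and example entries below are hypothetical; the PIN itself is still on the drawing board.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical PIN entry: a free-form observation that may or may not be tied
# to a Poverty Metric or a Program dataset. All fields and values are illustrative.
@dataclass
class PinEntry:
    community: str
    collected_on: str                        # ISO date string
    note: str                                # the observation itself
    poverty_metric: Optional[str] = None     # link to a Poverty Metric, if any
    program_dataset: Optional[str] = None    # link to a Program dataset, if any

entries = [
    PinEntry("Kuria", "2010-06-14", "Three new water points under construction",
             poverty_metric="households with access to safe water"),
    PinEntry("Kuria", "2010-06-15", "Fertilizer delivery delayed by road conditions",
             program_dataset="agriculture loan repayment"),
    PinEntry("Kuria", "2010-06-16", "Market-day attendance noticeably higher"),  # tied to neither
]

# Field staff could filter entries when weighing an intervention:
water_related = [e for e in entries if e.poverty_metric and "water" in e.poverty_metric]
```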

So, we are well on our way to incorporating the hard lessons we learned from last year’s implementation of Metrics 1.0 into the development of Metrics 2.0. We have enlisted the help of many outside experts, made a few decisions about how we will proceed and what the system will look like, and developed a plan for verification and implementation of the system. By early 2011, we will be ready to rock with Metrics 2.0! If we’ve done things right, the bad parts of Metrics 1.0 will not be part of this new system. The bad parts of Metrics 2.0 will be new and different bad parts, but we will not be afraid to iterate again. Thanks for reading!

 
