We have gotten a few critiques lately, from various sources, on how we approach monitoring and evaluation. They are good critiques. I would categorize them in two ways:

  • We have gone through so many iterations of our approach to M&E, and of our interventions themselves, that a lot of time has passed and we are not yet able to show impact.
  • We are not able to quickly answer some of the questions that donors and potential donors have about our constituents.

These are both interesting critiques that point out some of the challenges we have had in the past, and continue to have, with M&E.

As to the first point: the M&E model we started our Nuru work with was well conceived, but it did not work in execution for what we were trying to do back then. It was based on C.A. Sullivan’s Water Poverty Index. We had a system that generated five index scores which, together, were meant to represent the poverty level of the communities where we work. There was an index score for each of our five programs at the time: Agriculture, Education, Healthcare, Small and Medium Enterprises (SME, now evolved into CED), and WatSan. These scores were based on a composite of many different metrics related to each program. The main problem with this old system is that it didn’t really make sense for measuring the effectiveness of programs that changed with the needs of communities. It painted a pretty vivid picture of the community along a bunch of different dimensions, but it didn’t tell us about our effectiveness.
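For readers curious about the mechanics, a composite index of this kind boils several sub-metrics down to one number, typically as a weighted average. The sketch below is only an illustration of that idea; the component names, values, and equal weights are hypothetical, not our actual metrics or weighting scheme.

```python
def composite_index(components, weights=None):
    """Collapse several 0-100 sub-metric scores into one 0-100 index score
    via a weighted average. With no weights given, all components count equally."""
    if weights is None:
        weights = {name: 1.0 for name in components}  # equal weighting by default
    total_weight = sum(weights[name] for name in components)
    return sum(components[name] * weights[name] for name in components) / total_weight

# Hypothetical sub-metrics for one program area (illustrative values only):
agriculture = {"crop_yield": 40.0, "input_access": 55.0, "farm_income": 35.0}
agriculture_score = composite_index(agriculture)
```

The weakness described above falls out of this structure: the single score summarizes the state of a community well, but because it blends many inputs, a change in the score cannot be attributed to any one program's effectiveness.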

When we realized this was a problem, we went back to the drawing board. This was in 2010. We took the whole year to find the best new system we could, and then we took all of 2011 to incorporate that system into all of the programs’ models.

Now we are pretty comfortable with this new system, but so far all we have done with it is gather baseline data. We have gathered baselines for Agriculture, CED, Education, Health, and even our poverty metric tool, the MPAT. We have not gathered impact data yet, but we will by the end of this year.

The new system is meant to tell us two different things: what the enabling environment is in the communities where we work (the MPAT), and how well our programs are doing. Those two purposes bring me to the second critique listed above. There are a few things donors want to know that, for us, don’t fall into either category. Examples are education levels of the adults in our communities (for the Education program we want to know about literacy levels of children), annual household income (for CED we want to know about savings rates and the ability to cope with shocks), and objectives of the IGA program other than profitability (we measure the IGA program’s success with just one outcome).

We are altering some of our data gathering tools so that we can report these numbers to potential donors. It is easy enough to do, so we are doing it. We can tell you more about this in the fall.

Critiques are always so useful to us! We are learning every day and trying to get better at all of our work.