A conversation with Ian Schwenke on data, power, and locally-led evaluation

When people talk about “impact,” they are often referencing numbers or data. 

  • How many farmers were reached? 
  • How much did income increase?
  • How many cooperatives were trained?
  • How much was produced?

But what do those numbers mean, and who gets to decide? Below, we unpack why locally-led evaluation matters, how power shows up in measurement activities, and what happens when numbers and lived reality collide.

Introducing Ian Schwenke, Nuru MEL Advisor

Nuru is a collective of organizations across East and West Africa, strengthening smallholder farmers and pastoralist households through climate-smart agribusinesses that stabilize rural communities. Data shapes program design and power dynamics, and ultimately determines which priorities are acted on.

To explore Nuru’s approach to data collection and interpretation, we sat down with Ian Schwenke, Nuru’s Monitoring, Evaluation, and Learning (MEL) Advisor. Ian works across the Nuru Collective, supporting locally-led organizations to design evaluation systems that are technically sound, ethically grounded, and rooted in local lived experience.

Tacy, Nuru Marketing and Communications Manager: Can you tell me a little bit about yourself, your background, and how you came to work at Nuru?

Ian Schwenke: I came to Nuru with both field experience and a technical background in evaluation. I spent nearly a decade living and working across Africa, including serving as a Peace Corps volunteer in Benin, where Nuru now also operates. That experience gave me a strong understanding of local context and how development efforts play out on the ground.


Ian meeting with neighbors during his Peace Corps service in Benin

After that, I earned a master’s degree from Georgetown University’s School of Foreign Service, where I focused on evaluation methods and how organizations measure and approach development outcomes.

At Nuru, I’ve been able to bring those two perspectives together. For the past five years, I’ve led MEL efforts across the collective, helping expand systems and approaches across seven countries while ensuring they remain grounded in local realities.

Tacy Layne: Can you explain how your role fits into Nuru’s broader mission?

Ian Schwenke: I serve as Nuru’s Monitoring, Evaluation, and Learning Advisor. In practical terms, that means I work across the Nuru Collective to support the design and strengthening of the systems we use to measure impact. These systems have to be rigorous enough to stand up to scrutiny, but flexible and grounded enough to reflect local realities.

Because Nuru is a collective of locally-led organizations, each organization is independently governed and embedded in the communities it serves. My role isn’t to define success in isolation from afar. It’s to work with local teams to align international standards and external reporting needs with on-the-ground realities, offering technical expertise and methodological guidance while ensuring that the ownership of data and learning stays local.

An Introduction to Monitoring and Evaluation at Nuru

Tacy: Before we get into data specifically, can you briefly describe Nuru’s structure for readers who may be less familiar?

Ian: Nuru began work in Kenya in 2008 and has since grown into a collective of eight locally-led organizations across East and West Africa. Each Nuru organization is legally independent and locally governed. Each organization designs and implements programs based on the needs, priorities, and realities of their communities, within Nuru’s model of supporting locally-led agribusinesses. Nuru, a nonprofit based in the US, exists to support that collective through fundraising, learning, systems support, and strategic coordination.

Our collective structure matters because it shapes how decisions are made, including decisions about data. We don’t believe that people thousands of miles away should define success for communities they don’t live in. Our systems are built to reflect that belief.

Tacy: How does Nuru’s collective structure impact monitoring and evaluation?

Ian: Traditionally, evaluation in development follows a clear pattern: data is collected locally, and then analyzed and interpreted externally. That can make it easier to standardize and report results, but it often loses important context along the way.

At Nuru, we try to balance that differently. We use shared frameworks and indicators so results can be compared and communicated across countries and to external partners. At the same time, local teams adapt how those tools are applied to reflect their specific context, priorities, and norms.

My role is to help harmonize those two sides, aligning international standards with what is meaningful and measurable on the ground. At its core, our approach is about generating credible evidence while also creating learning that is useful for teams and communities, not just external audiences.

What does Nuru measure?

Tacy: What kind of data is collected across the collective?

Ian: We collect both quantitative and qualitative data, along with a range of other measurement approaches that help us track progress in ways that are both comparable across contexts and grounded in local realities.

On the quantitative side, we register farmer information, track yields, model income, monitor market outcomes, and analyze cooperative performance over time. These metrics help us understand whether agribusinesses are becoming more professional, profitable, and resilient, and to what extent member households are directly benefiting from participation.

On the qualitative side, we conduct focus group discussions, key informant interviews, and community dialogues. These methods help us understand why certain outcomes are happening and how people experience change in their daily lives, beyond the numbers alone.

We also use data to model more complex concepts—like income stability, gender dynamics, and resilience—drawing from both quantitative indicators and lived experience.

Tacy: What does this data collection look like for someone who may be unfamiliar? Can you provide an example?

Ian: At a basic level, data collection is about understanding what’s actually happening on farms in a way that can be used for both learning and decision-making.

For example, after harvest, a local enumerator might visit a farmer to ask what they planted, how they farmed, and what they produced. The farmer may not measure their land in hectares or their harvest in kilograms, so the conversation is adapted to local units and translated into familiar terms.

The enumerator, often from the same area, can ask follow-up questions to clarify practices and results. That information is then converted into standardized measures so it can be analyzed across farmers and shared externally. The goal is to reflect the farmer’s reality accurately, while still producing insights that teams, communities, and external partners can understand and act on.
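The standardization step Ian describes can be sketched in a few lines of Python. The local units and conversion factors below are illustrative placeholders, not Nuru's actual values, which vary by crop and country.

```python
# A minimal sketch of converting locally reported harvest quantities
# into standardized measures (kilograms). Factors are hypothetical.

LOCAL_UNIT_TO_KG = {
    "basin": 18.0,   # hypothetical: one basin of grain ~ 18 kg
    "bag": 90.0,     # hypothetical: one bag ~ 90 kg
    "kg": 1.0,
}

def standardize_harvest(quantity: float, unit: str) -> float:
    """Convert a locally reported harvest amount into kilograms."""
    try:
        return quantity * LOCAL_UNIT_TO_KG[unit]
    except KeyError:
        raise ValueError(f"No conversion factor recorded for unit '{unit}'")

# A farmer reports a harvest of 5 bags plus 2 basins:
total_kg = standardize_harvest(5, "bag") + standardize_harvest(2, "basin")
print(total_kg)  # 486.0
```

In practice the conversion table itself has to be built and verified locally, which is one reason enumerators who know the area matter: a "bag" in one community is not the same as a "bag" in another.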


Ian with Nuru Nigeria and Nuru Burkina Faso team members, observing crops in Nigeria, 2022.

How is Nuru’s approach to MEL unique?

Tacy: In the development sector, what does monitoring and evaluation usually look like?

Ian: Many organizations talk about locally-led development, but evaluation is where that commitment often stops. Donors may fund locally-led programs, but still want independent judgment on results, which can limit how much ownership is truly transferred.

As a result, even when programs are locally designed and implemented, evaluation is often externalized. External evaluators are brought in, data is extracted, and conclusions are drawn far from the communities involved.

That creates a disconnect. Local teams lose ownership of learning. Communities don’t see themselves reflected in the results. And evaluation becomes something done about them, rather than something shaped with them.

Tacy: How does Nuru approach this differently in practice?

Ian: We start by embedding evaluation capacity within country organizations. Each Nuru organization has local MEL staff who are deeply familiar with the context, language, and social dynamics of the communities they work in.

My role is to support those teams, not replace them. I provide methodological guidance, technical review, and connections to external research partners when useful. I also act as a bridge between local teams and the broader evaluation field, helping translate local insights into formats that are visible, credible, and engaging to international audiences. But interpretation and decision-making remain grounded locally.

We also facilitate learning across the collective. If one country organization develops a strong approach to measuring a particular outcome, that knowledge is shared with others. Over time, this creates an ecosystem of learning that doesn’t rely on a single external authority.


Nuru Collective team members, including Ian, at an experience sharing event between Nuru Burkina Faso and Nuru Nigeria, 2022.

Measuring What Matters: Income, Yields, and Sustainability

Tacy: One of the most common words in development right now is “sustainability.” How does Nuru define and measure it?

Ian: We try to avoid treating sustainability as an abstract concept and instead focus on what it looks like in practice, an approach I explore further in my recent piece with the African Development Bank. For Nuru, that largely comes down to whether local organizations and the farmer cooperative agribusinesses they support can operate independently over time.

Cooperatives, at their core, are businesses. And for a business to be sustainable, it has to be profitable, professionally managed, and able to access and operate within markets. So rather than defining sustainability in broad terms, we measure those underlying components directly.

First is profitability. Through audits, we track whether cooperatives are generating consistent net income across multiple years. Second is professionalism, which we assess using tools like SCOPEinsight that look at governance, financial management, and market engagement.

These approaches are internationally benchmarked but implemented locally. We train local staff and partners to carry out assessments, ensuring strong contextual understanding, while producing results that can still be compared and communicated externally.


Nuru CSO Casey Harrison, Esipe Dicha Union Manager Tesfaye Estifanos, and Nuru MEL Advisor Ian Schwenke, 2026.

How Nuru Collects Data 

Tacy: How does Nuru prioritize women’s inclusion, and what does that look like when we’re thinking about data?

Ian: Within our work, Nuru prioritizes a women-first approach. But how that shows up in practice varies by country and by the social norms already in place.

As a starting point, we consistently disaggregate our data by gender to understand where gaps exist in outcomes like production, income, and participation. From there, we work with local teams to interpret those gaps and identify what may be driving them, such as differences in access to inputs, labor, or decision-making, and determine what actions are appropriate in that context.
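At its simplest, disaggregating data by gender is a group-by on an outcome of interest. The sketch below uses made-up records and field names purely to illustrate the idea; it is not Nuru's actual data structure or analysis pipeline.

```python
# A minimal sketch of gender-disaggregated analysis over survey records.
# Records, field names, and values are illustrative, not real data.
from collections import defaultdict
from statistics import fmean

records = [
    {"farmer_id": 1, "gender": "F", "yield_kg_per_ha": 1100},
    {"farmer_id": 2, "gender": "M", "yield_kg_per_ha": 1400},
    {"farmer_id": 3, "gender": "F", "yield_kg_per_ha": 950},
    {"farmer_id": 4, "gender": "M", "yield_kg_per_ha": 1300},
]

def disaggregate(records, outcome):
    """Group an outcome by gender and report the mean for each group."""
    groups = defaultdict(list)
    for r in records:
        groups[r["gender"]].append(r[outcome])
    return {g: fmean(vals) for g, vals in groups.items()}

print(disaggregate(records, "yield_kg_per_ha"))
# {'F': 1025.0, 'M': 1350.0}
```

A gap like the one above is only the starting point: as Ian notes, the numbers identify where differences exist, while interpreting *why* they exist requires local teams and qualitative work.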

In some cases, we’ve used more detailed survey tools, like the A-WEAI survey, to understand differences between groups, alongside simpler approaches like comparing outcomes between men and women and identifying patterns in the data. Across contexts, we’ve frequently seen differences in production, with women producing lower yields or earning less income than men.

At first glance, it would be easy to draw conclusions. Maybe women lacked skills. Maybe they weren’t adopting inputs. Maybe productivity was the issue. But those assumptions don’t fully explain the gap, and the underlying drivers aren’t visible in the numbers alone.

When we paired those findings with additional analysis and qualitative work led by local teams, a different picture emerged. The gap was less about ability and more about agency, particularly around decision-making, access to inputs, and control over resources.

We’re continuing to prioritize women’s inclusion across initiatives, but there isn’t a single method that applies everywhere. Without local context and interpretation, we risk designing solutions to the wrong problem. The key is asking better questions alongside the teams closest to the work.

Tacy: What considerations go into how Nuru collects data?

Ian: Many of the communities we work with have low literacy levels, complex gender dynamics, or histories of conflict and trauma. Asking questions without sensitivity can distort responses or cause harm.

We start by designing surveys based on internationally recognized approaches, including what to measure and how to ask questions. We then work closely with local teams to adapt those tools so they reflect the context, language, and norms of the communities where data is being collected.

That includes translating surveys into local languages, training enumerators who understand the communities they’re working in, and building in data quality checks so we can revisit questions or clarify responses when something doesn’t align.

For example, if you ask a woman about decision-making power while her husband is standing nearby, you’re unlikely to get an honest answer. Or if you ask a farmer about yield using units they don’t typically use, like hectares or kilograms, you may not get reliable data at all.

Good data collection means meeting people where they are, while still maintaining enough consistency to analyze results across contexts.


Georgetown University students meeting with cooperative in Ethiopia, 2024.

Interpreting Data: What do the numbers tell us?

Tacy: Interpretation seems to be where power dynamics show up most clearly. How does Nuru handle that?

Ian: The same dataset can support very different conclusions depending on who’s looking at it. In development, interpretation often defaults to those with the most technical credentials—frequently people far removed from the context. At Nuru, we treat interpretation as a shared responsibility. Communities, local staff, and technical partners all bring different insights. When those perspectives are combined, the result is more accurate and more useful.

We also prioritize returning findings to communities in forms they can engage with. Data shouldn’t disappear into reports that never come back. It should inform decisions, strengthen cooperatives, and guide future action.

Tacy: Can you give an example of how findings are returned to communities in a meaningful way?

Ian: Building on how we measure sustainability, one example is how we return those results to cooperatives. After assessing profitability through audits and professionalism through tools like SCOPEinsight, we share the findings directly with cooperative leadership.

The initial outputs can be technical, so local teams translate them into simple summaries and facilitate discussions around what the results actually mean in practice. That includes identifying where the cooperative is performing well and where there are gaps, whether in financial management, governance, or market access.

From there, cooperatives can request targeted support based on their own priorities. Over time, this creates a feedback loop where data is used for external reporting, and also to help cooperatives improve performance year to year and move toward long-term sustainability as independent businesses.

Sector Shifts: What can we expect for MEL?

Tacy: Looking ahead, what changes do you see coming in how data and evaluation are done in development?

Ian: First, locally-led evaluation will become increasingly necessary as global funding tightens. Organizations won’t be able to rely on external evaluators for everything. Building local capacity is both ethical and practical.


Ian with Nuru Ethiopia Monitoring & Evaluation Program Manager, Tatek Amataw

Second, local research institutions across Africa are growing rapidly. There is tremendous talent and expertise that funders need to learn how to engage directly. The challenge will be shifting long-standing habits of outsourcing to international firms and instead building more direct, equitable partnerships.

Third, new technologies, including AI, will reshape data analysis. These tools could democratize access to methods, or they could widen existing gaps. The outcome depends on how intentionally they’re used, particularly in terms of who has access to them and how they are integrated into local systems.

Finally, complexity isn’t going away. Climate change, conflict, and economic shocks will continue to affect communities in unpredictable ways. Evaluation systems must be flexible enough to adapt without losing rigor.

Tacy: If you had to summarize Nuru’s philosophy on data in one sentence, what would it be?

Ian: Data should be useful first to the people closest to the work, and still clear enough to inform decisions beyond it.

Tacy: Are there any final thoughts you want to share?

Ian: Data can either improve how programs are designed and implemented, or lead to conclusions that miss what’s really going on. In the right hands, it can illuminate complexities and serve communities. Data equity cannot be only a technical add-on. It has to be a foundational commitment to local communities from the start.

Learn more from Nuru MEL Advisor Ian Schwenke
