Learning for Action


One Way Evaluation Can Center Affected Communities: Put Nonprofits in the Driver’s Seat


During these pandemic times, the staggering level of inequality that was already a hallmark of our pre-Covid country has become even more savage. People of color are disproportionately contracting and dying of the virus; thirty million people have filed for unemployment in six weeks while low-wage workers have the highest risk of losing their jobs; and the people who can work from home are more likely to be white and highly educated. Over 500 progressive groups have called for a People’s Bailout to pave the way for a just recovery – but with Republicans in charge of the Senate, the Covid bailout bills have favored corporations while falling short of what is needed to help struggling families. Covid-19 has deepened the divide between those whose lives were already precarious and those who can weather the storm in relative comfort.

Everyone who cares about such barbarous injustice is rushing in to help. In the social sector, nonprofits are serving clients in new ways and massively ratcheting up their provision of emergency services, while foundations are rushing checks out the door for general operating support. Given the urgency, the speed is significant, and so is the focus on general operating support. Grantmakers are usually reluctant to provide such flexible funding, despite repeated calls that it is needed to strengthen the sector.

Evaluators don’t serve clients directly or disburse grants, so I have felt a little as if I’m standing awkwardly to the side of this flurry of activity, asking how I can help, but coming to terms with being a non-essential worker. (I am not the first to notice this.) In a time when everyone is just trying to survive, evaluation can feel like a luxury. How does the field respond? Certainly we are rethinking data collection: taking physical distancing into account, getting creative, lowering burden as much as possible, and delaying where necessary.

But Covid has prompted questioning that goes more to the heart of things. Given the ways that the pandemic is both revealing and deepening inequities, at LFA we are doubling down on work already begun: interrogating our practice as we seek to more thoroughly integrate Equitable Evaluation Principles, the first of which is that evaluation and evaluative work should be in service of equity. Equity is a rich concept containing multitudes, but surely one of its most important meanings is centering affected communities. Affected communities are those with direct experience of our society’s challenges: individuals and families living through daily crisis, as well as frontline nonprofits working with them.


In the evaluation world, centering affected communities often means lifting up client voices as part of data collection and analysis. Certainly, that is critical. But I’ve also been thinking about where evaluation sits in the social sector, and whose interests are centered as it’s carried out. Most commonly, foundations pay for us to evaluate programs that they fund. And even though the field has sought to move away from strict notions of accountability (to funders) to focus more on (grantee) learning, ultimately funders wield enormous power in shaping evaluation. They are, after all, the payers in this market – and their questions are usually the ones answered. As much as evaluators strive to center nonprofits that implement programs (and, by extension, the communities they serve), we inevitably feel the gravitational pull of funder interests.

The Covid crisis is horrific – but its sheer horror has accelerated our desire to reckon with the fundamentally unjust system we all negotiate every day. Neoliberal capitalism extracts resources from communities while massively enriching a tiny fraction of the population. Some wealthy families “sin-wash” their money by endowing foundations — foundations that then fund nonprofits to serve the very communities suffering from resource extraction. (Anand Giridharadas makes this argument in Winners Take All: The Elite Charade of Changing the World.) Evaluation is deeply implicated here. We are the cop on the beat, reporting back to philanthropy about how nonprofits use funds that grew from a system that is harming the nonprofits’ communities.

This was all true, of course, way back in 2019 – but Covid seems to have shaken loose our incrementalism. The social sector will surely be listening more closely to the insights of organizations like Justice Funders, as they lay out what it means to move along a spectrum from extractive to restorative to regenerative philanthropy.

What should change so that we center the interests of nonprofits and their communities, rather than the interests of funders? Since the early 2000s, LFA has sought to put the interests of nonprofits first, advocating for nonprofits to be in the driver’s seat of evaluation. Rather than seeing the goal of evaluation as staying accountable to a funder, nonprofits can seize the opportunity to meet their own goals to learn, improve, and excel. (This is a point that Steven LaFrance, the founder of LFA, has been driving home for over a decade. See his 2017 talk to a group of Sobrato Family Foundation grantees here.)


We work with amazing foundations, and everyone has the best of intentions. Even so, we work inside an incentive structure where foundations hire evaluators to evaluate the programs of their grantees. What if the system were restructured so that the demand for evaluation came from nonprofits, rather than foundations? What if evaluators really answered to nonprofits? How might this shift our ability to become more authentic learning partners?

Instead of foundations paying for a specific evaluation (or learning engagement), they could simply pay learning partners to serve nonprofits, however those nonprofits saw fit. There are already foundations doing this type of work, with models worth emulating and elaborating upon. For example, the Tipping Point Community (a Bay Area grantmaker engaged in anti-poverty efforts) has hired LFA to support a cohort of workforce development grantees to collect and analyze long-term wage and employment data stored with California’s Employment Development Department (EDD). Knowing how clients fare in the years after participating in workforce development programs is vital to understanding program effectiveness – but collecting that information through labor-intensive follow-up emails and phone calls results in partial (and possibly inaccurate) data.

Photo: Administrative Careers Training Program, from Opportunity Junction (a Tipping Point Community workforce development grantee)

Tipping Point’s grantees can opt in to get support from LFA, choosing from a menu of services that includes developing learning questions, crafting a data request to collect data from the EDD, analyzing the data that comes back, and interpreting the findings. The grantees can use anywhere from zero to all of those services. Even though Tipping Point is paying, it is asking for nothing beyond the inclusion of a common core of data elements in each data request (so that it can summarize results across participating nonprofits and learn more about these programs from a systems perspective). And finally, Tipping Point is not pushing its grantees – instead, it is responding to those who have said they are excited to get started and have asked for this service to begin as soon as possible.

As a pilot project, this engagement is relatively small, but it stands out as a model to build upon. Funders could grant the funds to nonprofits, which could hire the learning partner – this would take the funder out of the equation. Another way to go is a funders’ collaborative, which would address two challenges. First, nonprofits typically spend valuable time reporting to multiple funders on similar data in slightly different ways; if funders in a collaborative agreed upon one set of critical data points, this consolidation would be a massive time-saver for nonprofits. Second, with contributions from more grantmakers, the project could grow to provide nonprofits with access to even more robust supports.


If more robust supports were available, the nonprofits could expand upon the types of questions addressed. In the case of the Tipping Point grantees, they might move beyond wage and employment trends. While workforce nonprofits are clearly keen to find out about those outcomes, a deeper dialogue with them and their clients might reveal other questions they want to ask. I also imagine that some of the most important questions to answer are really about needs assessment: learning partners can be an extra pair of hands (and ears) to learn about the most pressing needs of the communities served. We can also engage communities and nonprofits in developing solution assessments, because those closest to the pain are closest to the solutions. Communities know what they need, and learning partners can facilitate deep listening.

There are other ideas in the social sector to shift the balance of power between nonprofits and funders: from participatory grantmaking, to foundations competing with one another to fund programs. As we move into the post-Covid world, we must accelerate our adoption of ideas like these. Our obligation is to push for learning and evaluation that integrates equity principles. Sometimes that means doing our work differently within individual projects — but we also need to think more broadly about how the market for learning and evaluation is configured. Let’s push for more arrangements like Tipping Point’s so that we can more authentically center affected communities: nonprofits and the communities they serve. Not just in Covid times, but all the time.