
Reckhow: A Misleading Approach to Assessing Advocacy

This is a guest post from MSU professor Sarah Reckhow:

A new article in the Stanford Social Innovation Review presents a quantitative framework to help philanthropists assess their advocacy grants. The authors all work with Redstone Strategy Group, a consulting firm that “helps philanthropies, non-profits, and governments solve the world’s most urgent social problems.”

Policy advocacy is a growing area of foundation giving, particularly in education. So it is not surprising that funders who view themselves as strategic or venture philanthropists would be eager to find ways to assess a “return on investment” for advocacy.

Unfortunately, this framework is based on a simplistic view of the policy process, and it appears to overvalue short-term returns on investment.

The framework draws on a list of things that advocates, PR firms, political operatives, and philanthropists think work; it is not based on current evidence from political science or policy research.

Utilizing this framework could encourage philanthropists to continue making wasteful investments in short-lived advocacy campaigns.

First, the authors formulate their framework around the notion of an advocacy campaign. This assumes a short-term style of advocacy, which is likely to result in a publicity-oriented approach. This is a limited view of the broad scope of work involved in policy advocacy. Moreover, it overlooks the most successful approaches to advocacy. Which groups have been the most influential in education policy for decades across all levels of government? Teachers unions. Arguably, their power has waned in recent years, but their perennial involvement in education policy decisions is not a product of campaigns. It is a product of institutionalization; in other words, teachers unions are well-established organizations with strong ties to a constituency, permanent staff, and knowledgeable, well-connected leadership. The advocacy evaluation framework presented in SSIR is likely to overestimate the value of one-off campaigns such as Learn NY, PENewark, Communities for Teaching Excellence, and ED in ‘08.

Second, the framework assumes an initial ideal point (presumably the ideal outcome selected by the foundation) that is assessed based on the “success of the campaign.” Assuming that a policy change actually does occur, it is quite likely that the final outcome will be a product of negotiation and compromise. Successful negotiations may be far more important for the implementation and long-term sustainability of a policy than an immediate post-campaign assessment of “success.” A policy change close to the foundation’s ideal point may erode two years later and have little lasting value. A policy change farther from the foundation’s ideal point may remain in place for decades. What is the relative value of these two outcomes to the foundation? A better framework for assessing advocacy would account for diminishing returns if a policy change erodes, the foundation’s willingness to negotiate away from the ideal point, and the value of engaging specific bargaining partners.
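To make that valuation question concrete, here is a minimal sketch in Python comparing a hypothetical “close to the ideal point but short-lived” outcome with a “compromised but durable” one, using a simple discounted-value calculation. The proximity scores, time horizons, and discount rate are invented for illustration; nothing here comes from the Redstone framework itself.

```python
# Hypothetical comparison of two policy outcomes, scored by how close each outcome is
# to the foundation's ideal point (1.0 = ideal) and how long it survives before eroding.
# All numbers here are illustrative assumptions, not values from the SSIR framework.

def discounted_value(annual_value: float, years: int, discount_rate: float = 0.05) -> float:
    """Present value of a policy that delivers `annual_value` each year for `years` years."""
    return sum(annual_value / (1 + discount_rate) ** t for t in range(1, years + 1))

near_ideal_but_fragile = discounted_value(annual_value=0.9, years=2)    # erodes after two years
compromised_but_durable = discounted_value(annual_value=0.6, years=20)  # survives for decades

print(f"Near the ideal point, erodes in 2 years: {near_ideal_but_fragile:.2f}")
print(f"Farther from ideal, lasts 20 years:      {compromised_but_durable:.2f}")
# The durable compromise is worth several times more over time, even though a snapshot
# assessment taken right after the campaign would score the first outcome higher.
```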

Third, the framework deviates from basic research findings on the politics of policy-making. For example, the vast majority of lobbying campaigns fail. It is extremely hard to convert resources into policy change, and there is a huge built-in advantage for the status quo. By simply comparing political conditions before and after a campaign, the framework’s authors are likely to overestimate the success rate of advocacy campaigns. Also, the authors base their framework on a simple “stages” model of the policy process. They propose that “a campaign to champion a new policy idea will need to pass through all three stages [agenda-setting, adoption, and implementation].” To quantify the overall likelihood of success, they multiply the likelihood estimates from all three stages. A review of recent major federal education policy changes suggests that policy-making frequently proceeds through a very different process. For example, Race to the Top, NCLB waivers, and student loan reform were never on the public agenda. They existed in the minds of political insiders, who gained the opportunity to implement these ideas under Secretary Duncan.
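For readers who want to see the arithmetic, here is a minimal sketch of that stage-multiplication logic. The individual stage probabilities are hypothetical; the point is how the product behaves, and what the sequential-stages assumption hard-wires into the estimate.

```python
# Illustrative version of the "multiply the stage likelihoods" arithmetic the framework uses.
# The stage probabilities below are hypothetical, chosen only to show how the numbers compound.

stage_probabilities = {
    "agenda-setting": 0.5,
    "adoption": 0.4,
    "implementation": 0.6,
}

overall = 1.0
for stage, p in stage_probabilities.items():
    overall *= p
    print(f"after {stage}: cumulative likelihood = {overall:.2f}")

# Prints 0.50, then 0.20, then 0.12: even generous stage estimates compound to long odds,
# and the calculation only makes sense if every policy really does pass through all three
# stages in order -- which Race to the Top, NCLB waivers, and student loan reform did not.
```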

A framework like this is obviously tempting for funders, given the promise of a measurable return on investment. But over-investing in the kinds of advocacy campaigns that would be measurable with this framework is likely to result in more shuttered websites, more accusations of promoting fake grassroots support, and little long-term impact. I sincerely hope that foundations carefully assess the return on investment for the evaluation services they purchase from consultants.

Comments


An interesting and perhaps accurate criticism. Might I ask what models the author would suggest as being more useful? My concern is that, rather than propose another (better?) model, the author is simply suggesting not using this one.

Sometimes perfect really is the enemy of good.


Disclaimer: The opinions expressed in This Week In Education are strictly those of the author and do not reflect the opinions or endorsement of Scholastic, Inc.