Monday, March 30, 2020

‘What is feminist about outcome harvesting?’, Gender and Evaluation, Barbara Klugman, 16 Apr 2019
At: https://gendereval.ning.com/profiles/blogs/what-is-feminist-about-outcome-harvesting


Rituu suggested I reflect on my practice and also share with you the set of blogs a few outcome harvesting practitioners wrote for AEA365 (attached), to give a sense of the method and, in so doing, pay tribute to the method’s originator, Ricardo Wilson-Grau. So, this note reflects on outcome harvesting as feminist. It is arguably feminist because of both its steps and principles (see, in the attachment, blog 1, which outlines the steps, and Michael Quinn Patton’s comments on the principles in blog 2, as well as Ricardo Wilson-Grau’s image of the tree, which lists the steps and principles).

Fundamental to a feminist approach is participation: those affected by an intervention need to be involved throughout the cycle of planning, implementing, evaluating and improving it. Outcome harvesting requires that an evaluation be useful, and that those most involved in an intervention get involved in shaping the evaluation and interpreting its findings. This is not only a utilisation-focused approach, but one that recognizes that, from the design stage onwards, those who could be using findings for learning need to input on the purpose and shape of the evaluation and to do the sense-making along the way with the evaluators.

The approach also relies on those engaged in the intervention to generate outcomes, or to engage deeply in framing the outcome descriptions and the descriptions of the contribution of the organisation being evaluated and its allies, a process which involves a high level of recognition of the knowledge of those involved even while it includes coaching or capacity building. This is relevant to a feminist approach because ‘M&E’ frequently disempowers those running or benefiting from an initiative, whereas OH’s coaching style builds their sense of power and agency. It also supports or coaches them to make meaning of the relationship between their outcomes and their strategies, and to use this to strengthen their strategic thinking and planning going forward.

[Photo: TAC leadership mapping outcomes and TAC’s contributions on colour-coded post-its]

This photo illustrates outcomes generated by the leadership of the Treatment Action Campaign (TAC) in the Eastern Cape, one of South Africa’s most rural and impoverished provinces. Using post-its, they put outcomes in one colour and TAC’s contribution in another. This formed the basis for a collective discussion about where TAC had been and where it was going. When I was asked by one of TAC’s core funders to draw lessons from the experience of TAC, I asked its leadership if this would be useful to them. They argued that multiple papers had been written about how TAC won a legal victory that brought the right to treatment to millions of South Africans, but little had been written about their organizing and mobilizing strategies on the ground. They sent me to the Eastern Cape. In my consultations with their leadership, they were keen on the exercise because they felt their own history was untold. What they most wanted was a pamphlet about their own work. So, this is the trajectory we followed. In addition to their pamphlet, the product for the funder, which was about lessons from TAC as a whole, used the Eastern Cape experience to bring the TAC story alive. See Membership-based organizations in constitutional democracies: Lessons from the Treatment Action Campaign.


OH recognizes the need to identify and name collaboration, which is an aspiration of feminism. Like other methods steeped in a recognition that social change in complex contexts happens in neither a linear nor a predictable way, OH focuses on the contribution of the initiative, looking for a plausible or reasonable explanation of its direct or indirect influence on outcomes rather than seeking to attribute changes in their entirety to the initiative being evaluated. Outcome harvesting draws on systems theory, in particular recognizing that multiple factors may have influenced change. It is not interested in proving that a single intervention caused a change, but rather in tracing back from actual changes that can be observed to see what the contribution of a particular intervention was, whether direct or indirect. In addition, it makes it possible to systematically name the role of others in the contribution. Indeed, in the framing of outcome statements, in addition to drafting the outcome and contribution descriptions, one can draft the ‘contribution of others’ and analyse that to understand if, when and how collaborations have been critical in influencing outcomes.

The following quote illustrates one of the findings of an evaluation that used outcome harvesting, among other methods, to get to grips with the influence of collaborations among workers’ organisations, researchers and others involved in building ‘inclusive cities’:
“Everyone has their roles, but combining together these capacities and experiences we were able to meet larger objectives, larger than the sum of its parts or number of people involved. For example, there are people trained in handling media who collaborated with us, people skilled in the lobbying, politics and the ways in which these events take place. There are other people with very strong skills in rallies, protests, organisation, or logistics. The different capacities together made our achievements a reality.” –Silvio Ruis Grisales, RED Lacre, Latin America Regional Network of Recyclers
(From Klugman, B., ‘Global advocacy networks: lessons from the “Inclusive Cities” project and WIEGO’, AEA Conference, Denver, 17 Oct 2014)

OH recognises the importance of process. Feminism gives attention to process, to the values embodied in the way individuals and institutions behave, and to recognising small steps that are signs of increased empowerment or agency. By enabling one to identify any outcomes, and to choose how to categorise them, OH can give weight to values and objectives that may not be the end-goals of initiatives. For example, if the end goal is a change in policy, OH can nevertheless harvest and name increased participation of marginalised groups in decision-making spaces, or invitations to such individuals or groups that recognise that their voice matters. Naming these shifts is important in recognising the role of organising and voice in efforts to ensure that any policy changes reflect the perspectives of those most affected, a core value of feminism. As another example, in an initiative to strengthen the capacities of farmers, OH can give weight to increased leadership by women farmers, or to decisions by farmers’ organisations to recognize long-standing practices of women farmers in seed management. Of course, any method can do this, but because OH asks ‘what changed?’ rather than looking for predefined changes in a logical framework, it opens those involved to understanding the dynamics of change and which changes they consider meaningful.

Having said that, OH is about outcomes; it is not the best method for evaluating the quality of relationships, which is why I frequently use it alongside other methods. A feminist evaluation approach is likely to encourage evaluation designers and users to ask questions about power in an initiative and about whether and how well it challenges power relations. While OH can pick up outcomes in this regard, interviews, focus groups, workshop conversations or even surveys may be needed as a supplement to get to grips with relationship dynamics.

In my view, feminist practice can be brought to most evaluation approaches, and any evaluation can ask questions of concern to feminism. I am not arguing that OH is better or unique, but for those not familiar with the method, I hope these thoughts encourage you to explore it further.

AEA365: APC TIG Week: How to evaluate actions aimed to improve advocacy capacity - Barbara Klugman

Hi, I’m Barbara Klugman. My practice focuses on evaluation of social justice advocacy. One of the challenges I frequently bump into is how to evaluate training done or conferences run in support of advocacy. Clients may have done a post-event survey indicating that participants gained new knowledge or met new people, both of which were goals of the training or conference; but clients do not know whether this learning actually strengthened the quality of participants’ advocacy.
Lessons Learned: To address this, trainers should gather data that distinguishes learning from action, preferably capturing observable changes in behavior, relationships, actions, policies, or practices. To gain verifiable information, I try to draw out the details and nuances of how participants applied what they learned, steering them away from generalities. For example, questions that include when, where, and who nudge participants to describe verifiable events rather than broad statements of use. For advocacy purposes, the goal is then to understand whether participants’ actions produced the desired advocacy results. I like to use a two-part system. First is a yes/no: did the actions influence any actor? If yes, I again probe for details that could, in theory, be verified. For example, “Can you please describe what the stakeholder did or said differently after you engaged them, and when and where this took place?”
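To make the two-part probe concrete, here is a minimal sketch in Python of how such responses might be captured as structured records. The field names are hypothetical, chosen only to keep the yes/no screening question separate from the verifiable detail (who, what, when, where); this is an illustration, not part of the method itself.

```python
# Hypothetical record for the two-part probe described above:
# Part 1 screens with a yes/no; Part 2 captures verifiable detail.
from dataclasses import dataclass

@dataclass
class InfluenceProbe:
    influenced_an_actor: bool   # Part 1: did the actions influence any actor?
    actor: str = ""             # Part 2: who changed
    observed_change: str = ""   # what they did or said differently
    when_where: str = ""        # details that make the claim verifiable

# Example response, probing past generalities to a describable event.
response = InfluenceProbe(
    influenced_an_actor=True,
    actor="municipal health committee chair",
    observed_change="tabled the coalition's proposal for discussion",
    when_where="council meeting, March, Ward 12",
)
```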

Hot Tip: Don’t automatically dismiss observation as a data collection method.  Many advocacy trainers think observation is too costly or not practical.  However, many advocacy events are observable and the organizations can use a simple template to capture changes in behaviors, relationships, policies or practices in those event settings.

Cool Trick: Don’t get stuck in overly prescribed outcome frameworks.  Because of the dynamic nature of advocacy work, outcomes from trainings could be highly variable.  Rather than only looking at target competency changes or specific desired outcomes, using an approach like outcomes harvesting can capture nuanced aspects of the influence of advocacy training.

For example, as a developmental evaluator of the Atlantic Fellows for Health Equity based at Tekano (South Africa), I have captured outcomes from interviews, some of which I conducted myself and some of which were done by the internal evaluator. The harvest gave us the following information regarding fellows’ descriptions of shifts in their behavior, which we organized according to the three competencies the programme aims to improve, as well as categorizing actions taken to engage other fellows.
[Table: fellows’ reported behavior shifts, organized by the programme’s three competencies and actions to engage other fellows]

Lessons Learned: It is important for advocacy training evaluators to remember that the effectiveness of participants’ advocacy will be heavily influenced by the opportunity to use those skills in the prevailing environment.  As a result, evaluators should build in a context analysis as part of their assessment of outcomes influenced by the training or conference.
Rad Resources:
   The Kirkpatrick Model, for the rationale of distinguishing reaction and learning from behavior change, and from the results such behavior in turn influenced
   For detail on how to categorize outcomes see Ricardo Wilson-Grau’s publication on Outcome Harvesting. http://www.managingforimpact.org/sites/default/files/resource/outome_harvesting_brief_final_2012-05-2-1.pdf
AEA365 is hosting the APC (Advocacy and Policy Change) TIG week.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. 



Outcome Harvesting: Revealing types and patterns of change over time by Barbara Klugman

28 March 2019, AEA365. See the original blog at: https://aea365.org/blog/outcome-harvesting-week-outcome-harvesting-revealing-types-and-patterns-of-change-over-time-by-barbara-klugman/


Hi, I am Barbara Klugman, an independent evaluator based in South Africa. I use outcomes harvesting mostly for evaluating progress of social justice advocacy initiatives and outcomes of training initiatives.  I also support funders in using it for monitoring progress and strategic learning about their grantmaking portfolios.

OH can reveal processes and patterns of change over time by analyzing sets of outcomes. Ricardo Wilson-Grau, the originator of OH, notes that, “The true value of an outcome harvest is not collecting individual outcomes but demonstrating how sets of outcomes reveal processes and patterns of change over time. So, the evaluator must take care not to be drowned in outcome detail and ensure that the story or picture of change emerges.” [Outcome Harvesting Principles in Practice 2016] This can be done in diverse ways.

Cool Trick: Collect and analyze small outcomes over time. Harvested over time, outcomes tell the story of how one group, in collaboration with others, influenced specific changes, as illustrated by the example below.

African Centre for Biodiversity (ACB): Evaluation using outcome harvesting, 2016:
[Figure: ACB’s outcomes over time, from the 2016 outcome harvesting evaluation]

Cool Trick: Categorize outcomes by type. During analysis, sort the outcomes into different types. For each harvest, the types are developed based on the question being asked. For example, a harvest user tracking outcomes of an advocacy initiative sorted them as illustrated in the figure below. Analysis revealed what proportion of the outcomes aimed at influencing government were actual changes of policy, versus changes in narrative and debate, versus changes in organizational capacities, among others, again something that could not have been predicted in advance across very diverse organizations and regions. It is this kind of quantitative data that gives an overview of what is happening in a large multi-issue, multi-organizational initiative and surfaces strategic questions going forward.
[Figure: outcomes of an advocacy initiative, categorized by type]
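As a rough illustration of this kind of analysis, the sketch below tallies outcome types and reports their proportions; the category labels and counts are hypothetical, not drawn from any of the evaluations described here.

```python
# Tally outcome types and report proportions, as in the analysis above.
from collections import Counter

# Hypothetical type assigned to each harvested outcome.
outcome_types = [
    "policy change", "narrative/debate", "organizational capacity",
    "narrative/debate", "policy change", "narrative/debate",
]

counts = Counter(outcome_types)
total = sum(counts.values())
for outcome_type, n in counts.most_common():
    print(f"{outcome_type}: {n} ({n / total:.0%})")
```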
Cool Trick: Look for patterns. OH reveals patterns of change not predicted in advance. For example, an outcome harvest was part of an evaluation of the Ford Foundation’s 2012-2017, $54 million Strengthening Human Rights Worldwide (SHRW) global initiative. The initiative aimed to influence the human rights movement internationally, in particular the agency and influence of groups in the global south. The grantees developed their own strategies to achieve this broad objective. Not surprisingly, among the actual outcomes targeting governments or inter-governmental human rights bodies, most changes were at the international level (42%) or at the national level (35%). What was unanticipated was that 23% related to regional human rights institutions. This unexpected finding alerted the evaluation users to the extent to which human rights groups are now focusing attention at the regional level.
Lessons Learned: Harvesting over a number of years reveals the nuances of organizations’ contributions towards high level goals. Harvesting across a diverse portfolio of organizations demonstrates their cumulative influence on different types of outcomes.
Rad Resource:
   Further demonstration of how the method reveals patterns can be seen in the report of the Ford Foundation Review at https://www.hrfn.org/resources/towards-a-new-ecology-for-the-human-rights-movement/


Outcome Harvesting Week: What is Outcome Harvesting?


See post on AEA365: Outcome Harvesting Week: What is Outcome Harvesting? Barbara Klugman, Heather Britt and Heidi Schaeffer

Hello, we are your blog series hosts, Barbara Klugman, Heather Britt and Heidi Schaeffer, colleagues of Ricardo Wilson-Grau, the originator of Outcome Harvesting. Ricardo mentored and inspired many members of the AEA community. Sadly, he passed away on December 31, 2018. This series of posts on Outcome Harvesting is in his honor. In this first blog we use his own words to introduce Outcome Harvesting (OH).
“Outcome Harvesting is designed for grant makers, managers, and evaluators who commission, manage or evaluate projects, programs, or organizations that experience continual change and contend with unexpected and unforeseeable actors and factors in their programming environments.”
“Unlike other monitoring and evaluation approaches, Outcome Harvesting does not necessarily measure progress towards predetermined objectives or outcomes, but rather, collects evidence of what has changed and then, working backwards, determines whether and how an intervention contributed to these changes.” (2019, p1)

OH is an appropriate method when the evaluation is asking “who changed and what changed?” It is not the right method for evaluation questions such as: “Did the training program increase participants’ knowledge and skills?” It is a good method for asking questions such as: “What are the participants doing differently after acquiring new knowledge and skills?” And, “What do the organizational policy and/or practice changes look like since the training program began?”

OH describes an outcome as an observable change in behavior (relationships, actions, activities, practices or policies) of an individual, group, community, organization in civil society, corporation, government, media, or member of the public. In every outcome harvest, the intended users of the harvest findings define what constitutes an outcome. An OH should seek outcomes that relate to the evaluation question, but also note unintended or unexpected outcomes, and both positive and negative outcomes.
Outcome Harvesting follows six interactive steps: (1) design the harvest; (2) gather data and draft outcome descriptions; (3) engage change agents in formulating the outcome descriptions; (4) substantiate; (5) analyze and interpret; and (6) support use of findings.

An outcome description includes:
   summary: who, when and where changed their behavior;
   contribution: how the intervention contributed, directly or indirectly, towards influencing that change in behavior;
   significance: of the outcome in relation to the organization’s or initiative’s goals.
Harvesters compile and categorize outcome descriptions (e.g. by type of actor, location, or other useful grouping). Then, harvesters interpret the patterns in the aggregated outcomes to answer monitoring and evaluation questions. Harvest users should be deeply engaged throughout analysis and interpretation.
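For readers who manage harvests in spreadsheets or scripts, here is a minimal sketch of an outcome description as a structured record, following the three elements listed above. The field names, the example content, and the grouping step are illustrative assumptions, not part of the method’s specification.

```python
# An outcome description as a record: summary, contribution, significance.
from dataclasses import dataclass

@dataclass
class OutcomeDescription:
    summary: str        # who, when and where changed their behavior
    contribution: str   # how the intervention contributed, directly or indirectly
    significance: str   # why the outcome matters for the initiative's goals
    actor_type: str     # a category for compilation, e.g. "government", "media"

# Hypothetical harvest with a single illustrative outcome.
harvest = [
    OutcomeDescription(
        summary="In 2017 a provincial legislature invited grassroots groups to testify.",
        contribution="The NGO's briefings introduced legislators to the groups.",
        significance="Marginalised voices entered a formal decision-making space.",
        actor_type="government",
    ),
]

# Compile and categorize by type of actor, as described above.
by_actor = {}
for outcome in harvest:
    by_actor.setdefault(outcome.actor_type, []).append(outcome)
```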
By the end of 2016, Outcome Harvesting had been used by over 400 networks and associations, NGOs, community-based organizations, research institutes, and government agencies in 143 countries on all seven continents.

Hot Tip:  Ensure that the initiative is at the point of influencing outcomes, as it can be inappropriate and disempowering to do an OH too early.

Lesson Learned:
It takes practice to confidently and accurately identify and draft outcomes and to engage the users in the whole process. If using OH for the first time, consider working with a co-facilitator or mentor familiar with the method.

Rad Resources:
   Outcome Harvesting: Principles, Steps and Evaluation Applications (IAP, 2018) by Ricardo Wilson-Grau
   World Bank, Outcome-based Learning Field Guide (2014)
   Outcome Harvesting website
The American Evaluation Association is celebrating Outcome Harvesting week. The contributions all this week to aea365 come from colleagues of the late Ricardo Wilson-Grau, originator of Outcome Harvesting, and these articles are written in his honor. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



Friday, February 22, 2019

Orientation to using Outcome Harvesting for evaluating advocacy. Klugman with Ricardo Wilson-Grau 2012

OUTCOME HARVESTING

Here's the talk I wrote with Ricardo Wilson-Grau reflecting on our experience of using Outcome Harvesting to evaluate an advocacy network. I presented it at the American Evaluation Association 2012 Conference.

I find that the idea of harvesting outcomes helps us to be rigorous in naming who changed what, when. Often we'll say 'oh, that meeting was really useful because it gave people ideas'. The discipline of Outcome Harvesting is to make sure that we record who took away which ideas from which event and how they used those ideas - that is, who changed what, and how the meeting contributed towards those changes (by giving people new ideas and the motivation to take action)... so read on.

Thursday, February 21, 2019

Evaluating social justice advocacy: a values-based approach. Center for Evaluation Innovation 2010

See my briefing document on values in social justice evaluation:

AEA365.org Using SNA to quantify and visualise relationships of power and influence in advocacy networks

Reposted from AEA365.org:

I have been using social network analysis (SNA) both for understanding shifts in the scope and depth of collaborations within networks, and for understanding if and how relationships among a cohort of fellows in a leadership development programme shift over time. This blog speaks to its use in evaluating advocacy networks.


APC TIG Week: Using SNA to quantify and visualise relationships of power and influence in advocacy networks by Barbara Klugman

Hi, I’m Barbara Klugman. I offer strategy support and conduct evaluations with social justice funders, NGOs, networks and leadership training institutions in South Africa and internationally. I practice utilization-focused evaluation, frequently using mixed methods including outcomes harvesting and Social Network Analysis (SNA).
Rad Resource: For advocacy evaluation, SNA can help identify:
  • how connected different types of advocacy organizations are to each other;
  • what roles they play in relation to each other such as information exchange, partnering for litigation, driving a campaign, or linking separate networks;
  • if and how their positioning changes over time in terms of relative influence in the network.
The method involves surveying all the groups relevant to the evaluation question, asking whether they have a particular kind of relationship with each of the other groups surveyed. To illustrate the usefulness of SNA, the map below shows an information network of the African Centre for Biodiversity, a South African NGO. In the map, each circle is an organization, sized by the number of organizations that indicated “we go to this organization for information”, answering one piece of the evaluation question regarding the position and role of the evaluand in its field, nationally and regionally. Of the 55 groups advocating for food sovereignty in the region who responded, the evaluand is the main bridger between South African groups and others on the continent. It is also a primary information provider to the whole group, alongside a few international NGOs and a few African regional organizations.
As another example, an SNA evaluating the Ford Foundation’s $54 million Strengthening Human Rights Worldwide global initiative distinguished changes in importance and connectedness between the start of the initiative and four years later, among those inside the initiative (blue), ‘matched’ groups with similar characteristics (orange), and five others in Ford’s portfolio (pink). It shows that the initiative’s grantees, notably those from the Global South (dark blue), have developed more advocacy relationships than the matched groups (see the larger nodes and greater number of connections). However, the largest connector for advocacy remains Amnesty International, the big pink dot in the middle, demonstrating its continuing differential access to resources and influence relative to the other human rights groups.
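For those who script their own analysis, here is a minimal sketch of the two measures discussed here: in-degree (used above to size nodes by “we go to this organization for information”) and betweenness centrality, one common way to surface bridging organizations. It uses the Python networkx library rather than the Gephi software mentioned under Hot tips, and the organization names and edges are made up for illustration.

```python
# Build a directed "information network" from survey responses.
# An edge (A, B) means group A answered "we go to B for information".
import networkx as nx

G = nx.DiGraph([
    ("GroupA", "Evaluand"), ("GroupB", "Evaluand"), ("GroupC", "Evaluand"),
    ("Evaluand", "RegionalNGO"), ("GroupA", "IntlNGO"), ("GroupB", "IntlNGO"),
])

# In-degree: how many groups name each node as an information source.
in_degree = dict(G.in_degree())

# Betweenness centrality: a candidate measure of bridging roles.
bridging = nx.betweenness_centrality(G)

print(sorted(in_degree.items(), key=lambda kv: -kv[1]))
print(sorted(bridging.items(), key=lambda kv: -kv[1]))
```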

Hot tips:
  • Keep it simple: As surveys ask about each organization, responding takes time, so ask only about roles that closely answer the evaluation questions regarding the network. For example, “my organization has engaged with them in advocacy at a regional forum”; “my organization has taken cases with them”
  • Work with a mentor: While SNA software like Gephi is open access, making sense of social network data requires statistical analysis capacity and SNA theory to extract meaning accurately.
Lesson Learned:
  • Consider whether or not to show the names of groups, as your tables or maps will surface who is ‘in’ and who is outside a network in ways that might have negative consequences for group dynamics or for individual groups, or expose groups’ negative perceptions of each other.
Rad resources:
Wendy Church, Introduction to Social Network Analysis, 2018.