I’ve been hearing a lot about failure lately: “I don’t want to admit that I have failed”, “Let’s re-frame these results so it doesn’t seem like we failed”.  Most of these comments have come from what I consider to be successful people, running successful programs and businesses.  Why is this conversation about failure coming up?

People are trying new things.  We live in a complex world that requires innovation to adapt and improve our approaches.  It’s courageous to try something new, especially when we’ve received funding and feel like we’re accountable for results.  Innovation requires a certain kind of risk, because we are doing something we aren’t actually sure will work.  If we already knew what would work, and if our context never changed, we could confidently continue what we are already doing.

Why is innovation so important?

We live in a changing world with new challenges, complexity, and ever-shifting influences.  Innovation allows us to imagine new solutions.  This requires a lot of leadership and willingness to learn.

Why is innovation so hard?

We may understand the need to change, simply because we know our approach needs to improve or because we can envision a better way.  Knowing change is needed, however, isn’t the same as knowing what to do about it.  Until we try something, we don’t know if it will work: we are operating in a constellation of needs, stakeholders, funding, relationships and other pressures that will impact the best-laid plans.  Trying something new means exposing ourselves to a world of unknowns.  Our challenge is to act with our best information, intentions and approach.  Then we need to reflect, because there will be nuances to our experience that can teach us a great deal about how we might move forward effectively.  Listening carefully is the key to our ability to learn and improve, carrying forward a clear understanding of what didn’t work just as much as what did.

I am much more worried about failure of imagination, failure to act, and failure to reflect, than I am about hearing “this completely failed, let’s learn from it.”  The very reason that we tried something new was to see if it worked.  If it didn’t, let’s not repeat it, and let’s understand why. 

How do we as evaluators create a safe space to talk about failure? It’s a conversation that helps us evolve and grow as a profession.  It’s key to helping our clients benefit from their experience; saying something failed shouldn’t be about admitting weakness, but about celebrating a new approach and building collective wisdom around what worked, what didn’t, and what lessons can be learned and shared.

What can we do to support talking about failure?

·  Create a safe space for the conversation
·  Make it clear from the beginning that learning is the goal
·  Focus on the experiment, not the success or failure of the organization carrying it out

If we knew exactly how to do something, it wouldn’t be innovation. We can create the opportunity to build on our failures through innovation, action, and reflection.

 
 
“No offense, but I’m not actually going to read this,” said a client last week about my Final Evaluation Report.  I’ve been gathering data for two years and have spent countless hours putting it together.  Actually, I don’t take offense.  The final report format requires a certain comprehensiveness from me, by which you can envision a swath of dust-catching pages full of detailed data, long explanations, and figures.

In fact, I consider the final report an essential document, because it is the full version that details the methodology, data sources, analysis and other important information.  I know, however, that this is not the final product that my client wants to see.  It’s just the repository for all of the relevant information, including appendices with all of the survey instruments, interview protocols, and detailed results. 

What my client wants to see is a richer representation of the data.  They want to see it in colour, in context.  They want to know what it means.  This is one of the most exciting and meaningful parts of my work.  I have created a number of reports in association with the final report, which help to visualize the available data and explain the relationships between different aspects of the work.  This “report” is no longer one thing; it is a variety of versions and formats that serve multiple goals: understanding the process of a particular strategy, articulating outcomes within a combination of strategies, illustrating the results of a particular method, and communicating with different kinds of audiences ranging from internal decision-makers to community partners.  This is another step beyond data analysis, drawing on skills in communication and design, and it’s challenging but rewarding.

You can find out more about better evaluation reporting from the exceptional Kylie Hutchinson, who is a great guide in making sense of data in every situation.  There are also other helpful resources out there, such as Stephanie Evergreen and Ann K. Emery.

This is the new evaluation reporting.  Someone is actually going to read the evaluation report.  Our ability to create a meaningful, accessible report means that it will have a better chance of supporting important decisions to come and improving the work being done.   Personally, I find that very exciting!

 
 
I've been collecting stories of Most Significant Change as part of the evaluation of a systems-change initiative in the healthcare sector.  It's innovative, complex and it's the first time anything of this scale is being attempted by my client. 

The Most Significant Change method calls for collecting stories within a certain domain of change, so we've been focusing on the area where there has been the greatest impact since the inception of the initiative in the fall of last year.  Stories have come from physicians, patients, partners and allied health professionals.  Up until now I had been recording the stories, but there still wasn't clear buy-in on the process of using stories as part of the evaluation.

The magic happened last week when we went through the story selection process.  For the first time, the Evaluation Working Group had the opportunity to hear the stories of Most Significant Change in people's lives, and they realized what a difference they were making.  The stories showed the incredible challenges of patients seeking care for complex conditions, and of care providers struggling to support patients whose social barriers clearly impacted their health yet lay outside what the providers could address; the group also got to see how the changes came about and why they were significant to the storytellers.  The group discussed at length the significance of each story, and it didn't take long before they identified themes in the stories that reflected their original motivation for getting involved in the initiative.  As the conversation touched on the raw experiences of the initiative, there was an opportunity for deep reflection.

This was one of the most satisfying meetings I've had all year.  There was a "click" where the hard data started to take on faces and experiences, guiding us through the journey of change that has been happening.  The stories illustrated the change in a way that made our survey statistics and care data come to life. 

Here are some things to consider when using Most Significant Change:

1. Stories help illustrate the context.  They are complementary, though, and are most valuable when presented alongside hard data that shows the bigger picture.
2. Gather stories from a diversity of respondents.  The target interview groups should ideally be identified as part of the evaluation plan.
3. Be ready to facilitate!  The selection process is rewarding but needs guidance to maintain a safe, open space and help nudge the group towards a decision using a process they feel comfortable with.

Enjoy the process!

 
 
How do you deal with the complexity of collaborating organizations that are on different timelines, with power differentials, and varying levels of data quality?  Krishna Belbase of the Evaluation Office of UNICEF introduced the Resource Pack on Joint Evaluations, developed by the UN Evaluation Group, at the CES 2015 conference in Montreal.  He suggested that it is structured for UN agencies, but could be adapted to suit other organizations.  The Resource Pack is rich not only because of the simple yet comprehensive guide it provides for evaluation, but also because of the way it details the governance structures needed to support organizations working together on evaluation.

In today’s world many evaluations are done with some element of collaboration, and the Guidance Document and Toolkit that make up the Resource Pack can be used to help define the key functions, structures, and questions to ask when determining how to govern evaluation. 

The Guidance Document helps tease out the various functions like communication, management, technical input, and logistics.  The Toolkit then walks you through the steps, from deciding to work together on an evaluation and preparing for it, to implementing the evaluation and utilizing the outcomes.  It addresses sticky issues like readiness and buy-in, and provides advice at every stage from developing terms of reference to disseminating findings.

Do you need a steering committee, management group, reference group, stakeholder group, or advisory group?  The Toolkit lays out the considerations for making important decisions about the most appropriate governance structure for your situation.  Overall, the Resource Pack on Joint Evaluations is a great resource for any organization looking to support decision-makers and leaders in structuring their governance, and provides tools such as checklists, examples and good practices to evaluation practitioners.

Check out this amazing resource: Resource Pack on Joint Evaluations