Top 5 Myths of Program Evaluation

These days, it can be hard to tell what’s true and what’s not.  So we’ve taken a look at the common misconceptions floating around the evaluation space.  What’d we learn? There are more myths about program evaluation than we realized!  But we know your time is precious, so we’re just bringing you the headliners.  Think you know what the top 5 are?  Read on to find out!


Myth #5: Pre-post surveys are always the best method for evaluating programs.

Oh, we get this one a lot!  You may have heard how fabulous the pre-post method is, and, like most methods, there is certainly a time and place for it.  But that time and place may not always include your program.  Why?  Multiple surveys can be burdensome to staff and participants; matching responses while maintaining anonymity can be tricky; outside factors can shift responses in unexpected ways by the time you send out a post-survey; and sometimes you decide you want to evaluate a program only after it has already begun.  Rather than lock yourself into a pre-post-or-nothing mindset, consider alternatives such as the Retrospective Post-then-Pre (RPT): a single survey given after the program, in which participants rate where they are now and where they were before the program began.

Myth #4: Qualitative data aren’t real data.

When you hear “data,” you might immediately think about numbers and statistics.  But words are powerful data, too!  If your organization is more of a story-telling community than a survey-taking one, or you’re simply looking to add a personal side to your data collection, then qualitative methods might be just for you!  To maximize your efforts, learn more about the art and science of in-depth interviews and focus groups -- even virtual focus groups.  Doing so can help you create effective tools and train your team in ways that will generate incredibly rich data about your program.

Myth #3: We should always measure outcomes.

We get it.  Outcomes are the flashy, exciting thing to report.  We know you want to share all about the difference you’ve made.  But here’s the deal – your outputs are truly foundational to your evaluation work, so they’re an ideal place to start.  Plus, they’re often easier to collect and go a long way toward sharing your program’s success.  So don’t discount that data!  Want to read more about the benefits of outputs and see our list of “top 10” outputs from our partners?  Check out our Ode to Outputs!

Myth #2: Evaluation is too risky because it might show we’ve failed.

One of our favorite evaluation quotes is from Michael Quinn Patton, who shared, “Research seeks to prove, evaluation seeks to improve...”  That fundamental difference elevates the true purpose of program evaluation – to help your team get better at advancing your mission.  Please do not avoid evaluation for fear of failure!  Instead, collect enough data to help you describe what your program does and strategically learn about what’s working and what’s not.  That’s the only way you’ll be able to create the most effective, efficient, and (dare we say?) replicable initiatives possible.

Myth #1: Funders are the main reason we should evaluate our programs.

No, we’re not naïve.  We know all about things like grant requirements and funder-grantee power dynamics.  But when it comes to the main reason to evaluate your work, we genuinely hope you’ll put your organization’s needs first. Why?  While funders are essential to your program’s sustainability, they are rarely the primary beneficiaries or users of your data.  When exploring the benefits that evaluation can bring to your program, start with the data that will benefit your program recipients, guide your program improvements, give your staff the key information they need, and tell your story. 

Ready to tackle program evaluation together?  Reach out -- we’d love to connect!

Jana Sharp