AHELP Program Evaluation Webpage

This webpage has several parts:

  • The Primary Resources page has background information and links to tools, plus more information on the CDC Framework for Program Evaluation in Public Health and Mark Friedman’s Results Accountability.
  • The CDC Framework for Program Evaluation in Public Health page has a step-by-step explanation of this evaluation strategy, along with links to useful tools.
  • The Mark Friedman Results Accountability page has a step-by-step explanation of this evaluation strategy, along with links to useful tools.
  • The Tools Database lets you search for specific tools.
  • The Evaluation Resources page has links to organizations that promote program evaluation and documents that describe additional strategies for conducting program evaluation.
  • The Evaluation Glossary has definitions for some of the words and terms often used by program evaluators.

Below, you will find some background thoughts and information about program evaluation.

  • Why Evaluate?
  • The program evaluation standards of practice
  • Before you start
  • What About Data?
  • Ethics

Why Evaluate?

Program evaluation gives you very helpful information:

  • It can tell you if activities or services are happening the way they are supposed to.
  • It can tell you how to improve the program as you go along.
  • It can tell you if your program did what it was supposed to do.
  • It can tell you if your program helped people.

You can use this information to show your community, your funding sources, and your program’s leaders why they need to support your program.

Alaska programs also need to share their success stories.  Many people are doing lots of great work, and other people need to know about it!  Program evaluation is a mechanism for collecting, interpreting, and sharing program information that shows what has been accomplished.

In the end, program evaluation is about this question: Were you successful?

Evaluation can and should happen throughout a program’s evolution.  “Success” means different things depending on the stage being evaluated.

  • If you are testing pieces of a program before they are fully in place (a formative evaluation), the question is: Would this activity, service or material work for our program?
  • If you are assessing how an ongoing program functions (a process evaluation), the question becomes: Is the program happening the way it is supposed to?
  • If you want to find out if your program caused change (an outcome evaluation or summative evaluation), the question is then: What happened differently because of this program?

Note: you need to know how a program was carried out (process evaluation) before you can talk about the changes it may have caused (outcome evaluation).

The program evaluation standards of practice

In 1994, sixteen professional organizations agreed that program evaluation should meet certain standards.  These standards are very practical and will give your program evaluation a solid foundation.  They are organized into four categories:

Utility
Who needs the information from this evaluation, and what information do they need?
Feasibility
How much money, time, and effort can we put into this?
Propriety
Who needs to be involved in the evaluation for it to be ethical?
Accuracy
What design will lead to accurate information?

Complete list of standards:
ERIC/AE Digest. The Program Evaluation Standards (1995). ERICDigests.org.

Discussion of cultural issues and the standards:
Diversity Committee, American Evaluation Association. A Cultural Reading of The Program Evaluation Standards (2004). http://www.eval.org/aea05.cr.BoardApproved2.pdf (printed 9/25/07).


Before you start…

For evaluation, you need to decide:

  • Exactly what you are evaluating (check your program objectives or logic model),
  • Exactly what success would mean for the program you are evaluating (look at your program objectives or logic model for indicators), and
  • Specifically, what information (e.g., data related to your indicators) you have or could get that would show how successful you have been.

Know your audience – Who will use the results of the evaluation?  Find out what they need, and make sure the evaluation has that information.  If possible, include your audience in planning your evaluation.

Be realistic – What resources do you have for evaluation?  As you plan your evaluation, think about how much staff time each part would need, how much experience and skill in data collection and analysis you have available, and how easy it would be to obtain information from your possible sources.

Sometimes programs do a self-evaluation, and sometimes the evaluator comes from outside.  This choice depends partly on program resources and staff skills.  Also, it may be easier for everyone involved to speak openly about some of their experiences with an outsider.  If you want to find an evaluator, ask your colleagues and/or state program staff for recommendations.  The American Evaluation Association also has a list of members who are available for consultation, although it does not include anyone from Alaska.

What About Data?

Your data need to be accurate.

  • They need to measure what you think they measure (i.e., the data need to be valid).
  • If you use the same data collection method twice at about the same time, both data collections need to produce the same results (i.e., the data need to be reliable).
  • If your evaluation question concerns a group and you collect data from only part of that group, your sub-group needs to look as much like the larger group as possible (i.e., the sample needs to be representative).

Your data do NOT need to be complicated.

  • If you focus on making sure that your data are the best you can get for answering your evaluation questions, you will have useful results.
  • Simple is better for many reasons, including:
    • the data are usually easier to collect, analyze and explain, and
    • most Alaska programs and communities are so small that it would not be feasible to do a complex statistical analysis of your data.
  • Collect information from several points of view; if you have similar results from all your sources, your conclusion will have a stronger foundation.  If your sources have different results, explore why that might be the case.
    For example, a smoking cessation program could count the number of cigarettes purchased and the number of smoke-free homes in the community, as well as the number of people who received smoking cessation services.  If the number of cigarettes purchased went down while the numbers of smoke-free homes and the numbers served increased, the program’s impact statement is stronger than it would be if the numbers served were the only source of information.
  • If it makes sense, use surveys and other data collection tools that have been developed by other programs.  But remember that most published tools were developed by big urban programs; you might need to make some changes for such tools to work well for your program.

Other thoughts on data collection

  • Keep it simple.  Focus only on your evaluation questions, and not on all of the other things you would like to find out about.  The easiest evaluations look at just one or two very specific questions.
  • If you use a survey that you have developed, ask an expert to review your questions.  Also ask someone who could be in the group you will survey to look at your questions, to be sure that he or she understands them the same way you do.
  • Ask about people’s own experiences and observations as much as possible.  Avoid asking someone about someone else’s reactions, thoughts or feelings.

Ethics

You need to:

  • Protect the privacy of people who give you data for your program evaluation.
    • Ask permission before you include someone’s name in a report or presentation.
    • Be sure that individuals cannot be identified from your descriptions in reports or presentations.
    • During a survey or interview, tell your respondents how you will protect their privacy and how you will use the information they give you.  Tell your respondents that they can choose not to answer your questions.
  • Include on your project oversight team at least one representative of each group affected by the program being evaluated; this will help you make sure everyone is treated fairly.

Most program evaluation is not formal research, so in most cases you do not need to seek approval from an Institutional Review Board.