Why evaluate programs?

Evaluation results are likely to suggest that your program has strengths as well as limitations. Your evaluation should not be a simple declaration of program success or failure. Evidence that your environmental education (EE) program is not achieving all of its ambitious objectives can be hard to swallow, but it can also help you learn where best to put your limited resources. A good evaluation is one that is likely to be replicable, meaning that someone else should be able to conduct the same evaluation and get the same results.

The higher the quality of your evaluation design, its data collection methods and its data analysis, the more accurate its conclusions and the more confident others will be in its findings.

Making evaluation an integral part of your program means evaluation is a part of everything you do. You design your program with evaluation in mind, collect data on an ongoing basis, and use these data to continuously improve your program. Developing and implementing such an evaluation system has many benefits. As you set goals, objectives, and a desired vision of the future for your program, identify ways to measure these goals and objectives and decide how you might collect, analyze, and use this information.

This process will help ensure that your objectives are measurable and that you are collecting information that you will use. Strategic planning is also a good time to create a list of questions you would like your evaluation to answer. See Step 2 to make sure you are on track. Update these documents on a regular basis, adding new strategies, changing unsuccessful strategies, revising relationships in the model, and adding unforeseen impacts of an activity (EMI). One resource on this topic describes features of an organizational culture that supports evaluation, and explains how to build teamwork, administrative support, and leadership for evaluation.

It discusses the importance of developing organizational capacity for evaluation, linking evaluation to organizational planning and performance reviews, and unexpected benefits of evaluation to organizational culture. If you want to learn more about how to institutionalize evaluation, check out the following resources on adaptive management.

Adaptive management is an approach to conservation management based on learning from systematic, ongoing monitoring and evaluation, and on adapting and improving programs based on those findings.

References for this section include: Patton, M. Qualitative Research & Evaluation Methods. Thomson, G. Measuring the Success of EE Programs. Canadian Parks and Wilderness Society.

Program evaluation with an outcomes focus is increasingly important for nonprofits and is frequently asked for by funders.

An outcomes-based evaluation helps you ask whether your organization is really doing the right program activities to bring about the outcomes you believe (or, better yet, have verified) to be needed by your clients, rather than just engaging in busy activities that seem reasonable at the time.

Outcomes are benefits to clients from participation in the program. Outcomes are often confused with program outputs, or units of service (for example, the number of clients who went through a program). The following information is a top-level summary. To accomplish an outcomes-based evaluation, you should first pilot, or test, the approach on one or two programs at most before applying it to all programs.

The general steps in an outcomes-based evaluation are to:

1. Identify the major outcomes that you want to examine or verify for the program under evaluation. You might reflect on your mission (the overall purpose of your organization) and ask yourself what impact you will have on your clients as you work toward that mission.

For example, if your overall mission is to provide shelter and resources to abused women, ask yourself what benefits effectively providing that shelter and those services or resources will have for the women. As a last resort, you might ask yourself, "What major activities are we doing now?" and work backward from there. This "last resort" approach, though, may just end up justifying ineffective activities you are doing now, rather than examining what you should be doing in the first place.

2. Choose the outcomes that you want to examine, prioritize them, and, if your time and resources are limited, pick the top two to four most important outcomes to examine for now.

3. For each outcome, specify what observable measures, or indicators, will suggest that you are achieving that key outcome with your clients.

This is often the most important and enlightening step in outcomes-based evaluation. However, it is often the most challenging and even confusing step, too, because you are suddenly going from a rather intangible concept to specific, observable measures. It helps to have a "devil's advocate" during this phase of identifying indicators, that is, someone who questions whether each proposed indicator really demonstrates the intended outcome.

Specify a "target" goal of clients, i. Identify what information is needed to show these indicators, e. If your program is new, you may need to evaluate the process in the program to verify that the program is indeed carried out according to your original plans. Michael Patton, prominent researcher, writer and consultant in evaluation, suggests that the most important type of evaluation to carry out may be this implementation evaluation to verify that your program ended up to be implemented as you originally planned.

6. Decide how that information can be efficiently and realistically gathered (see Selecting Which Methods to Use below). Consider program documentation, observation of program personnel and clients in the program, questionnaires and interviews about clients' perceived benefits from the program, case studies of program failures and successes, and so on. You may not need all of the above.

7. Analyze and report the findings (see Analyzing and Interpreting Information below).
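To make steps 2 through 4 concrete, here is a minimal sketch in Python of how a program might record its chosen outcomes, the indicators for each, and a target goal, and then check collected results against those targets. The outcome, indicators, and numbers are entirely hypothetical and are meant only to illustrate the structure, not any particular program.

    # Hypothetical sketch: outcomes, indicators, and target goals for an
    # outcomes-based evaluation. All names and numbers are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        description: str         # the observable measure for an outcome
        target_percent: float    # "target" goal: % of clients expected to achieve it
        achieved_percent: float  # what the collected data actually shows

    @dataclass
    class Outcome:
        name: str
        indicators: list

        def met_targets(self):
            # Indicators whose collected results meet or exceed the target.
            return [i for i in self.indicators
                    if i.achieved_percent >= i.target_percent]

    # Example: one prioritized outcome for a hypothetical shelter program.
    self_reliance = Outcome(
        name="Increased self-reliance",
        indicators=[
            Indicator("Clients secure stable housing within 6 months", 70.0, 64.0),
            Indicator("Clients report feeling safe in daily life", 80.0, 83.0),
        ],
    )

    met = self_reliance.met_targets()
    for ind in self_reliance.indicators:
        status = "target met" if ind in met else "target not yet met"
        print(f"{ind.description}: target {ind.target_percent}%, "
              f"achieved {ind.achieved_percent}% ({status})")

Even a structure this simple forces the outcomes, their indicators, and the targets to be stated explicitly, which is the point of steps 2 through 4.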

The major methods used for collecting data during evaluations include questionnaires and surveys, interviews, documentation review, observation, focus groups, and case studies, each with its own advantages and disadvantages. Also consider Appreciative Inquiry and careful survey design. Note that if your evaluation will focus on and report personal information about the customers or clients participating in it, you should first gain their consent to do so. They should understand what you are doing with them in the evaluation and how any information associated with them will be reported.

You should clearly convey the terms of confidentiality regarding access to evaluation results. Participants should have the right to participate or not, and they should review and sign an informed consent form. The overall goal in selecting evaluation methods is to get the most useful information to key decision makers in the most cost-effective and realistic fashion.

Consider the following questions: 1. What information is needed to make current decisions about a product or program? 2. Of this information, how much can be collected and analyzed in a low-cost and practical manner, e.g., using questionnaires, surveys, and checklists?

3. How accurate will the information be (keep in mind the disadvantages of each method)? 4. Will the methods get all of the needed information? 5. What additional methods should and could be used if additional information is needed? 6. Will the information appear credible to decision makers, e.g., to funders or top management? 7. Will the nature of the audience conform to the methods, e.g., will they be willing to complete questionnaires carefully or participate in interviews or focus groups? 8. Who can administer the methods now, or is training required?

9. How can the information be analyzed? Note that, ideally, the evaluator uses a combination of methods, for example, a questionnaire to quickly collect a great deal of information from a lot of people, and then interviews to get more in-depth information from certain respondents to the questionnaires. Perhaps case studies could then be used for more in-depth analysis of unique and notable cases, e.g., participants who benefited most from the program or those who dropped out.

There are four levels of evaluation information that can be gathered from clients: 1. their reactions and feelings; 2. their learning (changes in attitudes, perceptions, or knowledge); 3. changes in their skills or behavior; and 4. effectiveness (improved performance or conditions because of their changed behavior). Usually, the farther your evaluation information gets down this list, the more useful your evaluation is. Unfortunately, it is quite difficult to reliably get information about effectiveness. Still, information about learning and skills is quite useful. Analyzing quantitative and qualitative data is often the topic of advanced research and evaluation methods.

There are certain basics that can help you make sense of reams of data. Always start with your evaluation goals: when analyzing data (whether from questionnaires, interviews, focus groups, or other sources), always start from a review of your evaluation goals, i.e., the reasons you undertook the evaluation in the first place. This will help you organize your data and focus your analysis. For example, if you wanted to improve your program by identifying its strengths and weaknesses, you can organize the data into program strengths, weaknesses, and suggestions to improve the program.

If you wanted to fully understand how your program works, you could organize the data in the chronological order in which clients go through your program. If you are conducting an outcomes-based evaluation, you can categorize the data according to the indicators for each outcome.

Basic analysis of "quantitative" information (information other than commentary, e.g., ratings, rankings, and yes/no answers):

1. Make copies of your data and store the master copy away. Use the copy for making edits, cutting and pasting, and so on.

2. Tabulate the information, i.e., add up the number of ratings, rankings, yes's, and no's for each question.

3. For ratings and rankings, consider computing a mean, or average, for each question. For example, "For question 1, the average ranking was 2."

This is more meaningful than simply reporting how many respondents gave each rating.

4. Consider conveying the range of answers as well, e.g., the lowest and highest ratings given for each question (a short sketch of this kind of tabulation follows).
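As a rough illustration of the tabulation described above, the following Python sketch counts how many respondents gave each rating and computes a mean and range per question. The questions and ratings are hypothetical.

    # Minimal sketch: tabulating questionnaire ratings and computing a mean
    # and range per question. The ratings below are hypothetical.
    from collections import Counter
    from statistics import mean

    responses = {
        "Q1: Overall, how useful was the program? (1-5)": [4, 5, 3, 4, 5, 4, 2],
        "Q2: How likely are you to recommend it? (1-5)": [5, 5, 4, 3, 4, 4, 5],
    }

    for question, ratings in responses.items():
        tally = Counter(ratings)                 # how many people gave each rating
        average = mean(ratings)                  # mean rating for the question
        low, high = min(ratings), max(ratings)   # range of answers
        print(question)
        print(f"  tally: {dict(sorted(tally.items()))}")
        print(f"  average: {average:.1f}, range: {low}-{high}, n={len(ratings)}")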

Basic analysis of "qualitative" information (respondents' verbal answers in interviews, focus groups, or written commentary on questionnaires):

1. Read through all the data.

2. Organize the comments into similar categories, e.g., concerns, suggestions, strengths, and weaknesses.

3. Label the categories or themes.

4. Attempt to identify patterns, or associations and causal relationships, in the themes, e.g., whether most respondents who raised a particular concern attended the same session (a small sketch of this kind of sorting follows this list).

5. Keep all commentary for several years after completion in case it is needed for future reference.
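The sketch below illustrates, in Python, the kind of sorting described in steps 2 through 4. In practice a person reads the comments and assigns the themes; here a simple keyword lookup stands in for that judgment, and the comments, themes, and keywords are all hypothetical.

    # Minimal sketch: sorting open-ended comments into themes and counting them.
    # A keyword lookup stands in for human judgment; all content is hypothetical.
    from collections import defaultdict

    theme_keywords = {
        "strengths": ["helpful", "friendly", "learned"],
        "weaknesses": ["confusing", "too long", "crowded"],
        "suggestions": ["should", "wish", "add"],
    }

    comments = [
        "Staff were friendly and I learned a lot.",
        "The evening session was too long and confusing.",
        "You should add a follow-up workshop.",
    ]

    themed = defaultdict(list)
    for comment in comments:
        for theme, keywords in theme_keywords.items():
            if any(word in comment.lower() for word in keywords):
                themed[theme].append(comment)

    for theme, items in themed.items():
        print(f"{theme} ({len(items)}):")
        for item in items:
            print(f"  - {item}")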

When interpreting the information: 1. Attempt to put the information in perspective, e.g., by comparing results to what you expected or to the program's original goals. 2. Consider recommendations to help program staff improve the program, and conclusions about program operations or meeting goals. 3. Record conclusions and recommendations in a report document, and include the interpretations that justify your conclusions or recommendations.

The level and scope of the report's content depend on the intended audience, e.g., funders, staff, clients, or the public. Be sure employees have a chance to carefully review and discuss the report. Translate recommendations into action plans, including who is going to do what about the program and by when.

Bankers or funders will likely require a report that includes an executive summary (a summary of conclusions and recommendations, not a listing of the report's sections; that's a table of contents); a description of the organization and the program under evaluation; an explanation of the evaluation goals, methods, and analysis procedures; a listing of conclusions and recommendations; and any relevant attachments, e.g., copies of the questionnaires or interview guides used to gather the data.

The banker or funder may want the report delivered as a presentation, accompanied by an overview of the report, or may prefer to review the report alone. Be sure to record the evaluation plans and activities in an evaluation plan document that can be referenced when a similar program evaluation is needed in the future. An example of evaluation report contents appears later in this document. Ideally, management decides what the evaluation goals should be.

Then an evaluation expert helps the organization determine what the evaluation methods should be, and how the resulting data will be analyzed and reported back to the organization. Most organizations do not have the resources to carry out the ideal evaluation.

References cited in this article include:

Patton, M. Utilization-Focused Evaluation.
Perrin, B. Effective use and misuse of performance measurement. American Journal of Evaluation, 19(3).
Perrin, E., and Koshel, J. Assessment of performance measures for community health and development, substance abuse, and mental health.
Phillips, J. Handbook of Training Evaluation and Measurement Methods.
Porteous, N. Program Evaluation Tool Kit: A Blueprint for Community Health and Development Management.
Posavac, E. Program Evaluation: Methods and Case Studies.
Preskill, H. Evaluative Inquiry for Learning in Organizations.
Public Health Functions Project. The Public Health Workforce: An Agenda for the 21st Century. Washington, DC.
Public Health Training Network. Practical Evaluation of Public Health Programs.
Reichardt, C.
Rossi, P. Evaluation: A Systematic Approach.
Rush, B. Program logic models: expanding their role and structure for program planning and evaluation. Canadian Journal of Program Evaluation.
Sanders, J. Uses of evaluation as a means toward organizational effectiveness. In A Vision of Evaluation, edited by S.T. Gray.
Schorr, L. Common Purpose: Strengthening Families and Neighborhoods to Rebuild America.
Scriven, M. A minimalist theory of evaluation: the least theory that practice requires. American Journal of Evaluation.
Shadish, W. Foundations of Program Evaluation.
Shadish, W. Evaluation theory is who we are. American Journal of Evaluation.
Shulha, L. Evaluation use: theory, research, and practice since 1986. Evaluation Practice.
Sieber, J. Planning Ethically Responsible Research.
Steckler, A. Toward integrating qualitative and quantitative methods: an introduction. Health Education Quarterly.
Taylor-Powell, E. Evaluating Collaboratives: Reaching the Potential.
Teutsch, S. A framework for assessing the effectiveness of disease and injury prevention.
Torres, R. Evaluation Strategies for Communicating and Reporting: Enhancing Learning in Organizations.
United Way of America. Measuring Program Outcomes: A Practical Approach.
General Accounting Office. Case Study Evaluations.
General Accounting Office. Designing Evaluations.
General Accounting Office. Managing for Results: Measuring Program Results That Are Under Limited Federal Control. Washington, DC.
General Accounting Office. Prospective Evaluation Methods: The Prospective Evaluation Synthesis.
General Accounting Office. The Evaluation Synthesis.
General Accounting Office. Using Statistical Sampling.
Wandersman, A. Comprehensive quality programming and accountability: eight essential strategies for implementing successful prevention programs. Journal of Primary Prevention, 19(1).
Weiss, C. Nothing as practical as a good theory: exploring theory-based evaluation for comprehensive community initiatives for families and children. In New Approaches to Evaluating Community Initiatives, edited by Connell, J., Kubisch, A., et al.
Weiss, C. Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1).
Weiss, C. How can theory-based evaluation make greater headway? Evaluation Review, 21(4).
W.K. Kellogg Foundation. Evaluation Handbook. Battle Creek, MI: W.K. Kellogg Foundation.
Wong-Reiger, D. Using Program Logic Models to Plan and Evaluate Education and Prevention Programs. Ottawa, Ontario: Canadian Evaluation Society.
Wholey, J. Handbook of Practical Program Evaluation. Jossey-Bass. This book serves as a comprehensive guide to the evaluation process and its practical applications for sponsors, program managers, and evaluators.
Yarbrough, D., et al. The Program Evaluation Standards. Sage Publications.
Yin, R. Case Study Research: Design and Methods.

Learn how program evaluation makes it easier for everyone involved in community health and development work to evaluate their efforts. This part of the article addresses: Why evaluate community health and development programs? How do you evaluate a specific program? What is a framework for program evaluation? What are some standards for "good" program evaluation? How do you apply the framework to conduct optimal evaluations?

Around the world, many programs and interventions have been developed to improve conditions in local communities.

Examples of different types of programs include direct service interventions, among many others. Evaluation complements program management by helping to clarify program plans, improving communication among partners, and gathering the feedback needed to improve and be accountable for program effectiveness. It's important to remember, too, that evaluation is not a new activity for those of us working to improve our communities.

Before your organization starts with a program evaluation, your group should be very clear about the answers to the following questions: What will be evaluated? What criteria will be used to judge program performance? What standards of performance on the criteria must be reached for the program to be considered successful?

What evidence will indicate performance on the criteria relative to the standards? What conclusions about program performance are justified based on the available evidence?

Consider an example of how these questions might be answered. What will be evaluated? Drive Smart, a program focused on reducing drunk driving through public education and intervention. What criteria will be used to judge program performance? The number of community residents who are familiar with the program and its goals; the number of people who use "Safe Rides" volunteer taxis to get home; the percentage of people who report drinking and driving; and the reported number of single-car nighttime crashes (a common way to try to determine whether the number of people who drive drunk is changing). What standards of performance on the criteria must be reached for the program to be considered successful?

What evidence will indicate performance on the criteria relative to the standards? A random telephone survey will demonstrate community residents' knowledge of the program and changes in reported behavior; logs from "Safe Rides" will tell how many people use its services; and information on single-car nighttime crashes will be gathered from police records.
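As a rough illustration of how such evidence might be checked against standards of performance, here is a small Python sketch. Every number and target in it is hypothetical; the actual standards for a program like Drive Smart would be set by its stakeholders.

    # Hypothetical sketch: comparing Drive Smart-style indicator data against
    # performance standards. All numbers and targets are invented for illustration.

    def percent_change(baseline, current):
        # Percent change from a baseline value (negative means a decrease).
        return (current - baseline) / baseline * 100.0

    # Evidence gathered from the (hypothetical) survey, ride logs, and police records.
    survey_awareness_rate = 0.54            # share of surveyed residents familiar with the program
    safe_rides_per_weekend = 35             # average riders per weekend from "Safe Rides" logs
    crashes_baseline, crashes_now = 40, 33  # single-car nighttime crashes, before vs. after

    # Hypothetical standards of performance.
    standards = {
        "residents familiar with the program": survey_awareness_rate >= 0.50,
        "Safe Rides used regularly": safe_rides_per_weekend >= 25,
        "crashes reduced by at least 10%": percent_change(crashes_baseline, crashes_now) <= -10,
    }

    for criterion, met in standards.items():
        print(f"{criterion}: {'standard met' if met else 'standard not met'}")
    print(f"Change in single-car nighttime crashes: "
          f"{percent_change(crashes_baseline, crashes_now):.1f}%")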

What conclusions about program performance are justified based on the available evidence? Are the changes we have seen in the level of drunk driving due to our efforts, or to something else? And if there has been no change, or insufficient change, in behavior or outcomes, should Drive Smart change what it is doing, or have we simply not waited long enough to see results?

The following framework provides an organized approach to answering these questions.

A framework for program evaluation

Program evaluation offers a way to understand and improve community health and development practice using methods that are useful, feasible, proper, and accurate.

The framework contains two related dimensions: steps in evaluation practice, and standards for "good" evaluation. The steps are: engage stakeholders; describe the program; focus the evaluation design; gather credible evidence; justify conclusions; and ensure use and share lessons learned. Understanding and adhering to these basic steps will improve most evaluation efforts.

There are 30 specific standards, organized into the following four groups: utility, feasibility, propriety, and accuracy. These standards help answer the question, "Will this evaluation be a 'good' evaluation?"

Engage Stakeholders

Stakeholders are people or organizations that have something to gain or lose from what will be learned from an evaluation, and from what will be done with that knowledge.

Three principal groups of stakeholders are important to involve: People or organizations involved in program operations may include community members, sponsors, collaborators, coalition partners, funding officials, administrators, managers, and staff. People or organizations served or affected by the program may include clients, family members, neighborhood organizations, academic institutions, elected and appointed officials, advocacy groups, and community residents. Individuals who are openly skeptical of or antagonistic toward the program may also be important to involve.

Opening an evaluation to opposing perspectives and enlisting the help of potential program opponents can strengthen the evaluation's credibility. Stakeholders shouldn't be confused with the evaluation's primary intended users, although some stakeholders should be part of that group. In fact, primary intended users should be a subset of all of the stakeholders who have been identified. A successful evaluation will designate primary intended users, such as program staff and funders, early in its development and maintain frequent interaction with them to be sure that the evaluation specifically addresses their values and needs.

Describe the Program

A program description is a summary of the intervention being evaluated. There are several specific aspects that should be included when describing a program.

Statement of need: A statement of need describes the problem, goal, or opportunity that the program addresses; it also begins to imply what the program will do in response.

Expectations: Expectations are the program's intended results.

Activities: Activities are everything the program does to bring about changes.

Resources: Resources include the time, talent, equipment, information, money, and other assets available to conduct program activities.

Stage of development: A program's stage of development reflects its maturity.

Context: A description of the program's context considers the important features of the environment in which the program operates.

Logic model: A logic model synthesizes the main program elements into a picture of how the program is supposed to work (a small sketch follows).
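A logic model is often easiest to grasp as a simple chain from resources through activities and outputs to intended outcomes. The Python sketch below uses a hypothetical tutoring program purely to illustrate that structure.

    # Minimal sketch of a logic model: a chain from resources (inputs) through
    # activities and outputs to intended outcomes. The tutoring-program content
    # is hypothetical and only illustrates the structure.
    logic_model = {
        "resources": ["volunteer tutors", "donated classroom space", "grant funding"],
        "activities": ["recruit and train tutors", "hold weekly tutoring sessions"],
        "outputs": ["number of sessions held", "number of students tutored"],
        "outcomes": {
            "short_term": ["improved homework completion"],
            "long_term": ["higher graduation rates"],
        },
    }

    def describe(model):
        # Print the model as the left-to-right chain it is usually drawn as.
        print(" -> ".join(model.keys()))
        for element, contents in model.items():
            print(f"{element}: {contents}")

    describe(logic_model)

Writing the model down in this skeletal form makes it easy to check, element by element, whether the evaluation is gathering evidence about each link in the chain.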

Focus the Evaluation Design

By focusing the evaluation design, we mean doing advance planning about where the evaluation is headed and what steps it will take to get there. Among the issues to consider when focusing an evaluation are:

Purpose: Purpose refers to the general intent of the evaluation. There are at least four general purposes for which a community group might conduct an evaluation:

To gain insight. This happens, for example, when deciding whether to use a new approach. Knowledge from such an evaluation will provide information about the approach's practicality. For a developing program, information from evaluations of similar programs can provide the insight needed to clarify how its activities should be designed.

To improve how things get done. This is appropriate in the implementation stage when an established program tries to describe what it has done. This information can be used to describe program processes, to improve how the program operates, and to fine-tune the overall strategy. Evaluations done for this purpose include efforts to improve the quality, effectiveness, or efficiency of program activities. To determine what the effects of the program are.

Evaluations done for this purpose examine the relationship between program activities and observed consequences. For example, are more students finishing high school as a result of the program?

Programs most appropriate for this type of evaluation are mature programs that are able to state clearly what happened and who it happened to. Such evaluations should provide evidence about what the program's contribution was to reaching longer-term goals such as a decrease in child abuse or crime in the area.

This type of evaluation helps establish the accountability, and thus, the credibility, of a program to funders and to the community.

To affect those who participate in it. The logic and reflection required of evaluation participants can itself be a catalyst for self-directed change. And so, one of the purposes of evaluating a program is for the process and results to have a positive influence.

Such influences may: empower program participants (for example, being part of an evaluation can increase community members' sense of control over the program); supplement the program (for example, using a follow-up questionnaire can reinforce the main messages of the program); promote staff development (for example, by teaching staff how to collect, analyze, and interpret evidence); or contribute to organizational growth (for example, the evaluation may clarify how the program relates to the organization's mission).

Users: Users are the specific individuals who will receive evaluation findings.

Uses: Uses describe what will be done with what is learned from the evaluation.

Methods: The methods available for an evaluation are drawn from behavioral science and social research and development.

Agreements: Agreements summarize the evaluation procedures and clarify everyone's roles and responsibilities.

Gather Credible Evidence

Credible evidence is the raw material of a good evaluation. The following features of evidence gathering typically affect how credible it is seen as being:

Indicators: Indicators translate general concepts about the program and its expected effects into specific, measurable parts.

Examples of indicators include: the program's capacity to deliver services; the participation rate; the level of client satisfaction; the amount of intervention exposure (how many people were exposed to the program, and for how long); changes in participant behavior; changes in community conditions or norms; and changes in the environment.

Sources: Sources of evidence in an evaluation may be people, documents, or observations.

Quality: Quality refers to the appropriateness and integrity of information gathered in an evaluation.

Quantity: Quantity refers to the amount of evidence gathered in an evaluation.

Logistics: By logistics, we mean the methods, timing, and physical infrastructure for gathering and handling evidence.

Justify Conclusions

The process of justifying conclusions recognizes that evidence in an evaluation does not necessarily speak for itself.

The principal elements involved in justifying conclusions based on evidence are:

Standards: Standards reflect the values held by stakeholders about the program.

Interpretation: Interpretation is the effort to figure out what the findings mean.

Judgments: Judgments are statements about the merit, worth, or significance of the program.

Recommendations: Recommendations are actions to consider as a result of the evaluation.

Three things might increase the chances that recommendations will be relevant and well-received: sharing draft recommendations; soliciting reactions from multiple stakeholders; and presenting options instead of directive advice. Justifying conclusions in an evaluation is a process that involves several possible steps.

Ensure Use and Share Lessons Learned

It is naive to assume that lessons learned in an evaluation will necessarily be used in decision making and subsequent action. The key elements for ensuring that the recommendations from an evaluation are used are:

Design: Design refers to how the evaluation's questions, methods, and overall processes are constructed.

Preparation: Preparation refers to the steps taken to get ready for the future uses of the evaluation findings.

Feedback: Feedback is the communication that occurs among everyone involved in the evaluation.

Follow-up: Follow-up refers to the support that many users need during the evaluation and after they receive evaluation findings.

Dissemination: Dissemination is the process of communicating the procedures or the lessons learned from an evaluation to relevant audiences in a timely, unbiased, and consistent fashion.

Additional process uses for evaluation include the following: by defining indicators, what really matters to stakeholders becomes clear; and evaluation helps make outcomes matter by changing the reinforcements connected with achieving positive results. For example, a funder might offer "bonus grants" or "outcome dividends" to a program that has shown a significant amount of community change and improvement.

Standards for "good" evaluation There are standards to assess whether all of the parts of an evaluation are well -designed and working to their greatest potential.

The 30 more specific standards are grouped into four categories: utility, feasibility, propriety, and accuracy.

The utility standards are: Stakeholder Identification: People who are involved in or will be affected by the evaluation should be identified, so that their needs can be addressed.

Evaluator Credibility: The people conducting the evaluation should be both trustworthy and competent, so that the evaluation will be generally accepted as credible or believable. Information Scope and Selection: Information collected should address pertinent questions about the program, and it should be responsive to the needs and interests of clients and other specified stakeholders.

Values Identification: The perspectives, procedures, and rationale used to interpret the findings should be carefully described, so that the bases for judgments about merit and value are clear. Report Clarity: Evaluation reports should clearly describe the program being evaluated, including its context, and the purposes, procedures, and findings of the evaluation.

This will help ensure that essential information is provided and easily understood. Report Timeliness and Dissemination: Significant midcourse findings and evaluation reports should be shared with intended users so that they can be used in a timely fashion. Evaluation Impact: Evaluations should be planned, conducted, and reported in ways that encourage follow-through by stakeholders, so that the evaluation will be used.

Feasibility Standards

The feasibility standards ensure that the evaluation makes sense - that the steps that are planned are both viable and pragmatic.

The feasibility standards are: Practical Procedures: The evaluation procedures should be practical, to keep disruption of everyday activities to a minimum while needed information is obtained. Political Viability: The evaluation should be planned and conducted with anticipation of the different positions or interests of various groups. This should help in obtaining their cooperation so that possible attempts by these groups to curtail evaluation operations or to misuse the results can be avoided or counteracted.

Cost Effectiveness: The evaluation should be efficient and produce enough valuable information that the resources used can be justified.

Propriety Standards

The propriety standards ensure that the evaluation is an ethical one, conducted with regard for the rights and interests of those involved.

Service Orientation: Evaluations should be designed to help organizations effectively serve the needs of all of the targeted participants.

Formal Agreements: The responsibilities in an evaluation (what is to be done, how, by whom, and when) should be agreed to in writing, so that those involved are obligated to follow all conditions of the agreement, or to formally renegotiate it.

Rights of Human Subjects: Evaluation should be designed and conducted to respect and protect the rights and welfare of human subjects, that is, all participants in the study. Human Interactions: Evaluators should respect basic human dignity and worth when working with other people in an evaluation, so that participants don't feel threatened or harmed.

Complete and Fair Assessment: The evaluation should be complete and fair in its examination, recording both strengths and weaknesses of the program being evaluated. This allows strengths to be built upon and problem areas addressed. Disclosure of Findings: The people working on the evaluation should ensure that all of the evaluation findings, along with the limitations of the evaluation, are accessible to everyone affected by the evaluation, and any others with expressed legal rights to receive the results.

Conflict of Interest: Conflict of interest should be dealt with openly and honestly, so that it does not compromise the evaluation processes and results. Fiscal Responsibility: The evaluator's use of resources should reflect sound accountability procedures and otherwise be prudent and ethically responsible, so that expenditures are accounted for and appropriate.

Accuracy Standards

The accuracy standards ensure that the evaluation findings are considered correct. There are 12 accuracy standards: Program Documentation: The program should be described and documented clearly and accurately, so that what is being evaluated is clearly identified. Context Analysis: The context in which the program exists should be thoroughly examined so that likely influences on the program can be identified.

Described Purposes and Procedures: The purposes and procedures of the evaluation should be monitored and described in enough detail that they can be identified and assessed. Defensible Information Sources: The sources of information used in a program evaluation should be described in enough detail that the adequacy of the information can be assessed. Valid Information: The information gathering procedures should be chosen or developed and then implemented in such a way that they will assure that the interpretation arrived at is valid.

Reliable Information: The information gathering procedures should be chosen or developed and then implemented so that they will assure that the information obtained is sufficiently reliable. Systematic Information: The information from an evaluation should be systematically reviewed and any errors found should be corrected.

Analysis of Quantitative Information: Quantitative information - data from observations or surveys - in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.

Analysis of Qualitative Information: Qualitative information - descriptive information from interviews and other sources - in an evaluation should be appropriately and systematically analyzed so that evaluation questions are effectively answered.

Justified Conclusions: The conclusions reached in an evaluation should be explicitly justified, so that stakeholders can understand their worth. Impartial Reporting: Reporting procedures should guard against the distortion caused by personal feelings and biases of people involved in the evaluation, so that evaluation reports fairly reflect the evaluation findings. Metaevaluation: The evaluation itself should be evaluated against these and other pertinent standards, so that it is appropriately guided and, on completion, stakeholders can closely examine its strengths and weaknesses.

Applying the framework: Conducting optimal evaluations

There is ever-increasing agreement on the worth of evaluation; in fact, evaluation is often required by funders and other constituents. The appropriate questions, then, are not whether to evaluate, but rather: What is the best way to evaluate? What are we learning from the evaluation? How will we use what we learn to become more effective?

In Summary

Evaluation is a powerful strategy for distinguishing programs and interventions that make a difference from those that don't.

The article also cites the following references: Adler, M. Chen, H. Theory-Driven Evaluations.


