
Summative Evaluation

Author: Sophia

what's covered
In this lesson, you will learn about summative evaluation and how it differs from impact evaluation. You will also learn about the relationship of evaluation with adaptations of community programming. Specifically, this lesson will cover the following:

Table of Contents
  1. Summative and Impact Evaluation Defined
  2. Evaluation With Adaptation

1. Summative and Impact Evaluation Defined

Summative evaluation is a type of evaluation conducted at the end of a program or intervention to assess its effectiveness and impact. It helps determine whether the program achieved its goals and objectives and provides valuable insights for future planning and decision-making. The main elements of summative evaluation include assessing the extent to which the program’s intended outcomes and objectives were achieved. This is called outcome measurement and is described in the example below. Other elements are addressed later in this lesson.

key concept
Outcome measurement in summative evaluation involves various mechanisms to assess the effectiveness of a community program. It measures both intended and unintended outcomes, providing a comprehensive understanding of the intervention’s impact.
  • Surveys and questionnaires are the most common mechanisms for collecting quantitative and qualitative data from participants and partners about their experiences, knowledge, attitudes, and behaviors.
  • In-depth interviews or group discussions are used to gather detailed feedback and insights from participants, staff, and partners.
  • Systematically observing program activities and participant interactions is another mechanism for outcome measurement that assesses implementation and outcomes.

Outcome measurement focuses on quantifying program outcomes to see if the goals were met without judging their value. Outcome evaluation assesses the effectiveness of a program by judging the quality and importance of the outcomes. The following table outlines their key differences.

Outcome Measurement vs. Outcome Evaluation

Focus
  • Outcome measurement: Part of the broader summative evaluation process, which assesses the overall effectiveness and impact of a program AFTER it has been completed; it quantifies an outcome without judging its value or importance.
  • Outcome evaluation: A standalone evaluation that specifically examines the outcomes of a program, policy, or intervention, typically making a judgment about the value, importance, or quality of those outcomes.

Purpose
  • Outcome measurement: Measures specific outcomes to determine whether the program achieved its goals and objectives.
  • Outcome evaluation: Determines the extent to which the intended outcomes have been achieved, without necessarily attributing what caused them.

Scope
  • Outcome measurement: Includes both short-term and long-term outcomes, providing a comprehensive understanding of the program’s impact.
  • Outcome evaluation: Includes immediate, intermediate, and long-term outcomes but does not typically include the broader context of the program’s implementation.

Timing
  • Outcome measurement: Conducted at the end of the program to summarize its overall success and inform future decisions.
  • Outcome evaluation: Conducted at various points during or after the program to assess progress and effectiveness.

Adapted from Approach to Program Evaluation, by Centers for Disease Control and Prevention, 2024. In the public domain.

In summary, outcome measurement is a component of summative evaluation, while outcome evaluation is a separate, standalone evaluation. Summative evaluation includes outcome measurement as well as other aspects like process and impact, whereas outcome evaluation focuses solely on outcomes. Summative evaluation aims to provide a comprehensive assessment of a program’s overall success, while outcome evaluation specifically measures the achievement of the intended outcomes.

Another element of summative evaluation is impact assessment (which is different from impact evaluation). The table below highlights the differences between the two.

Impact Assessment vs. Impact Evaluation

Focus
  • Impact assessment: Identifies and predicts the potential effects of an intervention, policy, or project BEFORE it is implemented.
  • Impact evaluation: Measures the actual effects and outcomes of an intervention AFTER it has been implemented.

Purpose
  • Impact assessment: Informs decision-makers and partners about the potential consequences of an intervention, helping them make informed choices and mitigate negative impacts.
  • Impact evaluation: Determines the effectiveness of the intervention, establishes cause-and-effect relationships, and provides evidence for policy decisions and future program improvements.

Scope
  • Impact assessment: Assesses both positive and negative effects of the program on aspects such as health, environment, and socioeconomic conditions.
  • Impact evaluation: Assesses the long-term effects of the program, including intended and unintended outcomes, and examines the intervention’s design, process, cost, efficiency, and lessons for future interventions.

Timing
  • Impact assessment: Conducted before and during the early stages of the intervention.
  • Impact evaluation: Conducted after the intervention has been implemented, often at the end of a program or project.


In summary, impact assessment is about predicting and planning for potential impacts (and is reported on during a summative evaluation), while impact evaluation is about measuring and understanding the actual impacts after implementation.

An additional element of a summative evaluation is cost-effectiveness analysis (CEA): analyzing the program’s costs relative to its outcomes and benefits to determine its economic efficiency.

The following table highlights the key steps in conducting a CEA, with examples of each step:

CEA Steps, Descriptions, and Examples

1. Define: Clearly define the program’s goals and the specific outcomes that are to be measured.
  • Example: A health program aims to reduce the incidence of diabetes in the community by 5% over 1 year.
2. List costs: List all the costs associated with the program, including direct and indirect costs.
  • Direct costs include staff salaries, materials, and equipment.
  • Indirect costs include administrative expenses and overhead, like rent and electricity.
3. Measure: Measure outcomes using appropriate indicators.
  • Efficiency indicators measure whether resources invested in a program are being used efficiently, and performance indicators measure a program’s performance against its goals and objectives (EvalCommunity, n.d.).
4. Calculate cost per outcome: Divide the total program costs by the number of successful outcomes to determine the cost per outcome.
  • Cost per outcome = (Total Program Costs) ÷ (Number of Successful Outcomes)
  • If the program costs $100,000 and results in 50 fewer diabetes cases, the cost per prevented case is $2,000.
5. Compare alternatives: Compare the program’s cost-effectiveness to other similar programs or interventions.
  • Compare the cost per prevented diabetes case to the cost of other diabetes prevention programs.
6. Interpret and report findings: Interpret the results in the context of program goals and the broader community impact.
  • Highlight the program’s cost-effectiveness in preventing diabetes compared to other interventions and its overall benefits to the community.

By following these steps, a comprehensive CEA can evaluate the economic efficiency of a community program and inform future decisions.
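The arithmetic behind these steps can be sketched in a few lines of code. The sketch below walks through the diabetes example from the table; the cost breakdown and the comparison programs are hypothetical figures invented for illustration.

```python
# Hypothetical cost-effectiveness analysis (CEA) following the steps above.
# The cost breakdown and comparison-program figures are illustrative, not real data.

def cost_per_outcome(total_costs: float, successful_outcomes: int) -> float:
    """Step 4: total program costs divided by the number of successful outcomes."""
    return total_costs / successful_outcomes

# Step 2: list direct and indirect costs (assumed split of the $100,000 total).
direct_costs = 70_000    # staff salaries, materials, equipment
indirect_costs = 30_000  # rent, electricity, administration
total_costs = direct_costs + indirect_costs

# Step 3: the measured outcome -- 50 fewer diabetes cases over one year.
prevented_cases = 50

# Step 4: cost per prevented case.
cost = cost_per_outcome(total_costs, prevented_cases)
print(f"Cost per prevented case: ${cost:,.0f}")  # $2,000

# Step 5: compare against alternative programs (hypothetical figures).
alternatives = {"Program B": 2_500.0, "Program C": 1_800.0}
for name, alt_cost in sorted(alternatives.items(), key=lambda kv: kv[1]):
    relation = "more" if alt_cost > cost else "less"
    print(f"{name}: ${alt_cost:,.0f} per case ({relation} than our program)")
```

Here Program C would be judged more cost-effective than our program ($1,800 versus $2,000 per prevented case), which is exactly the kind of comparison step 5 calls for.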

Summative evaluation, like process evaluation, also evaluates program fidelity, which you may recall is assessing whether the program was implemented as planned and adhered to its original design and protocols. However, in process evaluation, the purpose of looking at fidelity is to see if any adaptations or adjustments may be needed before implementation. In summative evaluation, fidelity measurement examines whether the program was implemented with fidelity throughout the duration of the program. This helps in attributing observed outcomes to the program itself rather than to the adaptations or variations in implementation.

Summative evaluation also includes collecting input from stakeholders and partners, including participants, staff, and community members, to gauge their satisfaction and perceptions of the program. Summative evaluation also involves comparing postprogram data to baseline data collected before the program’s implementation, to assess changes and improvements.
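The baseline comparison described above is simple in principle: for each indicator, contrast the postprogram value with the baseline value. The minimal sketch below uses a hypothetical indicator name and values echoing the diabetes example.

```python
# Minimal sketch of a baseline-to-postprogram comparison.
# The indicator name and values are hypothetical.

baseline = {"diabetes_incidence_per_1000": 40.0}     # before the program
postprogram = {"diabetes_incidence_per_1000": 38.0}  # after the program

for indicator, before in baseline.items():
    after = postprogram[indicator]
    pct_change = 100 * (after - before) / before
    print(f"{indicator}: {before} -> {after} ({pct_change:+.1f}%)")
```

A drop from 40 to 38 cases per 1,000 is a 5% reduction, which would meet the example program’s stated goal.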

The final element of summative evaluation is reporting and dissemination. This means compiling and sharing the evaluation findings with partners, funders, and the broader community to inform programming and decision-making. This step could include creating evaluation reports, presentations, or publications to communicate the program’s outcomes and impact. Overall, summative evaluation provides a comprehensive assessment of a program’s effectiveness and helps identify lessons learned and best practices for future initiatives.

You may have noticed that there is some crossover between activities in different types of evaluation in community programming. Some steps seem the same, but the reasoning behind them may differ. As indicated earlier, for example, the reason for conducting an impact assessment during a summative evaluation is different from the reason for conducting an impact evaluation, even though some of the methods may be similar. An evaluator will use the findings for different purposes.

The same concept applies to summative evaluation and impact evaluation. The table below compares the two types of evaluation and highlights their differences. Both evaluations are crucial to assessing programs but have distinct focuses and purposes.

Summative Evaluation vs. Impact Evaluation

Focus
  • Summative evaluation: Assesses the overall effectiveness of a program AFTER it has been completed.
  • Impact evaluation: Specifically examines the long-term effects and outcomes of a program.

Purpose
  • Summative evaluation: Determines whether the program achieved its goals and objectives, providing evidence for decision-making and future improvements.
  • Impact evaluation: Establishes cause-and-effect relationships to determine whether the observed changes can be attributed to the program.

Scope
  • Summative evaluation: Includes outcome measurement, process assessment, and sometimes impact assessment to provide a comprehensive overall assessment.
  • Impact evaluation: Focuses on both intended and unintended outcomes, often using counterfactual analysis (asking what would have happened without the program) to control for other influencing factors.

Timing
  • Summative evaluation: Conducted at the end of the program to summarize its overall success.
  • Impact evaluation: Conducted during or after the program to assess its impact on participants and the broader community.
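Counterfactual analysis can feel abstract, so here is a toy numerical illustration using a simple difference-in-differences calculation. All incidence figures are hypothetical, and a real impact evaluation would require careful study design, not just this arithmetic.

```python
# Toy illustration of counterfactual analysis via difference-in-differences.
# The incidence figures are hypothetical; a real impact evaluation requires
# a carefully designed comparison, not just this arithmetic.

def diff_in_diff(treated_before: float, treated_after: float,
                 comparison_before: float, comparison_after: float) -> float:
    """Estimated impact = change in the treated community minus the change in
    a comparison community that did not receive the program (an estimate of
    what would have happened without the program)."""
    return (treated_after - treated_before) - (comparison_after - comparison_before)

# Treated community: incidence fell from 40 to 36 cases per 1,000.
# Comparison community: incidence fell from 41 to 40 over the same period.
impact = diff_in_diff(40, 36, 41, 40)
print(impact)  # -3: three fewer cases per 1,000 attributable to the program
```

Subtracting the comparison community’s change (-1) removes the background trend that would have occurred anyway, leaving an estimate of the program’s own effect (-3).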


big idea
In summary, the key differences between summative and impact evaluations are that summative evaluation provides a comprehensive assessment of a program’s overall success, while impact evaluation focuses on specific long-term effects and outcomes. Summative evaluation includes various components like outcome and process assessment, whereas impact evaluation is more narrowly focused on establishing causality. Finally, summative evaluation aims to inform future program improvements and decision-making, while impact evaluation seeks to understand the program’s specific impacts and their causes (Global Evaluation Initiative, 2022).

term to know
Summative Evaluation
A type of evaluation conducted at the end of a program or intervention to assess its effectiveness and impact. It helps determine whether the program achieved its goals and objectives and provides valuable insights for future planning and decision-making.


2. Evaluation With Adaptation

Recall that fidelity to the core components of a community program is measured and preferred while implementing the program. However, sometimes adaptations are required to ensure that the program is effective. When it comes to reporting those adaptations, a summative evaluation will typically measure the effectiveness of the adaptations as much as that of the program itself.

Evaluators assess how the adaptations contributed to the program’s success. This involves comparing the outcomes of the adapted program to the original goals and objectives. Gathering feedback from partners, including community members, program staff, and participants, helps in understanding the impact of the adaptations. This feedback is crucial for determining the relevance and acceptability of the changes.

IN CONTEXT
Community Health Needs Assessment (CHNA) With Technology: Adaptation

A CHNA was conducted in two rural communities in the Dominican Republic. The initial phase was conducted by teams consisting of one Dominican medical student and one American medical or nursing student. Each community had a promotor de salud, or health promoter, who assisted with communicating with community members to get permission to conduct the CHNA. The first round used paper-and-pencil administration.

Two years after the first round, the CHNA was conducted again in two new communities. This time, tablets were to be used for data collection instead of paper and pencil. This required training the promotores de salud and local students on using the tablet technology to complete the data collection. The project planners created and conducted training following evidence-based practices for training community health workers. What happened during the training and the implementation of the CHNA created a tsunami of adaptations!

This image depicts the training on the use of the tablets that took place with the promotores and the students assisting with the CHNA in the Dominican Republic.


The promotores were not familiar with technology beyond smartphones, while the students were of a generation that grew up with technology and were comfortable with tablets. The trainers struggled to help the promotores learn the tablet technology but did make progress during training. Upon arrival in the communities, however, the data collection failed miserably. The promotores could not apply their training on-site, and the connection to nearby cell towers was not as robust as believed.

Data collection in the communities quickly reverted to the paper-and-pencil method; the team had to adapt quickly or go home without data. The clinics needed the data to understand how best to serve the communities.

Had a summative evaluation been fully conducted after the first round, with outcome measurements and an impact assessment completed, the planners likely would have predicted these problems and adapted before going to the communities. Instead, very little data were collected, what was collected on the tablets was sparse and inconsistent across collection teams, and the generational gap between the promotores and the students assigned to their teams led to a frustrating project!

Summative evaluation reports often include an analysis of the contextual factors that influenced the need for adaptations. This can involve examining local conditions, cultural considerations, and other environmental factors. Evaluators also consider whether the adaptations are sustainable in the long term. This includes assessing the resources required to maintain the changes and the potential for scaling up the adapted program. Summative evaluation reports highlight the lessons learned from the adaptations. This information can be used to inform future programming and improve the effectiveness of similar initiatives. By incorporating these elements, summative reports provide a comprehensive understanding of how adaptations impact community programming and offer valuable insights for continuous improvements.

Not reporting adaptations during the evaluation of a community program can have several disadvantages. If adaptations are not reported, it becomes difficult to understand what changes were made and why. This lack of transparency can hinder the ability to replicate or scale the program in other settings (Escoffery et al., 2018). Adaptations can significantly impact the outcomes of a program. If these changes are not documented, it can lead to misinterpretation of the program’s effectiveness and the factors contributing to its success or failure. The credibility of the evaluation can be compromised if adaptations are not reported. Partners may question the validity of the findings and the rigor of the evaluation results.

summary
In this lesson, you learned about summative evaluation, a type of evaluation conducted at the end of a program or intervention to assess its overall effectiveness and impact, and how it differs from impact evaluation. Summative evaluation helps determine whether the program achieved its goals and objectives and provides valuable insights for future planning and decision-making. You also learned about evaluation with adaptation of a community program. When it comes to reporting adaptations, summative evaluation typically includes measuring the effectiveness of the adaptations as much as that of the program itself, and evaluators assess how the adaptations contributed to the program’s success.

Source: THIS TUTORIAL WAS AUTHORED BY SOPHIA LEARNING. PLEASE SEE OUR TERMS OF USE

Disclaimer: The use of any CDC and United States government materials, including any links to the materials on the CDC or government websites, does not imply endorsement by the CDC or the United States government of us, our company, product, facility, service, or enterprise.


REFERENCES

Centers for Disease Control and Prevention. (2024, August 18). CDC approach to program evaluation. www.cdc.gov/evaluation/php/about/index.html

Escoffery, C., Lebow-Skelley, E., Haardoerfer, R., Boing, E., Udelson, H., Wood, R., Hartman, M., Fernandez, M. E., & Mullen, P. D. (2018). A systematic review of adaptations of evidence-based public health interventions globally. Implementation Science, 13, Article 125. doi.org/10.1186/s13012-018-0815-9

EvalCommunity. (n.d.). Types of indicators: Theory, practice and job interview preparation. www.evalcommunity.com/career-center/type-of-indicators/
