What is program evaluation?

Program evaluation has been defined as “the systematic collection and analysis of information about program activities, characteristics, and outcomes to make judgements about the program, improve program effectiveness, and/or inform decisions about future programming” (Patten, 1997, as cited in Poth et al., 2023). 

“Program evaluation

  • Involves the systematic collection and analysis of information, 

  • Focuses on a broad range of topics (accessibility, comprehensiveness, integration, cost, efficiency, effectiveness), 

  • Is designed for a variety of uses (decision making, management, accountability, planning)” (Patten, 1997, as cited in Poth et al., 2023). 

Program evaluation process

Various models of program evaluation exist, including the Kirkpatrick Model; the Context, Input, Process, Product (CIPP) model; and the Systems Evaluation Protocol (SEP). Each differs slightly in purpose and process. 

Each model has its own terminology and may be more applicable in some contexts than others. Despite these differences, each model follows a broadly similar evaluation process (Poorvu Center for Teaching and Learning, Yale University; HEQCO). 

1.     Planning

  • Identify purpose: Determine why an evaluation is needed and the purpose of the information gathered during the evaluation process.  

  • Identify stakeholders: Identify key personnel, participants, and audiences of the program. During this process, determine who should be involved in the evaluation process. Think about whose perspective would improve the evaluation and whose voices would be absent if not included in the process.  

  • Identify program resources: Determine the components that contribute to the functioning of the program (e.g., time, money, human resources, facilities). 

  • Define an evaluation team: The evaluation team can consist of a variety of individuals, internal or external to the university, who can help move the process forward more quickly. 

2.     Understanding program design

  • Describe program goals and outcomes: Write objectives and classify goals as short-, medium-, or long-term.  

  • Identify programmatic activities: List all the tasks that staff need to complete, as well as the courses, assignments, co-curricular activities, and mandatory meetings that program participants will be asked to complete. 

  • Connect the goals of the program with the activities and then the outcomes (alignment): This step will help identify gaps (i.e., goals and/or outcomes that are not associated with an activity) and ensure that all activities align with their intended purpose; a minimal sketch of such an alignment check appears below.
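
As a rough illustration of this alignment check, here is a minimal Python sketch that flags goals with no supporting activity or outcome. The goals, activities, and outcomes shown are invented placeholders rather than part of any specific program; in practice the same check is often done in a spreadsheet or logic model template.

    # Minimal alignment check: flag goals with no mapped activity or outcome.
    # All names below are hypothetical placeholders for illustration only.
    program_map = {
        "Improve scientific writing": {
            "activities": ["Lab report assignment", "Peer review workshop"],
            "outcomes": ["Students produce a publication-style report"],
        },
        "Develop teamwork skills": {
            "activities": [],  # gap: no activity currently supports this goal
            "outcomes": ["Students rate team contributions"],
        },
    }

    for goal, links in program_map.items():
        if not links["activities"]:
            print(f"Gap: no activity is mapped to the goal '{goal}'")
        if not links["outcomes"]:
            print(f"Gap: no outcome is mapped to the goal '{goal}'")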

3.     Design the evaluation plan

  • Determine the scope of the evaluation: Identify what goals, outcomes, and activities will be included in the evaluation. Some outcomes may not be realized until long after students complete the program and may require an ongoing evaluation. 

  • Find or develop measures to collect data: Conduct a literature search to identify relevant measures. If measures do not exist, they can be systematically developed to address your initial questions about your program.  

  • Write an evaluation plan: The plan describes the data collection, data analyses, and reporting processes and outlines the responsibilities of each member of the evaluation team; one way to structure such a plan is sketched below. Often an evaluation plan also includes an examination of fidelity of implementation, which evaluates whether the program was implemented exactly as written so that results can be based on what actually occurred in the program rather than on assumptions about how program components took place (Carroll et al., 2007). 
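
As a rough illustration only, the Python sketch below shows one way a single entry of an evaluation plan might be kept as a structured record, so that each evaluation question is tied to a measure, a collection schedule, an analysis, a responsible person, and a reporting plan. Every field name and value here is a hypothetical placeholder, not a prescribed template.

    # One hypothetical evaluation plan entry kept as a simple structured record.
    # Field names and values are illustrative placeholders only.
    evaluation_plan = [
        {
            "evaluation_question": "Did students' statistical reasoning improve?",
            "measure": "Pre/post concept inventory (direct measure)",
            "data_collection": "Administer in week 1 and week 12",
            "analysis": "Paired comparison of pre and post scores",
            "responsible": "Course instructor",
            "reporting": "Summary section in the year-end evaluation report",
        },
    ]

    # Print the plan so the team can review responsibilities at a glance.
    for entry in evaluation_plan:
        for field, value in entry.items():
            print(f"{field}: {value}")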

4.     Conduct the evaluation

  • Collect data: Gather the data according to the evaluation plan.  

  • Analyze data: Depending on the type of data you collected, select appropriate statistical analyses that help you understand what the data are showing (a minimal example appears after this list). 

  • Report the results: Identify the audience and write an evaluation report. Include a description of the program as well as the results. 
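
As one concrete example of the analysis step, the Python sketch below compares invented pre- and post-program survey scores for the same participants using a paired t-test from scipy. This is only one of many possible analyses, and the appropriate test depends on the type of data actually collected.

    # Paired pre/post comparison on invented scores (illustrative data only).
    from scipy import stats

    pre_scores = [3.1, 2.8, 3.5, 3.0, 2.6, 3.3, 2.9, 3.4]   # before the program
    post_scores = [3.6, 3.0, 3.9, 3.4, 3.1, 3.5, 3.2, 3.8]  # after the program

    # A paired t-test is one option when the same participants are measured twice.
    t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")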

5.     Revise the program and/or evaluation plan for continuous improvement

  • When done well, a one-time program evaluation can lead to repeated and continuous measurement within the program. 

Internal vs. external evaluations

Depending on the reasons an evaluation is being considered, a program may pursue an internal or an external evaluation (Goldstein & Ford, 2022).  

Internal evaluations are conducted by individuals within the program or institution, often as part of a formative effort of quality monitoring or continuous improvement. Internal self-evaluation can be directed to specific goals or intentionally aligned with program outcomes. Internal colleagues may also be familiar with the culture and context of the program being evaluated and can promote greater collegiality and collaboration among units. However, internal evaluation can have drawbacks related to subjectivity or bias. 

External evaluations are conducted by individuals outside the institution and are often focused on summative assessment. A benefit of using external evaluators is that the evaluation can often be broader in scope and design. Additionally, external review can provide greater credibility for the results, with increased levels of accountability for those being evaluated. Grant funding agencies may require an external evaluator for these reasons. 

Internal or external reviews provide a comprehensive picture of a program and can help to determine where resources should be allocated for program improvement. 

Data collection

In program evaluation, measurement methods are best categorized into two broad groups: direct measures and indirect measures. Using both direct and indirect measures can provide a more holistic view of the impact of a program. 

  • Direct measures are actual products, such as papers, projects, and exams. Direct measures are often used to determine the degree to which students learned the content. 

  • Indirect measures are not actual products but can be used to examine perceptions, attitudes, and opinions about an educational program. 

There are four common types of data that are analyzed in program evaluation. How the data are collected determines whether each type serves as a direct or an indirect measure. 

  • Observation data come from a qualitative data collection method in which the evaluator systematically records what happens within a specific event or time frame. Observers need to be well trained to ensure that the results are reliable. 

  • Artifact data is a product or document that is generated as a part of a course or program. For example, an artifact of a course could be the syllabus that is used, or samples of student work completed inside or outside of the classroom. 

  • Historical or institutional records include demographic and enrollment data that are often recorded while a course or program is being conducted. This information is not necessarily generated by anyone associated with the course, but it offers opportunities to examine trends over time and can serve as a foundation for other research questions. 

  • Self-report data requires participants to provide their thoughts or opinions in response to a specific prompt. Self-report data can be collected via interviews, surveys, and focus groups. Though self-report is commonly used, scholarly work on its reliability has produced mixed results (Ross, 2006). Individuals do not always assess their own abilities accurately, and responses can be influenced by how the data collection instrument has been constructed. 

Relationship to teaching and learning research

Evaluators in educational settings use similar data collection methods and encounter many of the same methodological challenges as education researchers; however, the purpose underlying the work differs.  

Teaching and learning researchers are attempting to understand and explain how learning takes place for the purpose of increasing the wider knowledge base. Researchers usually aim to identify phenomena that may occur or be applicable across different contexts, with a greater emphasis on sampling methods to ensure that results are not limited to a specific teaching and learning environment.  

Program evaluations may be context specific, with less importance placed on generalization. The audience for an evaluation is usually one or more stakeholders in a program, with the aim of identifying ways to improve that program, although the results may be informative to other, similar programs. 

The text above is used under a Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic License: Resource materials on teaching strategies, Poorvu Center for Teaching and Learning, Yale University. 

What we offer

Consultations and support

Contact us to discuss your evaluation questions and needs. We can meet with you and your team virtually or in person. We can provide foundational information and send you on your way to complete your own program evaluation, or we can collaborate with you on any aspect and stage of evaluating an educational program, pending capacity.  

Program evaluation projects do not need to be funded to access our assistance. We also appreciate receiving your request well in advance of the desired consultation date (one to two weeks). 

During our first consultation meeting, we will ask you questions about: 

  • Overall program evaluation goals  

  • Timelines  

  • Identifying the evaluation project team 

  • Collaborative evaluation (involving stakeholders in the evaluation process, which creates opportunities for participatory, evidence-based decision-making) 

  • Data collection methods 

  • Program evaluation support needs (consultation → guidance at each stage → full support) 

  • Professional development opportunities 

This initial consultation meeting will allow us to understand the general scope of the project and identify next steps, pending capacity.  

Resources

We have curated a catalogue of resources to support teaching and learning program evaluation needs at the University of Manitoba. These resources include those related to approaches to program evaluation, planning an evaluation, common methods of evaluation, best practices in data collection, and analysis and interpretation. 

STL Team-created resources:

Program Evaluation Sample Question Bank – sample questions for program, course, or lesson evaluation (viewable)

Sample Logic Model Template – this Excel workbook contains a version of the Kirkpatrick Logic Model (viewable)

Sample Report Template – this Word document guides those involved in program evaluations to create a plan (viewable)

Workload Estimator – using your course information, this questionnaire will provide an estimate of the academic workload that you are asking of your students, which can be helpful for deciding on a direction for program evaluation

Other resources

Professional development opportunities

We offer workshops about program evaluation of teaching and learning programs, survey design, focus groups/interviews, analyses, and more. Please contact us for more information. 

Contact us

The Centre for the Advancement of Teaching and Learning
University of Manitoba
65 Dafoe Road
Winnipeg, MB R3T 2N2 Canada

204-474-8708
204-474-7514