Deck 2: Program Evaluation: Promises and Pitfalls
Explain the term "implementation fidelity."
Implementation fidelity is the degree to which a program faithfully replicates, or aligns with, its intended model. We learn about it through a process evaluation, which focuses on monitoring the program and its implementation and assessing whether it is being conducted as planned or in a manner consistent with a particular model. Quite simply, we need to know what the program or initiative is doing, and how those actions occur, in order to gauge its outcomes; without a process assessment, we may falsely conclude that the program as planned had particular effects, when we are really evaluating what may be a very different program (the version that was actually implemented). When the practices and processes align strongly with the planned model, the effort is said to have high implementation fidelity.
After reading the chapter, analyze the role of an evaluation in providing support for a grant application.
Evaluation data can help an organization, program, or initiative be competitive for grant funding. Rigorous evaluation data are viewed favorably by funding entities, preliminary data can help demonstrate promise, and a well-designed evaluation plan conveys the value placed on data-guided decision-making, accountability, and a culture of learning and improvement. Unfortunately, many applications for program or initiative funding treat evaluation as an afterthought, with underdeveloped plans or underfunded evaluations. Optimally, the evaluation plan would be developed hand-in-hand with the program plan. Doing so, with an understanding of program objectives and of how specific components are designed or intended to support program goals, would ensure targeted collection of process and outcome data and, more generally, access to the information needed to answer the questions of interest to program or initiative leadership as well as key stakeholders and partners. A rigorous evaluation plan can also help establish trust with stakeholders by demonstrating an investment in quality programming, thorough understanding, and fiscal stewardship. Finally, it can improve program planning from the outset by encouraging program planners to think in some detail about how they would detect changes, and about the theory of change expected to lead to those changes.
Evaluate and explain external factors that can contribute to evaluation data not being used.
Sometimes an evaluation has been conducted, but the data or findings never get used. A diverse range of factors can contribute to that outcome: some attributable to evaluators, some to organizations, and others to a variety of external influences. Among the external factors, considerations unrelated to the data can keep a program alive or undermine it, and these carry substantive implications for evaluations. The varied parties involved in evaluating a program or initiative bring different perspectives, agendas, and "stakes," and it can at times be challenging to navigate waters that may feel politically treacherous. These pressures can reflect resource strains and needed budget cuts as well as shifting policy interests and needs, especially during difficult budget years.
Shifting policy interests can sometimes reflect what some of our colleagues have deemed the "shiny penny" phenomenon, in which a new (or sometimes revisited) topic or initiative captures the attention (and dollars) of local leaders or funders. In the short term, this can generate interest in and support for evaluating efforts or programs that operate in the new area. The other side of that "coin," however, is that such shifting priorities can also divert funds from other evaluation or program-enhancement efforts. Programs working in areas that are no longer a high priority can see their evaluation findings or recommendations fail to garner interest or, in some scenarios, can even lose expected funding. In one such instance, even though the work had clearly been conceptualized as multiyear, system representatives conveyed in the latter part of year one that there would not be system resources to support the ongoing evaluation.