Exam 16: Using Evaluation to Promote Improvements in Health Service Settings
After reading the chapter, please provide a brief review of health service delivery systems and health service support systems.
Individuals and organizations in the health service delivery system (HSDS) provide services/interventions directly to patients/consumers. Different types of services/interventions include health promotion, prevention, and treatment (e.g., implementing nutrition-education programs, administering vaccines, and setting broken bones). Types of organizations within the health service delivery system are wide ranging. They include organizations in the for-profit (e.g., private physicians' offices, natural birthing centers, private substance use rehabilitation centers), nonprofit (e.g., free clinics, hospital systems), and government sectors (e.g., state health departments, Women, Infants, and Children (WIC) clinics, Veterans Affairs hospitals). The HSDS also includes settings that deliver patient care but are not solely health care facilities (e.g., schools, retail health clinics). The breadth of HSDS facilities is considerable.
The health service setting also comprises individuals and organizations that influence population health outcomes by supporting health care delivery organizations. These individuals and organizations fall under the health service support system (HSSS) category. They provide services to health care organizations and health care providers, rather than directly to patients/consumers. Individuals in the HSSS aim to build the capacity of health care delivery systems and to improve the quality of care provided by health care organizations. HSSS organizations provide capacity-building resources (e.g., funding, educational materials, trainings, and technical assistance) to organizations in the health care delivery system. The Institute for Healthcare Improvement (IHI) is an example of an HSSS organization. IHI indirectly impacts health outcomes by promoting the use of evidence-based improvement methods like the Model for Improvement in hospital and community health settings. Philanthropic organizations such as the Robert Wood Johnson Foundation, March of Dimes, and Partners for Health are other examples of health service organizations within the support system. These HSSS organizations fund health promotion initiatives, facilitate partnerships among like-minded agencies and individuals, and actively disseminate practice-based evidence as a means of promoting health equity. Health-oriented professional societies such as the American Medical Association and the Alliance for Continuing Education in the Health Professions are also examples of HSSS organizations.
While organizations in the health service setting generally fall into either the delivery system or the support system category, some organizations provide both direct patient/consumer care services and capacity-building services to other health care organizations. For example, the Mayo Clinic provides clinical care to patients and is also committed to health care education and research. University-based medical departments likewise provide services characteristic of both the delivery system and the support system. Despite the wide range of methods and interventions, organizations in the health service setting share the common interest of promoting individual and collective health and well-being.
Compare and contrast the three main types of evaluation.
Process evaluation documents the implementation of an innovation (intervention program, policy, or practice). Process evaluation data are collected to monitor the activities that take place as an innovation is put into effect. In the health service delivery system, the innovation might be a treatment program, a clinic standing order, or a medical procedure. In the health service support system, the innovation could be a health education course for health care providers or a new quality improvement approach. In both the delivery and support systems, a process evaluation can determine the extent to which an intervention was delivered as intended. It records the types of variations that occur and quantifies the magnitude of those variations. When done well, a process evaluation is more than an audit of services and outputs (a checklist approach to progress monitoring); it helps to uncover implementation-related factors that may have facilitated or hindered the achievement of targeted outcomes. While a process evaluation is highly useful for understanding implementation issues, by design it does not measure outcomes associated with an innovation. Additionally, the practical value of a process evaluation may be limited if data analyses do not occur until well after implementation. Therefore, this type of evaluation is almost always used in conjunction with other methods.
Summative evaluation is used to provide evidence about the worth or merit of an intervention and is typically conducted after an innovation has been implemented. It focuses on products, results, or impact. A summative evaluation, when used in conjunction with process evaluation data, can also address the bottom-line question of how well an intervention worked. Addressing these bottom-line questions requires that comprehensive process and outcome data of high quality are available.
Summative data are critical for accountability. However, there can be major challenges in using summative evaluation in health settings, particularly for evaluations pertaining to delivery system services. The complexity of the human condition makes it extremely challenging to isolate the effect of an intervention on an individual or population, given the multitude of factors that impact health and well-being. The types of evaluation designs that would allow one to examine such effects can be cumbersome, expensive, resource intensive, and in some cases unethical (e.g., using a randomized controlled design that withholds a potential treatment from a group that could benefit from it).
Another significant consideration in health service settings is the time lag between health activities and health outcomes. We can measure what happened in a program, but we may not be fully able to track whether the program ultimately made a difference in the target population without a significant investment in evaluation resources to follow these changes over long periods of time. It could be months or years before a program has its intended impact. In the interim, individual- and community-level influences could moderate the effectiveness of what was developed and implemented.
Despite these limitations and challenges, summative evaluation has its own merit. It answers the important question "What actually works?", a critical input in promoting health outcomes and wellness. Using summative evaluation in many health service settings requires considering whether the goals of an initiative are feasible (i.e., Can these be achieved?), evaluable (i.e., Can we measure these?), and plausible (i.e., Is the attribution chain between intervention and outcomes realistic?).
A formative evaluation is designed to create information that can be used for program improvement. While a formative evaluation relies upon process evaluation data, it differs by objective. The aim of a formative evaluation is improvement, whereas the aim of a process evaluation is simply to monitor implementation activities. When done well, a formative evaluation illuminates areas and issues impacting outcomes as an intervention unfolds. A formative evaluation helps with identifying and addressing emergent challenges. As such, formative approaches are particularly useful for measuring and responding to the complexity of health service systems. Formative data can be collected and used by health service delivery and support systems to make evidence-informed decisions and mid-course corrections as needed. Formative evaluations can be used to complement process evaluations. For example, a process evaluation might first surface issues with implementation fidelity; then, a formative evaluation could be conducted with intervention stakeholders to identify opportunities for improvement.
Explain the concept of "befriending accountability."
Evaluators can help health service professionals befriend accountability (i.e., become more comfortable with the concept of accountability) by involving practitioners in the development, implementation, and interpretation of evaluations. Framing evaluations in the spirit of improvement and demonstrating how evaluation data can be used to facilitate improvement in health service settings are additional ways to help health service professionals befriend accountability and focus on the needs of patients and clients. An improvement-oriented frame for accountability steers conversations toward exploring future opportunities for growth, rather than dwelling on past missed or lost opportunities. This frame minimizes the anxiety a recipient might feel about receiving performance-based feedback. It also makes evaluation data more practically useful for practitioners.