Active Implementation Frameworks

Fidelity assessment is best understood in the context of Usable Innovations, Implementation Drivers, Implementation Stages, Improvement Cycles, and Implementation Teams. For readers who are not familiar with the Active Implementation Frameworks, please complete the other Modules before proceeding. At a minimum, we urge you to complete Module 2: Implementation Drivers and Module 6: Usable Innovations, both of which this Module draws on directly.


Fidelity Assessment and Usable Innovations

The lack of adequately defined programs based on the best available evidence is an impediment to implementation with good outcomes (e.g., Vernez and colleagues, 2006). To improve student outcomes on a useful scale, innovations need to be teachable, learnable, doable, and assessable in typical education settings. Usable Innovations provide the content that is the focus of training, coaching, and fidelity assessments. They also provide the rationale for changing roles, functions, and structures in schools and districts to more efficiently, effectively, and consistently produce intended outcomes.

Module 6: Usable Innovations provides an overview of the four criteria that define an effective innovation.  A key criterion is:

4. Evidence of effectiveness and a practical performance assessment

  a. The performance assessment relates to the education innovation philosophy, values, and principles; the essential functions; and the core activities specified in the Practice Profiles. The performance assessment needs to be a feasible method (e.g., 10-minute classroom walkthrough observation ratings) that can be repeated in the context of typical education settings.
  b. Evidence that the education innovation is effective when used as intended:
    i. There are data to show the innovation is effective.
    ii. A performance (fidelity) assessment is available to indicate the presence and strength of the innovation in practice.
    iii. The performance assessment results are highly correlated (e.g., 0.50 or better) with intended outcomes for students, families, and society (see the sketch below).
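
To make criterion 4.b.iii concrete, here is one way a team might check how strongly fidelity scores track a student outcome measure. This is a minimal sketch using hypothetical, illustrative numbers; real teams would substitute their own fidelity and outcome data, and the 0.50 benchmark is the one named in the criterion above.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical data: one fidelity score and one student outcome score
# per classroom (e.g., averaged walkthrough ratings and end-of-unit
# assessment means). Real data would come from the team's own measures.
fidelity = [0.62, 0.71, 0.80, 0.55, 0.90, 0.68, 0.77, 0.85]
outcomes = [61.0, 70.0, 78.0, 58.0, 88.0, 65.0, 74.0, 83.0]

r = correlation(fidelity, outcomes)  # Pearson's r
print(f"fidelity-outcome correlation: r = {r:.2f}")

# Criterion 4.b.iii: results should correlate with intended outcomes
# at roughly 0.50 or better.
if r >= 0.50:
    print("Meets the 0.50-or-better benchmark.")
else:
    print("Below benchmark: revisit the items or the essential functions.")
```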

Fidelity assessments should be standard practice in education.  From an implementation point of view, any innovation (evidence-based or standard practice) is incomplete without a good measure to detect the presence and strength of the innovation as it is used in education practice (criterion 4.b above).

Fidelity assessments should be directly linked to the usable innovation components.  In particular, the essential functions and the Practice Profiles that operationalize those functions provide information to guide the development of fidelity assessment items.  Usable innovations are doable and assessable in practice. 

For example, fidelity of Positive Behavioral Interventions and Supports (PBIS) can be assessed because PBIS is a well-defined innovation.  On the other hand, Adequate Yearly Progress standards describe student outcomes but not teacher instruction.  In cases like this, Implementation Teams can begin by relying on generic fidelity assessments related to evidence-informed teacher instruction practices generally.  Even though these measures are not specific to an innovation, they can be used frequently (e.g., six times a year for every teacher) to assess the effectiveness of implementation supports for teachers.

 

Fidelity Assessment and Implementation Drivers

Fidelity assessment is a key outcome of implementation done well.  In Module 2: Implementation Drivers, it sits at the top of the implementation triangle. It is the link between implementation supports, consistent delivery of an innovation, and reliable outcomes for students.

An Implementation Team is accountable for assuring adequate supports for teachers and other practitioners using an innovation. Thus, while fidelity assessments include observations of teacher behavior in an education setting, the results reflect how well the Implementation Team is supporting teachers (e.g., with training and coaching and with efforts to improve administrative supports for teachers using an innovation as intended).

  • If teacher instruction is improving rapidly, the Implementation Team should be congratulated for assuring effective supports for teachers.
  • If teacher instruction is poor, the Implementation Team is accountable for providing more effective supports for teachers.
  • If the Implementation Team is struggling, state and district leaders are accountable for improving the functions and effectiveness of the Team.

For example, fidelity data may show that teachers 1-8 are providing high quality instruction while teachers 9-18 are not faring very well. The Implementation Team can analyze the information to figure out why the differences are occurring. Did different trainers train the high and low performing teachers? Do they have different coaches? Did different assessors do one set of assessments versus the other? Perhaps in this example the high fidelity teachers had Coach #1 and the lower fidelity teachers had Coach #2. This clearly points to a coaching problem, so the Implementation Team immediately goes to work on improving Coach #2's coaching skills. On the other hand, if teachers 1-18 were all struggling with particular instructional practices, to one degree or another, then the Implementation Team goes to work improving the training and coaching for all teachers.
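
This kind of diagnostic amounts to grouping fidelity scores by a support variable (coach, trainer, assessor) and comparing the groups. The sketch below encodes the example above with hypothetical scores and coach assignments; all numbers and names are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical fidelity scores (0-1) for 18 teachers, mirroring the
# example above: teachers 1-8 scoring high, teachers 9-18 scoring low.
scores = {f"teacher_{i}": s for i, s in enumerate(
    [0.85, 0.88, 0.90, 0.82, 0.87, 0.91, 0.84, 0.89,                 # 1-8
     0.55, 0.60, 0.52, 0.58, 0.61, 0.50, 0.57, 0.54, 0.59, 0.56], 1)}  # 9-18
coach = {t: ("Coach #1" if i <= 8 else "Coach #2")
         for i, t in enumerate(scores, 1)}

# Group fidelity scores by coach and compare group means.
by_coach = defaultdict(list)
for teacher, s in scores.items():
    by_coach[coach[teacher]].append(s)

for c, vals in sorted(by_coach.items()):
    print(f"{c}: mean fidelity = {mean(vals):.2f} (n = {len(vals)})")

# A large gap between coaches suggests a coaching-support problem;
# uniformly low scores point to improving training and coaching for all.
```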

For leaders in education, fidelity is not just of academic importance.  The use of a fidelity measure helps leaders and others discriminate implementation problems from innovation problems and helps guide problem solving to improve outcomes. As shown in Table 2, information about fidelity and outcomes can be linked to possible solutions to improve intended outcomes (Blase, Fixsen, & Phillips, 1984; Fixsen, Blase, Metz, & Naoom, 2014).

Without a fidelity assessment, leaders and Implementation Teams have no idea where to direct their improvement efforts.  When good student outcomes are achieved, there is no clear idea of what should be repeated to achieve those outcomes for all students. When poor student outcomes occur, leaders are left wondering what to do to “fix the problem.” Fidelity assessments help leaders make effective and efficient use of scarce resources to improve education outcomes for students.
 

Table 2: Fidelity scores as a diagnostic tool.

                      High fidelity            Low fidelity
  Good Outcomes       Celebrate!               Re-examine the innovation and
                                               modify the fidelity assessment
  Poor Outcomes       Modify the innovation    Start over

As shown in Table 2, the desired combination is high-fidelity use of an effective innovation that produces good outcomes. When high fidelity is linked consistently with good outcomes, it is time to celebrate and to continue using the innovation strategies and implementation support strategies with confidence. The second-best quadrant is the one where high fidelity is achieved but outcomes are poor. This clearly points to an innovation that is being done as intended but is ineffective. In this case, the innovation needs to be modified or discarded.

The least desirable quadrants are those in the low fidelity column, where corrective actions are less clear. Low fidelity in combination with good outcomes points to either a poorly described innovation or a poor measure of fidelity; in either case, it is not clear what is producing the good outcomes. Low fidelity associated with poor outcomes leaves users in a quandary. It may be a good time to start again: to develop or find an effective innovation and to develop effective implementation supports.  The most efficient strategy may be to first improve fidelity and then reassess outcomes, so that “babies are not thrown out with the bathwater.”
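
The logic of Table 2 can be summarized as a small decision rule. The sketch below is a hypothetical encoding, assuming the team has already classified fidelity as high or low and outcomes as good or poor against its own benchmarks.

```python
def diagnose(high_fidelity: bool, good_outcomes: bool) -> str:
    """Map a (fidelity, outcomes) pair to the action suggested by Table 2."""
    if high_fidelity and good_outcomes:
        return "Celebrate! Keep the innovation and its implementation supports."
    if high_fidelity and not good_outcomes:
        return "Modify (or discard) the innovation: done as intended, but ineffective."
    if not high_fidelity and good_outcomes:
        return "Re-examine the innovation and modify the fidelity assessment."
    return "Start over, or first improve fidelity and then reassess outcomes."

# Example: an innovation delivered with high fidelity but poor results.
print(diagnose(high_fidelity=True, good_outcomes=False))
```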

Implementation is in service of using effective innovations to realize intended outcomes.  Implementation Drivers are designed to improve the skill levels of teachers, principals, and staff and to produce greater benefits for students. Implementation Drivers “drive” successful implementation of innovations.

Fidelity Assessment: Prognosis

A good fidelity assessment requires a demonstration that the measure is highly correlated with student outcomes. This is specified in criterion 4.b.iii of the Usable Innovation criteria above. In other words, fidelity scores today predict student outcomes in the future.

If high fidelity today predicts student outcomes in the future, this is good news! This means fidelity measures bring the future into the present. Instead of lamenting poor student outcomes next year when the annual data come out, Implementation Teams can assess fidelity today and provide improved supports for better instruction today. Implementation Teams know how to help improve fidelity today by improving coaching for teachers and helping school administrators support teacher instruction more effectively. Thus, efforts to improve fidelity today help assure improved student outcomes in the future.
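
Assuming a fidelity measure whose correlation with outcomes has held up historically, a team could use current fidelity scores as an early-warning signal. The sketch below fits a simple least-squares line to hypothetical past (fidelity, outcome) pairs and applies it to this year's scores; every number, classroom name, and the priority threshold of 70 is invented for illustration.

```python
from statistics import linear_regression  # available in Python 3.10+

# Hypothetical historical data: last year's fidelity scores and the
# student outcomes that followed.
past_fidelity = [0.55, 0.62, 0.70, 0.78, 0.85, 0.90]
past_outcomes = [58.0, 63.0, 69.0, 76.0, 84.0, 89.0]

slope, intercept = linear_regression(past_fidelity, past_outcomes)

# Apply the fitted line to this year's fidelity scores to get an early
# signal, months before annual outcome data are released.
current = {"classroom_a": 0.58, "classroom_b": 0.88}
for room, f in current.items():
    predicted = slope * f + intercept
    flag = "  <- prioritize coaching support" if predicted < 70 else ""
    print(f"{room}: predicted outcome ~ {predicted:.1f}{flag}")
```

The point of a sketch like this is not precise prediction; it is that acting on low fidelity scores now is far cheaper than waiting for poor annual results.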

Fidelity Assessment and Improvement Cycles

On any school day, one in every five people in the United States (20% of the entire population) is in school.  In this massive enterprise, education happens when adult educators interact with students and with each other in education settings.

To be useful to students and functional for teachers, Implementation Teams need to know what to train, what to coach, and what performance to assess to make full and effective use of a selected effective practice.  Implementation Teams need to know WHAT is intended to be done (innovation components or instructional practices) so they can efficiently and effectively develop processes to assure high fidelity use of the innovation now and over time. 

Note that the need to specify WHAT is intended applies to current practices as well as evidence-based or innovative practices.  Current practices may be “standard practices” but WHAT are they?  Can they be done consistently?  Do they produce desired student outcomes when done as intended?  Standard practices must meet the same test if they are to be used, improved, and retained in education.

The PDSA Cycle

The process of establishing a fidelity assessment takes time and focused effort.  To establish fidelity assessments, Implementation Teams make intentional use of the plan, do, study, act (PDSA) cycle.  The benefits of the PDSA cycle in highly interactive environments have been evaluated across many domains including manufacturing, health, and substance abuse treatment.  As an improvement cycle, the PDSA trial-and-learning approach allows Implementation Teams to identify the essential components of the innovation itself.  The PDSA approach can help Implementation Teams evaluate the benefits of innovation components, retain effective components, and discard or de-emphasize non-essential components. 

  • Plan: The “plan” is the innovation or instruction as educators intend it to be used in practice. 
  • Do: The “plan” needs to be operationalized (what we will do and say to enact the plan) so it is doable in practice.  This compels attention to the core innovation components and provides an opportunity to begin to develop a training and coaching process (e.g., here is how to do the plan) and to create a measure of fidelity (e.g., did we “do” the plan as intended). 
  • Study: As a few newly trained educators begin working with students or others in an actual education setting, the budding fidelity measure can be used to interpret the outcomes in the “study” part of the PDSA cycle (e.g., did we do what we intended; did doing what we intended result in desired outcomes). 
  • Act: The Implementation Team uses the experience to help develop a new plan in which the essential components are even better defined and operationalized.  In addition, the fidelity assessment is adjusted to reflect the essential components more accurately, and the items are modified to make the assessment more practical to conduct in the education setting.
  • Cycle: The PDSA process is repeated until the innovation and the fidelity assessment are specified well enough to meet the Usable Innovation criteria (a schematic sketch of this loop follows).  At that point, the innovation is ready to be used by multiple educators, the fidelity assessment is deemed practical, and the correlation between the essential components and intended outcomes is high.
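
The loop can be pictured schematically in code. The sketch below is an illustration only, not a workflow tool: the Version fields, the trajectory of cycles, and the numeric exit thresholds (other than the 0.50 correlation named in the Usable Innovation criteria) are invented assumptions.

```python
from dataclasses import dataclass

@dataclass
class Version:
    components_defined: float  # 0-1: how well essential components are specified
    correlation: float         # fidelity-outcome correlation observed in "Study"
    practical: bool            # can the assessment be done in typical settings?

def meets_usable_innovation_criteria(v: Version) -> bool:
    # Exit condition from the text: innovation well specified, assessment
    # practical, and correlation with outcomes high (e.g., 0.50 or better).
    return v.components_defined >= 0.8 and v.practical and v.correlation >= 0.50

# Hypothetical trajectory across PDSA cycles; in reality each entry would
# come from an actual Plan-Do-Study-Act round with educators.
cycles = [
    Version(0.40, 0.20, False),  # cycle 1: vague plan, impractical measure
    Version(0.60, 0.35, True),   # cycle 2: operationalized, measure trimmed
    Version(0.85, 0.55, True),   # cycle 3: essential components well defined
]

for i, v in enumerate(cycles, 1):
    verdict = ("ready for broader use"
               if meets_usable_innovation_criteria(v) else "repeat the cycle")
    print(f"PDSA cycle {i}: {verdict}")
```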

Implementation Teams may employ the PDSA cycle in a Usability Testing format to arrive at a functional version of an innovation that is effective in practice and can be implemented with fidelity on a useful scale (e.g., Akin et al., 2013; Fixsen et al., 2001; McGrew & Griss, 2005; Wolf et al., 1995).  Once the components of an innovation have been identified, functional analyses can be done to determine empirically the extent to which key components contribute to significant outcomes.  As noted above, the vast majority of standard practices and innovations do not meet the Usable Innovation criteria.  Implementation Teams will need to make use of PDSA improvement cycles and Usability Testing to establish the core innovation components before they can proceed with broader scale implementation.

Activity 7.2
Capstone: Developing a Fidelity Assessment

Has your team identified the core components of your intervention or innovation? Has your team clearly defined or operationalized them? If YES, use the Fidelity Assessment Brainstorming Worksheet (included) to complete this activity individually or as a team.

Download PDF