Topic 3: Usable Innovations and Implementation Drivers

Implementation is in service to effective innovations.  Implementation Drivers are designed to improve the skill levels of teachers, principals, and staff so that greater benefits to students can be achieved; they drive the successful use of innovations.  In this section we note the importance of Usable Innovations when developing performance (fidelity) assessments, providing coaching and training, and conducting staff selection processes.

Usable Innovations and Performance (Fidelity) Assessment

Fidelity assessments are not yet a standard part of the education system. In addition, many programs developed by researchers and experts for use in classrooms do not include fidelity assessments that schools and districts can use.  From an implementation point of view, any innovation (evidence-based or otherwise) is incomplete without a good measure of fidelity to detect the presence and strength of the innovation in practice (see 4.b. in the Usable Innovation criteria). 

The Usable Innovation components are the basis for items included in a fidelity assessment.  In particular, the essential functions and the Practice Profiles that operationalize those functions provide information to guide the development of fidelity assessment items.  Usable innovations are doable and assessable in practice. 

To maximize benefits to students, fidelity data collection is:

  1. Frequent:  More frequent fidelity assessments mean more opportunities for improvement.  Instruction, innovations and implementation supports, and school, district, and state supports for the program or practice all benefit from frequent feedback.  The mantra for fidelity assessments in education is, “Every teacher every month.”
  2. Relevant:  Fidelity data are most informative when each item on the assessment is relevant to important supports for student learning.  That is, the fidelity assessment items are tied directly to the Practice Profile.
  3. Actionable:  Fidelity data are most useful when each item on the assessment can be included in a coaching service delivery plan and can be improved in the education setting.  After each assessment, the teacher and coach develop goals for improving instruction.  In addition, Implementation Teams work with leadership to ensure that teachers have access to the intensity of coaching supports needed for educators to be successful.
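The three criteria above can be sketched in code.  This is a hypothetical illustration, not part of any published instrument: the class names, the 0–2 rubric, and the teacher names are assumptions made for the example.  It ties each fidelity item to a Practice Profile element (relevant), leaves room for a coaching goal (actionable), and checks the “every teacher every month” schedule (frequent).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FidelityItem:
    practice_profile_element: str  # ties the item to the Practice Profile (relevant)
    score: int                     # e.g., a 0-2 rubric rating (assumed scale)
    coaching_goal: str = ""        # set with the coach after the assessment (actionable)

@dataclass
class Assessment:
    teacher: str
    assessed_on: date
    items: list = field(default_factory=list)

def teachers_missing_monthly_assessment(teachers, assessments, year, month):
    """Return teachers with no fidelity assessment in the given month (frequent)."""
    assessed = {a.teacher for a in assessments
                if a.assessed_on.year == year and a.assessed_on.month == month}
    return sorted(set(teachers) - assessed)

# Example: two teachers, only one assessed in March.
assessments = [Assessment("Lee", date(2024, 3, 5),
                          [FidelityItem("Explicit modeling", 2)])]
print(teachers_missing_monthly_assessment(["Lee", "Ortiz"], assessments, 2024, 3))
# → ['Ortiz']
```

A real system would, of course, draw items from the district’s own Practice Profiles rather than hard-coded strings.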

An important lesson of attending to implementation is that accountability moves from the individual practitioner to the organization and leadership.  Accountability is predicated on fidelity assessment. The focus of fidelity assessment is on teacher instruction since that is “where education happens.”  However, the accountability for teacher instruction remains with the Implementation Team and district and school leadership.

  • If student outcomes are improving, and the teachers are using the programs with fidelity, the teachers should be congratulated for their impact on students.
  • If teacher instruction is improving rapidly, the Implementation Team should be congratulated for assuring effective supports for teachers.
  • If teacher instruction is poor, the Implementation Team is accountable for providing more effective supports for teachers.
  • If the Implementation Team is struggling, state and district leadership are accountable for improving the functions, supports, and effectiveness of the Team.

For leaders in education, fidelity is not just of academic importance.  The use of a fidelity measure helps leaders and others discriminate implementation problems from intervention problems and helps guide problem solving to improve outcomes.  As shown below, information about fidelity and outcomes can be linked to possible solutions to improve intended outcomes (Blase, Fixsen, & Phillips, 1984; Fixsen, Blase, Metz, & Naoom, 2014).

                    High Fidelity               Low Fidelity

  Good Outcomes     Celebrate and duplicate!    Re-examine the innovation;
                                                modify the fidelity assessment

  Poor Outcomes     Modify the innovation       Start over

As shown in the table, the desired combination is high fidelity use of an innovation that produces good outcomes.

  • When high fidelity is linked consistently with good outcomes it is time to celebrate and continue to use the innovation strategies and implementation support strategies with confidence. 
  • The second best quadrant is where high fidelity is achieved, but outcomes are poor.  This clearly points to an innovation that is being done as intended, but is ineffective.  In this case, the innovation needs to be modified or discarded. 
  • The least desirable quadrants are those in the low fidelity column where corrective actions are less clear.  Low fidelity in combination with good outcomes points to either a poorly described innovation or a poor measure of fidelity (or both).  In either case, it is not clear what is producing the good outcomes. 
  • Low fidelity associated with poor outcomes leaves users in a quandary.  It may be a good time to start again — to develop or find an effective innovation and develop effective implementation supports.
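The decision logic in the table and bullets above can be expressed as a small function.  This is a minimal sketch for illustration; the function name and the exact wording of the returned actions are assumptions, not part of the cited sources.

```python
def next_step(high_fidelity: bool, good_outcomes: bool) -> str:
    """Map a (fidelity, outcome) combination to the suggested next action."""
    if high_fidelity and good_outcomes:
        return "Celebrate and duplicate!"
    if high_fidelity and not good_outcomes:
        # The innovation is being done as intended but is ineffective.
        return "Modify the innovation"
    if not high_fidelity and good_outcomes:
        # Unclear what is producing the good outcomes.
        return "Re-examine the innovation; modify the fidelity assessment"
    # Low fidelity and poor outcomes: a good time to start again.
    return "Start over"

print(next_step(high_fidelity=True, good_outcomes=False))
# → Modify the innovation
```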

Usable Innovations and Coaching

For educators to make full, effective, and consistent use of an innovation, coaching begins immediately after training.  Coaches are part of an Implementation Team, provide parts of training, and conduct fidelity assessments for teachers and staff in nearby schools (the integrated part of the Implementation Drivers).  Thus, building level coaches are well versed in the Usable Innovation and have expertise coaching at the individual level.  

Usable Innovation Criteria

  1. Clear description of the program
  2. Clear description of the essential functions that define the program
  3. Operational definitions of the essential functions
  4. A practical assessment of the performance of practitioners who are using the program

The focus of coaching is to help educators make full and effective use of a Usable Innovation.  Thus, the Usable Innovation criteria inform the content of coaching.  As part of the coaching supports for educators, building level coaches directly observe educators in action, review records, and interview those associated with the educator to see how the educator is doing in his or her work with students and others.  In essence, the coach is doing mini-fidelity assessments frequently and the educator becomes accustomed to being observed and acclimated to receiving positive, constructive, and helpful feedback to improve outcomes for students and others.

Coaching starts immediately after training and never ends (although the schedule and content of coaching may change as educators master all aspects of the evidence-based innovation).  These adjustments to coaching may be managed through coaching service delivery plans and aligned with the effectiveness data that come from the trainers.  Thus, coaching is an important key to achieving high fidelity amongst educators and desired outcomes for students.

Usable Innovation and Training

Best practices for training include providing information about history, theory, philosophy, and rationales for innovation components.  This information is conveyed through pre-reading, lecture, and discussion formats geared to knowledge acquisition and understanding.  Skills and abilities related to carrying out the innovation components and practices are demonstrated (live or on tape) and then practiced through behavior rehearsal, with feedback on the practice (Blase et al., 1984; Joyce & Showers, 2002; Kealey, Peterson, Gaul, & Dinh, 2000). 

The content of training is based on the Usable Innovation criteria.  Innovations that meet those criteria are described in sufficient detail to provide the content for the training best practices.

New educators continuously enter the system, providing many opportunities to improve the effectiveness and efficiency of staff training.  Effective training that is focused on the essential components of an innovation is a key step toward the full and effective (high fidelity) use of an innovation.

Usable Innovations and Staff Selection

Best practices for staff selection were identified in a meta-analysis of research on selection (McDaniel, Whetzel, Schmidt, & Maurer, 1994).  The authors found that structured interviews that include inquiries about education and background, exchanges of information related to the work to be done, and role play/behavior vignettes (job samples) were effective interview techniques that related to later work outcomes for employees. 

The content for staff selection is based on the Usable Innovation criteria.  It is especially important to ask questions to explore the candidate’s philosophy and values and how well they fit with the philosophy and values embedded in the Usable Innovation.  Philosophy and values are viewed as “unteachable” within the limits of training and coaching.  Therefore, it is important to select for philosophy and values that match those of the Usable Innovation. 

In current work in a variety of states, the best practices for staff selection often are rated as “not in place.”  The same schools describe the difficulties they face with educators who already are employed and who are only mildly (if at all) interested in making use of innovations.  This is not a teacher problem; this is an implementation problem.  Implementation of innovations with fidelity begins with staff selection and mutually informed consent to engage in practices consistent with the innovation.  In addition, the interviewers should describe the training, coaching, and fidelity assessment practices and encourage questions and discussion to secure informed agreement to participate.

Usable Innovations, Staff Selection, and Creating Readiness for Change

With existing staff groups, an interview process can be used to select educators who will be the first to be prepared to use an evidence-based innovation.  According to Prochaska, Prochaska, and Levesque (2001), about 20% of the current staff might be ready for change, 60% might be willing to think about it and prepare for change, and 20% may not be ready for change anytime soon. 
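As illustrative arithmetic only, the rough 20% / 60% / 20% split from Prochaska et al. (2001) can be applied to a hypothetical staff group; the function name and the staff size of 50 are assumptions made for the example.

```python
def readiness_estimate(staff_size: int) -> dict:
    """Estimate how many staff fall into each readiness-for-change group,
    using the rough 20% / 60% / 20% split reported by Prochaska et al. (2001)."""
    return {
        "ready now": round(staff_size * 0.20),
        "thinking/preparing": round(staff_size * 0.60),
        "not ready soon": round(staff_size * 0.20),
    }

print(readiness_estimate(50))
# → {'ready now': 10, 'thinking/preparing': 30, 'not ready soon': 10}
```

The point of the estimate is planning, not precision: with a staff of 50, an Implementation Team might expect roughly ten educators who are ready to be the first cohort.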

Staff selection is seen as critical to success in any field (Macan, 2009).  A leader who insists on change when educators are not prepared for change will annoy the educators and frustrate those who are trying to support the use of an evidence-based innovation in the school or district.

Usable Innovations and other Implementation Drivers

Leaders and facilitative administrators support selection, training, and coaching as outlined above.  The active implementation supports routinely help to produce the educator behavior required to deliver a Usable Innovation as intended.  Fidelity assessment, as a measure of the presence and strength of a Usable Innovation in practice, is used to inform coaching for educator improvement.  Fidelity assessment also helps inform leadership and helps schools continue to change to improve supports for educators’ full and effective use of a Usable Innovation.  Usable Innovations and Implementation Teams provide school, district, and state leaders the foundations for working together to achieve greatly improved outcomes for students (Are we doing what we intended?  Is it producing the desired outcomes?).