Module 6: Usable Innovations

Welcome to the Active Implementation Module Series. In this module you will learn about usable innovations and how to begin applying them in your setting.

Learning Objectives

After this module, you should be able to:

Prerequisites

Terminology

Module Content Structure

This Module’s content is structured into three categories:

Introduction

Usable Innovations – “WHAT” are we trying to do?

To provide education and leadership effectively, we have to know WHAT we are doing to be effective.  Then we can do that on purpose in each classroom and school to reach all students.

WHAT is it?

WHAT we are trying to do for instruction, for school and district supports for instruction, and for leadership is important.  When it works, we want to be able to do it again and again.  To improve student outcomes on a useful scale, WHAT we are trying to do needs to be teachable, learnable, doable, and assessable in typical education settings.  Usable Innovation criteria define WHAT we are trying to do.  Usable Innovations provide the content that is the focus of selection, training, coaching, and fidelity assessments.  Usable innovations provide the reasons for changing roles, functions, and structures in schools and districts to more efficiently, effectively, and persistently produce intended outcomes.

 

Topic 1: Defining Usable Innovations

The lack of adequately defined programs is an impediment to using an evidence-based practice (EBP) or other evidence-informed innovation (EII) with good outcomes (e.g., Vernez and colleagues, 2006).  Education researchers have developed standards for assessing the rigor with which innovations have been tested (e.g. What Works Clearinghouse).  However, educators are more interested in the innovations themselves than in standards for experimental rigor.  To begin to address this issue, the following criteria have been developed for usable innovations; that is, innovations that are teachable, learnable, doable, and assessable in classrooms and schools to produce good outcomes for students (Fixsen, Blase, Metz, & Van Dyke, 2013).  The usable innovation criteria used to determine what to support in districts are listed below, with a brief discussion of each.

1. Clear description of the innovation
  1. Clear Philosophy, Values, and Principles
    1. The philosophy, values, and principles that underlie an education innovation provide guidance for all education decisions and are used to promote consistency, integrity, and sustainable effort across all districts and schools
  2. Clear inclusion and exclusion criteria define the population for which the education innovation is intended (e.g. middle school Algebra students who have passed Advanced Math)
    1. The criteria define who is most likely to benefit when the education innovation is used as intended

Not every education innovation is a good fit with the values and philosophy of a district or school.  In addition, many innovations were developed with particular populations of students.  Applications of the innovation with different populations of students may not be equally effective.  Thus, having a good description of an education innovation and its foundations is required so that leaders and others can make informed choices about what to use.

2. Clear essential functions that define the education innovation
  1. Clear description of the features that must be present to say that an education innovation exists in a given location
  2. Essential functions sometimes are called core education innovation components, active ingredients, or practice elements

The speed and effectiveness of implementation may depend upon knowing exactly what has to be in place to achieve the desired results for students, families, and communities: no more, and no less.  Not knowing the essential innovation components leads to time and resources wasted on attempting to implement elements that may turn out to be nonfunctional.

3. Operational definitions of essential functions (practice profiles: what to do and say)
  1. Practice profiles describe the core activities that allow an education innovation to be teachable, learnable, and doable in practice; and promote consistency across teachers and staff at the level of actual interactions with students

Knowing the essential functions (criterion #2) is a good start.  The next step is to express each essential component in terms that can be taught, learned, done in practice, and assessed in practice.  The methods for developing operational descriptions (practice profiles) in education were established by Gene Hall and Shirley Hord as part of the Concerns-Based Adoption Model (where they are called innovation configurations).

4. Evidence of effectiveness; Practical performance assessment
  1. The performance assessment relates to the education innovation philosophy, values, and principles; essential functions; and core activities specified in the practice profiles.  The performance assessment needs to be a feasible method (e.g. 10-minute classroom walkthrough observation ratings) that can be done repeatedly in the context of typical education settings
  2. Evidence that the education innovation is effective when used as intended
    1. There are data to show the innovation is effective
    2. A performance (fidelity) assessment is available to indicate the presence and strength of the innovation in practice
    3. The performance assessment results are highly correlated (e.g. 0.50 or better) with intended outcomes for students, families, and society

How well are teachers and staff saying and doing those things that are in keeping with the essential functions and with the intentions behind the education innovation?  If performance assessments do not exist, this becomes a developmental task for a skilled Implementation Team.  Note that the criterion for performance assessment includes the specification that a performance assessment should be highly predictive of intended outcomes.  If educators use an innovation as intended, then students will benefit as intended.
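To make criterion #4 concrete, below is a minimal sketch (in Python) of checking whether a performance (fidelity) assessment is highly correlated (e.g. 0.50 or better) with a student outcome measure.  The numbers, measure names, and helper function are hypothetical illustrations for this module, not part of any published assessment.

    # Minimal sketch: does the fidelity measure meet the 0.50 correlation guideline?
    # All data below are hypothetical.
    from math import sqrt

    def pearson_r(xs, ys):
        """Pearson correlation between two equal-length lists of numbers."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / sqrt(var_x * var_y)

    # Hypothetical classroom data: walkthrough fidelity ratings (0-100) and the
    # percent of students in each classroom meeting a benchmark.
    fidelity_scores  = [62, 75, 81, 90, 55, 70, 88, 67]
    student_outcomes = [48, 66, 72, 85, 41, 60, 79, 58]

    r = pearson_r(fidelity_scores, student_outcomes)
    print(f"Fidelity-outcome correlation: r = {r:.2f}")
    print("Meets criterion #4 guideline (r >= 0.50):", r >= 0.50)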

 

 

Using effective innovations in context

Where evidence-based innovations can be used, or need to be used, in education has been a vexing problem.  This is especially true in state education agencies (SEAs) and local education agencies (LEAs), where races, cultures, languages, economic conditions, current system services and functioning, and every other aspect of human societies vary widely within and across communities and neighborhoods.  From a public education point of view this is especially daunting: is a different form of an education innovation needed to accommodate the uniqueness of each education setting and system?

From an applied implementation perspective, the process of adjusting education innovations, organizations, and systems to fit and function together is expected and is part of good implementation practice.  This is what Implementation Teams do.  Consider a physician faced with the infinite variation among individual human beings, each with his or her own unique DNA, physical characteristics, strengths, and weaknesses.  Yet, for many pharmaceuticals, that variation is accounted for by a simple dosage calculation of so many milligrams per kilogram of body weight.  Similarly, when we step back a bit, we find that implementation tools and methods have been established to detect the contextual variations that matter and to accommodate those variations in the implementation process.

Effective innovations are critical to education success, but they are not enough.  As noted in the formula for success, effective implementation supports and enabling system and organization contexts also are essential to moving the indicators for all students in education.  Nevertheless, the process of improving education begins with selecting/creating effective innovations.  SEAs, LEAs, and educators can select and support the implementation of innovations that meet the usable innovation criteria outlined above.

The effectiveness of WHAT we do in everyday practice is important – why waste resources on doing what does not work?  The effectiveness of programs is noted in Usable Innovation criterion #4, with effectiveness tied to a measure of the presence and strength of the program in practice (“4.b.  The performance (fidelity) assessment is highly correlated (e.g. 0.50 or better) with intended outcomes for children, families, individuals, and society.”)

Educators are cautioned that assertions by innovation developers and researchers about the essential components of innovations are no substitute for data linking those reported essential components to outcomes.  Without adequate descriptions of innovations, presumptive essential functions cannot be ruled in and alternative explanations cannot be ruled out.  Thus, educators must define “innovations” so they meet the Usable Innovation criteria and can be taught, used in practice, and assessed for fidelity and outcomes.

WHAT are you trying to do to improve student outcomes?  How well does WHAT you are trying to do meet the four criteria for a Usable Innovation?

 

Activity 6.1: Getting started with Usable Innovations


To be usable, an innovation must be described in sufficient detail.  With that detail, you can train educators to implement it with fidelity, replicate it across multiple settings, and measure the use of the innovation.  With your team, consider a current or upcoming initiative and work through the tasks provided.



Topic 2: Establishing Usable Innovations

“Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions.  According to this definition, implementation processes are purposeful and are described in sufficient detail such that independent observers can detect the presence and strength of the ‘specific set of activities’ related to implementation.  In addition, the activity or program being implemented is described in sufficient detail so that independent observers can detect its presence and strength.  When thinking about implementation the observer must be aware of two sets of activities (innovation-level activity and implementation-level activity) and two sets of outcomes (innovation outcomes and implementation outcomes)”
 — Fixsen, Naoom, Blase, Friedman, & Wallace. (2005). Implementation Research: A synthesis of the literature.

Usable Innovation criteria assure that “the activity or program being implemented is described in sufficient detail.”

For example, to be useful to students and functional across thousands of educators and schools operating in locations across states, Implementation Teams need to know what to train, what to coach, and what performance to assess to make full and effective use of an effective practice.  Implementation Teams need to know WHAT is intended to be done (the innovation components) so they can efficiently and effectively assure proper use of the innovation now and over time.

The PDSA Cycle

To establish usable innovations, Implementation Teams make intentional use of the plan, do, study, act (PDSA) cycle.  As an improvement cycle in the Active Implementation Frameworks, the PDSA trial-and-learning approach allows Implementation Teams to identify the essential components of the innovation itself.  For example, in highly interactive education settings, the PDSA approach can help Implementation Teams evaluate the benefits of components, retain effective components, and discard non-essential components of an innovation or standard practice.

 

Plan

Identify barriers or challenges, using data whenever possible, and specify the plan to move programs or interventions forward as well as the outcomes that will be monitored.

The “plan” is the innovation as educators intend it to be used in practice.

Do

Carry out the strategies or plan as specified to address the challenges.

The “plan” needs to be operationalized (what we will do and say to enact the plan) so it is doable in practice.  This compels attention to the core innovation components and provides an opportunity to begin to develop a training and coaching process (e.g. here is how to do the plan) and to create a measure of fidelity (e.g. did we “do” the plan as intended).

Study

Use the measures identified during the planning phase to assess and track progress.

As a few newly trained educators begin working with students and families, the budding fidelity measure can be used to interpret the outcomes in the “study” part of the PDSA cycle (e.g. did we do what we intended; did doing what we intended result in desired outcomes).

Act

Make changes to the next iteration of the plan to improve implementation.

The Implementation Team uses the experience to help develop a new plan where the essential components are better defined and operationalized.  In addition, the fidelity assessment is adjusted to reflect more accurately the essential components and the items are modified to make the assessment more practical to conduct in the education setting.

Cycle

The PDSA process is repeated until the innovation is specified well enough to meet the usable innovation criteria.  At that point, the innovation is ready to be used by multiple educators, the fidelity assessment has proven practical, and the correlation between fidelity assessment results and intended outcomes is high.
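As a rough illustration of how the cycles repeat, the sketch below (in Python) loops through PDSA cycles until a stop condition standing in for the usable innovation criteria is met.  The dictionary fields, the toy "progress" made in each cycle, and the 0.50 threshold are assumptions for illustration only; in practice each phase involves people, data collection, and judgment rather than code.

    # Schematic sketch of repeated PDSA cycles (illustrative assumptions only).
    innovation = {
        "essential_functions_defined": False,
        "fidelity_assessment_practical": False,
        "fidelity_outcome_correlation": 0.20,
    }

    def meets_usable_innovation_criteria(inn):
        # Stop condition standing in for the usable innovation criteria.
        return (inn["essential_functions_defined"]
                and inn["fidelity_assessment_practical"]
                and inn["fidelity_outcome_correlation"] >= 0.50)

    cycle = 0
    while not meets_usable_innovation_criteria(innovation) and cycle < 10:
        cycle += 1
        # Plan:  specify the innovation as intended to be used, and the measures.
        # Do:    carry out the plan as specified with a few newly trained educators.
        # Study: use the budding fidelity measure to interpret the outcomes.
        # Act:   redefine essential components and adjust the fidelity items.
        innovation["essential_functions_defined"] = True           # toy progress
        innovation["fidelity_assessment_practical"] = cycle >= 2   # toy progress
        innovation["fidelity_outcome_correlation"] += 0.15         # toy progress

    print(f"Cycles used: {cycle}; ready for broader use:",
          meets_usable_innovation_criteria(innovation))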

 

Implementation Teams may employ the PDSA cycle many times over to arrive at a functional version of an innovation that is effective in practice and can be implemented with fidelity on a useful scale (e.g. Fixsen et al., 2001; Wolf et al., 1995).  Once the components of an innovation have been identified, functional analyses can be done to determine empirically the extent to which key components contribute to significant outcomes.  As noted previously, the vast majority of standard practices and innovations do not meet the Usable Innovation criteria.  Implementation Teams will need to make use of the PDSA improvement cycle to establish the essential innovation components before they can proceed with broader-scale implementation.

 

Activity 6.2: Case Example – Usable Innovations and PDSA


This case provides an example of an approach to establishing usable innovations.  Note how PDSA is used to simultaneously develop the innovation and the implementation supports for the innovation.  Review the case example, then go through the discussion questions yourself, or with your team.



Topic 3: Usable Innovations and Implementation Drivers

Implementation is in service to effective innovations.  Implementation Drivers are designed to improve the skill levels of teachers, principals, and staff so that greater benefits to students can be achieved.  Implementation Drivers drive successful use of innovations.  In this section we note the importance of Usable Innovations when developing performance (fidelity) assessments, doing coaching, providing training, and conducting staff selection processes.

Usable Innovations and Performance (Fidelity) Assessment

Fidelity assessments are not yet a standard part of the education system. In addition, many programs developed by researchers and experts for use in classrooms do not include fidelity assessments that schools and districts can use.  From an implementation point of view, any innovation (evidence-based or otherwise) is incomplete without a good measure of fidelity to detect the presence and strength of the innovation in practice (see 4.b. in the Usable Innovation criteria). 

The Usable Innovation components are the basis for items included in a fidelity assessment.  In particular, the essential functions and the Practice Profiles that operationalize those functions provide information to guide the development of fidelity assessment items.  Usable innovations are doable and assessable in practice. 

To maximize benefits to students, fidelity data collection is:

  1. Frequent:  More frequent fidelity assessments mean more opportunities for improvement.  Instruction, innovations and implementation supports, and school, district, and state supports for the program or practice benefit from frequent feedback.  The mantra for fidelity assessments in education is, “Every teacher every month.”
  2. Relevant:  Fidelity data are most informative when each item on the assessment is relevant to important supports for student learning.  That is, the fidelity assessment items are tied directly to the Practice Profile.
  3. Actionable:  Fidelity data are most useful when each item on the assessment can be included in a coaching service delivery plan and can be improved in the education setting.  After each assessment, the teacher and coach develop goals for improving instruction.  In addition, Implementation Teams work with leadership to ensure that teachers have access to the intensity of coaching supports needed for educators to be successful.

An important lesson of attending to implementation is that accountability moves from the individual practitioner to the organization and leadership.  Accountability is predicated on fidelity assessment. The focus of fidelity assessment is on teacher instruction since that is “where education happens.”  However, the accountability for teacher instruction remains with the Implementation Team and district and school leadership.

  • If student outcomes are improving, and the teachers are using the programs with fidelity, the teachers should be congratulated for their impact on students.
  • If teacher instruction is improving rapidly, the Implementation Team should be congratulated for assuring effective supports for teachers.
  • If teacher instruction is poor, the Implementation Team is accountable for providing more effective supports for teachers.
  • If the Implementation Team is struggling, state and district leadership are accountable for improving the functions, supports, and effectiveness of the Team.

For leaders in education, fidelity is not just of academic importance.  The use of a fidelity measure helps leaders and others discriminate implementation problems from intervention problems and helps guide problem solving to improve outcomes.  As shown below, information about fidelity and outcomes can be linked to possible solutions to improve intended outcomes (Blase, Fixsen, & Phillips, 1984; Fixsen, Blase, Metz, & Naoom, 2014).

                   High Fidelity                 Low Fidelity

Good Outcomes      Celebrate and duplicate!      Re-examine the innovation and
                                                 modify the fidelity assessment

Poor Outcomes      Modify the innovation         Start over


As shown in the table, the desired combination is high fidelity use of an innovation that produces good outcomes.

  • When high fidelity is linked consistently with good outcomes it is time to celebrate and continue to use the innovation strategies and implementation support strategies with confidence. 
  • The second best quadrant is where high fidelity is achieved, but outcomes are poor.  This clearly points to an innovation that is being done as intended, but is ineffective.  In this case, the innovation needs to be modified or discarded. 
  • The least desirable quadrants are those in the low fidelity column where corrective actions are less clear.  Low fidelity in combination with good outcomes points to either a poorly described innovation or a poor measure of fidelity (or both).  In either case, it is not clear what is producing the good outcomes. 
  • Low fidelity associated with poor outcomes leaves users in a quandary.  It may be a good time to start again — to develop or find an effective innovation and develop effective implementation supports.  (A small sketch after this list restates the table as a lookup.)
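The small Python sketch below simply restates the fidelity-by-outcomes table as a lookup.  The category labels and suggested actions mirror the table and bullets above; how a team judges "high/low" fidelity and "good/poor" outcomes is assumed to come from its own measures.

    # Lookup version of the fidelity-by-outcomes table (labels mirror the table).
    ACTIONS = {
        ("high", "good"): "Celebrate and duplicate!",
        ("high", "poor"): "Modify (or discard) the innovation",
        ("low",  "good"): "Re-examine the innovation and modify the fidelity assessment",
        ("low",  "poor"): "Start over: find or develop an effective innovation and supports",
    }

    def recommended_action(fidelity, outcomes):
        # Map a (fidelity, outcomes) judgment to the table's suggested next step.
        return ACTIONS[(fidelity.lower(), outcomes.lower())]

    print(recommended_action("High", "Good"))  # Celebrate and duplicate!
    print(recommended_action("Low", "Poor"))   # Start over: ...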

Usable Innovations and Coaching

For educators to make full, effective, and consistent use of an innovation, coaching begins immediately after training.  Coaches are part of an Implementation Team, provide parts of training, and conduct fidelity assessments for teachers and staff in nearby schools (the integrated part of the Implementation Drivers).  Thus, building-level coaches are well versed in the Usable Innovation and have expertise in coaching at the individual level.

Usable Innovation Criteria

  1. Clear description of the program
  2. Clear description of the essential functions that define the program
  3. Operational definitions of the essential functions
  4. A practical assessment of the performance of practitioners who are using the program

The focus of coaching is to help educators make full and effective use of a Usable Innovation.  Thus, the Usable Innovation criteria inform the content of coaching.  As part of the coaching supports for educators, building level coaches directly observe educators in action, review records, and interview those associated with the educator to see how the educator is doing in his or her work with students and others.  In essence, the coach is doing mini-fidelity assessments frequently and the educator becomes accustomed to being observed and acclimated to receiving positive, constructive, and helpful feedback to improve outcomes for students and others.

Coaching starts immediately after training and never ends (although the schedule and content of coaching may change as educators master all aspects of the evidence-based innovation).  Adjustments to coaching may be made through coaching service delivery plans and should align with the effectiveness data that come from the trainers.  Thus, coaching is an important key to achieving high fidelity among educators and desired outcomes for students.

Usable Innovations and Training

Best practices for training include providing information about the history, theory, philosophy, and rationales for innovation components.  This information is conveyed through pre-reading, lecture, and discussion formats geared to knowledge acquisition and understanding.  Skills and abilities related to carrying out the innovation components and practices are demonstrated (live or on video), followed by behavior rehearsal to practice the skills and receive feedback on the practice (Blase et al., 1984; Joyce & Showers, 2002; Kealey, Peterson, Gaul, & Dinh, 2000).

Usable Innovation Criteria

  1. Clear description of the program
  2. Clear description of the essential functions that define the program
  3. Operational definitions of the essential functions
  4. A practical assessment of the performance of practitioners who are using the program

The content of training is based on the Usable Innovation criteria.  Innovations that meet those criteria are described in sufficient detail to provide the content for the training best practices.

New educators continuously enter the system, providing many opportunities to improve the effectiveness and efficiency of staff training.  Effective training that is focused on the essential components of an innovation is a key step toward the full and effective (high fidelity) use of an innovation.

Usable Innovations and Staff Selection

Best practices for staff selection were identified in a meta-analysis of research on selection (McDaniel, Whetzel, Schmidt, & Maurer, 1994).  The authors found that structured interviews that include inquiries about education and background, exchanges of information related to the work to be done, and role play/behavior vignettes (job samples) were effective interview techniques that related to later work outcomes for employees. 

The content for staff selection is based on the Usable Innovation criteria.  It is especially important to ask questions to explore the candidate’s philosophy and values and how well those fit with those embedded in the usable innovation.  Philosophy and values are viewed as “unteachable” within the limits of training and coaching.  Therefore, it is important to select for philosophy and values that match those of the Usable Innovation. 

In current work in a variety of states, the best practices for staff selection often are rated as “not in place.”  The same schools describe the difficulties they face with educators who already are employed and who are only mildly (if at all) interested in making use of innovations.  This is not a teacher problem; this is an implementation problem.  Implementation of innovations with fidelity begins with staff selection and mutually informed consent to engage in practices consistent with the innovation.  In addition, the interviewers should describe the training, coaching, and fidelity assessment practices and encourage questions and discussion to secure informed agreement to participate.

Usable Innovations, Staff Selection, and Creating Readiness for Change

With existing staff groups, an interview process can be used to select educators who will be the first to be prepared to use an evidence-based innovation.  According to Prochaska, Prochaska, and Levesque (2001), about 20% of the current staff might be ready for change, 60% might be willing to think about it and prepare for change, and 20% may not be ready for change anytime soon. 

Staff selection is seen as critical to success in any field (Macan, 2009).  A leader who insists on change when educators are not prepared for change will annoy the educators and frustrate those who are trying to support the use of an evidence-based innovation in the school or district.

Usable Innovations and other Implementation Drivers

Leaders and facilitative administrators support selection, training, and coaching as outlined above.  The active implementation supports routinely help to produce the educator behavior required to deliver a usable innovation as intended.  Fidelity assessment, as a measure of the presence and strength of a usable innovation in practice, is used to inform coaching for educator improvement.  Fidelity assessment also helps inform leadership and helps schools continue to change to improve supports for educators’ full and effective use of a Usable Innovation.  Usable Innovations and Implementation Teams provide school, district, and state leaders the foundations for working together to achieve greatly improved outcomes for students (are we doing what we intend, is it producing desired outcomes).

Module 6 Summary

What we have outlined in this Module is the Formula for Success.

Formula For Success

Effective Innovations × Effective Implementation × Enabling Contexts = Improved Student Outcomes

Usable innovations are teachable, learnable, doable, and can be assessed in classrooms and schools to produce good outcomes for students.

The usable innovation criteria used to determine what to support in districts are:

  1. Clear description of the program
  2. Clear description of the essential functions that define the program
  3. Operational definitions of the essential functions
  4. A practical assessment of the performance of practitioners who are using the program

Capstone Quiz


Congratulations, you finished Module 6: Usable Innovations!  We invite you to assess your learning via the Capstone Quiz.

Your virtual coach Asha guides you through a quick set of questions
[approximate time: 5-10 minutes].

 

The Active Implementation Hub, AI Modules, and AI Lessons are an initiative of the State Implementation & Scaling-up of Evidence-based Practices Center (SISEP) and the National Implementation Research Network (NIRN), located at The University of North Carolina at Chapel Hill's FPG Child Development Institute.

Resources and References

Blase, K. A., Fixsen, D. L., & Phillips, E. L. (1984). Residential treatment for troubled children: Developing service delivery systems. In S. C. Paine, G. T. Bellamy & B. Wilcox (Eds.), Human services that work: From innovation to standard practice (pp. 149-165). Baltimore, MD: Paul H. Brookes Publishing.

Carrizales-Engelmann, D., Sadler, C., Tedesco, M., Horner, R., & Fixsen, D. (2011). “Scaleworthy” criteria for selecting innovations in education. Salem: Oregon Department of Education.

Crosse, S., Williams, B., Hagen, C. A., Harmon, M., Ristow, L., DiGaetano, R., . . . Derzon, J. H. (2011). Prevalence and implementation fidelity of research-based prevention programs in public schools: Final report. Washington, DC: U.S. Department of Education.

Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23-45.

Dobson, L., & Cook, T. (1980). Avoiding Type III error in program evaluation: Results from a field experiment. Evaluation and Program Planning, 3, 269-276.

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350. doi: 10.1007/s10464-008-9165-0

Embry, D. D., & Biglan, A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical Child and Family Psychology Review, 11(3), 75-113. doi: 10.1007/s10567-008-0036-x

Fixsen, D. L., Blase, K. A., Metz, A. J., & Naoom, S. F. (2014). Producing high levels of treatment integrity in practice: A focus on preparing practitioners. In L. M. Hagermoser Sanetti & T. Kratochwill (Eds.), Treatment Integrity: A foundation for evidence-based practice in applied psychology (pp. 185-201). Washington, DC: American Psychological Association Press (Division 16).

Fixsen, D. L., Blase, K. A., Timbers, G. D., & Wolf, M. M. (2001). In search of program implementation: 792 replications of the Teaching-Family Model. In G. A. Bernfeld, D. P. Farrington & A. W. Leschied (Eds.), Offender rehabilitation in practice: Implementing and evaluating effective programs (pp. 149-166). London: Wiley.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation Research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).

Fixsen, D., Blase, K., Metz, A., & Van Dyke, M. (2013). Statewide implementation of evidence-based programs. Exceptional Children (Special Issue), 79(2), 213-230.

Hattie, J. A. C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.

Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Kealey, K. A., Peterson, A. V., Jr., Gaul, M. A., & Dinh, K. T. (2000). Teacher training as a behavior change process: Principles and results from a longitudinal study. Health Education & Behavior, 27(1), 64-81.

Macan, T. (2009). The employment interview: A review of current studies and directions for future research. Human Resource Management Review, 19(3), 203-218.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79(4), 599-616.

Prochaska, J. M., Prochaska, J. O., & Levesque, D. A. (2001). A transtheoretical approach to changing organizations. Administration and Policy in Mental Health and Mental Health Services Research, 28(4), 247-261. doi: 10.1023/A:1011155212811

Vernez, G., Karam, R., Mariano, L. T., & DeMartini, C. (2006). Evaluating comprehensive school reform models at scale: Focus on implementation. Santa Monica, CA: RAND Corporation.

Wolf, M. M., Kirigin, K. A., Fixsen, D. L., Blase, K. A., & Braukmann, C. J. (1995). The Teaching-Family Model: A case study in data-based program development and refinement (and dragon wrestling). Journal of Organizational Behavior Management, 15, 11-68.