Module 7: Fidelity Assessment

Welcome to the Active Implementation Module Series.

Learning Objectives

After this module, you should be able to:



Module Content Structure

This Module’s content is structured into three categories:


The reader is urged to complete Module 6: Usable Innovations, Lesson 3: Practice Profiles, and Module 3: Implementation Teams before beginning the Fidelity Assessment Module. The information in those Modules and Lesson is critical to understanding the information that follows in this Module.


Fidelity assessment is defined as indicators of doing what is intended. This definition requires a) knowing what is intended, and b) having some way of knowing the extent to which a person did what was intended. Knowing what is intended is the subject of Module 6: Usable Innovations. Knowing the extent to which a person did what was intended is the subject of this Module.

Education and human services are based on people interacting with each other in ways that are intended to be helpful whether individually (e.g., clinician with client, teacher with student) or in teams (e.g., IEP team with a student and family members). Given the complexities of human behavior, there is no expectation that people will be the same from moment to moment or day to day. With such variation in mind, fidelity assessments are designed to help detect and support consistent and relevant instruction and innovation behavior. When evidence-based approaches or other effective innovations are being used in education, fidelity assessments measure the presence and strength of an innovation as it is used in daily practice.

It has been noted elsewhere that the Initial Implementation Stage of using an innovation is the most fragile (Module 4: Implementation Stages). It is during this time that new Implementation Teams are forming and learning to function in districts and schools that have recently decided to try an innovation. Under these conditions, the Implementation Team's supports for teachers may be inconsistent and not always immediately effective. The idea that teacher fidelity assessment results are the products of Implementation Team supports for practitioners (teachers, clinicians) will be detailed in a later section of this module.

Research and Rationales

Accountability and administrative reviews concern teacher employment, salary increases, and access to state and federal funding for education.  These are evaluations of teachers. In contrast, fidelity assessments concern the effectiveness of implementation supports for teachers who are expected to use identified innovations and other instructional practices in their interactions with students.

Given that Implementation Teams are accountable for effectively supporting teacher instruction, fidelity assessments are evaluations of Implementation Teams. This difference significantly impacts how the process is explained and introduced to teachers and school administrators and how information is collected and used.


The lack of fidelity assessment is a problem in human services generally, including education. Dane & Schneider (1998), Durlak & DuPre (2008), and others summarized reviews of over 1,200 outcome studies. They found that investigators assessed the presence or strength (fidelity) of the independent variable (the innovation) in about 20% of the studies. In addition, only about 5% of those studies used those assessments in the analyses of the outcome data. Without fidelity assessment information about the presence and strength of the practices being studied, it is difficult to know what the innovation is or what produced the outcomes in a study (Dobson & Cook, 1980; Naleppa & Cagle, 2010). For outcome studies showing positive results, the lack of definition of the innovation and lack of information about fidelity means success is not repeatable. The only thing worse than trying an innovation and failing is succeeding and not knowing what was done to produce the success. Achieving good outcomes once is laudable – achieving good outcomes again and again is educationally significant.

For outcome studies showing a lack of positive results, the absence of fidelity data makes improvement difficult and confusing. Did poor outcomes result from poor use of the innovation (an implementation problem), or is the innovation itself in need of modification (an innovation problem)? Without fidelity data, we cannot separate the two and our efforts to improve will be inefficient and probably ineffective.

Crosse and colleagues (2011) surveyed a nationally representative sample of 2,500 public school districts and 5,847 public schools. In response to the survey, principals reported using an average of 9 innovations per school. Crosse and colleagues investigated the innovations and found that 7.8% had evidence to support their effectiveness. They further found that only 3.5% of the innovations met minimum standards for fidelity when used in schools. Without an assessment of fidelity, educators do not know what the adults are doing to produce good outcomes or poor outcomes. They have no systematic way to detect effective innovations or to improve innovations as they evolve in education settings.

Based on these data, the best guess is that about 1% of the schools in the United States use fidelity assessments on a regular basis. Poor practices in current use can go undetected, and resources are invested in instruction and innovation strategies that may be effective on paper but are not actually being used in practice. Names and claims are poor substitutes for actually using effective practices fully and competently in daily interactions with students to produce educationally significant results.

The critical features of fidelity assessments that do exist have been summarized (Fixsen et al., 2005; Sanetti & Kratochwill, 2014; Schoenwald & Garland, 2013).  The assessments can be categorized as shown in Table 1 (some examples are provided in the Table).

Table 1: Method to categorize fidelity assessment items.


Context
  • Direct Observation: Organization of the classroom and student groups
  • Record Review: Lesson plan is available for review
  • Ask Others: Interview the Master Teacher regarding the teacher’s planning and preparation activities

Content
  • Direct Observation: Lesson plan followed during the class period
  • Record Review: Lesson plan contains key elements of an effective approach to teaching the subject
  • Ask Others: Interview the Master Teacher regarding use of the agreed-upon curriculum; ratings by the Master Teacher regarding reliable inclusion of effective instructional approaches in lesson plans

Competence
  • Direct Observation: Teachers’ engagement of students with information and questions, and teachers’ use of prompt and accurate feedback
  • Record Review: Review coaching notes to see progress on instructional skill development
  • Ask Others: Interview students regarding instruction methods used by the teacher


Context and content are important aspects of fidelity assessment.  The critical dimension is direct observation of competence as a practitioner (teacher, therapist) interacts with others (student, patient) in the service delivery setting (classroom, clinic).
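To make the categories in Table 1 concrete, a minimal sketch follows. The items, the 0-2 scoring scale, and the summary-by-dimension are illustrative assumptions for this sketch, not part of any validated instrument described in this module.

```python
# Minimal sketch (hypothetical items and scoring): organizing fidelity
# assessment items by dimension (Context, Content, Competence) and data
# source (Direct Observation, Record Review, Ask Others), then summarizing.
from dataclasses import dataclass

@dataclass
class Item:
    dimension: str    # "Context", "Content", or "Competence"
    source: str       # "Direct Observation", "Record Review", or "Ask Others"
    description: str
    score: int        # illustrative scale: 0 = not in place, 1 = partial, 2 = in place

items = [
    Item("Context", "Record Review", "Lesson plan is available for review", 2),
    Item("Content", "Direct Observation", "Lesson plan followed during class period", 1),
    Item("Competence", "Direct Observation",
         "Teacher engages students and gives prompt, accurate feedback", 2),
]

# Summarize the percent of possible points earned within each dimension.
for dimension in ("Context", "Content", "Competence"):
    scores = [item.score for item in items if item.dimension == dimension]
    if scores:
        pct = 100 * sum(scores) / (2 * len(scores))
        print(f"{dimension}: {pct:.0f}% of possible points")
```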

Fidelity assessment and Active Implementation Frameworks

The Active Implementation Frameworks (AIF) help distinguish fidelity assessment from teacher certification, accountability, or administrative review processes. The Active Implementation Frameworks are universal and apply to any attempt to use any innovation. Some innovations are evidence-based and some have been operationalized, but the vast majority do not meet either of these conditions.

Fidelity assessments are most reliable when the core innovation features have been identified, operationalized, and shown through research and evaluation studies to correlate positively with outcomes. It can take many years to conduct the studies needed to validate a measure of fidelity by demonstrating that the items or measure correlates well with positive outcomes.

Many innovations are evidence-informed and consist of individual practices that are predicted to produce improved learning and outcomes, but they have no validated fidelity assessments. In these instances, it is important to develop a fidelity assessment of some kind so that you can “get started and get better” at understanding and detecting the core features needed to produce outcomes.

Example: The PBIS School-wide Evaluation Tool

Download: Handout 19: The PBIS School-wide Evaluation Tool

An example of a fidelity assessment is provided by Positive Behavioral Interventions and Supports (PBIS). PBIS is a well-developed and researched approach to reducing discipline problems and suspensions and improving academic achievement in schools. State and national networks of trainers and coaches and a national data collection and reporting system support PBIS. The research supporting PBIS indicates that specific core features must be present if PBIS is to be effective, and studies have demonstrated that higher fidelity is correlated with better student outcomes.

In this example, fidelity assessment items are designed to detect the presence and strength of each PBIS core feature in a school environment. Notice that the data source may be a product, interview, or observation.  These ways of assessing fidelity are summarized in the handout.

Fidelity assessment and improving student outcomes

  • Frequent: More frequent fidelity assessments mean more opportunities for improving instruction, innovations, and implementation supports (i.e., at school, district, and state levels). Fidelity assessments in education should be done for every teacher six times per academic year for the first 2-3 years after innovations are put in place. This helps assure frequent feedback to inform and focus improvement cycles that support teachers effectively and efficiently.
    For more information see Module 5: Improvement Cycles

  • Relevant: Fidelity data are most informative when each item on the assessment is relevant to important supports for student learning. That is, fidelity assessment items are tied directly to the Practice Profile for the innovation, and the Practice Profile is based on the best research and evaluation evidence to date.
    For more information see Lesson 3: Practice Profiles
  • Actionable: Fidelity data are most useful when each item on the assessment can be included in an action plan and can be improved in the education setting. The teacher and coach/Implementation Team develop action plans after each assessment to improve supports for effective instruction.
    For more information see Implementation Drivers: Action Plan & Implementation Stages: Action Plan

These three dimensions combine to produce useful information for improving educational practices and student outcomes. If they are to be used frequently, fidelity assessments must be practical (e.g. a 10-minute observation). To be useful for improving supports for teachers, fidelity assessment items need to be relevant and actionable by Implementation Teams. Data linking fidelity information with student outcomes will help to sustain the assessment system.

As noted earlier, few fidelity assessments exist in education and human services.  This situation has persisted unchanged for decades (Moncher & Prinz, 1991; Sanetti & Kratochwill, 2014). A search of the What Works Clearinghouse in education yields few examples of fidelity measures, and very few of those relate to instruction practices in typical classrooms.


There are accountability and administrative reviews and other evaluations of teachers that are conducted by principals and other school staff. Typically, these assessments are used to determine teacher status (promotion, pay increase, retention) or to meet state and federal standards for access to funding and state and federal legislative mandates.


There are teacher fidelity assessments that are comprehensive and require extensive preparation of assessors and longer observations to ensure observers obtain valid and reliable information (Danielson, 2013; Marzano, 2007).

Comprehensive teacher assessments have much to recommend them and provide relevant and actionable information. Because they are comprehensive, they also are cumbersome. The time and effort required to conduct them mean they are not practical for the frequent administration needed for continual improvement of ongoing supports for teacher performance.

Again, very few fidelity measures are used in daily practice in education and human services (Crosse et al., 2011). This presents a problem for Implementation Team members who are accountable for producing high levels of fidelity linked to good student outcomes in practice settings.

The lack of fidelity measures also poses a problem for administrators and directors in education.  Instead of random acts of improvement based on best guesses, fidelity measures help diagnose problems and target attention on specific actions that lead to improved student outcomes.  Strategies to improve implementation supports (e.g. more coaching or more training from the Implementation Team) are very different from strategies to improve innovations (e.g. modify instruction or curriculum to produce better outcomes when used with high levels of fidelity).  Having fidelity assessment data helps to focus the use of limited resources in districts and schools. 


Activity 7.1
Designing a Fidelity Assessment

Implementing a fidelity assessment often poses a number of challenges for implementation teams. In this activity we provide an initial four-step approach for identifying, categorizing, and discussing challenges, and then completing action planning.

Download PDF

Active Implementation Frameworks

Understanding fidelity assessment is enhanced by understanding Usable Innovations, Implementation Drivers, Implementation Stages, Improvement Cycles, and Implementation Teams. For readers who are not familiar with the Active Implementation Frameworks, please complete the other Modules before proceeding. At a minimum, we urge you to complete the following:

Continue with Module 7

Fidelity Assessment and Usable Innovations

The lack of adequately defined programs based on best available evidence is an impediment to implementation with good outcomes (e.g., Vernez and colleagues, 2006).  To improve student outcomes on a useful scale, innovations need to be teachable, learnable, doable, and assessable in typical education settings.  Usable Innovations provide the content that is the focus of training, coaching, and fidelity assessments.  Usable innovations provide the reasons for changing roles, functions, and structures in schools and districts to more efficiently, effectively, and consistently produce intended outcomes.

Module 6: Usable Innovations provides an overview of the four criteria that define an effective innovation.  A key criterion is:

4. Evidence of effectiveness and a practical performance assessment

  a. The performance assessment relates to the education innovation’s philosophy, values, and principles; essential functions; and core activities specified in the Practice Profiles. The performance assessment needs to be a feasible method (e.g. a 10-minute classroom walkthrough observation rating) that can be done repeatedly in the context of typical education settings
  b. Evidence that the education innovation is effective when used as intended:
    i. There are data to show the innovation is effective
    ii. A performance (fidelity) assessment is available to indicate the presence and strength of the innovation in practice
    iii. The performance assessment results are highly correlated (e.g. 0.50 or better) with intended outcomes for students, families, and society (a sketch of checking this correlation follows this list)
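As a minimal illustration of the correlation criterion in 4.b above, the sketch below uses hypothetical fidelity scores and end-of-year student outcome scores for ten teachers; the data, variable names, and threshold check are assumptions for illustration only, not a validated measure.

```python
# Minimal sketch (hypothetical data): is a fidelity measure strongly
# correlated with later student outcomes?
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical fidelity scores (0-100) for ten teachers, paired with the
# mean end-of-year outcome score for each teacher's students.
fidelity = [55, 62, 70, 74, 78, 81, 85, 88, 92, 95]
outcomes = [48, 50, 61, 60, 72, 70, 78, 80, 86, 90]

r = correlation(fidelity, outcomes)
print(f"Pearson r = {r:.2f}")

# The module's illustrative benchmark: roughly 0.50 or better.
if r >= 0.50:
    print("Fidelity measure meets the 0.50 benchmark in this sample.")
else:
    print("Fidelity measure falls short; revisit items and supports.")
```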

Fidelity assessments should be standard practice in education. From an implementation point of view, any innovation (evidence-based or standard practice) is incomplete without a good measure to detect the presence and strength of the innovation as it is used in education practice (4.b above).

Fidelity assessments should be directly linked to the usable innovation components.  In particular, the essential functions and the Practice Profiles that operationalize those functions provide information to guide the development of fidelity assessment items.  Usable innovations are doable and assessable in practice. 

For example, fidelity of Positive Behavioral Interventions and Supports (PBIS) can be assessed because it is a well-defined innovation. On the other hand, Adequate Yearly Progress standards describe student outcomes but not teacher instruction. In cases like this, Implementation Teams can begin by relying on generic fidelity assessments that are related to evidence-informed teacher instruction practices generally. Even though these measures are not related specifically to an innovation, they can be used frequently (e.g., six times a year for every teacher) to assess the effectiveness of implementation supports for teachers.


Fidelity Assessment and Implementation Drivers

Fidelity assessment is a key outcome of implementation done well.  In Module 2: Implementation Drivers, it sits at the top of the implementation triangle. It is the link between implementation supports, consistent delivery of an innovation, and reliable outcomes for students.

An Implementation Team is accountable for assuring adequate supports for teachers and other practitioners using an innovation. Thus, while fidelity assessments include observations of teacher behavior in an education setting, the results reflect how well the Implementation Team is supporting teachers (e.g., with training and coaching and with efforts to improve administrative supports for teachers using an innovation as intended).

  • If teacher instruction is improving rapidly, the Implementation Team should be congratulated for assuring effective supports for teachers.
  • If teacher instruction is poor, the Implementation Team is accountable for providing more effective supports for teachers.
  • If the Implementation Team is struggling, state and district leaders are accountable for improving the functions and effectiveness of the Team.

For example, fidelity data may show teachers 1-8 are providing high quality instruction while teachers 9-18 are not faring very well. The Implementation Team can analyze the information to figure out why the differences are occurring. Did different trainers train the high and low performing teachers? Do they have different coaches? Did different assessors do one set of assessments versus the other? Perhaps in this example the high fidelity teachers had Coach #1 and the lower fidelity teachers had Coach #2. This clearly points to a coaching problem, so the Implementation Team immediately goes to work on improving Coach #2's coaching skills. On the other hand, if teachers 1-18 were all struggling with particular instructional practices, to one degree or another, then the Implementation Team goes to work improving the training and coaching for all teachers.
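A minimal sketch of that kind of diagnosis appears below; the teacher names, coach assignments, and scores are hypothetical, and a real Implementation Team would also look at trainers, assessors, and other supports.

```python
# Minimal sketch (hypothetical data): comparing mean fidelity by coach to see
# whether low fidelity clusters around a particular coach.
from collections import defaultdict

# (teacher, coach, latest fidelity score 0-100) -- illustrative values only
records = [
    ("Teacher 1", "Coach 1", 88), ("Teacher 2", "Coach 1", 91),
    ("Teacher 3", "Coach 1", 85), ("Teacher 4", "Coach 1", 90),
    ("Teacher 5", "Coach 2", 62), ("Teacher 6", "Coach 2", 58),
    ("Teacher 7", "Coach 2", 65), ("Teacher 8", "Coach 2", 60),
]

scores_by_coach = defaultdict(list)
for teacher, coach, score in records:
    scores_by_coach[coach].append(score)

for coach, scores in sorted(scores_by_coach.items()):
    mean = sum(scores) / len(scores)
    print(f"{coach}: mean fidelity {mean:.1f} across {len(scores)} teachers")

# A large gap between coaches suggests a coaching (implementation) problem;
# uniformly low scores across coaches point to improving training and
# coaching for all teachers.
```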

For leaders in education, fidelity is not just of academic importance. The use of a fidelity measure helps leaders and others discriminate implementation problems from innovation problems and helps guide problem solving to improve outcomes. As shown in Table 2, information about fidelity and outcomes can be linked to possible solutions to improve intended outcomes (Blase, Fixsen, & Phillips, 1984; Fixsen, Blase, Metz, & Naoom, 2014).

Without a fidelity assessment, leaders and Implementation Teams have no idea where to direct their improvement efforts. When good student outcomes are achieved, there is no clear idea of what should be repeated to achieve those outcomes for all students. When poor student outcomes occur, leaders are left wondering what to do to “fix the problem.” Fidelity assessments help leaders make effective and efficient use of scarce resources to improve education outcomes for students.

Table 2: Fidelity scores as a diagnostic tool.



                  High fidelity              Low fidelity
Good outcomes     Celebrate; continue        Re-examine the innovation; modify the fidelity assessment
Poor outcomes     Modify the innovation      Start over


As shown in Table 2, the desired combination is high fidelity use of an effective innovation that produces good outcomes. When high fidelity is linked consistently with good outcomes it is time to celebrate and continue to use the innovation strategies and implementation support strategies with confidence. The second best quadrant is where high fidelity is achieved, but outcomes are poor. This clearly points to an innovation that is being done as intended, but is ineffective. In this case, the innovation needs to be modified or discarded.

The least desirable quadrants are those in the low fidelity column where corrective actions are less clear. Low fidelity in combination with good outcomes points to either a poorly described innovation or a poor measure of fidelity. In either case, it is not clear what is producing the good outcomes. Low fidelity associated with poor outcomes leaves users in a quandary. It may be a good time to start again — to develop or find an effective innovation and develop effective implementation supports.  The most efficient strategy may be to first improve fidelity and then reassess outcomes so that “babies are not thrown out with bathwater.”
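Read as a decision aid, Table 2 can be sketched as a simple lookup. The category labels below come from the table and surrounding text, while the exact wording of the suggested actions is an illustrative assumption.

```python
# Minimal sketch of Table 2 as a lookup from (fidelity, outcomes) to a
# suggested focus; action wording is illustrative, not prescriptive.
ACTIONS = {
    ("high", "good"): "Celebrate; continue the innovation and implementation supports.",
    ("high", "poor"): "Modify or discard the innovation (an innovation problem).",
    ("low", "good"): "Re-examine the innovation and/or modify the fidelity assessment.",
    ("low", "poor"): "Improve fidelity first, then reassess; it may be time to start over.",
}

def suggested_focus(fidelity: str, outcomes: str) -> str:
    """Return the Table 2 guidance for a fidelity/outcome combination."""
    return ACTIONS[(fidelity.lower(), outcomes.lower())]

print(suggested_focus("low", "poor"))
```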

Implementation is in service to using effective innovations to realize intended outcomes.  Implementation Drivers are designed to improve the skill levels of teachers, principals, and staff and produce greater benefits to students. Implementation Drivers “drive” successful implementation of innovations.

Fidelity assessments: Prognosis

A good fidelity assessment requires a demonstration that the measure is highly correlated with student outcomes. This is specified in 4.b of the Usable Innovation Criteria. This means that fidelity scores today predict student outcomes in the future.

If high fidelity today predicts student outcomes in the future, this is good news! This means fidelity measures bring the future into the present. Instead of lamenting poor student outcomes next year when the annual data come out, Implementation Teams can assess fidelity today and provide improved supports for better instruction today. Implementation Teams know how to help improve fidelity today by improving coaching for teachers and helping school administrators support teacher instruction more effectively. Thus, efforts to improve fidelity today help assure improved student outcomes in the future.

Fidelity Assessment and Improvement Cycles

On any school day, one person in every five (20% of the entire population) in the United States of America is in school.  In this massive enterprise, education happens when adult educators interact with students and each other in education settings. 

To be useful to students and functional for teachers, Implementation Teams need to know what to train, what to coach, and what performance to assess to make full and effective use of a selected effective practice.  Implementation Teams need to know WHAT is intended to be done (innovation components or instructional practices) so they can efficiently and effectively develop processes to assure high fidelity use of the innovation now and over time. 

Note that the need to specify WHAT is intended applies to current practices as well as evidence-based or innovative practices.  Current practices may be “standard practices” but WHAT are they?  Can they be done consistently?  Do they produce desired student outcomes when done as intended?  Standard practices must meet the same test if they are to be used, improved, and retained in education.

The PDSA Cycle

The process of establishing a fidelity assessment takes time and focused effort.  To establish fidelity assessments, Implementation Teams make intentional use of the plan, do, study, act (PDSA) cycle.  The benefits of the PDSA cycle in highly interactive environments have been evaluated across many domains including manufacturing, health, and substance abuse treatment.  As an improvement cycle, the PDSA trial-and-learning approach allows Implementation Teams to identify the essential components of the innovation itself.  The PDSA approach can help Implementation Teams evaluate the benefits of innovation components, retain effective components, and discard or de-emphasize non-essential components. 

  • Plan: The “plan” is the innovation or instruction as educators intend it to be used in practice. 
  • Do: The “plan” needs to be operationalized (what we will do and say to enact the plan) so it is doable in practice.  This compels attention to the core innovation components and provides an opportunity to begin to develop a training and coaching process (e.g. here is how to do the plan) and to create a measure of fidelity (e.g. did we “do” the plan as intended). 
  • Study: As a few newly trained educators begin working with students or others in an actual education setting, the budding fidelity measure can be used to interpret the outcomes in the “study” part of the PDSA cycle (e.g. did we do what we intended; did doing what we intended result in desired outcomes). 
  • Act: The Implementation Team uses the experience to help develop a new plan where the essential components are even better defined and operationalized.  In addition, the fidelity assessment is adjusted to reflect more accurately the essential components and the items are modified to make the assessment more practical to conduct in the education setting.
  • Cycle: The PDSA process is repeated until the innovation and the fidelity assessment are specified well enough to meet the Usable Innovation criteria.  At that point, the innovation is ready to be used by multiple educators, the fidelity assessment is deemed practical, and the correlation between the essential components and intended outcomes is high.
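One way to picture the repetition described above is as a loop that continues until the Usable Innovation criteria are met. In the structural sketch below, every function and the stopping rule are hypothetical placeholders standing in for real Implementation Team activities, not a prescribed procedure.

```python
# Minimal structural sketch of the PDSA trial-and-learning loop described above.

def run_pdsa(plan, do, study, act, meets_usable_innovation_criteria, max_cycles=10):
    """Repeat Plan-Do-Study-Act until the innovation and its fidelity
    assessment are specified well enough to meet the Usable Innovation criteria."""
    state = plan()                                   # Plan: the intended innovation
    for cycle in range(1, max_cycles + 1):
        enactment = do(state)                        # Do: operationalize and try it out
        findings = study(enactment)                  # Study: apply the budding fidelity measure
        state = act(state, findings)                 # Act: refine components and fidelity items
        if meets_usable_innovation_criteria(state):
            print(f"Usable Innovation criteria met after {cycle} cycle(s).")
            return state
    print("Criteria not yet met; continue usability testing.")
    return state

# Toy demonstration: each cycle "improves" a single specification score.
final = run_pdsa(
    plan=lambda: 0.2,
    do=lambda s: s,
    study=lambda s: s,
    act=lambda s, f: min(1.0, s + 0.2),
    meets_usable_innovation_criteria=lambda s: s >= 0.8,
)
```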

Implementation Teams may employ the PDSA cycle in a Usability Testing format to arrive at a functional version of an innovation that is effective in practice and can be implemented with fidelity on a useful scale (e.g. Akin et al., 2013; Fixsen et al., 2001; McGrew & Griss, 2005; Wolf et al., 1995).  Once the components of an innovation have been identified, functional analyses can be done to determine empirically the extent to which key components contribute to significant outcomes.  As noted above, the vast majority of standard practices and innovations do not meet the Usable Innovation criteria.  Implementation Teams will need to make use of PDSA improvement cycles and Usability Testing to establish the core innovation components before they can proceed with broader scale implementation.

Activity 7.2
Capstone: Developing a Fidelity Assessment

Has your team identified the core components of your intervention or innovation? Has your team clearly defined or operationalized them? If YES, use the Fidelity Assessment Brainstorming Worksheet (included) to complete this activity individually or as a team.

Download PDF

Module 7 Summary

  1. A fidelity assessment is defined as indicators of doing what is intended.  This definition requires:
    • Identifying what is intended, and
    • Identifying the extent to which a practitioner has done that
  2. Fidelity assessments concern the effectiveness of supports for teachers who are implementing identified programs or practices
  3. Fidelity assessment helps ensure that success is repeatable
  4. Three critical features of fidelity assessments are:
    • Context – pre-requisite conditions that need to be in place regarding setting, qualifications, and preparation
    • Content – extent to which the required core content is used, referenced, monitored, or accessed by the teacher in his or her work and/or activity
    • Competence – extent to which the core content and competencies are skillfully modeled, relevant feedback is provided, and performance is sensitively reviewed
  5.  To produce useful information for improving practices and outcomes, fidelity assessments should be
    • Frequent
    • Relevant
    • Actionable


Capstone Quiz



Congratulations, you finished Module 7: Fidelity Assessment! 
We invite you to assess your learning via the Capstone Quiz.

Your virtual coach Asha guides you through a quick set of questions
[approximate time: 5-10 minutes].


The Active Implementation Hub, AI Modules and AI Lessons are an initiative of the State Implementation & Scaling-up of Evidence-based Practices Center (SISEP) and
the National Implementation Research Network (NIRN) located at
The University of North Carolina at Chapel Hill's FPG Child Development Institute.

Resources and References

Akin, B. A., Bryson, S. A., Testa, M. F., Blase, K. A., McDonald, T., & Melz, H. (2013). Usability testing, initial implementation and formative evaluation of an evidence-based intervention: Lessons from a demonstration project to reduce long-term foster care. Evaluation and Program Planning, 41, 19-30.

Blase, K. A., Fixsen, D. L., & Phillips, E. L. (1984). Residential treatment for troubled children: Developing service delivery systems. In S. C. Paine, G. T. Bellamy & B. Wilcox (Eds.), Human services that work: From innovation to standard practice (pp. 149-165). Baltimore, MD: Paul H. Brookes Publishing.

Carrizales-Engelmann, D., Sadler, C., Tedesco, M., Horner, R., & Fixsen, D. (2011). “Scaleworthy” criteria for selecting innovations in education. Salem: Oregon Department of Education.

Crosse, S., Williams, B., Hagen, C. A., Harmon, M., Ristow, L., DiGaetano, R., . . . Derzon, J. H. (2011). Prevalence and implementation fidelity of research-based prevention programs in public schools: Final report. Washington, DC: U.S. Department of Education.

Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18(1), 23-45.

Dobson, L., & Cook, T. (1980). Avoiding Type III error in program evaluation: results from a field experiment. Evaluation and Program Planning, 3, 269 - 276.

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350. doi: 10.1007/s10464-008-9165-0

Embry, D. D., & Biglan, A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical Child and Family Psychology Review, 11(3), 75-113. doi: 10.1007/s10567-008-0036-x

Fixsen, D. L., Blase, K. A., Metz, A. J., & Naoom, S. F. (2014). Producing high levels of treatment integrity in practice: A focus on preparing practitioners. In L. M. Hagermoser Sanetti & T. Kratochwill (Eds.), Treatment Integrity: A foundation for evidence-based practice in applied psychology (pp. 185-201). Washington, DC: American Psychological Association Press (Division 16).

Fixsen, D. L., Blase, K. A., Timbers, G. D., & Wolf, M. M. (2001). In search of program implementation: 792 replications of the Teaching-Family Model. In G. A. Bernfeld, D. P. Farrington & A. W. Leschied (Eds.), Offender rehabilitation in practice: Implementing and evaluating effective programs (pp. 149-166). London: Wiley.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation Research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).

Fixsen, D., Blase, K., Metz, A., & Van Dyke, M. (2013). Statewide implementation of evidence-based programs. Exceptional Children (Special Issue), 79(2), 213-230.

Hattie, J. A. C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.

Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.

Kealey, K. A., Peterson, A. V., Jr., Gaul, M. A., & Dinh, K. T. (2000). Teacher training as a behavior change process: Principles and results from a longitudinal study. Health Education & Behavior, 27(1), 64-81.

Macan, T. (2009). The employment interview: A review of current studies and directions for future research. Human Resource Management Review, 19(3), 203-218.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79(4), 599-616.

Naleppa, M. J., & Cagle, J. G. (2010). Treatment fidelity in social work intervention research: A review of published studies. Research on Social Work Practice, 20(6), 674-681. doi: 10.1177/1049731509352088

Prochaska, J. M., Prochaska, J. O., & Levesque, D. A. (2001). A transtheoretical approach to changing organizations. Administration and Policy in Mental Health and Mental Health Services Research, 28(4), 247-261.

Vernez, G., Karam, R., Mariano, L. T., & DeMartini, C. (2006). Evaluating comprehensive school reform models at scale: Focus on implementation. Santa Monica, CA: RAND Corporation.

Wolf, M. M., Kirigin, K. A., Fixsen, D. L., Blase, K. A., & Braukmann, C. J. (1995). The Teaching-Family Model: A case study in data-based program development and refinement (and dragon wrestling). Journal of Organizational Behavior Management, 15, 11-68.