Research and Rationales

Accountability and Administrative Reviews vs. Fidelity Assessments
 
Accountability and administrative reviews concern teacher employment, salary increases, and access to state and federal education funding; they are evaluations of teachers. In contrast, fidelity assessments concern the effectiveness of implementation supports for teachers who are expected to use identified innovations and other instructional practices in their interactions with students.

Given that Implementation Teams are accountable for effectively supporting teacher instruction, fidelity assessments are evaluations of Implementation Teams. This difference has a significant impact on how fidelity assessment is explained and introduced to teachers and school administrators, and on how the information is collected and used.

The lack of fidelity assessment is a problem in human services generally, including education. Dane & Schneider (1998), Durlak & DuPre (2008), and others summarized reviews of over 1,200 outcome studies. They found that investigators assessed the presence or strength (fidelity) of the independent variable (the innovation) in about 20% of the studies, and only about 5% of those studies used the fidelity assessments in their analyses of the outcome data. Without fidelity information about the presence and strength of the practices being studied, it is difficult to know what the innovation is or what produced the outcomes in a study (Dobson & Cook, 1980; Naleppa & Cagle, 2010). For outcome studies showing positive results, the lack of a definition of the innovation and the lack of information about fidelity mean that success is not repeatable. The only thing worse than trying an innovation and failing is succeeding and not knowing what was done to produce the success. Achieving good outcomes once is laudable; achieving good outcomes again and again is educationally significant.

For outcome studies showing a lack of positive results, the absence of fidelity data makes improvement difficult and confusing. Did poor outcomes result from poor use of the innovation (an implementation problem), or did the innovation itself need modification (an innovation problem)? Without fidelity data, we cannot separate the two, and our efforts to improve will be inefficient and probably ineffective.

Crosse and colleagues (2011) surveyed a nationally representative sample of 2,500 public school districts and 5,847 public schools. In response to the survey, principals reported using an average of 9 innovations per school. Crosse and colleagues investigated the innovations and found that 7.8% had evidence to support their effectiveness, and that only 3.5% of the innovations met minimum standards for fidelity when used in schools. Without an assessment of fidelity, educators do not know what the adults are doing to produce good outcomes or poor outcomes, and they have no systematic way to detect effective innovations or to improve innovations as they evolve in education settings.

Based on these data, the best guess is that about 1% of the schools in the United States use fidelity assessments on a regular basis. Poor practices in current use can go undetected, and resources are invested in instruction and innovation strategies that may be effective on paper but are not actually being used in practice. Names and claims are poor substitutes for actually using effective practices fully and competently in daily interactions with students to produce educationally significant results.

The critical features of existing fidelity assessments have been summarized (Fixsen et al., 2005; Sanetti & Kratochwill, 2014; Schoenwald & Garland, 2013). These assessments can be categorized as shown in Table 1, with examples provided in each cell.
 

Table 1: A method for categorizing fidelity assessment items, organized by fidelity dimension (Context, Content, Competence) and type of assessment (Direct Observation, Record Review, Ask Others).

Context
  Direct Observation: Organization of the classroom and student groups
  Record Review: A lesson plan is available for review
  Ask Others: Interview the Master Teacher regarding the teacher's planning and preparation activities

Content
  Direct Observation: The lesson plan is followed during the class period
  Record Review: The lesson plan contains key elements of an effective approach to teaching the subject
  Ask Others: Interview the Master Teacher regarding use of the agreed-upon curriculum; Master Teacher ratings of the reliable inclusion of effective instructional approaches in lesson plans

Competence
  Direct Observation: Observe the teacher's engagement of students with information and questions, and the teacher's use of prompt and accurate feedback
  Record Review: Review coaching notes to see progress on instructional skill development
  Ask Others: Interview students regarding the instruction methods used by the teacher

Context and content are important aspects of fidelity assessment. The critical dimension, however, is direct observation of competence as a practitioner (teacher, therapist) interacts with others (student, patient) in the service delivery setting (classroom, clinic).