Module 5: Improvement Cycles

Welcome to the Active Implementation Module Series. In this module you will learn about improvement cycles and how to begin applying them in your setting.

Learning Objectives

After this module, you should be able to:


Module Content Structure

This module’s content is structured around three categories:

In this module, we will be discussing the key functions related to Improvement Cycles, and examples of their use in education.

Module 5 Table of Contents


Change is great!  You go first!

When changes are planned and initiated, we often launch into the work in a big way with lots of moving parts, communication challenges, confusion, and ‘push back’ as the result.  In addition, we often try to layer the work onto existing structures, with people picking up extra responsibilities.  Trying to get the current system to simply go faster, or layering on responsibilities, rarely yields results. 

Such approaches seldom work well because the current system is designed to achieve the current results (on purpose or unwittingly).  This means that innovations are not likely to fare well in the current system.  Typically, the current system exerts pressure on innovations to change to fit the status quo.  It is rare for an existing system to automatically change to support more effective education practices and frameworks.  An innovation that has been altered may be a better fit with the current system, but it is likely that the evidence-based practice (EBP) or effective innovation (EII) will lose its effectiveness as a result of the adaptation process.

At the beginning of a change process, leaders and Implementation Teams cannot know everything that will be required to successfully use and sustain the innovation. However, with each step forward, the next step becomes clearer.  Steps forward result in discoveries about what’s working and what’s not.  And some discoveries will be surprising!  The challenge is to engage in a ‘trial and learning’ process of rapid improvement, not a ‘trial and error’ process characterized by random acts of innovation. This module reviews the forms of PDSA improvement cycles that can guide these more systematic improvement processes.

Change and challenge go hand in hand:

“We can’t get our arms around all the people and processes that need to change – this is just too big.”

“Hey, the students show up every day.  It’s not like we can stop education, re-tool and then reopen the school.  We have to make changes as the work of educating our children goes on.”

“What if we get it wrong!  We’d better have all the pieces well thought out and totally in place before we move forward.”

Commitment to Organized Improvement Processes

People involved in change initiatives, particularly leaders and Implementation Teams, need to become comfortable and effective at learning as they go.  No matter how well planned or carefully monitored, change efforts typically do not go well at first.  Instead of floundering, change efforts can be guided by improvement cycles.  With improvement cycles, each new attempt to solve a problem or improve a practice adds value and information.  Each attempt to use an innovation leads to new learning, whether or not the outcome is completely successful.  Applying and embedding the new learning leads to improved processes, better outcomes, and increased competence and confidence for managing change.  

With clear, shared goals of ensuring that students benefit from effective practices, everyone is motivated to CHANGE to support the use of the effective practice as intended.  That means that organizational and systems change will be required to support new instructional practices, content, and frameworks.

New or repurposed structures, processes, positions, and functions will be needed at multiple levels.   Often these functional changes are possible using current resources and funding to repurpose the positions and job functions to support new ways of doing things. 

Get Started, Get Better!

Underlying the improvement cycles described in this module is the Plan, Do, Study, Act Cycle, or PDSA Cycle.  PDSA is a process derived from industrial quality control research.  Deming (1986) built on an earlier process by Shewhart (1931), and then Deming and Juran used the process extensively in post-war Japan to bring their devastated manufacturing and economic system to the forefront of production capacity and quality in a relatively short time (DeFeo & Barnard, 2005). The process is now used widely in human services (Varkey et al., 2007; Daniels & Sandler, 2008; IHI, 2010). The PDSA Cycle is used for making small incremental improvements as well as for significant ‘breakthroughs’ in performance.  The process can be used to make a small test of change, help define and refine new innovations and ways of work, be applied to scale-up efforts, and be used to better align policies and guidelines to support new ways of work. 

Let’s take a brief look at each phase of the PDSA Cycle. 

When We Know Better We Can Do Better

Improvement Cycles help us identify challenges, solve problems, improve practices, and create hospitable environments for new ways of work. In this module on Improvement Cycles we discuss different uses of the PDSA Cycle.  Improvement Cycles are functional for improving fidelity and outcomes, and they help increase our comfort with the fact that we don’t need to know everything to get started, because we will get better through the various PDSA processes.

Three Types of Improvement Cycles

The three types of PDSA Improvement Cycles are as follows:

  • Rapid-Cycle Problem-Solving
  • Usability Testing
  • Practice-Policy Communication Cycles

The Practice-Policy Communication Cycle increases feedback from the classroom to the administration at school, district, and state levels.  Implementation Teams, via PDSA Cycles, assure work at the practice level (e.g. feedback from teachers) informs administrative support and resource allocation.  And it gives administrators the information they need to actively support the use of new practices as intended. (For more information, see Module 3: Implementation Teams.)

Everyday Examples

Many of us engage informally in these cycles as we think through and test out hypotheses or go about improving our lives.  How many of you have tried to get to know or inform a new leader, a new family, a new partner, or a new teacher?  We start by getting clear about what we want to do with whom (e.g. information to provide, relationship to develop).  Then we make a plan (PLAN) to get to know them better or to provide information; we engage in behavior as planned (DO) and we evaluate (STUDY) how effective our behavior was in communicating, getting to know, and helping to inform that person.  Then we make more plans based on how well our first engagement with them went (ACT). Or, we PLAN to ‘tinker’ with a favorite recipe and add, take away, or change the amount of certain ingredients.  We DO the new recipe and then STUDY the results by asking our family and friends to eat it and observing or asking them how they liked it.  Then we ACT and write down the new recipe that everyone loved, or we go back to the original recipe or try a new variation.

Transformation Zones

All three types of Improvement Cycles work well in Transformation Zones designed to help transition the system “as is” into the system “to be.”  The transition process begins with a manageable cohort that represents a small ‘slice’ of the system (e.g. two rural, two suburban, two high needs, two urban districts).  The diversity, combined with the manageable scope, helps to uncover challenges and solve problems with innovations and implementation supports before moving ahead with scale-up or expanded use.  

The next four topics in this module take you more deeply into the PDSA work by discussing the three types of PDSA Improvement Cycles, Transformation Zones, and improvement tools and processes you can use.


Topic 1: Rapid-Cycle Problem-Solving

We will go into a bit more detail related to the PDSA process as we discuss Rapid-Cycle Problem-Solving.  The subsequent sections will rely on the processes described here and will focus on what’s different about them and the tools that will be helpful. 


Rapid-cycle problem-solving is one type of improvement cycle that uses the Plan, Do, Study, Act process.  It is typically used to solve emergent or urgent problems that are impacting the roll-out or use of the innovation or to make quick, incremental improvements. 


Rapid-cycle problem-solving helps us get comfortable with ‘enough’ planning and avoids having the perfect become the enemy of the good.  In short, no roll out of an innovation will be perfect and not all problems can be anticipated.  We have to get started and then get better.  That’s where rapid-cycle problem-solving comes into play.  The inevitable challenges and problems associated with using a new set of practices or a new program can be quickly detected, defined, and addressed.  Prompt attention and the use of a Plan, Do, Study, Act process helps to avoid letting problems grow or abandoning the new way of work and retreating to familiar but less effective approaches. 

Key Functions and Processes

“Let Rosa solve this, she’s been in on it from the beginning.”

“Somebody needs to fix this.”

“This is so frustrating; they just dump things on us and expect us to work miracles.”

“Well, that’s not the problem anyway.  They are just resisting the change.”

Problems are more likely to be detected, defined, addressed and resolved (or re-solved) when….


So let’s learn more about rapid-cycle problem-solving by answering these questions:

Who Engages in Rapid-Cycle Problem-Solving?

Handing challenges off to a single individual or inadvertently letting a challenge linger is not likely to be helpful.  Most challenges that benefit from rapid-cycle problem-solving require quickly pulling together the right team to engage in the PDSA process.  An Implementation Team is accountable for forming this PDSA team and supporting their work. The first step in creating a Rapid Cycle Problem Solving Team is to identify a team lead who will take responsibility for pulling together the team, organizing the process, and seeing it through to a “successful” conclusion.   

The team lead needs to gather the ‘right people’ to solve the particular problem under consideration.  These are people who have a stake in the outcome, who have expertise and information relevant to the problem at hand, and who have authority to make necessary changes to solve the problem or access to decision-makers.  The problem-solving team might be a selected subset of the Implementation Team and it might involve inviting some additional “right people” to join the rapid-cycle problem-solving team.  Those invitations occur because there may be people who have the knowledge, authority, or linkages that are needed to solve the problem at hand.  For example if there are resource issues then people with the authority to allocate or reallocate resources might need to be on the rapid-cycle problem-solving team. 

The team formed for rapid-cycle problem-solving often is an ad hoc group with a time-limited role focused on analyzing the problem, developing a plan, executing the plan, using data to determine whether the problem has been solved, and, if called for, repeating the process and then ‘embedding’ the solution.

Using the Plan, Do, Study, Act process allows the team to maintain focus, engage in productive problem-solving, and understand when their work is done so they can disband.  As a result, this can be a very efficient method to solve a clearly defined problem or make an incremental improvement.

A problem-oriented example might be generating more timely reports for monitoring the progress of students who are engaged in receiving a new math curriculum and new instructional practices.

An incremental improvement example might be improving the integration of meaningful parent input into the selection of a school-wide anti-bullying initiative. 

When and How Do Problems Get Detected and Reported?

“This is so frustrating; they just dump things on us and expect us to work miracles.”

The important message here is that undetected and unreported problems cannot be solved….but they will fester.

Thinking back to Module 4: Implementation Stages, you will recall that Installation and Initial Implementation can be particularly challenging and bumpy because the new ways of engaging with each other and with students are bumping up against the status quo.  However, problems amenable to rapid-cycle problem-solving can emerge during any stage.  

The Implementation Team needs to establish communication protocols for detecting and reporting challenges.  Some questions the Implementation Team may want to answer are below.  For more information see Handout 8: Communications Protocols Worksheet.

Confidence and persistence of implementers improve when we have simple, clear reporting processes and messages that normalize the problem detection and resolution process (e.g. designated email address for reporting problems, review meetings after specified number of days of implementation, email access to named implementation team members).  Here is an example of a reporting form from the MiBLSi project in Michigan: Handout 16: PDSA Worksheet (Michigan Integrated Behavior and Learning Support Initiative Program)

A balance also needs to be achieved by asking people to report what is going well!  The only thing worse than failing and not knowing why you failed is succeeding and not being able to succeed again!

What Information Is Needed for a Rapid-Cycle PDSA Cycle?

“Well, that’s not the problem anyway.  They are just resisting the change.”

Rapid-cycle problem-solving requires clarity about the problem at hand or the area requiring improvement.  This clarity begins during the PLAN phase of the PDSA Cycle.

Rapid-Cycle Teams can be formed to address ongoing improvement efforts through the analysis of fidelity and outcome data and the development and implementation of long-term and systemic solutions.

If an Implementation Team has been formed to guide a change process, the practice improvement function is built into their on-going responsibilities. The Implementation Team may be engaged directly in conducting the rapid-cycle plan or they may create the conditions and supports for the work to occur.  Regardless, the Implementation Team remains accountable for improvement occurring.


Example: Rapid Cycle Problem Solving


Problem Definition: Only 10% of expected role play activities occurred (a fidelity issue) as teachers used a social-emotional intervention in their classrooms to prevent bullying.

Goal: Improve the frequency of teachers’ use of role play during an anti-bullying intervention (fidelity), so that at least 80% of role play events occur as scheduled in the lessons.

Hypothesis: Teachers are not skilled at introducing role plays and are concerned about addressing challenges students present.

Plan: Have experienced teachers practice with new teachers and provide classroom feedback and support on how to introduce role play and handle challenges.

Do: Experienced teachers provide an additional one-hour session for new teachers to practice introducing role play and handling challenges, receiving feedback, and re-practicing.  Experienced teachers visit classrooms at least once to observe, provide support, and encourage implementation.

Study: Measure the percent of expected role play events that occurred in classrooms over a three-week period following the practice sessions.

Act: Determine whether the desired outcome was achieved (80% or more) and make a decision about the right next steps:

  • Goal met – embed the solution into training and coaching routines.
  • Goal not achieved – make a new plan with teacher input and try again (Cycle).
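The Study and Act phases of this example come down to a simple decision rule. As a minimal illustration, the rule can be sketched in a few lines of code; the function names and data below are hypothetical, not taken from an actual project:

```python
# Illustrative sketch of the Study/Act decision rule in the example above.
# All names and numbers are hypothetical.

def fidelity_rate(observed_events, expected_events):
    """Study: percent of expected role play events that actually occurred."""
    return 100.0 * observed_events / expected_events

def next_step(rate, goal=80.0):
    """Act: embed the solution if the goal is met, otherwise cycle again."""
    if rate >= goal:
        return "embed solution into training and coaching routines"
    return "make a new plan with teacher input and try again"

rate = fidelity_rate(observed_events=42, expected_events=50)  # 84.0
print(f"{rate:.0f}% of expected role plays occurred -> {next_step(rate)}")
```

The point of the sketch is that the Act decision is pre-committed during the Plan phase: the team agrees on the 80% threshold before the Do phase, so the data, not opinions, determine the next step.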


Activities 5.1 and 5.2

Activity 5.1
Getting "Ready for Change"

“Readiness” is defined as a developmental point at which a person, organization, or system has the capacity and willingness to engage in a particular activity. Creating readiness for change is a critical component of both initiating and scaling up the use of evidence‐based practices and other innovations in education. Use this activity to explore aspects of change with your Team.

Download PDF



Activity 5.2
PDSA - Who am I?

Implementation Teams use PDSA Cycles to help them make meaningful changes, alleviate
barriers, and achieve expected outcomes. This activity is designed to help you understand your PDSA strengths, recognize strengths in others, and identify potential team gaps.

Download PDF


Topic 2: Usability Testing


“Usability testing is used to test the feasibility and impact of a new way of work prior to rolling it out more broadly.”

Usability testing consists of a planned series of tests of an innovation, components of an innovation, or implementation processes.  Usability testing makes use of a series of PDSA cycles to refine and improve the innovation elements or the implementation processes.  It is used proactively to test the feasibility and impact of a new way of work prior to rolling out the innovation or implementation processes more broadly and/or prior to conducting an evaluation of the innovation.


Programs and practices are more likely to be adopted and sustained when they can be implemented as intended in real world settings – our classrooms and schools. Educators deserve supports to implement programs and practices that are “classroom ready”.  But how do we know if programs, practices, and educators are “ready”?

Usability testing is a method Implementation Teams use to test an innovation or the implementation methods with a larger sample under more typical conditions, as opposed to research or ‘pilot’ conditions characterized by special resources and conditions.

Usability testing originally was developed by computer scientists as a very efficient way to develop, test, and refine new software programs or websites, both very complex endeavors. The idea is to use the PDSA processes with small groups of 4 or 5 typical users. Computer scientists found that the first group would identify most of the errors in the first version of the program. Once the errors were corrected, the next group would find different and deeper errors. By repeating this process 4 or 5 times (involving about 20 typical users in total), the program would be nearly error free and ready for general use. Researchers have found that the end user experience is improved much more by 4 tests with 5 users each than by a single test with 20 users.

Key Functions and Processes

“How do I fit this into the weekly schedule?”

“Where are the data reports for monitoring progress? I entered my data on time.”

“Just fake it for a while, this too shall pass.”

“This coaching process is a great idea, but is it practical?”

It takes time and expertise to conduct a series of valid usability tests related to either the core components of an innovation or key implementation processes.  Below are the steps to consider.  Many of these should be familiar to you because they are built on the PDSA process detailed previously in Topic 1: Rapid-Cycle Problem-Solving.  

Keep in mind that you are creating improved processes not perfect processes. There will always be some variation around the ‘ideal’ – we can’t let the perfect be the enemy of the good.


1. Choose ‘worthy’ elements to test. 

Worthy elements are instructional, innovation, setting, or implementation processes that increase the likelihood of getting good outcomes.  They might be any of the following that the Implementation Team thinks will be challenging to do well:

  • Core components of instruction or an innovation that are deemed or demonstrated to be necessary for good outcomes (e.g., 90 minutes of instruction; use of evidence-based literacy practices)
  • Core contextual components necessary to get good outcomes (e.g., amount of time spent in academic instruction)
  • Implementation processes that are necessary for good fidelity, that is, processes that help educators and other staff change their instructional practices to support the innovation (e.g., coaching with observation and feedback occurs at least twice a month; fidelity assessments are reported monthly)

2. Decide on the dimensions of the “test” by considering the following questions:

  • What criteria will be used to identify the first group of ‘testers’ and subsequent groups? “The four elementary schools that have at least three Grade 3 classrooms.  Four Grade 3 teachers, each with at least one year of teaching experience, in each of four different schools will participate.  This will give us enough teachers to run the test 3 times.”
  • What is the scope of the test? “The test of the feasibility of weekly progress monitoring will occur over the course of two weeks of instruction using the new math curriculum.”
  • What data will be reported to whom, on what schedule? “At the end of each week the teachers involved will complete the six questions related to progress monitoring and will email the form to the District Curriculum Specialist. The data summary across the two testing weeks will be produced by the District Curriculum Specialist and emailed to the Implementation Team usability subgroup.”
  • What are the criteria for a successful test? “If, on average, 75% or more of the possible ‘yes’ items are marked and there are no significant negative comments, the process is ready to roll out to the next cohort.  If the second cohort has similar results, the process will be used in all the Grade 3 classrooms.  Below-criteria scores result in redeveloping and retesting the process.”
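A pre-specified success criterion like the one quoted above can be expressed as a simple check. The following sketch assumes a form with countable ‘yes’ items and a list of significant negative comments; the function name and data are hypothetical:

```python
# Illustrative check of the usability-test success criterion described above.
# The field names and numbers are hypothetical.

def cohort_passes(yes_items, possible_items, negative_comments):
    """A cohort passes if 75% or more of the possible 'yes' items are
    marked and there are no significant negative comments."""
    return (yes_items / possible_items >= 0.75) and not negative_comments

# Two cohorts of Grade 3 teachers complete the six-question form.
cohort1 = cohort_passes(yes_items=20, possible_items=24, negative_comments=[])
cohort2 = cohort_passes(yes_items=19, possible_items=24, negative_comments=[])

if cohort1 and cohort2:
    print("Roll out to all Grade 3 classrooms")
else:
    print("Redevelop and retest the process")
```

Writing the criterion down this explicitly, before the test begins, is what keeps the Study phase honest: the team cannot quietly lower the bar after seeing the results.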


3. Engage in just enough preparation so that the participants can get started.

The goal is to build successful processes, not to develop an all-encompassing, perfect process.  “Each participating teacher will receive a two-page handout and will have access to a designated coach who can answer questions or visit the classroom once.”
4. Keep testing improved processes until the data indicate that most of the “bugs” have been found and fixed and the success criteria have been met.
5. Install the improved processes by considering which of the Implementation Drivers will be used to successfully scale-up the innovation, instructional, or implementation processes that were tested.


In summary, usability testing is a variation of the PDSA improvement cycle process.  It requires tests of ‘worthy’ processes using repeated PDSA cycles by small cohorts of participants. The goal is to work out the challenges and improve the processes before more widely implementing the innovation, instructional practice, or implementation process.

Topic 3: Practice-Policy Feedback Loops


Often PDSA cycles are carried out at the practice level.  However, Practice-Policy Feedback Loops are carried out on a larger scale in a more complex environment.  This process occurs less frequently and at a slower pace than rapid-cycle problem-solving and usability testing. 

Practice-Policy Feedback Loops are PDSA cycles designed to provide organizational leaders and policy makers with information about implementation barriers and successes so that a more aligned system can be developed. Feedback from the practice level (Practice-Informed Policy) engages and informs organizational leaders so that they can ensure that policy, procedures, resources, etc. enable innovative practices to occur in classrooms, schools, and districts (Policy-Enabled Practice) as intended.

Critical to any effort to coordinate the implementation of a new practice, program, or policy is the need to intervene actively, at multiple levels of implementation, to help increase the likelihood that meta-contingencies such as funding, licensing, referral mechanisms, policies, regulations, and reporting requirements are aligned to support the new way of work.

This graphic illustrates the PDSA cycle in the context of a Practice-Policy Communication Cycle.  Policymakers often execute plans in the form of laws, guidelines, regulations and funding opportunities.   At some point “doing” the policy impacts the practice (e.g., at the teacher, school, or district level).  That’s often the end of the PDSA cycle. We Plan-Do and Plan-Do without feedback about how the policy is impacting practice.  The Study-Act Feedback arrow represents bi-directional communication; the direct feedback of information and data to inform policymakers about policy impact on practice.  Communication Cycles enable policies, structures, procedures and practices to become better aligned to support effective educational programs and practices.



Using policy to mandate change and provide incentives works best when:

“Haven’t we always used policy to promote and support change?”

However, when it comes to adopting, effectively using, and scaling evidence-based innovations and instructional practices, we typically need to support educators in gaining confidence and developing new competencies.  Certainly policies should be passed and incentives provided that ‘enable’ the new practice to be used as intended.  But such policies and incentives are seldom sufficient for making transformative or systemic change that results in academically or socially significant outcomes. 

For example, Federal and State governments make new policies designed to improve practices every year, so policy-driven change is common across all States. What distinguishes successful system change efforts from the many failures that occur in education, health, and human services is the FEEDBACK from the practice level to inform the policy makers of the enabling or inhibiting aspects of the policy (right side of the graphic). 

And, not surprisingly, priorities related to using evidence-based innovations at the district, school, and classroom level do not always neatly align with the latest federal requirements, state statutes or school board mandates.  The system is a complex one with many adaptive challenges.  This means that there are many unintended or unanticipated consequences related to the adoption of any innovation. 

A process to ensure that ‘policy enables practice’ and that ‘practice informs policy’ can help improve our chances to make change and achieve outcomes.  Enabling policies set the stage for implementation, reduce perceived risk, and promote the new ways of work.  And when you add the opportunity for the practice level to inform policy about the impact, there is a much greater likelihood that we will create ‘hospitable’ environments over time.  Recurring feedback loops, in which policy enables practice and the practice level informs policy, will create conditions that support, rather than hinder, the use of evidence-based practices.     

Key Functions and Processes

The System ‘As Is’

“What were they thinking when they mandated this?”

“These higher standards and incentives should get things rolling.”

“It was tough getting these regulations approved.  I wonder what support people will need to implement well?”

In most systems, there are no formal mechanisms for the practice level to inform the policy level.  Instead there typically are layers of managers and administrators between those implementing the practice and the ‘policy’ makers.  This ‘layering’ makes sense for solving the right problems at the right level.  However, some problems need to be ‘lifted up’ to the next level for resolution. 

Without a known and transparent process for communicating challenges to the right level, the layers serve to buffer organizational leaders and policymakers from hearing about or experiencing the challenges and unintended consequences of the new policy, guidelines, incentives, or reporting requirements.  One-way communication also prevents understanding of other variables that may be preventing implementation from occurring as intended.


The following processes can be helpful when creating this policy-practice PDSA loop.

Get Ready - Exploration

Make no mistake about it, creating and productively using two-way communication channels between policy levels and practice levels is a paradigm shift.  You are challenging the status quo.  This means that time is required for discussion, buy-in and creating the process.  The Implementation Team can help navigate among levels and gain consensus and agreement to develop, try out, and improve the Policy – Practice Feedback Process.  Here are some questions to guide your work.

Do we have our rationales ready as we work to set up processes?  And are we able to tailor them to the people with whom we are communicating?

Do we have agreement between or among the ‘levels’ to productively participate in this type of communication?  For example…

  • Will principals or school leadership teams agree to ‘receive’ information from the classroom teachers in an organized way and on a regular basis?
  • Will the District Implementation Team agree to hear from building leadership and teachers directly about how implementation is progressing?  
  • Will everyone involved agree to celebrate the process of communication regardless of whether a challenge or strength is being highlighted (e.g. “Thank you so much for bringing this to our attention.”)?

Get Set - Installation

Do we have a transparent, shared process for communication?
Do we have agreement that issues will be addressed?

  • Will we agree to address the issue in a timely manner and/or communicate with those who can address it in a timely manner?
  • Will we get back to the originators of the information to let them know what’s happening?
  • Are we committed to understanding how the issues are impacting successful use of the innovation and persisting in finding solutions?

Do we have a process for orienting the participating groups to the forms, frequency, processes?  Do we have a way to get feedback on how it is working? 

Here is a link to an example of a linking communication protocol that may be helpful:
Handout 8: Communication Protocols Worksheet

Go – Initial Implementation

Engaging in PDSA work as the process is being used will result in an increasingly productive, timely, and functional process.  It may seem obvious, but the first few uses of the process should be treated as a usability test.  Or there may be emergent issues that stop the process in its tracks and call for rapid-cycle problem-solving.  The Implementation Team needs to be prepared to support the parties involved in engaging in what may be perceived as ‘risky behavior’.

Keep Going – Full Implementation

The end goal is to have the communication processes between and among levels routinized, transparent, reinforced, and functional.  It will take time and energy to get there.  The rounds of PDSA work will help get you there.  And an annual review of how well the processes are working also will help adjust the processes over time.


Other Processes

There are two other processes that can be used together or separately to help improve communication and feedback between and among levels.  These are linked-in agenda items and linked team membership.

Linked–In Agenda Items

Linked-in agenda items refer to beginning and ending each meeting with agenda items that pose questions such as the following:

At the beginning of the agenda, the chair asks:

At the end of the agenda, the chair asks:

Having these questions posed by the chair at the beginning and the end of the meeting ensures that the feedback loops are on the agenda. 

Linked Team Membership

Another logical way to encourage linking communication and policy-practice feedback loops is by ensuring that membership on each team (e.g. building, district, state), to the extent practicable, includes designated representatives from other levels.  Building teams could have a member of the district office who attends regularly or who serves as the point of communication.  District teams can and should have representation from building level staff and regular communication with relevant regional entities.  Embedding other levels in a single team helps facilitate communication. 


To conclude, practice-policy feedback loops are an example of an improvement cycle process.  Practice-policy feedback loops are established to ensure that barriers to effective practice are brought to the attention of policy makers and to assist in the development of policy-enabled practices and practice-informed policies.  Improvement cycles share a common framework, the PDSA cycle, and can be used in various ways to make the adjustments to the system needed to support effective innovations.

Activity 5.3: Linking Communication Protocols


By “linking” communication protocols, organizations form a practice-policy communication cycle.  These feedback processes provide supportive policy, funding, and operational environments for new initiatives, as well as systems changes.


Topic 4: Transformation Zones

A Transformation Zone is a “vertical slice” of the education or service system (from the classroom to the Capitol):
  • The “slice” is small enough to be manageable
  • The “slice” is large enough to represent key aspects of the system (urban, rural, high needs, frontier, diverse communities)
  • The “slice” must be large enough to “disturb the system” so that a “ghost” system or “work arounds” won’t be feasible
  • The intention is to develop the systems and infrastructure that will be needed for successful implementation, sustainability, and scale-up


A Transformation Zone represents a vertical slice of the system from the practice level to the policy level (e.g. from the classroom to the Capitol). The entities (e.g. districts, schools, classrooms) are representative of the larger system.  The slice is small enough to be manageable but large enough to be representative of the system as a whole (e.g. urban, suburban, rural, frontier, high needs, etc.). The vertical slice represents the system as it functions today and the implementation sites within the zone serve as the first cohort to participate in the change processes necessary to become the system of the future. All three types of improvement cycles can be useful in the Transformation Zone as you ‘change on purpose’ from the system as is to the system you need to host the new ways of work.


Pilots and initiatives come and go.  Islands of excellence rise and sink. The immediate results may be excellent but the end results are unsustainable pockets of innovation.  Efforts to ‘train everyone’ result in little lasting change.  Work in the Transformation Zone is designed to avoid or address such challenges. 

Demonstrations or ‘pilots’ are a place to start with an innovation.  These first few tests of the “good idea” are an important start to the process.  We have to be able to do it once to be able to do it many times.  However, the first test of the good idea won’t lead to more systemic change.  In most cases, successful demonstrations have not been staged with replication or sustainability in mind. Often extraordinary individuals have implemented their desired change by developing ‘work arounds’ or calling in favors to combat systemic problems.  A ‘ghost system’ develops to support these one-time heroic efforts.

The Transformation Zone work is designed not only to improve instructional and innovation practices; it is also a purposeful approach to developing a sustainable, replicable, and effective implementation infrastructure.  By making changes in a defined Transformation Zone, leaders have the opportunity to develop the system as they want it to be: the system as it will function in the future.  The Transformation Zone provides the opportunity to align system components to support the new ways of work. 

In addition, the Transformation Zone provides the opportunity to develop capacity and understand the implementation infrastructure needed to support the selection, training, coaching, and fidelity assessment of individuals who will be implementing new ways of work.  We need to understand what is required to support and sustain change over time and across staff so that the innovation can be used beyond the first cohorts in the Transformation Zone and beyond the Transformation Zone on the way to full scale-up.

In short, an effective way to move from a few successful pilots to “readiness” for scale-up is the use of a Transformation Zone.

What about Demonstrations and Pilots?

Demonstrations or “pilots” are a place to start for innovations (“it’s possible!”). Pilots don’t usually lead to sustainable practice and system change. Demonstrations and pilots can lead to:

  • Random acts of innovation
  • Efforts that are person and passion dependent
  • An initiative that “ghost systems” its way to success
  • Execution that requires the “extraordinary” and heroic
  • Absence of replicable or sustainable implementation infrastructure


Key Functions and Processes

“Another demonstration project!  We know it’s not going to last.”

“By the time we get everyone up to speed there will be so much staff turnover we’ll have to start over.”

“We don’t have a choice; we have to roll this out statewide.” 

“We need to hold their feet to the fire; they need to meet these benchmarks.”


Transformation Zones are used to establish new ways of work (e.g. the evidence-based innovation or instructional practice, EBP) and, simultaneously, to develop the capacity to support those new ways of work (an implementation infrastructure to assure effective use of the EBP).

Using All Forms of PDSA

As noted previously, all the PDSA processes we’ve reviewed are likely to be used in a Transformation Zone.  There will be a need for rapid-cycle problem-solving as challenges to effective implementation emerge.  Usability testing may be needed to be sure core components of the instructional practices or innovations are well-operationalized, improved, and can be used as intended in the full range of settings in the Transformation Zone.  Practice-policy feedback loops will be needed to communicate systemic challenges that must be addressed to better align system requirements, resources, and supports with the new ways of work.

Transformation Zone Dimensions

It is impossible to make significant change simultaneously and successfully in all parts of a system. The Institute of Medicine examined large scale reform efforts and concluded that, “Inducing major change in large organizations is much more difficult than simple behavioral changes because organizations themselves are problematic. Additionally, most organization designs are outdated and do not reflect current environments, requiring more comprehensive organizational change” (Chao, 2007).

Implementation Teams begin their work in a Transformation Zone to have a realistic shot at making a difference.  The size and location of the Transformation Zone are determined by weighing several factors.

The change agents, often the first Implementation Team(s), need to consider what it will take to be successful and simultaneously expose the effort to the challenges of real-world implementation, sustainability, and system change.  Overall, the Transformation Zone size and characteristics need to be sufficiently diverse in terms of representing the overall system and scoped to be successful. 

As challenges to use of the innovation arise, these issues are brought to the attention of district leaders, regional support systems, or, if needed, the State Management Team through practice-policy feedback loops.  Monthly meetings with these Leadership Teams are essential to making the organization and system changes needed to support and sustain effective Implementation Teams and effective education practices for all students. They are the vehicle for removing barriers and institutionalizing facilitators to support improved educational practices and improved student outcomes.

This process is very different from typical pilot tests, demonstrations, or broad-brush exhortations to use innovations or make significant systemic change.  PDSA improvement cycles help teams attend to what is working and what is not, and focus on developing the supports and infrastructure needed to assure intended outcomes. The whole process is done with an eye on defragmenting the system, removing barriers to effective outcomes, and creating the future capacity to make use of a variety of evidence-based approaches and other innovations statewide.


The first adaptive challenges are dealt with in a constructive way as a result of activities in the Transformation Zone. As the Transformation Zone expands to include more districts and their schools, new challenges will arise resulting in more changes to the current systems.  As this process continues, the system itself is reinvented to more precisely and functionally support evidence-based innovations and implementation infrastructure within districts and the entire State education system. This is in contrast to effective innovations changing to survive in the current system and as a result, ‘adapting out’ the very components that make them effective.  As implementation capacity expands and adaptive issues are resolved, the Transformation Zone encompasses all districts in the State and the ‘ghost’ system has become a ‘host’ system for continual improvement of education outcomes for generations to come.

Real World Requirements

But what about federal, state, or grant requirements that mandate large-scale roll-outs?  It is a fact of life that mandates, state statutes, contracts, and grant requirements are not necessarily informed by implementation best practices.  What to do?

Even when large-scale roll-outs are required, it is often possible to use Transformation Zone concepts.  You might think of it as selecting a ‘virtual Transformation Zone’.  You can recruit and select a cohort of districts, agencies, or entities in the larger system that want to work with you more deeply.  You can then provide them with the kind of training, coaching, feedback, support, and problem-solving that would have gone into a more explicit use of a Transformation Zone.  By paying attention, through interactions with Implementation Teams and by more carefully monitoring challenges and process data, you can learn what it will take to improve processes and outcomes over time in the rest of the state.

Think of the less intensive work and lower levels of attention for the broader system as exploration stage work to create readiness and improve knowledge.  The provision of information, access to assessments and materials, and web-based tool kits can all contribute to broader system buy-in and preparation.  And we know from the literature that 5% to 15% of the entities not receiving formal implementation support will find a way to be somewhat successful.


Student outcomes can be improved with greater effectiveness and increased efficiency. An infrastructure for implementation can be established to support the successful use of multiple evidence-based programs or other innovations statewide. This infrastructure can be tested, improved, and organized on a limited scale in a Transformation Zone.  The ‘bugs’ in the process can be safely and more quickly detected and resolved by using all the forms of improvement cycles: rapid-cycle problem-solving, usability testing, and practice-policy feedback loops.  The Transformation Zone itself is a large-scale Plan, Do, Study, Act cycle, with the next “act” including the next cohort of implementation settings and sites (e.g. regions, districts, schools).

Activity 5.4: Transformation Zone Elevator Speech


Review Module 5, Topic 4: Transformation Zones. Then, create a 2-3 minute elevator speech for leadership in your organization explaining the difference between a “pilot” and a “transformation zone”.


Module 5 Summary

Improvement Cycles support the purposeful process of change. Implementation teams use improvement cycles to change on purpose. Improvement cycles are based on the Plan, Do, Study, Act process. Improvement Cycles help us identify challenges, solve problems, improve practices, and create hospitable environments for new ways of work.


Key Takeaways

  1. Rapid-cycle problem-solving is one type of improvement cycle that uses the Plan, Do, Study, Act process.  It is typically used to solve emergent or urgent problems that are impacting the roll-out or use of the innovation or to make quick, incremental improvements.
  2. Usability testing is used to test the feasibility and impact of a new way of work prior to rolling it out more broadly. Usability testing consists of a planned series of tests of an innovation, components of an innovation, or implementation processes for improvement.
  3. Practice-policy feedback loops are another example of an improvement cycle process. Practice-Policy Feedback Loops are established to ensure that barriers to effective practice are brought to the attention of policy makers, sound policy that strengthens implementation is maintained, and transparent processes exist to support the development of policy enabled practices and practice informed policies.
  4. A Transformation Zone is a “vertical slice” of the system: small enough to be manageable and large enough to ‘disturb’ and impact key aspects of the system, yet not the entire system. The intention is to develop the systems and infrastructure that will be needed for successful implementation, sustainability, and scale-up.

Capstone Quiz


Congratulations, you finished Module 5: Improvement Cycles!  We invite you to assess your learning via the Capstone Quiz.

Your virtual coach Asha guides you through a quick set of questions (approximate time: 5-10 minutes).


The Active Implementation Hub, AI Modules and AI Lessons are an initiative of the State Implementation & Scaling-up of Evidence-based Practices Center (SISEP) and
the National Implementation Research Network (NIRN) located at
The University of North Carolina at Chapel Hill's FPG Child Development Institute.

Resources and References

Akin, B. A., Bryson, S. A., Testa, M. F., Blase, K. A., McDonald, T., & Melz, H. (2013). Usability testing, initial implementation, and formative evaluation of an evidence-based intervention: Lessons from a demonstration project to reduce long-term foster care. Evaluation and Program Planning, 41, 19-30.

Allen, B. L. (1996). Information tasks: Toward a user-centered approach to information systems. New York: Academic Press.

Blase, K. A., Fixsen, D. L., Naoom, S. F., & Wallace, F. (2005). Operationalizing implementation: Strategies and methods. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute.

Brinson, D., Kowal, J., & Hassel, B. C. (2008). School turnarounds: Actions and results (pp. 1-30). Lincoln, IL: The Center on Innovation and Improvement.

Chao, S. (Ed.). (2007). The state of quality improvement and implementation research: Expert views workshop summary. Washington, D.C.: Institute of Medicine of the National Academies: The National Academies Press.

Dale, N., Baker, A. J. L., & Racine, D. (2002). Lessons Learned: What the WAY Program Can Teach Us About Program Replication. Washington, DC: American Youth Policy Forum.

Deming, W. E. (1986). Out of the crisis. Cambridge, MA: MIT Press.

Elmore, R. (2002). Bridging the gap between standards and achievement: The imperative for professional development in education. Washington, DC: The Albert Shanker Institute.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation Research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).

Fixsen, D., Blase, K., & Van Dyke, M. (2012). From ghost systems to host systems via transformation zones (pp. 3-7). Washington, DC: U.S. Department of Education Office of Vocational and Adult Education.

Frick, T., Elder, M., Hebb, C., Wang, Y., & Yoon, S. (2006). Adaptive usability evaluation of complex web sites: How many tasks? Unpublished manuscript, Indiana University, W.W. Wright Education 2276, 201 N. Rose Ave., Bloomington, IN 47405-1006.

Genov, A. (2005). Iterative usability testing as continuous feedback: A control systems perspective. Journal of Usability Studies, 1(1), 18-27.

Greenhalgh, T., Robert, G., MacFarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82(4), 581-629.

Khatri, G. R., & Frieden, T. R. (2002). Rapid DOTS expansion in India. Bulletin of the World Health Organization, 80(6), 457-463.

Langley, G. L., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (1996). The improvement guide: A practical approach to enhancing organizational performance. San Francisco, CA: Jossey-Bass Publishers.

Manna, P. (2008). Federal aid to elementary and secondary education: Premises, effects, and major lessons learned (pp.61). Williamsburg, VA: College of William and Mary, Department of Government and the Thomas Jefferson Program in Public Policy.

McCarty, D., Gustafson, D. H., Wisdom, J. P., Ford, J., Choi, D., Molfenter, T., et al. (2007). The Network for the Improvement of Addiction Treatment (NIATx): Enhancing access and retention. Drug and Alcohol Dependence, 88(2-3), 138-145.

McGrew, J. H., Bond, G. R., Dietzen, L., & Salyers, M. P. (1994). Measuring the fidelity of implementation of a mental health program model. Journal of Consulting & Clinical Psychology, 62(4), 670-678.

Moss, F., Garside, P., & Dawson, S. (1998). Organisational change: The key to quality improvement. Quality in Health Care, 7(Suppl.), S1-S2.

Nielsen, J. (2000). Why you only need to test with 5 users. Retrieved April 22, 2007.

Nielsen, J. (2005). Usability for the masses. Journal of Usability Studies, 1(1), 2-3.

Nutt, P. C. (1986). Tactics of Implementation. Academy of Management Journal, 29(2), 230-261.

Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. New York: John Wiley & Sons.

Bauman, L. J., Stein, R. E. K., & Ireys, H. T. (1991). Reinventing fidelity: The transfer of social technology among settings. American Journal of Community Psychology, 19, 619-639.

Schoenwald, S. K., Sheidow, A. J., Letourneau, E. J., & Liao, J. G. (2003). Transportability of Multisystemic Therapy: Evidence for Multilevel Influences. Mental Health Services Research, 5(4), 223-239.

Schofield, J. (2004). A Model of Learned Implementation. Public Administration, 82(2), 283-308.

Shewhart, W. A. (1931). Economic control of quality of manufactured product. New York: D. Van Nostrand Co.

Wallace, F., Blase, K., Fixsen, D., & Naoom, S. (in press). Implementing educational innovations to benefit students. Washington, DC: Education Research Services.

Winter, S. G., & Szulanski, G. (2001). Replication as Strategy. Organization Science, 12(6), 730-743.