A signature practice in improvement science is the Plan-Do-Study-Act (PDSA) cycle. Often credited to Dr. W. Edwards Deming, the PDSA cycle is intended to be a practitioner-friendly version of the scientific method: plan a test of change, do it, study what happened, and then act, either by testing further changes or by trying the change at a larger scale. The PDSA cycle is a structure for disciplined inquiry: a tool for answering the question “How do we know if the change led to an improvement?” (Langley et al., 2009).
While the PDSA cycle is a conceptually powerful tool, in my experience participating in and leading a number of school improvement networks, I have become wary of a widespread assumption that doing improvement science is synonymous with doing PDSA cycles. To be clear, the idea that practitioners will test small changes and see whether the changes led to an improvement is unobjectionable. My concerns are both practical and philosophical. First, a PDSA is, literally, more paperwork: a document educators need to fill in and “turn in” to someone else, and this kind of task will always tend towards “box-ticking.” At its worst, this turns improvement coaches into the “improvement police,” chasing down harried educators to fill out PDSA paperwork. As noted by one prominent supporter of improvement science in education, “the dirty little secret in healthcare is that nobody fills out the PDSA forms.” More substantively, while the idea of every individual practitioner acting as a scientist studying their craft is intuitively appealing, in practice I observe busy practitioners being asked to design experiments on their own, often without a ready source of data to answer the question “How will we know if the intervention is a success?”
Though I’ve seen PDSA cycles devolve into box-ticking exercises (or not get done at all), I continue to believe that disciplined practitioner inquiry is a worthy endeavor. In pursuit of a more consistently achievable alternative to the classic “PDSA cycle,” some of my colleagues recently conducted what is known in improvement science as a “planned experiment.” Based on what I’ve seen, I believe the planned experiment (Moen et al., 2012) leverages the strengths of the PDSA while side-stepping some of its design flaws.
Our planned experiment was conducted as part of the CARE network, a collection of middle schools in Southern California working to help more eighth-grade students who are African American, Latinx, Indigenous, and/or experiencing poverty get on track for successful college, career, and life outcomes. One element of that work is helping more middle school students feel a sense of belonging in their classes. Teachers from four CARE network schools engaged in a three-week planned experiment to test whether particular practices were more successful at increasing student belonging, as measured by a survey given to students before and after the intervention. The questions, adapted from the PERTS (Project for Education Research that Scales) Elevate survey, asked students to rate, on a Likert scale, the degree to which they agreed with three statements about their sense of belonging in class.
Before they began the planned experiment, the CARE team had identified two promising practices for increasing student belonging in middle school math classrooms: establishing a consistent welcoming routine at the start of class and conducting a systematic one-on-one check-in with every student. While these two practices had initial evidence of impact, the team still had three specific questions about them, which the planned experiment was designed to answer.
The goal here was to work smarter, not harder: if one of these two practices was more effective, the team wanted to figure that out quickly so they could focus on it. So the CARE team approached school teams and asked whether at least four teachers at each site would be willing to participate. Four teams agreed. The school teams learned about the two practices, which the CARE team had documented in detail, and then worked out how they would implement the practices in their own contexts, which required both pedagogical and logistical adaptations. Once each school had decided how the two practices would be conducted on its campus, every participating teacher was randomly assigned to one of four conditions in a 2×2 matrix:
| Teacher 1: Systematic student check-in | Teacher 2: Daily welcoming routine |
| --- | --- |
| Teacher 3: Both check-in and welcoming routine | Teacher 4: Control (1 welcoming routine per week) |
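To make the assignment step concrete, here is a minimal sketch of how a site might randomize four teachers into the four conditions in the matrix above. It is purely illustrative: the teacher names, the `assign_conditions` helper, and the use of Python are my own assumptions, not part of the CARE team’s actual process.

```python
import random

# Illustrative sketch only; condition labels mirror the 2x2 matrix above.
CONDITIONS = [
    "Systematic student check-in",
    "Daily welcoming routine",
    "Both check-in and welcoming routine",
    "Control (1 welcoming routine per week)",
]

def assign_conditions(teachers, seed=None):
    """Randomly pair each of four teachers with one of the four conditions."""
    if len(teachers) != len(CONDITIONS):
        raise ValueError("This sketch assumes exactly one teacher per condition.")
    rng = random.Random(seed)      # a fixed seed makes the assignment reproducible
    shuffled = list(teachers)
    rng.shuffle(shuffled)
    return dict(zip(shuffled, CONDITIONS))

# Hypothetical roster for one school site.
print(assign_conditions(["Teacher A", "Teacher B", "Teacher C", "Teacher D"], seed=42))
```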
Now it was time to take the pre-intervention measure: students answered the three survey questions. Next, the teachers implemented the practices as outlined above, and three weeks later, it was time for the post-intervention measure: students answered these questions a second time.
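Once both waves of the survey are in, the comparison the design calls for is straightforward: look at the average pre-to-post change in belonging scores under each condition. The sketch below shows one way to compute that; the scores and condition rows are invented for illustration and do not come from the CARE network’s data.

```python
from statistics import mean

# Invented example data: each row is (condition, pre_score, post_score),
# where scores are classroom averages on a 1-5 Likert scale.
responses = [
    ("Systematic student check-in", 3.1, 3.5),
    ("Daily welcoming routine", 3.0, 3.4),
    ("Both check-in and welcoming routine", 2.9, 3.7),
    ("Control (1 welcoming routine per week)", 3.2, 3.3),
]

def average_change_by_condition(rows):
    """Group pre-to-post score changes by condition and return each group's mean."""
    changes = {}
    for condition, pre, post in rows:
        changes.setdefault(condition, []).append(post - pre)
    return {condition: round(mean(deltas), 2) for condition, deltas in changes.items()}

for condition, delta in average_change_by_condition(responses).items():
    print(f"{condition}: average change of {delta:+.2f}")
```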
So that was the process. The obvious question is, “What made this easier to implement than a PDSA cycle?” There are three specific elements of the process that I want to examine in turn:
1. Paradoxically, the planned experiment is likely to have a more rigorous design while also being a more realistic ask for practitioners
The planned experiment is more likely to be designed in a way that increases confidence in the efficacy of the change ideas being tested, both because the change is tested across multiple contexts and because the measures are more likely to be robust. At the same time, because individual practitioners do not need to design every aspect of the test of change themselves, it is far more feasible for busy practitioners to help design and execute the test than to design it from scratch on their own.
2. The planned experiment puts educators at the heart of implementation conversations
Unlike more traditional forms of educational research, the planned experiment is not research being “done to” practitioners; rather, it is a practice that includes educators in important implementation conversations. And while the PDSA is intended to be a liberating structure (“you’re in charge of your own learning”), being left to your own devices is not necessarily supportive of the busy practitioner.
3. The planned experiment gives a clear role to those further from practice
In traditional education research, those further from practice can be guilty of doing research “to” practitioners. Meanwhile, traditional PDSA cycles can lead to those further from practice merely nagging practitioners or criticizing their work. In contrast, the planned experiment positions those further from classrooms (e.g. network hub leaders, school administrators, higher education faculty members) to help with experimental design and measure selection while also doing research with practitioners, not to them.
In some improvement circles, the PDSA cycle has been elevated to the status of the only legitimate form of inquiry. That view is too narrow. For educators interested in the tools and thinking of improvement science, the planned experiment is an important tool to add to our toolbox. With more disciplined planned experiments in our schools, we can do more of what works and less of what doesn’t.
Special thanks to Daisy Sharrock, Alicia Grunow and Sandra Park for their work on and insight into planned experiments.