This is an edited excerpt from “Continuous Improvement: Teacher Induction” on the High Tech High Unboxed Podcast
(Season 1, Episode 14). Text in italics is adapted from the narration.
Our improvement group first met in October. Our topic was “close reading,” and as we shared the strategies that were working for us, we started to realize that there were a lot of strategies just in the room (regardless of grade level, actually). Listening to people share strategies and challenges, I realized the number one thing induction is going to give me is this environment of educators who are working together to improve. We try to do this in so many ways at High Tech High, but literally sitting in a classroom together once a month and checking in was really useful for that. It was just very concentrated. In that October meeting we set our aim for the year. Here was our group’s aim:
By May, all our students will recognize when to use a reading strategy, know what strategy to use, know they are capable of using it, and actually choose to use it.
Between October and November, I interviewed three of my students in order to get a sense of what reading in my class was like from their perspective. Then, in November, it was time to work out the root causes of the problem. So, we started with a “fishbone diagram” (see Figure 1). I love a fishbone. It’s a buffer against your assumptions and I really like that, because I make assumptions.
Figure 1: Fishbone Diagram
On the fishbone, you put your “problem” where the head would be. Our problem was “kids are not approaching texts with confidence and motivation.” Then everyone writes down, on post-its, what they think is driving the problem. We did a ton of post-its and then went through rounds of categorizing them. The bones of the fish are the categories we realized we had come up with.
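As a loose illustration of that structure (not part of the original episode), a fishbone can be captured as a simple mapping from bones to post-its. The problem statement below comes from the text; the category names and causes are hypothetical placeholders, since the group’s actual post-its aren’t reproduced here.

```python
# A minimal sketch of a fishbone diagram as data. The problem statement
# comes from the text; the bone (category) names and post-it causes are
# hypothetical placeholders, not the group's actual diagram.

problem = "Kids are not approaching texts with confidence and motivation."

fishbone = {
    "Text selection": [
        "Texts feel too hard",
        "Few choices that match student interests",
    ],
    "Classroom routines": [
        "No consistent time for independent reading",
        "Unclear expectations for annotation",
    ],
    "Student mindset": [
        "Students don't see themselves as readers",
        "Fear of being wrong in discussion",
    ],
}

print(f"Head (problem): {problem}")
for bone, post_its in fishbone.items():
    print(f"\nBone: {bone}")
    for cause in post_its:
        print(f"  - {cause}")
```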
After we’ve identified some causes in the fishbone diagram, we can, in a very systematic way, work out which cause might affect most of the other causes. The one that affects the others has the highest leverage: if we were to influence it, we might really make a difference in a lot of the other causes.
You figure out that high-leverage cause using an “interrelationship digraph” (see Figure 2). Now, the interrelationship digraph blows my mind because I always have a preconception of the cause of a problem that’s probably informed by my race, my gender, my first language and my experience in the classroom. All of these things might inform the way I’m seeing reading in my classroom and I just have to pick that apart.
Figure 2: Interrelationship Digraph
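To make the “which cause drives the others” step concrete, here is a minimal sketch of the digraph’s counting logic, assuming the usual convention that an arrow from A to B means “A drives B” and that the cause with the most outgoing arrows is the highest-leverage one. The causes and arrows below are hypothetical placeholders, not the group’s actual digraph.

```python
# A hedged sketch of the interrelationship digraph's counting step.
# Causes and "drives" arrows are hypothetical examples; in the real
# exercise the group debates each pair and draws the arrows by hand.

causes = [
    "Texts feel too hard",
    "Students don't see themselves as readers",
    "Unclear expectations for annotation",
    "No consistent time for independent reading",
]

# Each (a, b) pair means "a drives b" (an arrow from a to b).
drives = [
    ("Students don't see themselves as readers", "Texts feel too hard"),
    ("No consistent time for independent reading",
     "Students don't see themselves as readers"),
    ("Unclear expectations for annotation", "Texts feel too hard"),
    ("No consistent time for independent reading",
     "Unclear expectations for annotation"),
]

# Count outgoing arrows: the cause with the most is the likely driver,
# since influencing it should ripple out to the most other causes.
out_degree = {cause: 0 for cause in causes}
for a, _b in drives:
    out_degree[a] += 1

for cause, n in sorted(out_degree.items(), key=lambda kv: -kv[1]):
    print(f"{n} outgoing arrow(s): {cause}")
```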
In December there’s no meeting, because so many educators are busy with work related to the end of the semester and Project-based Learning exhibitions. In the January meeting, teachers look at existing research and identify possible solutions. We write them down on post-its (of course) and put them on an “effort-impact chart” (see Figure 3), asking “Will this take a lot of effort?” and “Will this have a big impact?”
Figure 3: Effort-Impact Chart
One of the things I love about looking at impact versus effort is that as a teacher, sometimes I’ll think of a change that I know would be a slam dunk for students. I know if I did it, it would make such a big impact. So, I’ll get started on it and it will be so high-effort that I never actually put it into effect. And I’ll harbor a lot of guilt, which will create other problems in my teaching. And if I just took a moment to think about how much effort it was going to take versus how much impact it might have, I’d be so much more likely to choose a quick change that I could put into effect right away.
It just makes a lot of sense to organize things this way. We’re all about doing the biggest thing, but there are some small things we can do that we actually will do because they’re accessible.
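As a small sketch of that sorting step, the two questions put every change idea into one of four quadrants. The ideas and ratings below are hypothetical examples, not the group’s actual chart.

```python
# A minimal sketch of an effort-impact sort. Every change idea gets two
# yes/no answers ("Will this take a lot of effort?" / "Will this have a
# big impact?") and lands in one of four quadrants. The ideas and their
# ratings here are hypothetical examples.

ideas = [
    # (idea, high_effort, high_impact)
    ("Tap desks instead of giving verbal reminders", False, True),
    ("Rewrite the whole reading curriculum", True, True),
    ("Color-code the classroom library", False, False),
    ("Hold individual weekly reading conferences", True, False),
]

def quadrant(high_effort: bool, high_impact: bool) -> str:
    if high_impact and not high_effort:
        return "Quick win: do it right away"
    if high_impact and high_effort:
        return "Big project: plan carefully"
    if not high_impact and not high_effort:
        return "Maybe: cheap, but low payoff"
    return "Avoid: costly and low payoff"

for idea, effort, impact in ideas:
    print(f"{quadrant(effort, impact):32} <- {idea}")
```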
At the end of the January session, every teacher chooses a change idea to try. Then they decide what data they’re going to collect to see whether it works or not. Over the next month they’ll test it out, collect the data and write up what actually happened. This process is called a PDSA cycle, which stands for “plan, do, study, act.” The teacher writes up their PDSA on a single PowerPoint slide and during the spring they do four PDSA cycles.
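As an illustration of what one of those slides captures, a PDSA write-up could be modeled as a simple record with one field per step. The field values below loosely paraphrase the first cycle described next; the structure itself is just a sketch, not a prescribed template.

```python
# A minimal sketch of a PDSA write-up as a record, one field per step.
# The example values loosely paraphrase the first cycle described in
# the text; the exact wording is illustrative.

from dataclasses import dataclass

@dataclass
class PDSA:
    plan: str   # the change idea and the data to collect
    do: str     # what actually happened during the month
    study: str  # what the collected data showed
    act: str    # adopt, adapt, or abandon the change

cycle_1 = PDSA(
    plan="Give students a specific focus for quick jots; "
         "check focus students' notebooks as data.",
    do="Taught a mini lesson on character development, then had "
       "students jot with that focus.",
    study="A few focus students weren't writing quick jots at all.",
    act="Adapt: address the non-jotters in a later cycle.",
)

print(cycle_1)
```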
Julie focused all her PDSAs on “quick jots”: brief notes that students jot down in their notebooks when they’re done reading in class.
For her first change idea, Julie gave students a specific focus for their quick jots. For example, one day she gave a mini lesson on character development and then instructed students to focus on character development in their quick jots.
For data, she checked the notebooks of the students she’d chosen to focus on and found that a few of them weren’t just confused about character development: they weren’t writing quick jots at all. But she didn’t focus her second PDSA on these students, because she was trying to work out a more efficient means of checking her students’ quick jots. So, Julie had her students mark the quick jots they wanted her to read with post-it flags when they handed in their notebooks.
My question for the second PDSA was “Will the post-it flags help me check this faster and easier?” And the answer’s no, FYI. The flags fall off, and I’m looking through pages and pages to figure out where the post-it flags are, and I don’t know if they fell off or if I’ve just missed them. This was just a fail, and I really needed to adapt it.
So, I realized, “OK, that supposedly efficient checking system is just not going to work, and the priority right now is that three kids aren’t doing it at all. I’m not going to try to figure out another checking system until I address these three kids.”
For PDSA three, Julie tried providing a worksheet to scaffold the activity for the three kids who weren’t jotting. Two kids started doing their quick jots, but the other student still wasn’t doing it.
Okay, so now two kids are back in the system where we’re all reading together, and that one kid is telling me, “This isn’t working for me.” That kid actually went into a book club with another student where they read aloud to one another, because that was what their attention required to track a book at that time. So, it turned out I needed to keep trying stuff until it was clear that they needed this level of support. So, it was still good data.
I think if I were doing this in sort of the casual way I did before, I would not have necessarily picked up on that one kid and changed the structure as quickly. I would have, you know, half-heartedly tried a bunch of different things and then been like, “Well, there’s always one.” But that’s not okay! And so, it’s really good to systematize it to where I’m like, “Yes, there is one and now another structure supports them.”
And not only that, but every successful tweak I came up with in these PDSAs, I am still using today: I’m using the worksheet accommodation for kids who struggle with setting up quick jots in their notebooks. I’m using the book club for kids who still struggled even with the worksheet accommodation. And we haven’t even talked about the final PDSA, about kids not getting their books out quickly enough for independent reading, but I’m still using the “walk around and tap the desk” instead of verbal reminders, and it’s really neat to see that. It’s not like I thought, “Oh, these are the results of my PDSA, let me put them in my plan.” It’s just that they taught me something and I use what I learned.
What I realized doing “Continuous Improvement” is that to an extent it’s what I was already doing as a teacher: I was already noticing a problem, trying out a quick tweak and collecting data. But naming all of the steps and doing them in a more formalized way helps me check myself; before, I was doing it in too informal a way to catch my unexamined, oppressive assumptions. Data collection is a good example of this. I’m collecting data all the time in my classroom, but when I’m formally doing Continuous Improvement, I have to stop and ask myself, “Does the data I’m collecting actually match the interpretations I’m making? Is it unbiased data, or am I relying on assumptions about what’s happening in my classroom?” Just that tweak of thinking that way is a good reminder, so that as I’m doing it now, I’m not just using my assumptions as data; instead I’m thinking, “Okay, how do I know that each of my students is getting what they need?”