Talent wins games, but teamwork and intelligence win championships.
There’s been a lot of talk again recently about collaborative planning – or, as John Hattie might call it, collective teacher efficacy. I’ve done some work in this area, and I’m going to share what I discovered.
The problem of individual planning
A main scale teacher has to plan roughly 20 lessons a week. At 15 minutes per lesson, that’s 5 hours of planning time. Now, 15 minutes is a short amount of time in which to plan the right questions, make any slides, choose retrieval and checking activities and select resources for practice, so it can grow quickly if things aren’t quite right. Suddenly it’s not out of the question for planning a lesson to take half an hour or longer, and for the total time spent planning to exceed 10 hours a week – a significant amount given the time also needed for marking, feedback, meetings, duties, etc.
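The back-of-the-envelope arithmetic above can be sketched in a few lines (a minimal illustration only; the lesson count and per-lesson times are the rough figures from the paragraph above):

```python
# Weekly planning time for a main scale teacher, using the rough
# figures above: ~20 lessons a week, 15-30 minutes of planning each.
LESSONS_PER_WEEK = 20

def weekly_planning_hours(minutes_per_lesson: float) -> float:
    """Total weekly planning time in hours at a given per-lesson time."""
    return LESSONS_PER_WEEK * minutes_per_lesson / 60

print(weekly_planning_hours(15))  # best case: 5.0 hours
print(weekly_planning_hours(30))  # lessons taking half an hour: 10.0 hours
```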
Allied to this are the eternal problems a subject leader faces: are lessons being planned well enough? Are students being given the opportunity to develop their knowledge? Does the member of staff have the subject knowledge and experience to plan the right questions? After 13-14 years in the Maths teaching game I know I’ve still got areas to work on – one is forever learning – so just think about the needs of an NQT/RQT!
These two areas were at the forefront of my mind 12 months ago, when the Maths department was a team of extremely enthusiastic and hard-working staff, but one that I felt was exceeding a sustainable workload. With over half the department being NQTs/RQTs, and members of the department having roles beyond the classroom, the amount of time that staff could dedicate to developing subject knowledge, planning questions and putting lesson resources together – while still having something of a life – struck me as problematic.
So, I resolved to do something about it. Our department meetings had always been subject focused – agreed ways of modelling, developing topic-specific knowledge, formative assessment practice, etc. – but something more structured needed to take place. Despite the meetings being after school, staff were always happy to go past the allotted hour, and as such, the stage was set. It was time to collaboratively plan lessons.
Objectives and rationale
The objective was simple: reduce the amount of time spent planning by spending 90 minutes creating outline lesson plans and resources for each year group, each week.
How would this objective be achieved?
- Staff would work in pairs, trying to mix experience and subject knowledge where possible.
- Each pair would focus on a particular year group. (In practice I ended up planning Year 7 alone, but this was fine.) I tried to match each pair to the year group they taught the most, so that they felt greater ownership of the planning.
- The pair would plan a series of ‘I Do (teacher modelling), We Do (formative assessment), You Do (independent practice)’ cycles for a particular learning intention – see Tom Sherrington’s Silver Arrows for the rationale behind the cycle structure. How many cycles this took would depend on the learning intention. What was important was that each cycle did not necessarily correspond to a lesson; instead, each cycle was specific to an elemental step in the development of the overall learning intention. For example, if the learning intention was adding and subtracting fractions, one cycle would focus on finding equivalent fractions with common denominators.
- The Do Now activities were purely retrieval practice – questions taken from Mr Carter Maths, MathsBox or Corbett Maths that revisited what had been covered in the previous scheme of work unit. This meant staff didn’t have to come up with their own lesson starter, so time was saved.
- As with the Do Now activities, assessments were already selected by me for each learning intention, so staff didn’t have to come up with them. This had a twofold outcome: more time saved for staff, but also a sense of what they needed to work towards in order for students to have some level of confidence in the summative assessment. This sounds like teaching to the test, but I think it’s different, in the sense that each learning intention was part of a wider unit of learning focusing on a particular theme, rather than prepping students for a final exam. By breaking the unit of learning down into a series of skills that can be assessed, one can get a picture of whether the theme as a whole has been understood. Masking these skills in exam questions tells you nothing about whether the skills themselves are understood – see Daisy Christodoulou’s Making Good Progress for more on this.
- I tried to limit the sources of independent practice. CGP and Elmwood Press textbooks, the CIMT, Corbett Maths and Median websites were the main sources. If we were really struggling, Jo Morgan’s Resourceaholic or Ed Southall’s Solve My Maths worksheets were the next port of call. It was rare we had to make something ourselves.
- Exit tickets to check that students had understood usually came in the form of a straightforward exam question – a classic AO1 type – or something from Diagnostic Questions. This goes back to being able to identify whether the skill had been understood in the first instance. I’ll go into more detail about how we checked learning, rather than teaching, at DTA at another point.
- Each learning cycle had a premise. This was all about linking the learning to the bigger picture, reminding students why this element of learning was important, and identifying where it fitted in the syllabus. It saves a whole load of ‘what’s the point?’ questions from darling students.
- Each week’s set of plans for a year group was pitched at the middle to higher end of the ability spectrum. We wanted to show high expectations for all classes: if lower attaining students were making rapid progress through the content, they weren’t being held back, and likewise, if a higher attaining group had some gaps in assumed knowledge, we weren’t papering over the cracks. More on that below.
What this meant for individual teachers
The result for teachers wasn’t a set of complete lesson plans where all that member of staff had to do was blithely rattle through a set of slides, without ownership of the material or care in checking the resources beforehand. I made it absolutely clear that while this would save a huge amount of time in planning, there was still a need for tailoring – which, for a week’s lessons for a class, would only take between 30 minutes and an hour. This might involve adding extra questions and checks for understanding for lower attaining groups, or adding more challenging tasks and problem solving for higher attaining/rapidly progressing groups. But staff were guaranteed a core set of lesson plans that they could develop through the week to suit the needs of their learners, and a lot more time to do so.
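Putting the figures together gives a rough sense of the saving per teacher (a sketch only: the 90-minute session and 30-60 minutes of tailoring per class are the figures quoted above, while five classes per teacher is an illustrative assumption based on roughly 20 lessons a week):

```python
# Rough weekly planning time per teacher, before and after.
CLASSES_PER_TEACHER = 5  # illustrative assumption: ~20 lessons / ~4 per class

# Before: ~20 lessons planned individually at 15-30 minutes each.
before_low, before_high = 20 * 15 / 60, 20 * 30 / 60  # 5.0 to 10.0 hours

# After: one shared 90-minute planning session, plus 30-60 minutes
# of tailoring per class per week.
after_low = (90 + CLASSES_PER_TEACHER * 30) / 60   # 4.0 hours
after_high = (90 + CLASSES_PER_TEACHER * 60) / 60  # 6.5 hours

print(f"before: {before_low}-{before_high} hours a week")
print(f"after:  {after_low}-{after_high} hours a week")
```

Even at the pessimistic end, the shared session plus tailoring comes in well under the individual worst case.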
What this meant for me
I knew that the Maths department were planning quality lessons, rapidly, and having more time to think about teaching and learning. There was less stress, more clarity and tighter integration as a team around a shared mission. It did take a while at first, but by the third or fourth week a pair could have a week’s learning planned in the space of 90 minutes, all the while having proper conversations about the syllabus, developing their subject knowledge and winding up the ‘opposition’ (as Usman and Saila would call the rest of us).
What this meant for students
I’ll keep this simple. In September, using the June 2017 grade boundaries, I could compare outcomes for the Class of 2019 (Y11 in 2017) with the Class of 2020 (Y10 in 2017). The Class of 2020 were already markedly above their peers in the year above – by 5-8% at grades 4+, 5+ and 7+ – and, most notably, they were a term earlier in their course too (for standardisation I compared the Secure Mocks, which the Class of 2019 first sat in November of Y11, if that makes sense!). In other words, Y10 were not only performing significantly better, they were doing so a term ahead.
Does this justify collaborative planning?
Of course, this is not a double-blind RCT with replication studies built in. You could claim that the quality of staff teaching Y10 might be better than those teaching Y11, but the same teachers taught both year groups. There’s an element of knowing the syllabus and assessment criteria better, but being 5-8% better on key measures, and a term ahead of the previous year, isn’t something I think can be attributed purely to that.
Even if you look purely at the fact that all students were ensured a consistent diet of modelling, questioning, checking, practice and assessment – planned in a shorter space of time and delivered by less stressed, more focused and confident teachers – collaborative planning is hard to argue against. Just look at Hattie’s effect size if you don’t believe me.
Thoughts and questions welcome via Twitter and the comments section, as always.