Dylan Wiliam (@dylanwiliam)
Leadership for teacher learning
www.dylanwiliam.net
Outline: Four questions
What determines how quickly students learn?
Curriculum
Curriculum matters
Agodini, Harris, Seftor, Remillard, and Thomas (2013)
Teacher quality: Why it matters
Teaching quality and teacher quality
Teacher quality and student achievement
Correlation with progress in:
Study | Location | Reading | Math |
Rockoff (2004) | New Jersey | 0.10 | 0.11 |
Nye, Konstantopoulos, and Hedges (2004) | Tennessee | 0.26 | 0.36 |
Rivkin, Hanushek, and Kain (2005) | Texas | 0.15 | 0.11 |
Aaronson, Barrow, and Sander (2007) | Chicago | | 0.13 |
Kane, Rockoff, and Staiger (2008) | New York City | 0.08 | 0.11 |
Jacob and Lefgren (2008) | | 0.12 | 0.26 |
Kane and Staiger (2008) | | 0.18 | 0.22 |
Koedel and Betts (2009) | San Diego | | 0.23 |
Rothstein (2010) | North Carolina | 0.11 | 0.15 |
Hanushek and Rivkin (2010) | | | 0.11 |
Chetty et al. (2014) | | 0.12 | 0.16 |
Hanushek and Rivkin (2010)
What does this mean for student progress?
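One hedged way to read the table above, assuming each coefficient can be interpreted as the extra student progress (in student standard deviations) associated with a teacher one standard deviation better in quality:

```python
# Illustrative only: 0.13 is a typical math coefficient from the studies
# above, and the linear reading of the coefficient is an assumption.
coefficient = 0.13

for percentile, z in [(50, 0.0), (84, 1.0), (98, 2.0)]:
    print(f"Teacher at the {percentile}th percentile of quality: "
          f"about {coefficient * z:.2f} SD of extra student progress per year")
```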
Teacher quality: How to get more of it
Strategies for improving teacher quality
Teacher preparation and selection
Evaluating teaching
Do we know a good teacher when we see one?
Distribution of total correct ratings (out of 7)
Number correct | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
Percentage of raters | 1% | 11% | 29% | 36% | 13% | 9% | 1% | 0% |
Strong, Gargani, and Hacifazlioğlu (2011)
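A minimal sketch of how these ratings compare with chance: if each of the seven judgments were a coin flip, the number correct would follow a Binomial(7, 0.5) distribution, which can be set against the observed percentages above. (The coin-flip framing is an assumption made for illustration.)

```python
from math import comb

# Chance distribution if each of the 7 judgments were a 50/50 guess,
# compared with the observed percentages from the table above.
observed = [1, 11, 29, 36, 13, 9, 1, 0]  # percent of raters with k correct

for k, obs in enumerate(observed):
    chance = comb(7, k) / 2 ** 7 * 100
    print(f"{k} correct: chance {chance:4.1f}%   observed {obs}%")
```

The observed distribution sits no higher than the chance one: raters did not reliably pick out the more effective teachers.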
Ratings by rater type
Rater | Number | Accuracy (%) |
Teachers | 10 | 37 |
Parents | 7 | 37 |
Mentors | 10 | 47 |
University professors | 9 | 41 |
Administrators | 10 | 31 |
Teacher educators | 10 | 31 |
College students | 11 | 36 |
Math educators | 10 | 34 |
Other adults | 11 | 43 |
Primary school students | 12 | 50 |
What if the difference is larger?
Distribution of total correct ratings (out of 8)
Number correct | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
Percentage of raters | 1% | 3% | 11% | 25% | 25% | 24% | 9% | 1% | 0% |
Can we identify good teachers after training?
Framework for teaching (Danielson 1996)
Observations and teacher quality
Sartain, Stoelinga, Brown, Luppescu, Matsko, Miller, Durwood, Jiang, and Glazer (2011)
So, the highest-rated teachers are 30% more productive than the lowest-rated
But the best teachers are 400% more productive than the least effective
Unreliability in lesson observations
Achieving a reliability of 0.9 in judging teacher quality through lesson observation is likely to require observing a teacher teaching 6 different classes, and for each lesson to be judged by 5 independent observers.
Hill, Charalambous, and Kraft (2012)
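A sketch of why so many lessons and raters are needed, using a simple generalizability-style model rather than Hill, Charalambous, and Kraft's actual variance components; the component values below are assumptions, chosen so that 6 lessons with 5 raters lands near 0.9:

```python
# Reliability of a teacher's average observation score over L lessons,
# each scored by R raters, in a simple two-facet model:
#   reliability = var_teacher / (var_teacher + var_lesson/L + var_rater/(L*R))
var_teacher = 1.0   # stable between-teacher variance (assumed)
var_lesson = 0.5    # lesson-to-lesson variation within a teacher (assumed)
var_rater = 0.8     # rater disagreement and residual noise (assumed)

def reliability(lessons: int, raters: int) -> float:
    error = var_lesson / lessons + var_rater / (lessons * raters)
    return var_teacher / (var_teacher + error)

for lessons, raters in [(1, 1), (2, 2), (4, 3), (6, 5)]:
    print(f"{lessons} lessons x {raters} raters per lesson: "
          f"reliability = {reliability(lessons, raters):.2f}")
```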
Bias in lesson observations
Steinberg and Garrett (2016)
Can we identify good teachers from test scores?
Short-term and long-term effects
Carrell and West (2010)
Instructors
| Less qualified, less experienced | More qualified, more experienced |
| Higher end-of-course scores | Lower end-of-course scores |
| Lower scores on follow-on courses | Higher scores on follow-on courses |
| Higher end-of-course evaluations | Lower end-of-course evaluations |
Can we identify good teachers by combining evidence from different sources?
Measures of Effective Teaching project
Bill and Melinda Gates Foundation (2012)
For secondary English teachers (S1 to S3):
Correlation with standardized test score gains | 0.69 |
Correlation with higher-order assessments | 0.29 |
Reliability | 0.51 |
To get a 90% reliable prediction of a teacher’s quality, you would need to collect data on each teacher for 9 years
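One way to arrive at a figure of that order is the Spearman-Brown formula, treating the 0.51 above as the reliability of a single year of data (an assumption made for this sketch):

```python
# Reliability of an average over k years of data, given single-year
# reliability r, by the Spearman-Brown formula: k*r / (1 + (k - 1)*r).
r_single_year = 0.51

def reliability_over(k: int, r: float = r_single_year) -> float:
    return k * r / (1 + (k - 1) * r)

for k in (1, 3, 5, 9):
    print(f"{k} year(s) of data: reliability = {reliability_over(k):.2f}")
# Nine years gives roughly 0.90.
```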
This is what a correlation of 0.69 looks like…
[Scatter plot: predicted vs. actual values, r = 0.69]
…and this is a correlation of 0.29…
[Scatter plot: predicted vs. actual values, r = 0.29]
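For a concrete sense of the difference between the two correlations, here is a small simulation (standard-normal toy data, not the MET data) of how often a teacher predicted to be in the top quarter actually is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

for r in (0.69, 0.29):
    actual = rng.standard_normal(n)
    # Build a predictor correlated with the actual values at exactly r.
    predicted = r * actual + np.sqrt(1 - r ** 2) * rng.standard_normal(n)
    top_predicted = predicted >= np.quantile(predicted, 0.75)
    top_actual = actual >= np.quantile(actual, 0.75)
    hit_rate = (top_predicted & top_actual).sum() / top_predicted.sum()
    print(f"r = {r:.2f}: {hit_rate:.0%} of teachers predicted to be in the "
          f"top quarter actually are")
```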
What is the impact of removing less effective teachers?
What if we remove low-performing teachers?
Winters and Cowen (2013)
System-wide impact
Policy | Severity (percentile) | Increase in teacher value-added | Extra weeks of learning per student per year |
Consecutive | 5th | 0.003 | 0.0 |
Consecutive | 10th | 0.006 | 0.1 |
Consecutive | 25th | 0.020 | 0.3 |
Two-year average | 5th | 0.020 | 0.3 |
Two-year average | 10th | 0.031 | 0.4 |
Two-year average | 25th | 0.050 | 0.7 |
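The small system-wide gains are less surprising after a simple simulation of a deselection policy. The sketch below is not the model behind the table above: the spread of true teacher value-added, the noise in yearly estimates, and the quality of replacement teachers are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 100_000
sd_true = 0.15    # assumed SD of true teacher value-added (student SD units)
sd_noise = 0.25   # assumed noise SD of a single-year value-added estimate

true_va = rng.normal(0.0, sd_true, n_teachers)
year1 = true_va + rng.normal(0.0, sd_noise, n_teachers)
year2 = true_va + rng.normal(0.0, sd_noise, n_teachers)
two_year_avg = (year1 + year2) / 2

for pct in (5, 10, 25):
    cut = np.percentile(two_year_avg, pct)
    kept = true_va[two_year_avg >= cut]
    # Dismissed teachers are replaced by hires of average (zero) value-added.
    new_mean = kept.sum() / n_teachers
    print(f"Dismiss below the {pct}th percentile: mean true value-added "
          f"rises by {new_mean - true_va.mean():.3f} student SD")
```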
What does this all mean?
Evaluation vs. improvement
The ‘next big thing’
Things that don’t work
Things that might work
Things that do work—a bit
There is no ‘next big thing’
Just lots of small, mostly old, things
Understanding meta-analysis
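For reference, meta-analyses in education typically combine studies on a standardized mean difference; the formula below is the standard definition, not something specific to this talk:

```latex
d = \frac{\bar{X}_{\text{intervention}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}}
```

Because the denominator is the spread of student scores, the same intervention produces a smaller d when outcomes vary more (for example, with older, more heterogeneous students) and a larger d when the test is closely aligned to what was taught.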
Meta-analysis in education
So what does this mean?
Learning from research
Classroom formative assessment
Formative assessment
Cycle | Span | Length | Impact |
Long-cycle | Across terms, teaching units | Four weeks to one year | Monitoring, curriculum alignment |
Medium-cycle | Within and between teaching units | One to four weeks | Student-involved assessment |
Short-cycle | Within and between lessons | Minute-by-minute and day-by-day | Engagement, responsiveness |
Unpacking Formative Assessment
| | Where the learner is going | Where the learner is now | How to get the learner there |
| Teacher | Clarifying, sharing, and understanding learning intentions | Eliciting evidence of learning | Providing feedback that moves learners forward |
| Peer | Clarifying, sharing, and understanding learning intentions | Activating students as learning resources for one another | |
| Student | Clarifying, sharing, and understanding learning intentions | Activating students as owners of their own learning | |
Responsive teaching
The learner’s role
Before you can begin
Strategies and techniques
So much for the easy bit…
Reasons not to do formative assessment
What makes effective teacher learning?
A model for teacher learning
Supportive accountability
A “signature pedagogy” for teacher learning
Every TLC needs a leader
Peer observation
We’ll know when it’s working when…
The empirical evidence: Large-scale trials
Evaluation
Speckesser, Runge, Foliano, Bursnall, Hudson-Sharp, Rolfe, and Anders (2018)
To find out more…
…and even more…