I am enjoying reading all the summaries of the interventions for learning in Hattie's "Visible Learning". They are a little like reading "Wittgenstein in Ninety Minutes" – the summaries are extraordinarily useful overviews of the key research papers informing learning interventions, from homework to acceleration to bilingual programs to simulations and gaming.
In truth I think Routledge
missed a marketing moment here. The research
summaries and natty little d=0.33 effect size coloured dials are crying out to
be captured on collectible trading cards.
They could be effortlessly developed into a "Magic: The Gathering"-type
trading card game to be played online or with decks of collectible cards by teachers, educators, parents and students all over the world. I am
already imagining what influences I’d want to collect for my deck.
I am trying to make
connections between Hattie’s thinking about the effect sizes of different influences
on achievement, and the Best Evidence Synthesis on Professional Development, to
better inform what I do with teachers in the ICT_PD Cluster programme.
All the reading makes me realise that:
- Almost everything has an influence on learning.
- As a teacher I can be a significant influence on the learning of others.
- The strategies I use to teach vary widely in their effectiveness.
- Learning results in some kind of learning outcome.
- Learning outcomes vary – let us call it a continuum between shallow and deep.
- Effective teaching and learning occur when both my students and I can tell you what we are doing, how well it is going, and what we need to do next.
- Effective teaching and learning require that my students and I can distinguish between surface and deep learning outcomes.
Part of the day job sees me working within the structure of the ICT_PD cluster model to change teacher practice when using ICTs, so that the resulting practice makes a significant difference to student learning outcomes.
Changing teacher practice is an interesting challenge. There is an irony in this activity; it is kind of like being a life coach for a life coach.
When thinking about the effectiveness of any professional learning I do, or do to others, I have always been a fan of Thomas Guskey's thinking. I have used his five levels as a personal effectiveness audit for many years. From "Evaluating Professional Development" by T.R. Guskey, the five levels are:
1. Participants' Reactions
2. Participants' Learning
3. School Organisation Support and Change
4. Participants' Use of Knowledge and Skills
5. Student Learning Outcomes over the period of the professional learning
The Teacher Professional
Learning and Development Best Evidence Synthesis Iteration (BES) by H. Timperley, A. Wilson, H. Barrar and I.
Fung (published in December 2007) means I now know heaps more about how best to change teacher practice in a New
Zealand context.
Timperley's work means we also know heaps about what doesn't change teacher practice. For example, the BES suggests that bringing in a charismatic speaker, giving teachers release time, holding a TOD, taking everyone away to a conference, enrolling people in an online community, or having visionary leadership may all be valued activities, but they are the equivalent of "busy work" when it comes to professional learning, for they do not lead to changes in teacher practice that are reflected in changes in student learning outcomes.
Being able to live with paradox means that, within the ICT_PD cluster programme, I am charged with, and encourage, all of the above.
When you read Hattie and Timperley et al it is apparent that, despite all the literature framing adult learners as having different learning needs from students, the similarities between changing teacher practice and helping kids achieve deep learning outcomes are bigger than the differences – both require hard work.
I want to learn how to teach
teachers in ways that are more effective, in ways that are most likely to make
big shifts in improving teacher and student learning. The BES helps, but not enough. Without some clarity over the strategies that make the biggest differences to student learning outcomes when using ICTs, it is hard for me to tell what I should be targeting, and then what I am doing, how it is going, and what I should do next.
I don’t want anecdote to
drive the decisions I make about the way I teach. But after reading Hattie and Timperley et al, I acknowledge that I do not have enough research-based evidence to judge the effectiveness of what I do when teaching teachers how to teach using ICTs in classrooms and schools.
The BES suggests that if I want the teachers I work with to have deep learning about the strategies that are most effective when using ICTs in teaching and learning, I need to provide them with:
1. multiple opportunities to learn through a range of activities, and
2. activities focused on content aims, e.g. translating theory into practice or demonstrating how assessment could be used to focus and refine teaching.
Part 1 is achievable, but part 2... well, part 2 is problematic.
The thing is that the facilitators in the ICT_PD clusters don't have any agreed-upon content aims, theory, practice, or demonstration of how assessment could be used to focus and refine teacher practice when teaching through ICTs.
I guess it is a whole lot easier to provide a
conference for teachers to attend each year than it is to provide clearly
identified professional standards for effective teacher practice when using
ICTs in teaching and learning.
Unlike the numeracy, literacy, or assessment for learning contracts, the ICT_PD cluster programme pretty much leaves clusters and facilitators to develop their own content aims.
Our cluster's content aims are based around helping teachers:
- identify the student learning outcome in the learning experiences they plan for their students,
- identify the ICTs that might enhance the conditions of value of that student learning outcome,
- learn how to use the ICTs identified to help students learn, and
- develop self-assessment rubrics and success criteria to allow students to self-assess their learning outcome and know what to do next.
They
had always seemed defensible until I read Hattie.
Reading "Visible Learning" makes me realise that my content aims and process are based upon probability, not evidence.
It is probable, even likely, that the approach I take will enhance student learning outcomes, but I have no evidence-based practice to support this in the context of using ICTs. I have no professional standards or success criteria to let me know if I am successful or what I should do next.
However, more significant was the realisation that what I usually take as proof of our success – all those pages and pages of milestone reporting on improvements in student learning outcomes as a consequence of using ICTs with students in our cluster schools – is actually no big deal, for improvements in learning outcomes are a given when innovations are introduced to schools.
In fact, the only result worth reporting in a milestone would be if there was no improvement in student learning outcomes as a result of using ICTs in teaching and learning.
This means that providing evidence in a milestone report suggesting that student academic achievement has been enhanced because I have encouraged teachers to use an IWB, student blogs, an online community, or VoiceThread is not as exciting as we make it out to be. At best it only affirms that teachers are in a cluster looking at how best to use ICTs in teaching and learning, because the very act of looking at what we do enhances learning outcomes.
To put it another way, providing evidence of enhanced student learning outcomes is no big deal, because Hattie's research into influences on student learning outcomes identifies that almost everything I do as a teacher will improve learning outcomes.
Realising that "everything works" is a bit of a wake-up call. It is also why the trading card game idea is a winner.
If, as Hattie describes, just "having a pulse" as a teacher in a classroom can be shown to improve student learning outcomes, with an average effect size of d=0.2 to d=0.4 growth per year, then it is uncomfortably apparent that we need to look for other ways to discriminate between all the things that influence learning outcomes.
Because if everything works, then everything
can be and is defended, and we have no professional yardstick to tell us what
to do next, or how to improve the effectiveness of our practice.
I reckon we are especially vulnerable to this "everything works" effect in the ICT_PD Cluster programme, because we are asking teachers to change their existing practice, and, as Hattie shows, just being involved in an innovation that alters an existing practice can lead to improvements in student learning outcomes.
“the mere involvement in asking questions about the
effectiveness of any innovation may lead to an inflation of the results”
Hattie uses this to argue that we should be looking for an effect size of d=0.4 or greater when deciding where to put our energies in teaching and learning, and when deciding on the strategies that best influence our effectiveness. Noting that effect sizes may not be uniform across all students, Hattie suggests that "effects lower than d=0.4 indicate the need [for] more consideration (costs, interaction, facts and so on)".
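For anyone who wants to see where these d values come from, here is a minimal sketch of the standard Cohen's d calculation: the difference between two group means divided by their pooled standard deviation. The scores below are invented purely for illustration, and Hattie's syntheses aggregate effect sizes computed in several different ways, so treat this as the basic idea only.

```python
# A minimal, illustrative Cohen's d calculation.
# The test scores below are made up; they are not from Hattie or the BES.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical end-of-year scores for a class with an intervention and one without
with_intervention = [72, 75, 78, 80, 83, 85, 88]
without_intervention = [68, 70, 73, 75, 78, 80, 82]

print(f"d = {cohens_d(with_intervention, without_intervention):.2f}")
```

Run on these invented numbers it prints a single d value; in Hattie's terms, anything above the d=0.4 hinge point would be worth a closer look.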
Look at the effect sizes associated with ICTs in teaching and learning:
- Computer assisted instruction: d=0.37
- Web-based learning: d=0.18
- Interactive video methods: d=0.52
- Audio/visual methods: d=0.22
- Simulations: d=0.33
- Programmed instruction: d=0.24
From Hattie, J. (2009) Visible Learning, pp. 220-232.
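To make the comparison with the hinge point concrete, here is a quick sketch that sorts those published effect sizes against d=0.4 – the numbers are the ones from the list above, and the threshold is Hattie's:

```python
# Hattie's published effect sizes for ICT-related influences
# (Visible Learning, pp. 220-232), checked against his d=0.4 hinge point.
effects = {
    "Interactive video methods": 0.52,
    "Computer assisted instruction": 0.37,
    "Simulations": 0.33,
    "Programmed instruction": 0.24,
    "Audio/visual methods": 0.22,
    "Web-based learning": 0.18,
}

HINGE = 0.4  # Hattie's suggested threshold for where to put our energies

for name, d in sorted(effects.items(), key=lambda kv: kv[1], reverse=True):
    verdict = "above" if d >= HINGE else "below"
    print(f"{name}: d={d:.2f} ({verdict} the hinge)")
```

Only interactive video methods clear the hinge; everything else on the list sits in the "needs more consideration" zone.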
I sense some great discussions ahead. Also worth thinking about is that the effect size for computer-based instruction has remained much the same for the last thirty years, between 1975 and 2007. This means that all the improvements in technologies – hardware, software, infrastructure, etc. – that we have purchased and introduced to schools have made no significant improvement in the effect size on student learning outcomes. How will the paradigm shifters and marketers make sense of that one?
There are four questions that I particularly want to hang onto before I read more. Engelmann (cited in Hattie 2009, p. 253) challenges teachers and schools to ask four critical questions about the innovations we are asked to adopt in school. The kicker is in the third question.
- Precisely where have you seen this practice installed so that it produces effective results?
- Precisely where have you trained teachers so they can uniformly perform within the guidelines of this new system?
- Where is the data that shows you have achieved performance superior to that achieved by successful programmes (not simply the administration's last unsuccessful attempt)?
- Where are your endorsements from historically successful teachers (those whose students outperform demographic predictions)?
When we are thinking about ICTs in teaching and learning, they become:
- Precisely where have you seen teaching and learning through ICTs installed so that it produces effective results in enhancing student achievement?
- Precisely where have you trained teachers so they can uniformly perform within the guidelines of this new system?
- Where is the data that shows teaching and learning using ICTs achieves performance superior to that achieved by other successful strategies used in teaching and learning?
- Where are your endorsements from historically successful teachers – those whose students outperform demographic predictions for student achievement?