I am enjoying reading all the summaries of the interventions for learning in Hattie's Visible Learning. They are a little like reading “Wittgenstein in ninety minutes” – the summaries are extraordinarily useful overviews of the key research papers informing learning interventions, from homework to acceleration to bilingual programs to simulations and gaming.
In truth I think Routledge
missed a marketing moment here. The research
summaries and natty little d=0.33 effect size coloured dials are crying out to
be captured on collectible trading cards.
They could be effortlessly developed into a “Magic: The Gathering” type
trading card game to be played online or with decks of collectible cards by teachers, educators, parents and students all over the world. I am
already imagining what influences I’d want to collect for my deck.
I am trying to make
connections between Hattie’s thinking about the effect sizes of different influences
on achievement, and the Best Evidence Synthesis on Professional Development, to
better inform what I do with teachers in the ICT_PD Cluster programme.
All the reading makes me realise that:
- Almost everything has an influence on learning.
- As a teacher I can be a significant influence on the learning of others.
- The strategies I use to teach vary widely in their effectiveness.
- Learning results in some kind of learning outcome.
- Learning outcomes vary – let us call it a continuum between shallow and deep.
- Effective teaching and learning occur when both my students and I can tell you: what we are doing, how well it is going, and what we need to do next.
- Effective teaching and learning require that my students and I can distinguish between surface and deep learning outcomes.
Part of the day job sees me working within the structure of the ICT_PD cluster model to change teacher practice when using ICTs, so that the resulting practice makes significant changes to student learning outcomes.
Changing teacher
practice is an interesting challenge. There
is an irony in this activity that is kind of like being a life coach for a life
coach.
When thinking about the effectiveness of any professional learning I do, or do to others, I have always been a fan of Thomas Guskey's thinking. I have used his five levels as a personal effectiveness audit for many years. From Evaluating Professional Development by T. R. Guskey:
1. Participants’ Reactions
2. Participants’ Learning
3. School Organisation Support and Change
4. Participants’ Use of Knowledge and Skills
5. Student Learning Outcomes over the period of the professional learning
The Teacher Professional
Learning and Development Best Evidence Synthesis Iteration (BES) by H. Timperley, A. Wilson, H. Barrar and I.
Fung (published in December 2007) means I now know heaps more about how best to change teacher practice in a New
Zealand context.
Timperley’s work means we also know heaps about what doesn’t change teacher practice. For example, the BES suggests that bringing in a charismatic speaker, giving teachers release time, holding a TOD, taking everyone away to a conference, enrolling people in an online community, or having visionary leadership may well be valued activity, but these are the equivalent of “busy work” when it comes to professional learning, for they do not lead to changes in teacher practice that are reflected in changes in student learning outcomes.
Being able to live with paradox means that within the ICT_PD cluster programme I am charged with, and encourage, all of the above.
When you read Hattie and Timperley et al. it is apparent that, despite all the literature framing adult learners as having different learning needs from students, there are big similarities between changing teacher practice and helping kids achieve deep learning outcomes – both require hard work.
I want to learn how to teach teachers in ways that are more effective, in ways that are most likely to make big shifts in improving teacher and student learning. The BES helps, but not enough. Without some clarity over the strategies that make the biggest differences to student learning outcomes when using ICTs, it is hard for me to tell what I should be targeting, and then what I am doing, how it is going, and what I should do next.
I don’t want anecdote to drive the decisions I make about the way I teach. But after reading Hattie and Timperley et al. I acknowledge that I do not have enough research-based evidence to judge the effectiveness of what I do when teaching teachers how to teach using ICTs in classrooms and schools.
The BES suggests that if I want the teachers I work with to have deep learning about the strategies that are most effective when using ICTs in teaching and learning, I need to provide them with:
1. multiple opportunities to learn through a range of activities; and
2. activities focused on content aims, e.g. translating theory into practice or demonstrating how assessment could be used to focus and refine teaching.
Part 1 is achievable, but part 2 – well, part 2 is problematic.
The thing is that the facilitators in the ICT_PD clusters don’t have any agreed-upon content aims, theory, practice, or demonstration of how assessment could be used to focus and refine teacher practice when teaching through ICTs.
I guess it is a whole lot easier to provide a
conference for teachers to attend each year than it is to provide clearly
identified professional standards for effective teacher practice when using
ICTs in teaching and learning.
Unlike the numeracy, literacy, or assessment for learning contracts, the ICT_PD cluster programme pretty much leaves clusters and facilitators to develop their own content aims.
Our cluster’s content aims are based around helping teachers:
- identify the student learning outcome in the learning experiences they plan for their students;
- identify the ICTs that might enhance the conditions of value of that student learning outcome;
- learn how to use the ICTs identified to help students learn; and
- develop self-assessment rubrics and success criteria to allow students to self-assess their learning outcome and know what to do next.
They had always seemed defensible until I read Hattie.
Reading "Visible Learning" makes me realise that my content aims and process are based upon probability, not evidence.
It is probable that the approach I take will enhance student learning outcomes, but I have no evidence-based practice to support this in the context of using ICTs. I have no professional standards or success criteria to let me know if I am successful or what I should do next.
However, more significant was the realisation that what I usually take as proof of our success – all those pages and pages of milestone reporting on improvements in student learning outcomes as a consequence of using ICTs with students in our cluster schools – is actually no big deal ... for improvements in learning outcomes are a given when innovations are introduced to schools.
In fact, the only result worth reporting in a milestone would be if there was no improvement in student learning outcomes as a result of using ICTs in teaching and learning.
Which means that providing evidence in a milestone report that suggests that student academic achievement has been enhanced because I have encouraged teachers to use an IWB, student blogs, an online community, or VoiceThread is not as exciting as we make it out to be. At best it only affirms that teachers are in a cluster looking at how best to use ICTs in teaching and learning, because the very act of looking at what we do enhances learning outcomes.
To put it another way, providing evidence of enhanced student learning outcomes is no big deal, because Hattie’s research into influences on student learning outcomes identifies that almost everything I do as a teacher will improve learning outcomes.
Realising that “everything works” is a bit of a wake-up call. It is also why the trading card game idea is a winner.
If, as Hattie describes, just “having a teacher’s pulse” in a classroom can be shown to improve student learning outcomes, with an average effect size of d=0.2 to d=0.4 growth per year, then it is uncomfortably apparent that we need to look for other ways to discriminate between all the things that influence learning outcomes.
Because if everything works, then everything
can be and is defended, and we have no professional yardstick to tell us what
to do next, or how to improve the effectiveness of our practice.
I reckon we are especially vulnerable to this “everything works” effect in the ICT_PD Cluster programme, because we are asking teachers to change their existing practice, and as Hattie shows, just being involved in an innovation that alters an existing practice can lead to improvements in student learning outcomes.
“the mere involvement in asking questions about the
effectiveness of any innovation may lead to an inflation of the results”
Hattie uses this to argue that we should be looking for an effect size of d=0.4 or greater when deciding where to put our energies in teaching and learning, and when deciding on the strategies that best influence our effectiveness. Noting that effect sizes may not be uniform across all students, Hattie suggests that “effects lower than d=0.4 indicate the need for more consideration (costs, interaction, facts and so on)”.
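For anyone who, like me, needed to look up what these d values actually measure: Hattie's effect size is essentially Cohen's d, the difference between two group means divided by a pooled standard deviation. A minimal sketch in Python, with invented class scores (the numbers below are mine for illustration, not Hattie's data):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Effect size: difference in group means over the pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    pooled_sd = (((n_t - 1) * stdev(treatment) ** 2 +
                  (n_c - 1) * stdev(control) ** 2) / (n_t + n_c - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Invented scores, for illustration only
with_intervention = [62, 58, 71, 66, 60, 69]
without_intervention = [61, 55, 69, 64, 58, 66]

print(f"d = {cohens_d(with_intervention, without_intervention):.2f}")
# prints d = 0.42 with these made-up numbers - just over Hattie's hinge point
```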
Look at the effect sizes associated with ICTs and teaching and learning:
- Computer assisted instruction: d=0.37
- Web-based learning: d=0.18
- Interactive video methods: d=0.52
- Audio/visual methods: d=0.22
- Simulations: d=0.33
- Programmed instruction: d=0.24

From Hattie, J. (2009) Visible Learning, pp. 220-232.
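Just to make that comparison mechanical: a throwaway Python sketch (the effect sizes are from Hattie's table above; the code and the sorting are mine) that checks each ICT influence against the d=0.4 hinge point:

```python
# Effect sizes from Hattie (2009), Visible Learning, pp. 220-232
HINGE = 0.4  # Hattie's suggested threshold for "worth our energies"

ict_effects = {
    "Computer assisted instruction": 0.37,
    "Web-based learning": 0.18,
    "Interactive video methods": 0.52,
    "Audio/visual methods": 0.22,
    "Simulations": 0.33,
    "Programmed instruction": 0.24,
}

for influence, d in sorted(ict_effects.items(), key=lambda kv: -kv[1]):
    verdict = "clears the hinge" if d >= HINGE else "below the hinge"
    print(f"{influence:30s} d={d:.2f}  {verdict}")
```

Only interactive video methods clear the hinge; everything else on the list sits below it.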
I sense some great discussions ahead. Also worth thinking about is that the effect size for computer-based instruction has remained much the same for the last thirty years, between 1975 and 2007. This means that all the improvements in technologies, hardware, software, infrastructure, etc. that we have purchased and introduced to schools have made no significant improvement in the effect size on student learning outcomes. How will the paradigm shifters and marketers make sense of that one?
There are four questions that I particularly want to hang onto before I read more. Engelmann (cited in Hattie, 2009, p. 253) challenges teachers and schools to ask four critical questions about the innovations we are asked to adopt in school. The kicker is in the third question.
- Precisely where have you seen this practice installed so that it produces effective results?
- Precisely where have you trained teachers so they can uniformly perform within guidelines of this new system?
- Where is the data that show you have achieved performance that is superior to that achieved by successful programmes (not simply the administration's last unsuccessful attempt)?
- Where are your endorsements from historically successful teachers (those whose students outperform demographic predictions)?
When we are thinking about ICTs in teaching and learning they become:
- Precisely where have you seen teaching and learning through ICTs installed so that it produces effective results in enhancing student achievement?
- Precisely where have you trained teachers so they can uniformly perform within guidelines of this new system?
- Where is the data that show teaching and learning using ICTs achieves performance that is superior to that achieved by other successful strategies used in teaching and learning?
- Where are your endorsements from historically successful teachers – those whose students outperform demographic predictions for student achievement?
This might be a reasonable start with Hattie for those not ready yet to buy his book: https://www.det.nsw.edu.au/proflearn/docs/pdf/qt_hattie.pdf
Posted by: Bill Kerr | February 07, 2009 at 02:55 AM
Thanks Bill,
I thought of your programming work when I was reading the table that summarised the effect sizes for the major uses of computers in classrooms.
Check out the effect sizes in Table 10.8 (p. 224).
Hattie emphasises that the book is "An explanatory story, not a “what works” recipe."
Hattie spends time explaining the problems and criticisms of making meaning from meta-analysis, and the strengths and weaknesses of evidence-based practice. And after all the change rhetoric we have listened to in the past five years, his take on what it will take for change to happen, and why change doesn’t happen in education, is refreshing.
He allows contrary arguments space in his work and I like that.
Posted by: Artichoke | February 07, 2009 at 08:55 AM
Thanks Arti. Thanks Bill for the link.
You can read a bit of Visible Learning at Google Books but unfortunately not page 224.
At p. 221 Hattie says:
computers are used effectively
a) when there is a diversity of teaching strategies;
b) where there is a pretraining in the use of computers as a teaching and learning tool
c) when there are multiple opportunities for learning (eg deliberative practice, increasing time on task)
d) when the student not teacher is in "control" of learning
e) when peer learning is optimised and
f) when feedback is optimised
(d) and (e) are particularly relevant for game programming.
It is a pity not to be able to view p. 224 to see what is meant by programming, simulations and problem solving. Though being a meta^2-analysis there may not be much more to read. What of programming simulations and solving problems while doing it? There is too much detail hidden in broad categories.
Posted by: tonyforster.blogspot.com | February 07, 2009 at 01:55 PM
I will scan p. 224 for you Tony, but you do need to see all the pages, because Hattie is particularly alert to the problems of detail hidden in broad categories, different effects on different groups, and how effects can be combined ... that is why it is such a refreshing read.
Blogging about Visible Learning doesn't and probably cannot do it justice, because I am cherry-picking effects, which is contrary to the way his argument is developed.
Hattie also acknowledges that
"the limitations of many of the results in this book is that they are more related to the surface and deep knowing and less to conceptual understanding. p249
And that
"It is the case that in this book only meta-analyses have been given the privilege of being considered. A review of non-meta-analytic studies could lead to a richer and more nuanced statement of the evidence. I leave this to others to review in this manner, although I have tried to incorporate aspects of these other views in my own summaries of each area. The emerging methodology of qualitative synthesis promises to add a richness to our literature (Au, 2007; Thorne, Jensen, Kearney, Noblit, & Sandelowski, 2004). p 255
Posted by: Artichoke | February 07, 2009 at 02:19 PM
One problem is that computer labs and the way they are organised in schools are often not very good environments for the nuanced development of feedback: since the computer lab is an expensive, time-restricted resource, there is great pressure on the teacher to keep students on computer tasks and not interrupt those tasks for other matters. Also, the physical layout of some computer rooms is poor, especially those with computers in rows; the best layout is all computers around the walls. I would argue that if students had netbooks 24/7 then very different interactions could develop in computer use. So, perhaps Hattie's research confirms that the computer revolution hasn't happened yet?
Posted by: Bill Kerr | February 08, 2009 at 02:18 AM
Thanks Arti for helping me make some important links between Hattie's work and ICT. Like you say, some great discussions ahead - is there a forum for discussions focused on this research at Learning@School? I feel the need to get a group going on this so some serious 'fat-chewing' can take place.
Perhaps a starting point would be a discussion around meta-analysis?
Posted by: Rocky | February 15, 2009 at 09:41 PM
I'd enjoy wrestling with the ideas in Visible Learning with others, Rocky – for it is a densely textured book, and to blog about it compromises the ideas by simplifying them.
See if you can get hold of Chapter 2, The Nature of the Evidence, where Hattie explains what a synthesis of meta-analyses involves, the problems of meta-analyses, the distribution of effect sizes, and the hinge point. It provides great background.
But I suspect even with Chapter 2 as a shared reading we should entice a statistician to join us in case we need help to explain the tricky bits ...
In my experience, if we arrange to meet at the Pig and Whistle it shouldn't be too difficult to get those that can dance with the distribution of effect sizes, large sample sizes, and normal distributions to join us. They often like Learning@Pub conversations.
Posted by: Artichoke | February 15, 2009 at 10:26 PM
It is useful to distinguish the different directions and roles of feedback in the complex communication that goes on in teaching and learning situations. For example:
1. feedback can be about eliciting/noticing information from a student in order to behave differently or responsively, to better facilitate a learning goal;
2. another form of feedback from a teacher (or social worker or coach) directed toward the student is about guiding learning and inquiry (recognising learning opportunities and fruitful options to explore) in response to 1; while
3. feedback can also be purposed to "build relational trust" by helping a student recognise moments of resilience (in the face of disappointment), or tolerance of failure, and mastery (in relation to some goal achieved or problem resolved).
This latter form of feedback is under-emphasised in much social practice and yet it is the thing that gets social practitioners and children and young people up in the morning - ready to go with renewed excitement and diminished fear.
The effect size arguments around this stuff make for simplistic linear retrospective explanations. The analogy in another domain is that conventional science is only now giving some grudging credence to the power and importance of the "placebo effect": the psycho-emotional determinants of health, which are frequently unconscious but often predictable. This is a case of academia catching up with centuries of heuristics.
Going back to education, you can get (good enough) "student achievement" co-existing with relatively low levels of self-belief / self-efficacy amongst students. This scenario is not a good [resilient / adaptive] educational result.
It begs the question of what model of human development should underpin our teaching and learning. If we are truly to avoid treating Hattie's explanations as recipes or checklists, "what works" should be viewed in this different kind of evaluative light.
Posted by: Geoff Stone | March 18, 2009 at 05:33 PM
Hi Geoff,
Thanks for the analysis and apologies for the time I have taken to post a reply.
My criteria for accepting new work have proven to be flawed – it is not sufficient to ask whether the work offers new ideas and dangers that might unstitch me ... I must also learn to ask whether accepting the new challenge will allow me the time to continue to play with Arti’...
Working in schools in high country landscapes where steers need mustering, Perendales need crutching and drenching, wild mushrooms need gathering and sorting for maggots, pig fodder needs transporting, black orpingtons need feeding, sheep dogs need instructing, horses need shoeing, petrol use needs monitoring and Speights needs drinking takes a lot of energetic observation.
To follow up the high country station experience with a visit to the lifestyle block fringes of Palmerston North, and then to finish this with a cross-country rental drive ending in the total indulgence that only Craig and Te Horo Lodge can provide, is to risk a serious cognitive unravelling.
Still, the delay has allowed me to play with the idea of feedback in an educational context. I think you are absolutely “onto it” when you offer descriptors that allow us to distinguish different forms and functions of feedback, and urge caution about oversimplistic analysis of causality.
To offer an analogy in another domain – many of the primary schools I work with study the learning area of science – living world – ecology and life processes through “mini beasts”. Once you get over the pejorative notions in adopting a term like “beasts” for invertebrates, I guess we have to argue that it is possible to make “good enough” assumptions and generalisations about the nutrition, respiration, excretion, movement, sensitivity, reproduction, and growth of minibeasts found in the playgrounds of schools. We have to argue that it is possible to study the habitats of minibeasts and the effects of human- and naturally-induced changes on the habitats and the minibeasts that live there.
However, just like simplified notions of feedback in classrooms, blurring the invertebrates in this way – mixing the joint-legged arthropods: insects (beetles, butterflies, ants), arachnids (spiders, mites, scorpions), myriapods (centipedes and millipedes), and crustaceans (crabs, lobsters, slaters), with the moist, soft and slimy molluscs – all those snails, slugs, mussels, pipi, cockles, and octopi – with the segmented and sometimes hermaphrodite annelids (earthworms, leeches), with the radially symmetrical, hydraulically skeletonised echinoderms (starfish, sea eggs, sea cucumbers), and the hollow-insided coelenterates (sea anemones, jellyfish) – is surely to betray our understanding.
I will admit that I have always been rather taken by mechanistic understandings of feedback – and it has taken me years to understand that when people asked for feedback they were not asking me what I thought but rather what I thought they could bear to hear and were likely to act upon.
It seems from what I read that biological systems, electrical engineering designs, and economic models which include feedback loops are prone to hunting: an oscillation of output that results when a system responds to positive and then negative feedback. With mechanical devices, hunting can in some circumstances destroy the device.
All this makes me wonder if it is possible that our relentless focus on feedback in classrooms might introduce a vulnerability in this regard?
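To make the hunting idea concrete, here is a toy model in Python - entirely my own invention, nothing from Hattie or the BES - of a proportional feedback loop. A gentle correction settles on its target; an over-eager one overshoots further with every step, which is the oscillation the engineers call hunting:

```python
def run_feedback_loop(gain, target=10.0, start=0.0, steps=8):
    """Each step, adjust the state by gain * (target - state).

    A modest gain converges smoothly on the target; a gain above 2
    overshoots further with every correction - hunting."""
    state = start
    history = [round(state, 2)]
    for _ in range(steps):
        state += gain * (target - state)
        history.append(round(state, 2))
    return history

print("gentle feedback:", run_feedback_loop(gain=0.5))   # settles near 10
print("over-corrected: ", run_feedback_loop(gain=2.5))   # swings ever wider
```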
In this case our thinking about feedback in classrooms may be less concerned with
1. Feedback eliciting/ noticing information
2. Feedback about guiding learning
3. Feedback to "build relational trust"
and more concerned with guarding against oscillation and spurning vacillation.
As teachers we might ask ourselves ...
And all this overthinking about types of feedback tempts me to adopt a simpler understanding altogether ... perhaps instead of interrogating forms of feedback we should be seeking conversation and friendship ... we should just talk with one another ... is this a model of human development?
Illich always does this best ...
Posted by: Artichoke | March 23, 2009 at 12:14 AM
I have been reading Professor Hattie's book, Visible Learning, and it has been very useful for my work as a teacher. He does not explain in the book influence n.º 6 - Classroom behavioural (p. 74, Table 6.1). Can you clarify for me what that means? Thank you. José Lopes, PORTUGAL
Posted by: José Lopes | April 03, 2009 at 12:03 AM
Hi Jose,
Thank you for commenting on this post - and apologies for only now getting back to the reply - it has been a really busy school term.
Your comment managed to duplicate itself several times so I have deleted a couple - like you, I am reading and re-reading Hattie, and I keep finding things that help me better understand teaching and learning.
Regarding n.º 6 - Classroom behavioural (p. 74, Table 6.1) - Hattie explains this on page 103 under the subheading Climate of the classroom: group cohesion.
Classroom behaviour is any behaviour taking place in a classroom that either supports or interferes with the capability of students to learn the tasks and skills needed to achieve educationally.
Hattie focuses on cohesion, but he also cites research identifying that goal-directedness, positive interpersonal relationships, and social support behaviours optimise student learning, while behaviours associated with friction, cliquishness, apathy, and disorganisation were negatively associated with learning outcomes.
Posted by: Artichoke | April 16, 2009 at 09:50 PM
Hi Artie
Got to your post on visible learning through Rocky. Your comments, and those of others, have firstly motivated me to get hold of this book as soon as possible, and secondly have really challenged my thinking on effective teaching and effective use of ICT. My head is spinning.
Posted by: Conor | April 17, 2009 at 03:21 PM
Hi Conor,
My head is still spinning - I like nothing better than to have something I hold as true shaken up a little - how else would I know I was alive?
I loved Visible Learning because it undermines, in the best possible way, so much of what we do in the day job, and yet at the same time it doesn't pretend to provide the solution ...
Unlike many educators Hattie acknowledges that his writing is speculative ... and that is so refreshing.
And thanks for commenting - I'll be interested to hear what you make of the text itself - the internet allows all views to be broadcast and other people's blog interpretations are always a little dodgy - for instance many of mine need to be taken with several large bags of salt.
Posted by: Artichoke | April 17, 2009 at 04:11 PM
Thank you for your explanations. José Lopes
Posted by: José Lopes | April 17, 2009 at 11:32 PM
An interesting post indeed. I found it particularly fascinating how the concept of visible learning appears to be dominated by the importance of both the teacher's and the student's perceptions of what is being learned and whether the process is effective or not. While the influence of a teacher on a student population will definitely be evidenced in the students' performance, the more collaborative style of learning, which nearly equates a teacher's realisations with a student's, should be even more pronounced.
Posted by: B. Goode | May 15, 2012 at 11:45 AM