What kind of “assessment for learning” is appropriate in the age of Google and Wikipedia? Facebook and YouTube? Smartphones and text messaging? Twitter and blogging? (after Manovich on Soft Cinema).
In December each year futurists read the e-marketplace like goat entrails and share the “tech trends most likely” with the rest of us.
The assumption wedged underneath each forecast is “if stuff changes so should we” - that sense of “you can’t step into the same river twice” thing – first raised by Heraclitus 2,500 years ago.
The underlying message in the goat entrails remains the same each year – if you didn’t know better you’d swear it was the same goat - we learn that doing school with young people is taking place in a “runaway world” - a “here and now” world – a world where changing technologies mean migration, economics, politics, ecosystems, climate - everything is changing – so should school.
As a consequence of all these predicted trends of technological change, school – all that teaching and learning – risks irrelevance if pedagogy doesn’t adapt to [insert the latest temporary and ever transient technology] that schools/students will be purchasing/using (this year we are looking at mobile technologies, cloud computing, augmented reality, open content, electronic books etc).
The e-visionaries and thought leaders claim that because of [insert your preferred technological trend here] the actors (teachers and learners), the play (teaching and learning) and the performance space (school) have changed.
Yeeha - this is the season when the what, the how, the where, and the when of teaching and learning are up for critical review (once again, or some things don’t change).
Determining what kind of “assessment for learning” is appropriate is quite a challenge with all this “are we, aren’t we”, “shall we, shan’t we”, “should we, could we” going on.
Forced into pedagogical promiscuity by the futurists, I spend each festive season wrestling over what kind of “teaching”, “learning” and “assessment for learning” is appropriate in a school where [insert the latest divination - mobile technologies, cloud computing, augmented reality, open content, electronic books etc] are ubiquitous.
I do this whilst all the time knowing that “the better question to ask” is always the one that focuses on the culture (what people do to belong) rather than the technology.
Thinking about “assessment for learning” in a culture of consumerism – where students are consumers and commodities
What kind of “assessment for learning” is appropriate in a consumer culture? In a culture where data mining and consumer profiling make our every activity something to be bought, sold and traded? In a culture where our preferred pleasures are principally made available through the marketplace (personal shopping, dining out, game playing and video watching)? In a culture where all aspects of our lives (including teaching and learning) are commoditised?
When students are both consumer and commodity, schools are charged with setting targets and meeting standards, to make the product of “providing school” visible. An abundance of measures and instruments are developed to determine if what is consumed is adding value to student learning outcomes, student engagement, student enrolment, student retention, and teacher professionalism.
An abundance of arguments result – many proposals address accountability, very few address context. Some of the best new thinking about “assessment for learning” in this regard is in the ideas and open-source applications coming out of Salman Khan’s latest initiative at the Khan Academy: free self-paced applications for math offering personalised assessment for learning with “where to next” advice.
My not-for-profit Khan Academy, which has recently gotten support from Google and the Bill & Melinda Gates Foundation, has a free, self-paced application that generates exercises for students dynamically. It is being developed as an open-source project and is already used by several tens of thousands of students. Over the next two years, we intend to have unlimited exercises covering every major math concept through calculus, and then to continue even further.
We collect data on when a student does a problem, how long it takes, and what happens before and after a video is viewed. And we can present this information in real time to the student, professor, parent, or administrator. We provide immediate feedback on proficiency and give step-by-step explanations of every problem. Most important, students continue to get exercises until they correctly answer 10 questions in a row (not once they have answered 70 or 80 or 90 percent of the questions correctly).
We are developing other applications to create repositories of teacher- and student-generated questions (on which we would collect the same metadata and ratings on quality, difficulty, and importance). Between these apps and our 2,000-plus and growing on-demand video library, which is being used by more than a million students a month, there is a genuine opportunity for educational institutions to rethink the system so that it is both more effective and more economical. We have decided to do this as a not-for-profit venture, so that our goal of optimizing learning never conflicts with profit maximization (which leads to the type of behaviour we see in publishers). (Khan, 2010)
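Khan’s streak-based proficiency rule (keep serving exercises until the last ten answers in a row are correct, rather than stopping once 70, 80 or 90 percent are correct overall) is simple enough to sketch. The function name and data shape below are my own illustration, not Khan Academy’s actual code:

```python
def mastered(results, streak_required=10):
    """Return True once the most recent `streak_required` answers are all correct.

    `results` is a list of booleans, one per attempted exercise, in order.
    This models the streak rule Khan describes: overall percentage is
    irrelevant; only an unbroken run of correct answers counts.
    """
    if len(results) < streak_required:
        return False
    return all(results[-streak_required:])
```

The point of the rule is visible in the edge cases: a student with a high overall percentage whose last ten answers include a single mistake is still served more exercises, while one wrong answer early on is forgiven as soon as ten correct answers follow it.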
Thinking about “assessment for learning” in a culture of participation: students as collaborators and producers.
What kind of “assessment for learning” is appropriate in a world where traditional media converging with digital, interactive and social media is creating a new participatory culture? What kind of “assessment for learning” is appropriate for a digital media culture where students are not only “consumers” and “commodities” but are also “producers” of learning?
To think about “assessment for learning” in this participatory cultural context requires a couple of leaps of faith. The first occurs when we accept the claim that a significant proportion of students ARE collaborating to produce new content. This is a difficult leap to make: as a number of commentators who use statistics uncontaminated by the din of the marketplace have noted, the claim is not necessarily warranted (refer the myth of user-generated content).
Educational reformers suggest that the advent of new technologies will radically transform what people learn, how they learn, and where they learn, yet studies of diverse learners’ use of new media cast doubt on the speed and extent of change. Warschauer (2007)
It seems that, much as Twitter was hailed as a force for democracy even though analysis showed that “Twitter users at the time of the revolution made up 0.082% of Internet users in Iran” (Leadbeater 2010), the participatory producing thing may be an espoused outcome rather than an actual outcome for young people.
Be that as it may, “blogetic license” allows me to imagine the changes I might hope to see in “assessment for learning” if a majority of students were actively involved in a participatory culture where they used “images, photographs, video, animation, music, sound, texts and typography” to “interpret, design and create content”, and a majority of their teachers understood … “frame composition; colour palette; audio, image, and video editing techniques; sound–text–image relations; the effects of typography; transitional effects; navigation and interface construction; and generic conventions in diverse media.”
How should we think about “assessment for learning” when students collaborate to take part in participatory culture as producers of knowledge?
We could focus here on the challenges of giving feedback on collaborative or group projects. However, the thinking around determining relative individual contribution to a collaborative project has already been well teased out in the context of old media. Many commentators settle on individual reflective reports that explain how each individual contribution enhances the project outcome as a whole. It is more interesting to think about how assessment for learning changes when the collaborative or group project produces new knowledge using new media.
The first difference is that when young people use new media to collaborate and to produce knowledge, expertise and authority no longer reside with teachers and authority figures in schools.
The MacArthur Foundation report “Living and Learning with New Media” Nov 2008 (pdf) reveals that in collaborating to produce new knowledge students rely on their peers (peer based sharing and feedback) more than they do authority figures. No surprises here then.
Critique and feedback can take many forms, including posted comments on a site that displays works, private message exchanges, offers to collaborate, invitations to join other creators’ social groups, and promotion from other members of an interest-oriented group. “Living and Learning with New Media” Nov 2008 p35
However, when peers do provide feedback that might count as “assessment for learning” it is not met with uncritical acceptance as might happen with feedback from an authority figure in school.
The mechanisms for getting input on one’s work and performance can vary from ongoing exchange on online chat and forums to more formal forms of rankings, critiques, and competition. Unlike what young people experience in school, where they are graded by a teacher in a position of authority, feedback in interest-driven groups is from peers and audiences who have a personal interest in their work and opinions. Among fellow creators and community members, the context is one of peer-based reciprocity, where participants can gain status and reputation but do not hold evaluative authority over one another. “Living and Learning with New Media” Nov 2008 p35
It seems student producers of new content are discerning about the value of the feedback from peers.
So what kind of “assessment for learning” is appropriate for outcomes in a participatory culture when anecdotal comments from students suggest that they do not necessarily find review by their peers valuable?
For example, blog commenting by other students may no longer be appropriate “assessment for learning” in a participatory culture if student bloggers have wised up to the endless astro-turfing of their blog posts with comments solicited by their teachers – they want discerning and authentic feedback as “assessment for learning”.
Study participants did not value simple five-star rating schemes as mechanisms for improving their craft, although they considered them useful in boosting ranking and visibility. Fansubbers generally thought that their audience had little understanding of what constituted a quality fansub and would take seriously only the evaluation of fellow producers. Similarly, AMV creators play down rankings and competition results based on “viewer’s choice.” The perception among creators is that many videos win if they use popular anime as source material, regardless of the merits of the editing. Fan fiction writers also felt that the general readership, while often providing encouragement, offered little in the way of substantive feedback. “Living and Learning with New Media” Nov 2008 P35
The second, and to my mind more significant, difference when students collaborate online to produce new knowledge lies in how we interpret “peers”. In online participatory culture a peer is a co-conspirator and as such may well be an adult – just not an adult with conventional institutional authority, as occurs in the F2F culture of school.
In contexts of peer-based learning, adults can still have an important role to play, though it is not a conventionally authoritative one. In friendship-driven practices, direct adult participation is often unwelcome, but in interest-driven groups we found a much stronger role for more experienced participants to play. Unlike instructors in formal educational settings, however, these adults are passionate hobbyists and creators, and youth see them as experienced peers, not as people who have authority over them. These adults exert tremendous influence in setting communal norms and what educators might call “learning goals,” though they do not have direct authority over newcomers.” “Living and Learning with New Media” Nov 2008 P43
The kind of “assessment for learning” appropriate for students in a participatory culture is assessment feedback where adults (with considerable expertise and experience but without direct authority) set “learning goals” for students. It is kind of like the flip of most classroom teachers, who have direct institutional authority over students but often lack work experience in the creative/productive sector, having never really left the institution of school – gone straight from school to university to teacher training and back to school.
A third challenge arises when you hear educators suggesting that web metrics – online visitor stats – provide feedback for learning for their students. Do web metrics provide appropriate feedback for learning for students creating outcomes online in a participatory culture? The short answer is “probably not”.
I have referred before on Artichoke to Seb Chan’s thinking about web metrics in the context of feedback for museums - Seb Chan’s analysis reveals that traditional Web analytics and metrics are inadequate in terms of the feedback they provide.
Each measure a school might suggest as useful feedback has validity and reliability errors – aka The Problem With Log File Data, The Problem With Page Tagging Data, The Problem With 'Unique Visitors', The Problem With 'Visits' And 'Time Spent On Site', and The Problem With 'Page Views'.
Even the number of visitors who click on an interactive such as a student-created video talk, or download a student-created podcast, is exposed as dodgy “assessment for learning” when a more detailed analysis shows that so few visitors watch the whole video or listen to the whole podcast. Chan suggests an alternative that might well prove useful for students trying to find a metric suited to “assessment for learning”.
“In many ways the best measure of the success of a podcast is how much feedback and discussion it generates. This is far more valuable than the total number of downloads”.
Chan argues for third-party web metric measures of visitor behaviour using RSS feed tracking; comments on the museum website but also on other blog posts and comments; tagging and comments on museum content on Flickr Commons photos and how these are used in other conversations in communities and blogs; Trackback; and Facebook friends, fans and profile comments. All of these give a better indication of the success of museums, exhibits and events than the number of visitors/page views. It seems likely they will also provide better “assessment for learning” for collaborative student-created outcomes and knowledge production.
He refers to “measures of recommendation” – “how likely is it that you would recommend [the company/experience] to a friend or a colleague?” – a broader take on the Net Promoter Score idea. He suggests that recommendation (and hence allowing recommendation and sharing) is how we should understand the way people interact with museums. Perhaps this is also the way students could assess their collaborative learning outcomes online. How likely is it that “experienced peers” would recommend your student-created outcomes to others?
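The Net Promoter Score behind Chan’s “measures of recommendation” question has a standard published formula: the percentage of promoters (those who rate 9–10 on the 0–10 “would you recommend” question) minus the percentage of detractors (those who rate 0–6), with passives (7–8) ignored. A minimal sketch, with function and variable names of my own choosing rather than anything from Chan’s analysis:

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-10 'would you recommend' ratings.

    Promoters rate 9-10, detractors rate 0-6, passives (7-8) count
    only in the denominator. The result ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)
```

So four ratings of 10, 9, 8 and 6 give two promoters, one passive and one detractor: (2 − 1) / 4 × 100 = 25. A class could apply the same arithmetic to “experienced peer” recommendation ratings of student-created work.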
Thinking about “assessment for learning” in the culture of language, symbols and texts.
It is easy to get caught up in change rhetoric in education – in the demand to embrace the new literacies (digital literacies, multiliteracies, twenty-first-century literacies) so that students are not left behind. In focusing on the “culture of consumerism” and the “culture of participation” it is easy to neglect the culture of the text.
I think Jenkins says it best in Confronting the Challenges of Participatory Culture: Media Education for the 21st Century pdf
Much writing about twenty-first century literacies seems to assume that communicating through visual, digital, or audiovisual media will displace reading and writing. We fundamentally disagree. Before students can engage with the new participatory culture, they must be able to read and write. Jenkins, p19
Warschauer (2007) makes some important points in this regard in “The Paradoxical Future of Digital Learning” pdf – the first being that our need for people versed in the new literacies is making the need for traditional literacies even more important.
He argues that … “For example the development of a computer based international economy has brought about the loss of millions of manufacturing, mining and agricultural jobs in the US that demanded little or no literacy, whilst creating in their place large numbers of office jobs requiring substantial amounts of reading and writing.”
His second point – that successful entry into the world of new literacies requires competence in the traditional literacies – is often neglected when we enthuse about student engagement with the multiliteracies in schools.
Here Warschauer cites a qualitative study by Attewell and Winston (2003) of two groups of 11–14-year-old children in New York City as they make use of computers and the Internet.
One group consisted of school children from more affluent families who attend private schools. The group exhibited high degrees of both information and multimedia literacy. For example, a typical fourth grade student posted messages to bulletin boards, read political candidates speeches online, answered online polls to make his opinions heard, and even developed a Website so that his school could carry out its own class president elections online.
The second group consisted of African-American and Hispanic children from poor and working class families who scored below grade level in tests of reading. The limited reading ability of those children virtually eliminated their possibility of exercising information literacy. And multimedia for them became a crutch to avoid use of texts rather than a means to further expand their knowledge.
The authors illustrate through the example of Kadesha, who spends ample time surfing the Web for pictures of rappers and wrestlers or advertisements for hot new sneakers or Barbie dolls, but ‘‘as image after image flashes by...Kadesha rarely settles on printed text.’’ In an after-school enrichment program, Kadesha was encouraged to research a future career, but stopped in frustration after she could not spell ‘‘bakery’’ correctly, while her classmates similarly stumbled on ‘‘burger’’ and ‘‘pediatrician.’’ Warschauer (2007)
Finding that without traditional literacies “multimedia can become a crutch to avoid use of texts rather than a means to further expand their knowledge” has high significance for educators asking what kind of “assessment for learning” is appropriate in the age of Google and Wikipedia. Facebook and YouTube? Smartphones and text messaging? Twitter and blogging?
Thinking about “assessment for learning” in the culture of Google and Wikipedia.
Finally there is the notion that in the age of Google and Wikipedia we live in a time where “knowledge is a verb not a noun”. Educators of all persuasions (including those in our MoE) are wont to claim that the instant availability of information online makes the memorization of facts unnecessary, or at least less necessary.
Mary Chamberlain, overseeing the project for the Education Ministry, says that although people are "rattled" by the changes, "there's no use (students) being little knowledge banks walking around on legs.
"We've got computers, we don't need people walking around with them in their heads... People just have to get used to that." In Curriculum change shifts emphasis from 'what' to 'how' Stuff September 2007
What kind of “assessment for learning” is appropriate for an age where “we’ve got computers”? Is assessment for learning of declarative knowledge inappropriate in an age where “we’ve got computers”? Is it better that “assessment for learning” focuses on functioning knowledge? On the processes of critical thinking and questioning for validity and reliability?
Larry Sanger (2010), in “Individual Knowledge in the Internet Age”, presents a compelling argument against rejecting content knowledge and factual knowledge. I enjoyed the whole opinion piece, but this excerpt is relevant here:
But this argument seems fallacious. It implies that the new information has either replaced or made trivial the old information. And this is obviously not so in most subjects. Think of all the things typically taught in primary schools: reading, writing, mathematics, basic science. How much of this has changed in the last one hundred years? Even granting that some of our understanding, especially in more advanced education, has been replaced (as in nuclear physics and geography) or refined (as in biology and history), the vast body of essential facts that undergird any sophisticated understanding of the way the world works does not change rapidly. This is as true in biology and medicine, fields with stunning recent advances, as it is in mathematics and philosophy. And to return to my point, unless one learns the basics in those fields, Googling a question will merely allow one to parrot an answer — not to understand it.
It also won't do to make the facile reply that there is no such thing as "the basics." The basics can be understood as what is commonly taught in introductory courses or what commonly appears in introductory textbooks. Granted, there are some (new) specialized fields in which there are relatively few basics that everyone is taught — I am thinking of knowledge management, computer programming, and social media. But in most fields, there is certainly a body of core knowledge.
To possess a substantial understanding of a field requires not just memorizing the facts and figures that are used by everyone in the field but also practicing, using, and internalizing those basics. To return to my "glib" argument, surely the only way to begin to know something is to have memorized it.
Attewell and Winston’s (2003) research on two groups of 11–14-year-old children in New York City as they made use of computers and the Internet supports Sanger’s contention.
Conventional wisdom is that students need knowledge of how to search rather than mastery of basic facts. However, for Kadesha and her classmates, ignorance of basic facts restricts their ability to search. One of her classmates, for example, had difficulty searching for the mayor of New York due to lack of understanding as to whether Buffalo was part of New York City or New York state. Warschauer (2007) P43
It seems that exposure to the multiliteracies most advantages those who are already advantaged.
There is a lot more thinking needed here – but it seems plausible that thinking critically about what kind of “assessment for learning” is appropriate in the age of [insert your preferred descriptor] is useful thinking. It may protect us (and our students) from futurist-induced pedagogical promiscuity next year – by preventing the indiscriminate adoption of too many different pedagogical approaches.