Claims for the transformative effects of e-learning on student learning outcomes imply changes in the nature of learning when learning is mediated by technology. If it can be shown that the nature of learning changes in a distinctive way when learning is mediated by technology (Andrews 2011) then it seems plausible that the evidence for learning might also change. This paper explores how "assessment for learning" might change in a digital culture where students are "collaborative producers" of learning. It identifies some distinctive changes in the nature of the evidence for learning when learning is mediated by technology and asks whether these changes in evidence need a new approach and/or theory of assessment.
e-Learning remains a speculative field. The many claims made about (and for) e-learning reshaping learning are too often based on studies using small sample sizes and descriptive narrative (rather than systematic observation and analysis). Their usefulness is limited by methodological and conceptual shortcomings. As Wright (2010) summarises in her e-Learning literature review for the Ministry of Education, “there is an international doxa about e-Learning’s inherent benefits to learners. It masks a relatively small amount of actual evidence about its relationship to improved educational and life chances for students.” Part of the problem is undoubtedly that “we approach our technologies through a battery of advertising and media narratives; it is hard to think above the din” (Turkle 2008 p4).
Haythornthwaite and Andrews (2011) suggest we lack a theoretical perspective on e-learning. They ask “whether e-learning requires a new theory of learning; or whether it requires merely an extension and application of contemporary learning theories”. A similar question can be asked about “assessment for e-learning”.
To ask what kind of "assessment for learning" is appropriate in the age of Google and Wikipedia, Facebook and YouTube, smart phones and text messaging, and Twitter and blogging is to make a number of assumptions.
Assumption 1: The character of learning changes when learning involves technology (as in e-learning).
Assumption 2: This change is distinctive, in that when learning is mediated by technology the complexity of the interaction can be consistently, reliably and validly profiled in a way that differs from the way we profile conventional learning.
Assumption 3: The evidence for learning changes when learning involves technology (as in e-learning).
Assumption 4: This change is distinctive, in that when learning is mediated by technology (in e-learning) the evidence for learning differs from other evidence for learning in consistent, reliable and valid ways.
Assumption 5: The distinctiveness of the change in evidence for learning requires a re-imagining of the ways we might do and/or imagine “assessment for learning”.
The alternative is to suggest that e-learning and e-assessment merely replicate old pedagogies in different spaces or through different social interactions.
Does the character of learning change when learning involves technology?
It could be argued that e-Learning utilises the characteristics and affordances of the internet to create distinctive environments and interactions to support learning. For example, the non-linear architecture of the internet (a network of nodes and internodes) provides different opportunities and structures for learning, communication, collaboration and co-construction than those available face-to-face. Opportunities for learning can occur: with anyone, at any time and in any place; through one to one, one to many, or many to many interactions; through push and pull; synchronously and asynchronously; using multi-literacies or multiple modalities (including text, graphic, audio, video, animation, etc.) and within open and flexible access systems. Andrews (2011) argues that if learning is socially situated then e-learning extends “the horizons of e-learning in space, resource and time.” He suggests that a new theory of learning is developing because in e-learning the relationship between knowledge and the learner becomes “more democratic, more potentially dialogical”; transduction (the creation of observable evidence of learning) is easier; and the access and use of learning “according to socio-economic, geographic, cognitive and motivational factors” is stretched further (Andrews, 2011 p119).
Does the evidence for learning change when learning is mediated by technology?
The role of assessment and feedback is critical to learning (Black and Wiliam, 2001; Nicol and Macfarlane-Dick, 2006; Boud and Associates, 2010). It seems plausible that learning in a community and environment that uses Google and Wikipedia, Facebook and YouTube, smart phones and text messaging, and Twitter and blogging may leave different traces, and allow different kinds of useful “evidence of learning” and “assessment for learning” to emerge. If it does, we ought to be alert to the use of technology in learning in schools and take notice of the forms of evidence available. We need to critique the different evidence of learning that emerges in terms of its ability to enhance the quality, timeliness and variety of feedback provided for learners – and to assess how well the different evidence meets the accepted principles of effective feedback. Where it can be shown that this evidence enhances conventional feedback and feed forward we should be proactive in planning for its use.
The JISC report (2011) on “Effective Assessment in a Digital Age” suggests that when looking for evidence of learning, some educators will be looking for evidence of increasing competence: enhanced understanding of ideas at the micro-level and completion of tasks at the macro-level. Feedback in these cases will be provided by experts on the strengths and weaknesses of new understandings. Others will create environments for experimentation, discovery and student inquiry and seek evidence for student collaboration, cooperation and sharing of ideas. They will anticipate learner involvement in the nature of the assessment task. Feedback will be self-generated, arising from reflection and self-assessment, and/or peer-based, arising from collaborative activities and dialogue. Yet another group will prefer that students learn in specific and authentic communities of practice. They will look for evidence in environments that simulate professional practice – think and act like a historian, a scientist, or a playground designer. These educators will look for holistic assessments where feedback is socially produced (from multiple sources) and derives from authentic real-life tasks (JISC 2011 p11).
It is worth noting that the multimodalities available in e-learning support these various views of what constitutes “evidence for learning”. This broader view of what constitutes “evidence for learning” is taken by the New Zealand Curriculum (NZC) (MOE 2007), the Best Evidence Synthesis (BES) for teachers’ professional learning and development (Timperley, Wilson, Barrar, & Fung, 2007) and the BES for diverse learners (Alton-Lee, 2003). The NZC extends desired learning outcomes to include the social, cultural and economic. It includes learning outcomes from the Key Competencies (“capabilities for living and lifelong learning”) and Values (“to be encouraged, modelled, and explored” so that they might be “expressed in everyday actions and interactions within the school”) (MOE 2007). The BES on teachers’ professional learning and development identifies “academic, social, personal or performance” learning outcomes (Timperley, Wilson, Barrar, & Fung, 2007 p18). The BES for diverse learners identifies learning outcomes in learning areas, skills, and values such as “the development of respect for others, tolerance (rangimārie), non-racist behaviour, fairness, caring or compassion (aroha), diligence and hospitality or generosity (manaakitanga)" (Alton-Lee, 2003 p7).
The key question is - What evidence of student learning is available through e-learning that is not available through conventional learning?
In accommodating the different perspectives on learning and approaches to assessment for learning it is useful to imagine a learning experience and think about how opportunities for assessment might change when students become “collaborative producers” of learning through computer mediated technologies.
Sample Learning Experience: Hawera Primary students are collaborating to explore the natural features of Taranaki, intent on creating a safe travel resource for visitors to the area. The senior classes (Year 4 to 6) are each focusing on one area – Rivers and Dams, Gas and Oilfields, Mount Taranaki, and Beaches and Coastline. Each class is to become the expert on these man-made and natural resources, sharing their findings back to the wider group, and then heading off for a two-day geological adventure looking at safe travel aspects along the way. Students will visit the Patea Museum, where the EOTC officer will share a programme around rivers and dams. That night they will take part in an overnight camp in the school hall before heading off around the mountain, taking in the Kupe gas field, coastline, beaches, lahars, and Puke Ariki Museum. They will have an opportunity to climb up the mountain and take part in a bush walk, before heading back to school where they will build a safe travel resource guide for the Taranaki area.
Some of the different opportunities and structures for “assessment of e-learning” are characterised below. An example is provided to show how each might allow evidence for e-learning that differs from that available through conventional assessment.
1. Access to the assessment task - any place, any time, with anyone.
Self-assessment is made accessible, immediate and easily shared with others, occurring: with anyone, at any time and in any place; through one to one, one to many, or many to many interactions; through push and pull; synchronously and asynchronously; using multi-literacies or multiple modalities (including text, graphic, audio, video, animation, etc.) within open and flexible access systems.
The introduction of multiple literacies and modalities creates a challenge for teachers. When students use “images, photographs, video, animation, music, sound, texts and typography” to “interpret, design and create content” they need feedback from people who understand “frame composition; colour palette; audio, image, and video editing techniques; sound–text–image relations; the effects of typography; transitional effects; navigation and interface construction; and generic conventions in diverse media.” Teachers are not necessarily able to provide these understandings. This means that when young people use new media to collaborate (and to produce knowledge), expertise and authority no longer reside solely with teachers in schools. Feedback conversations can occur with anyone in any place. In providing expertise without authority, e-learning can create social networks that bypass the teacher altogether.
Discussion prompt: How might this flexibility of access change the nature of the “evidence for learning” provided by Hawera Primary students?
2. Immediate feedback
Feedback on how well the student is meeting an intended learning outcome is immediate and continuous when the learning intention is made explicit and the learning outcome is hosted online in a blog, wiki, video or other collaborative space where editing and/or commenting is enabled. Interactive online resources and multimedia created for an identified purpose and audience can receive immediate feedback on their usefulness from web metrics and from visitor comments.
Feedback can also be immediate and interactive when students access online mastery tests like those available through the Khan Academy http://www.khanacademy.org/. The Khan Academy provides an interesting example of how assessment can be re-imagined in online environments. A free, self-paced and open-source application, the Khan Academy generates exercises dynamically and provides immediate feedback on proficiency. It provides step-by-step explanations and requires the successful completion of ten questions in a row for mastery. The application can provide data on when the application was accessed, how long it was accessed for and what content students studied before and after the assessment. This information can be made available to student, teacher, parent and/or administrator.
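The streak-based mastery rule described above can be sketched in a few lines of code. This is a hypothetical illustration of the logic, not the Khan Academy's actual implementation; the threshold of ten consecutive correct answers is taken from the description above, and any incorrect answer is assumed to reset the streak.

```python
# Hypothetical sketch of a streak-based mastery rule: a learner reaches
# mastery after ten consecutive correct answers; a wrong answer resets
# the streak. Not the Khan Academy's actual code.

MASTERY_STREAK = 10  # threshold taken from the description above


def mastery_reached(responses):
    """responses: iterable of booleans, True meaning a correct answer.

    Returns True as soon as a run of MASTERY_STREAK correct answers occurs.
    """
    streak = 0
    for correct in responses:
        streak = streak + 1 if correct else 0  # reset on any error
        if streak >= MASTERY_STREAK:
            return True
    return False
```

The interesting design property for formative assessment is that the rule rewards sustained proficiency rather than an overall percentage: nine correct answers followed by one error yields no mastery, which is exactly the kind of fine-grained trace unavailable in a conventional end-of-unit test.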
Discussion prompt: How might access to immediate and continuous feedback change the nature of the “evidence for learning” provided by Hawera Primary students?
3. Personalised feedback
Different design structures for online environments expose different traces of learning to those available in conventional settings. For example, access to the edit history of wikis reveals the contributions of individuals to group process. This allows the “memorably active participant” to become visible. For example, Viégas et al. (2004) used information visualization techniques to clarify the history of collaborative edits in Wikipedia. In addition, the edit history makes visible iterations in the drafting and development of text by an individual. In the future we may well be able to surface and trace provisional exchanges between learners engaged in collaborative writing through new applications that explore learning analytics and their use in formative assessment (Gestwicki and McNely 2010).
These new forms of evidence of learning provide an insight into the nature of the learning outcome that is rarely available through conventional assessment. Both have the potential to add value to formative and summative assessment for individual and for collaborative learning outcomes.
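A minimal sketch can show how an edit history surfaces individual contributions of the kind discussed above. The data format here is assumed for illustration (author and page size after each revision); real wiki platforms expose richer revision records, and characters added is only one crude proxy for "active participation".

```python
# A hypothetical sketch of surfacing individual contributions from a wiki
# page's edit history. Each revision records (author, page_size_after_edit)
# in chronological order; the data format is assumed for illustration.

from collections import defaultdict


def contributions(history):
    """history: list of (author, page_size_after_edit) tuples.

    Returns characters added per author. Deletions (negative deltas) are
    ignored in this simple proxy for contribution.
    """
    added = defaultdict(int)
    prev_size = 0
    for author, size in history:
        delta = size - prev_size
        if delta > 0:
            added[author] += delta
        prev_size = size
    return dict(added)
```

Even this crude trace makes the "memorably active participant" visible in a way a finished wiki page cannot: the final text shows only the group product, while the history shows who built it and when.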
Discussion prompt: How might personalised feedback change the nature of the “evidence for learning” provided by Hawera Primary students?
4. Range of performance modalities able to be assessed
The collection of evidence of student learning in e-Portfolios reveals the ease with which students can access and use multiple modalities with computer mediated technologies. Students can post images, photographs, video, animation, music, sound, texts and typography online as evidence of a learning outcome. Although web design has some distinct criteria for usability (search, navigation and content), the criteria for assessing and providing feedback on the learning outcomes from other modalities differ little from the criteria used if the same modalities were available offline. The issue is more likely that educators have yet to develop a nuanced meta-language for feedback on student use of the multi-modalities as they have for deep and surface features in literacy.
Performance modalities include those of the NZC Key Competencies. Evidence of student performance (and reflection on performance) when thinking, managing self, relating to others, participating and contributing, and using language, symbols, and texts is made more accessible through the multiple modalities in e-learning. For example, evidence of “memorably active participation” is more easily traced in an online forum or blog comments than it might be in a conventional group discussion. Cobo Romani (2009) takes this thinking further by suggesting that e-learning provides evidence for learning outcomes in the e-competencies. He breaks the e-competencies into e-awareness, media literacy, technological literacy, digital literacy and informational literacy outcomes, many of which are specific to technology mediated learning.
Web metrics offer an interesting exception in the type of evidence available. For example, teachers whose students write in blogs will reference the number of “hits” a student blog receives or the countries from which visitors to a student-created site come. They are unlikely to reference the number or the country of origin of those who borrow a student-created resource from the classroom or those who pause to look at a student artwork in the library. However, Chan (2010) has outlined the dangers of not looking closely at web metrics. He describes validity and reliability errors in log file data, page tagging data, unique visitor counts, visits, time spent on a page, and page views. It is better to look for metrics around retention, activity (as in repeat use) and recommendation (on other sites) as measures of the “effectiveness” of a student-created resource.
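The "activity" metric suggested above can be sketched as a repeat-visit rate: instead of counting raw hits, measure the share of distinct visitors who came back. This is a hypothetical illustration of the idea, assuming a simple visit log of visitor identifiers.

```python
# A hypothetical sketch of an "activity" (repeat use) metric: the fraction
# of distinct visitors who returned to a student-created resource more than
# once. The visit log format (one visitor id per visit) is assumed.

from collections import Counter


def repeat_visit_rate(visit_log):
    """visit_log: list of visitor ids, one entry per visit.

    Returns the fraction of distinct visitors with more than one visit.
    """
    counts = Counter(visit_log)
    if not counts:
        return 0.0
    repeats = sum(1 for n in counts.values() if n > 1)
    return repeats / len(counts)
```

A raw hit count would rate six visits from three visitors the same as six visits from six; the repeat rate distinguishes them, which is closer to Chan's point about retention and activity being more meaningful than page views.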
Discussion prompt: How might the range of modalities assessed change the nature of the “evidence for learning” provided by Hawera Primary students?
Discussion prompt: How might web metrics change the nature of the “evidence for learning” provided by Hawera Primary students?
Peer and self-assessment, reflection, access to assessment outcomes and ease of administration of assessment outcomes all change when learning is mediated through technology. The same process can be used to imagine how the evidence for learning might change when students are peer and self assessing and/or reflecting on their learning outcomes. We can also ask how changing access to assessment outcomes, or enhancing the administration of achievement outcomes, might change the way in which we understand assessment and feedback.
Reports on actual collaborative outcomes online help balance claims made for transforming student learning and assessment through technology mediated “collaborative production”. When thinking about assessment and learning mediated by technology it is easy (as Turkle (2008) suggested) to be deafened by the din from the marketplace. For example, Tube Mogul (2010) reports that only 17% of videos viewed on YouTube were user generated. Of the hundreds of thousands of edits made to Wikipedia each day, it is telling that fewer than 15% were by women (Glott, Schmidt & Gosh 2010). The average viewer abandonment rate by viewing time shows 20% of viewers have found something better to do online after the first ten seconds; by the 30-second mark a third are gone, and by 60 seconds 44% have left (Visible Measures 2010).
Yet the examples above suggest that evidence traces left by students involved in e-learning are different from those left by students learning in environments without access to e-learning. It is the nature and degree of difference that requires more discussion. We need to assess the “evidence for e-learning” against measures of quality, timeliness and variety of feedback provided; to ask how well the assessment meets the accepted principles of effective feedback (Nicol and Macfarlane-Dick 2006) and to use our new understandings about “assessment for e-learning” to question the principles themselves. We need educators who use e-learning approaches to critique current approaches to “assessment for e-learning” lest our feedback conversations with students become increasingly outmoded, derivative and irrelevant.
Alton-Lee, A. (2003). Quality teaching for diverse students: Best evidence synthesis. Wellington, New Zealand: Ministry of Education.
Andrews, R., & Haythornthwaite, C. (Eds.). (2007). The handbook of e-learning research. London: Sage.
Andrews, R. (2011). Does e-learning require a new theory of learning? Some initial thoughts. Journal for Educational Research Online, 3(1), 104-121.
Black, P., & Wiliam, D. (2001). Inside the black box: Raising standards through classroom assessment. London: King's College London.
Boud, D., & Associates (2010). Assessment 2020: Seven propositions for assessment reform in higher education. Sydney: Australian Learning and Teaching Council.
Chan, S. (2010). “Let’s make more crowns”, or, the danger of not looking closely at your web metrics” Fresh and New Blog. January 2010. Retrieved from http://www.powerhousemuseum.com/dmsblog/index.php/2010/01/09/lets-make-more-crowns-or-the-danger-of-not-looking-closely-at-your-web-metrics/
Cobo Romani, C. (2009). Strategies to promote the development of e-competences in the next generation of professionals: European and International trends. SKOPE Issues Paper Series. Published at the ESRC Centre on Skills, Knowledge and Organisational Performance, Department of Education, Oxford University & the School of Social Sciences, Cardiff University. N13, September 2009 [ISSN 1466-1535]. Retrieved from http://e-competencies.org/
Eurydice, (2011). Key Data on Learning and Innovation through ICT at School in Europe 2011 Brussels: EACEA P9 Eurydice. Retrieved from http://eacea.ec.europa.eu/education/eurydice/documents/key_data_series/129EN.pdf
Gestwicki, P. & McNely, B. (2010). Learning Analytics. Visualizing Collaborative Knowledge Work. Emerging Media Initiative. Ball State University. Retrieved from http://emergingmediainitiative.com/project/learning-analytics/
Glott, R., Schmidt, P., & R. Gosh (2010). Wikipedia Survey Overview of Results. United Nations University. Unu-Merit. Collaborative Creativity Group. Retrieved from http://www.wikipediasurvey.org/docs/Wikipedia_Overview_15March2010-FINAL.pdf
JISC (2011). Effective Assessment in a Digital Age. Retrieved from http://www.jisc.ac.uk/digiassess
Manovich, L., & Kratky, A. (2005) Soft Cinema: Navigating the Database. The MIT Press. Cambridge, Mass.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31 (2), 199-218.
Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2007). Teacher professional learning and development: Best Evidence Synthesis Iteration (BES). Wellington: Ministry of Education. Retrieved from http://www.educationcounts.govt.nz/publications/series/2515/15341
Turkle, S. (Ed.). (2008). The inner history of devices. Cambridge, MA: MIT Press.
Viégas, F. B., Wattenberg, M., & Dave, K. (2004). Studying cooperation and conflict between authors with history flow visualizations. In CHI '04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 575-582). New York, NY: ACM.
Wright, N. (2010). e-Learning and implications for New Zealand schools: a literature review. Report to the Ministry of Education. Retrieved from http://www.educationcounts.govt.nz/publications/ict/e-learning-and-implications-for-new-zealand-schools-a-literature-review/executive-summary
 After Manovich on Soft Cinema.
 Nicol and Macfarlane-Dick (2006) identified seven principles of effective feedback. Good feedback should: clarify what good performance is, facilitate reflection and self-assessment in learning, deliver high quality feedback information that helps learners self-correct, encourage teacher-learner and peer dialogue, encourage motivational beliefs and self-esteem, provide opportunities to act on feedback, and use feedback from learners to improve teaching.