Abstract:
Claims for the transformative effects of e-learning on student learning outcomes imply changes in the nature of learning when learning is mediated by technology. If it can be shown that the nature of learning changes in a distinctive way when learning is mediated by technology (Andrews 2011) then it seems plausible that the evidence for learning might also change. This paper explores how "assessment for learning" might change in a digital culture where students are "collaborative producers" of learning. It identifies some distinctive changes in the nature of the evidence for learning when learning is mediated by technology and asks whether these changes in evidence need a new approach and/or theory of assessment.
Introduction
e-Learning remains a speculative field. The many claims made about (and for) e-learning reshaping learning are too often based on studies using small sample sizes and descriptive narrative (rather than systematic observation and analysis). Their usefulness is limited by methodological and conceptual shortcomings. As Wright (2010) summarises in her e-Learning literature review for the Ministry of Education, “there is an international doxa about e-Learning’s inherent benefits to learners. It masks a relatively small amount of actual evidence about its relationship to improved educational and life chances for students.” Part of the problem is undoubtedly that “we approach our technologies through a battery of advertising and media narratives; it is hard to think above the din” (Turkle 2008 p4).
Haythornthwaite and Andrews (2011) suggest we lack a theoretical perspective on e-learning. They ask "whether e-learning requires a new theory of learning; or whether it requires merely an extension and application of contemporary learning theories". A similar question can be asked about "assessment for e-learning".
To ask what kind of "assessment for learning" is appropriate in the age of Google and Wikipedia, Facebook and YouTube, smart phones and text messaging, Twitter and blogging[1] is to make a number of assumptions.
Assumption 1: The character of learning changes when learning involves technology (as in e-learning).
Assumption 2: This change is distinctive, in that when learning is mediated by technology the complexity of the interaction can be consistently, reliably and validly profiled in a way that differs from the way we profile conventional learning.
Assumption 3: The evidence for learning changes when learning involves technology (as in e-learning).
Assumption 4: This change is distinctive, in that when learning is mediated by technology (in e-learning) the evidence for learning differs from other evidence for learning in consistent, reliable and valid ways.
Assumption 5: The distinctiveness of the change in evidence for learning requires a re-imagining of the ways we might do and/or imagine “assessment for learning”.
The alternative is to suggest that e-learning and e-assessment merely replicate old pedagogies in different spaces or through different social interactions.
Does the character of learning change when learning involves technology?
It could be argued that e-Learning utilises the characteristics and affordances of the internet to create distinctive environments and interactions to support learning. For example, the non-linear architecture of the internet (a network of nodes and internodes) provides different opportunities and structures for learning, communication, collaboration and co-construction than those available face to face. Opportunities for learning can occur: with anyone, at any time and in any place; through one to one, one to many, or many to many interactions; through push and pull; synchronously and asynchronously; using multi-literacies or multiple modalities (including text, graphic, audio, video, animation, etc.); and within open and flexible access systems. Andrews (2011) argues that if learning is socially situated then e-learning extends "the horizons of e-learning in space, resource and time." He suggests that a new theory of learning is developing because in e-learning the relationship between knowledge and the learner becomes "more democratic, more potentially dialogical"; transduction (the creation of observable evidence of learning) is easier; and access to and use of learning "according to socio-economic, geographic, cognitive and motivational factors" is stretched further (Andrews, 2011 p119).
Does the evidence for learning change when learning is mediated by technology?
The role of assessment and feedback is critical to learning (Black and Wiliam, 2001; Nicol and Macfarlane-Dick, 2006; Boud and Associates, 2010). It seems plausible that learning in a community and environment that uses Google and Wikipedia, Facebook and YouTube, smart phones and text messaging, and Twitter and blogging may leave different traces, and allow different kinds of useful "evidence of learning" and "assessment for learning" to emerge. If it does, we ought to be alert to the use of technology in learning in schools and take notice of the forms of evidence available. We need to critique the different evidence of learning that emerges in terms of its ability to enhance the quality, timeliness and variety of feedback provided for learners – and to assess how well the different evidence meets the accepted principles of effective feedback[2]. Where it can be shown that this evidence enhances conventional feedback and feed forward we should be proactive in planning for its use.
The JISC report (2011) on "Effective Assessment in a Digital Age" suggests that when looking for evidence of learning, some educators will be looking for evidence of increasing competence: enhanced understanding of ideas at the micro-level and completion of tasks at the macro-level. Feedback in these cases will be provided by experts on the strengths and weaknesses of new understandings. Others will create environments for experimentation, discovery and student inquiry and seek evidence of student collaboration, cooperation and sharing of ideas. They will anticipate learner involvement in the nature of the assessment task. Feedback will be self-generated, arising from reflection and self-assessment, and/or peer-based, arising from collaborative activities and dialogue. Yet another group will prefer that students learn in specific and authentic communities of practice. They will look for evidence in environments that simulate professional practice – think and act like a historian, a scientist, or a playground designer. These educators will look for holistic assessments where feedback is socially produced (from multiple sources) and derives from authentic real-life tasks (JISC 2011 p11).
It is worth noting that the multimodalities available in e-learning support these various views of what constitutes "evidence for learning". This broader view of what constitutes "evidence for learning" is taken by the New Zealand Curriculum (NZC) (MOE 2007), the Best Evidence Synthesis (BES) for teachers' professional development (Timperley, Wilson, Barrar, & Fung, 2007) and the BES for diverse learners (Alton-Lee, 2003). The NZC extends desired learning outcomes to include the social, cultural and economic. It includes learning outcomes from the Key Competencies ("capabilities for living and lifelong learning") and Values ("to be encouraged, modelled, and explored" so that they might be "expressed in everyday actions and interactions within the school") (MOE 2007). The BES on teachers' professional development identifies "academic, social, personal or performance" learning outcomes (Timperley, Wilson, Barrar, & Fung, 2007 p18). The BES for diverse learners identifies learning outcomes in learning areas, skills, and values such as "the development of respect for others, tolerance (rangimārie), non-racist behaviour, fairness, caring or compassion (aroha), diligence and hospitality or generosity (manaakitanga)" (Alton-Lee, 2003 p7).
The key question is - What evidence of student learning is available through e-learning that is not available through conventional learning?
In accommodating the different perspectives on learning and approaches to assessment for learning it is useful to imagine a learning experience and think about how opportunities for assessment might change when students become “collaborative producers” of learning through computer mediated technologies.
Sample Learning Experience: Hawera Primary students are collaborating to explore the natural features of Taranaki intent on creating a safe travel resource for visitors to the area. The senior classes (Year 4 to 6) are each focusing on one area – Rivers and Dams, Gas and Oilfields, Mount Taranaki, Beaches and Coastline. Each class is to become the expert around these manmade and natural resources – sharing their findings back to the wider group, and then heading off for a two day geological adventure looking at safe travel aspects along the way. Students will visit the Patea Museum, where the EOTC officer will share a programme around rivers and dams. That night they will take part in an overnight camp in the school hall before heading off around the mountain taking in the Kupe gas field, coastline, beaches, lahars, and Puke Ariki Museum. They will have an opportunity to climb up the mountain and take part in a bush walk, before heading back to school where they will build a safe travel resource guide for the Taranaki area.
Some of the different opportunities and structures for “assessment of e-learning” are characterised below. An example is provided to show how each might allow evidence for e-learning that differs from that available through conventional assessment.
1. Access to the assessment task - any place, any time, with anyone.
Self-assessment is made accessible, immediate and easily shared with others, occurring: with anyone, at any time and in any place; through one to one, one to many, or many to many interactions; through push and pull; synchronously and asynchronously; using multi-literacies or multiple modalities (including text, graphic, audio, video, animation, etc.) within open and flexible access systems.
The introduction of multiple literacies and modalities creates a challenge for teachers. When students use "images, photographs, video, animation, music, sound, texts and typography" to "interpret, design and create content" they need feedback from people who understand "frame composition; colour palette; audio, image, and video editing techniques; sound–text–image relations; the effects of typography; transitional effects; navigation and interface construction; and generic conventions in diverse media." Teachers are not necessarily able to provide these understandings. This means that when young people use new media to collaborate (and to produce knowledge), expertise and authority no longer reside solely with teachers in schools. Feedback conversations can occur with anyone in any place. In providing expertise without authority, e-learning can create social networks that bypass the teacher altogether.
Discussion prompt: How might this flexibility of access change the nature of the “evidence for learning” provided by Hawera Primary students?
2. Immediate feedback
Feedback on how well the student is meeting an intended learning outcome is immediate and continuous when the learning intention is made explicit and the learning outcome is hosted online in a blog, wiki, video or other collaborative space where editing and/or commenting is enabled. Interactive online resources and multimedia created for an identified purpose and audience can receive immediate feedback on their usefulness through web metrics and comments from site visitors.
Feedback can also be immediate and interactive when students access online mastery tests like those available through the Khan Academy http://www.khanacademy.org/. The Khan Academy provides an interesting example of how assessment can be re-imagined in online environments. A free, self-paced, open-source application, the Khan Academy generates exercises dynamically and provides immediate feedback on proficiency. It provides step by step explanations and requires the successful completion of ten questions in a row for mastery. The application can provide data on when it was accessed, how long it was accessed for, and what content students studied before and after the assessment. This information can be made available to student, teacher, parent and/or administrator.
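The mechanics of such a mastery check are straightforward to sketch. The toy Python class below is my own invention, not Khan Academy's actual code; it simply illustrates how a "ten in a row" rule, and the access log behind the reported analytics, might work:

```python
class MasteryTracker:
    """Toy sketch of a streak-based mastery rule: a skill counts as
    mastered once the last ten answers in a row are all correct."""

    STREAK_REQUIRED = 10

    def __init__(self):
        self.streak = 0
        self.log = []  # (timestamp, correct) pairs, usable for analytics

    def record(self, timestamp, correct):
        """Log one answer and update the running streak."""
        self.log.append((timestamp, correct))
        self.streak = self.streak + 1 if correct else 0
        return self.mastered

    @property
    def mastered(self):
        return self.streak >= self.STREAK_REQUIRED

tracker = MasteryTracker()
# Nine correct answers, one slip, then ten correct in a row.
for t, ok in enumerate([True] * 9 + [False] + [True] * 10):
    tracker.record(t, ok)
print(tracker.mastered)  # True - the final ten answers were all correct
```

A single wrong answer resets the streak, which is why the sketch reports mastery only after the final run of ten; the logged timestamps are the raw material for the "when and how long" data described above.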
Discussion prompt: How might access to immediate and continuous feedback change the nature of the “evidence for learning” provided by Hawera Primary students?
3. Personalised feedback
Different design structures for online environments expose different traces of learning to those available in conventional settings. For example, access to the edit history of wikis reveals the contributions of individuals to a group process. This allows the "memorably active participant" to become visible. Viégas et al. (2004), for instance, used information visualization techniques to clarify the history of collaborative edits in Wikipedia. In addition, the edit history makes visible iterations in the drafting and development of text by an individual. In the future we may well be able to surface and trace provisional exchanges between learners engaged in collaborative writing through new applications that explore learning analytics and their use in formative assessment (Gestwicki and McNely 2010).
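As a rough illustration of the kind of trace an edit history exposes, the Python sketch below aggregates a simplified revision log into per-author totals. The data format and names are invented for the example; real wikis expose much richer revision metadata:

```python
from collections import Counter

def contribution_profile(revisions):
    """Summarise a page's edit history as per-author contribution totals.

    `revisions` is a list of (author, chars_added) pairs in edit order -
    a simplified stand-in for real revision metadata.
    """
    edits, chars = Counter(), Counter()
    for author, added in revisions:
        edits[author] += 1
        chars[author] += added
    return {a: {"edits": edits[a], "chars_added": chars[a]} for a in edits}

# Hypothetical edit history for one collaboratively written page.
history = [("Mere", 120), ("Sam", 40), ("Mere", 15), ("Ana", 300)]
profile = contribution_profile(history)
print(profile["Mere"])  # {'edits': 2, 'chars_added': 135}
```

Even this crude profile makes the "memorably active participant" visible in a way the finished page alone never can.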
These new forms of evidence of learning provide an insight into the nature of the learning outcome that is rarely available through conventional assessment. Both have the potential to add value to formative and summative assessment for individual and for collaborative learning outcomes.
Discussion prompt: How might personalised feedback change the nature of the “evidence for learning” provided by Hawera Primary students?
4. Range of performance modalities able to be assessed
The collection of evidence of student learning in e-Portfolios reveals the ease with which students can access and use multiple modalities with computer mediated technologies. Students can post images, photographs, video, animation, music, sound, texts and typography online as evidence of a learning outcome. Although web design has some distinct criteria for usability (search, navigation and content), the criteria for assessing and providing feedback on the learning outcomes from other modalities differ little from the criteria used if the same modalities were available offline. The issue is more likely that educators have yet to develop a nuanced meta-language for feedback on student use of the multi-modalities, as they have for deep and surface features in literacy.
Performance modalities include those of the NZC Key Competencies. Evidence of student performance (and reflection on performance) when thinking, managing self, relating to others, participating and contributing, and using language, symbols and text is made more accessible through the multiple modalities in e-learning. For example, evidence of "memorably active participation" is more easily traced in an online forum or blog comments than it might be in a conventional group discussion. Cobo Romani (2009) takes this thinking further by suggesting that e-learning provides evidence for learning outcomes in the e-competencies. He breaks the e-competencies into e-awareness, media literacy, technological literacy, digital literacy and informational literacy outcomes, many of which are specific to technology mediated learning.
Web metrics offer an interesting exception in the type of evidence available. For example, teachers whose students write in blogs will reference the number of "hits" a student blog receives or the different countries from which visitors come to a student-created site. They are unlikely to reference the number or the country of origin of those who borrow a student-created resource from the classroom or those who pause to look at a student artwork in the library. However, Chan (2010) has outlined the dangers of not looking closely at web metrics. He describes validity and reliability errors in log file data, page tagging data, unique visitor counts, visits, time spent on a page and page views. It is better to look for metrics around retention, activity (as in repeat use) and recommendation (on other sites) as measures of the "effectiveness" of a student-created resource.
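The difference between raw hits and the measures Chan recommends can be sketched in a few lines of Python (the visit-log format here is invented for the example):

```python
from collections import defaultdict

def engagement_metrics(visits):
    """Compute retention-style measures from a simplified visit log.

    `visits` is a list of (visitor_id, day) pairs. Raw "hits" would
    just be len(visits); the measures below instead ask who came back.
    """
    days_seen = defaultdict(set)
    for visitor, day in visits:
        days_seen[visitor].add(day)
    total = len(days_seen)
    returning = sum(1 for days in days_seen.values() if len(days) > 1)
    return {
        "hits": len(visits),
        "unique_visitors": total,
        "returning_visitors": returning,
        "retention_rate": returning / total if total else 0.0,
    }

# Five hits, but only one of the three visitors ever returned.
log = [("a", 1), ("b", 1), ("a", 2), ("c", 3), ("a", 5)]
print(engagement_metrics(log))
```

A student resource with many hits but no repeat visits looks very different through this lens, which is the point of preferring retention and activity over raw counts.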
Discussion prompt: How might the range of modalities assessed change the nature of the “evidence for learning” provided by Hawera Primary students?
Discussion prompt: How might web metrics change the nature of the “evidence for learning” provided by Hawera Primary students?
Peer and self-assessment, reflection, access to assessment outcomes and ease of administration of assessment outcomes all change when learning is mediated through technology. The same process can be used to imagine how the evidence for learning might change when students are peer and self assessing and/or reflecting on their learning outcomes, and to ask how changing access to assessment outcomes or enhancing the administration of achievement outcomes might change the way in which we understand assessment and feedback.
Reports on actual collaborative outcomes online help balance claims made for transforming student learning and assessment through technology mediated "collaborative production". When thinking about assessment and learning mediated by technology it is easy (as Turkle (2008) suggested) to be deafened by the din from the marketplace. For example, TubeMogul (2010) reports that only 17% of videos viewed on YouTube were user generated. Of the hundreds of thousands of edits made to Wikipedia each day, it is telling that less than 15% were made by women (Glott, Schmidt & Ghosh 2010). The average viewer abandonment rate by viewing time shows that 20% of viewers have found something better to do online after the first ten seconds, a third are gone by the 30-second mark, and 44% have left by 60 seconds (Visible Measures 2010).
Yet the examples above suggest that the evidence traces left by students involved in e-learning are different from those left by students learning in environments without access to e-learning. It is the nature and degree of difference that requires more discussion. We need to assess the "evidence for e-learning" against measures of the quality, timeliness and variety of feedback provided; to ask how well the assessment meets the accepted principles of effective feedback (Nicol and Macfarlane-Dick 2006); and to use our new understandings about "assessment for e-learning" to question the principles themselves. We need educators who use e-learning approaches to critique current approaches to "assessment for e-learning" lest our feedback conversations with students become increasingly outmoded, derivative and irrelevant.
References
Alton-Lee, A. (2003). Quality teaching for diverse students: Best evidence synthesis. Wellington, New Zealand: Ministry of Education.
Andrews, R., & Haythornthwaite, C. (Eds.). (2007). The handbook of e-learning research. London: Sage.
Andrews, R. (2011). Does e-learning require a new theory of learning? Some initial thoughts. Journal for Educational Research Online, 3(1), 104-121.
Black, P., & Wiliam, D. (2001). Inside the black box: Raising standards through classroom assessment. London: King's College London School of Education.
Boud, D. and Associates (2010). Assessment 2020: Seven propositions for assessment reform in higher education. Sydney: Australian Learning and Teaching Council.
Chan, S. (2010). "Let's make more crowns", or, the danger of not looking closely at your web metrics. Fresh and New Blog. January 2010. Retrieved from http://www.powerhousemuseum.com/dmsblog/index.php/2010/01/09/lets-make-more-crowns-or-the-danger-of-not-looking-closely-at-your-web-metrics/
Cobo Romani, C. (2009). Strategies to promote the development of e-competences in the next generation of professionals: European and International trends. SKOPE Issues Paper Series. Published at the ESRC Centre on Skills, Knowledge and Organisational Performance, Department of Education, Oxford University & the School of Social Sciences, Cardiff University. N13, September 2009 [ISSN 1466-1535]. Retrieved from http://e-competencies.org/
Eurydice, (2011). Key Data on Learning and Innovation through ICT at School in Europe 2011 Brussels: EACEA P9 Eurydice. Retrieved from http://eacea.ec.europa.eu/education/eurydice/documents/key_data_series/129EN.pdf
Gestwicki, P. & McNely, B. (2010). Learning Analytics. Visualizing Collaborative Knowledge Work. Emerging Media Initiative. Ball State University. Retrieved from http://emergingmediainitiative.com/project/learning-analytics/
Glott, R., Schmidt, P., & Ghosh, R. (2010). Wikipedia Survey: Overview of Results. United Nations University, UNU-MERIT, Collaborative Creativity Group. Retrieved from http://www.wikipediasurvey.org/docs/Wikipedia_Overview_15March2010-FINAL.pdf
JISC (2011). Effective Assessment in a Digital Age. Retrieved from http://www.jisc.ac.uk/digiassess
Manovich, L., & Kratky, A. (2005) Soft Cinema: Navigating the Database. The MIT Press. Cambridge, Mass.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31 (2), 199-218.
Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2007). Teacher professional learning and development: Best Evidence Synthesis Iteration (BES). Wellington: Ministry of Education. Retrieved from http://www.educationcounts.govt.nz/publications/series/2515/15341
Turkle, S. (Ed.). (2008). The inner history of devices. Cambridge, MA: MIT Press.
Viégas, F. B., Wattenberg, M., & Dave, K. (2004). Studying cooperation and conflict between authors with history flow visualizations. In CHI '04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 575-582. New York, NY, USA: ACM.
Wright, N. (2010). e-Learning and implications for New Zealand schools: a literature review. Report to the Ministry of Education. Retrieved from http://www.educationcounts.govt.nz/publications/ict/e-learning-and-implications-for-new-zealand-schools-a-literature-review/executive-summary
[1] After Manovich on Soft Cinema.
[2] Nicol and Macfarlane-Dick (2006) identified seven principles of effective feedback. Good feedback should: clarify what good performance is, facilitate reflection and self-assessment in learning, deliver high quality feedback information that helps learners self-correct, encourage teacher-learner and peer dialogue, encourage motivational beliefs and self-esteem, provide opportunities to act on feedback, and use feedback from learners to improve teaching.
A thoughtful paper, thanks for posting it. Prompted me to think further of practices that do and don't transpose well in digital spaces. Too often these practices relate to what students might do differently, but with the same old same old marking criteria, such innovation becomes unstuck.
Posted by: ailsa | August 31, 2011 at 10:33 PM
Thanks Ailsa, and the alternative interests me - when students write online and opportunities for feedback and feedup comments are extended "in space, resource and time" - I seldom see any explicit identified learning outcome (ILO) or success criteria available to identify the learning purpose of the e-activity.
It seems like in shifting to e-pedagogies we not only fail to look for different/new traces of evidence for learning that might be available to enhance understanding - we also neglect to note the evidence we would use in a non-digital space.
Posted by: Artichoke | September 01, 2011 at 08:15 AM
This post had me thinking about nearly 30 years of rhetoric surrounding e-learning. Nicholas Negroponte talked in the early 80's, noting that "being digital is being different" and how he could not wait for those changes to occur.
Reading your work and thinking of the post-modernist commentary on 'local' knowledge which is tentative, provisional and contingent, and wondering about the current view of assessment by teachers based on evidence?
And then conflating that with Wilkinson and Pickett's 'The Spirit Level' and Oliver James' 'The Selfish Capitalist', with the view that children are bathed in consumer aspirations and artefacts. Is there an overarching question somewhere?
Posted by: Raisinrb | September 01, 2011 at 09:39 PM
Thanks for the prompt to look for the bigger picture Raisinrb – I am not sure where I will find it but it definitely exists.
When looking at our current affection for evidence based learning outcomes in the context of “we do better when we are all equal” arguments and the rise in depressive conditions I think of Illich and Medical Nemesis. Perhaps that is where I can start thinking …
In Medical Nemesis Illich talks about clinical iatrogenesis, social iatrogenesis, and cultural iatrogenesis.
Clinical iatrogenesis - direct harm to the patient caused by medical intervention. What keeps you healthy and allows for longevity are the physical and social environments you live in – not medicine. So does our focus on finding short term evidence for learning outcomes cause direct harm to the learner?
Social iatrogenesis – the medicalisation of life – how much of the GNP is spent on health care, and how the availability of health care creates a demand for more health care – add to this the marketing of disease and pathology – the commercial benefits of creating new conditions that need new drugs – which in turn create new conditions. So does our focus on finding short term evidence for learning outcomes create a demand for more and more evidence – where new vulnerabilities are imagined and new instruments and measures are created? I can see this effect in many of the presentations at the Symposium on Assessment and Learner Outcomes in Wellington this week.
Cultural iatrogenesis, where we lose traditional ways of dealing with sorrow, pain, fever, and increasing age related frailty etc. could also be imagined as a consequence of our desire to find more nuanced traces of learning. What were our traditional ways of living with diverse learners. How did we value each individual and let them feel valued – belong before the institution of school and the need for evidence of learning outcomes?
Posted by: Artichoke | September 03, 2011 at 07:35 AM
Assessment in On-Line Learning Environments
The technological advancements impacting the phenomenon of on-line learning environments certainly make the topic of assessment in on-line classrooms or e-learning timely and relevant. I agree that online learning settings require appropriate assessment measures that actually assess learning. One of the important things to consider, as Roberts (2006) states, is that assessment should be more than grading; it is about learning. He recommends self-assessment, peer-assessment and group assessment as effective measures to assess students in an on-line setting. These methods, he adds, can only be effective when the instructor is conversant and technologically savvy enough to monitor students' interactions and participation.
As an online student, I support the implementation of synchronous and asynchronous assessment measures. The concern I have is: how can synchronous and asynchronous e-learning settings ensure that their assessment prevents plagiarism and cheating, and verifies students' identities (Hricko and Howell, 2006)? Hricko and Howell (2006) suggest that if more emphasis is placed on "interactive, creative, authentic, reflective and constructive assessment", many of the concerns about assessment in online learning settings can be addressed.
Reference
Hricko, M. & Howell, S.L. (2006). Online assessment: Foundations and challenges. USA: Idea Group Inc.
Roberts, T.S. (2006). Self, peer and group assessment in e-learning. USA: Idea Group Inc.
Posted by: Ann-Marie | September 05, 2011 at 11:17 AM
I agreed with you Ann-Marie 100%.
Posted by: Peter Write | October 02, 2011 at 05:07 PM
Hi Ann Marie - thanks for the comment and for the references
I see a lot of learning-focused self, peer and group assessment in the day job - where the schools I work with use SOLO Taxonomy to provide a generic framework for determining the level of learning outcome in the face to face. It has the reliability, validity and rigour to transfer easily into online environments - and indeed has been used to assess responses in online forum discussions etc.
The tricky bit is, as Roberts (2006) notes, in the re-imagining of what the evidence traces of participation and interaction look like online. I did some thinking along these lines a while back - looking at Why World of Warcraft is Better Than School:
Whispers – private messages sent to one person only.
Party chat – a chat between the 5 members of your current party (a group of people working for a common goal).
Raid chat – a larger group chat (up to 40 people).
Guild chat – chat between a group of people allied with each other; can be as small as 10 or up to several hundred.
General chat – everyone who is playing in the same area as you are.
Outside the game you will see kids txting, msn, on the phone, shouting, gesturing, laughing, scrawling notes …. and, and, and …..
It is hard for educators who are unfamiliar with the nuances of online interaction and participation to imagine the possibilities.
In terms of the "The concern I have is how can synchronous and asynchronous e-learning settings ensure that their assessment prevents plagiarism cheating, and verifies students' identities" bit - I reckon this will not change until we concede the need for collaborative over individual outcomes for a healthy society. Then the learning task will not be able to be accomplished individually - and much like in the early development of World of Warcraft, we will see students working together for a collective outcome, with a negotiated dragon kill points (DKP) allocation and/or distribution system for recognising individual participation that supported the collective.
For example, the occupants of the corridor initially adopted a "zero sum approach" where the total of every raiding member's DKP is equal to zero. This system works by assigning predetermined points values to every item that can potentially drop. Whenever an item drops, each person in the raid obtains the points value for that item divided by the number of people in the raid. The people who obtain items have the points value of the item they obtained subtracted from their DKP, potentially allowing members to have negative DKP values. A strength of this system is its transparency, in that everyone's points are clearly identifiable, allowing you to estimate when you will get your next item based on the points of your peers. I noticed extraordinary ethical conversations about how to recognise individual contribution and the unforeseen consequences - leading to modifications that recognised attempts made, etc.
Posted by: Artichoke | October 07, 2011 at 07:10 PM
A little bit off topic: I want to subscribe, but there is something wrong with the page! Help, please!
Posted by: Malin | October 20, 2011 at 05:10 PM
Off topic is a much undervalued conversational space - subscribe button seems to work ok - try http://artichoke.typepad.com/artichoke/atom.xml
Posted by: Artichoke | October 21, 2011 at 07:54 AM
Interesting ideas here, loads to think about.
Posted by: Jo Wheway | January 26, 2012 at 01:24 PM
This really makes me think of how I can utilize technology more in my high school English classroom. I have just started blogging and using a wiki for a doctoral class and find them very user friendly. I am trying to get more comfortable with using this type of technology. I know my students are very comfortable with blogging and would love to use this type of technology for class…I am just not sure on the logistics of it all. How would I assess blog postings? How many is enough? Too many? And of course there is still the question of authenticity. How do I know that my students are not cheating? You bring up some interesting ideas that make me truly consider using technology to assess my students.
Posted by: Dana | February 02, 2012 at 10:42 AM
Elearning has to be different because the assessment impacts what people learn, how they learn and what impact it will have on the learner. In addition, online learning tends to be more accessible because of the technology involved. Furthermore, the learning style needs to match the assessment method.
Posted by: B. Goode | May 09, 2012 at 12:34 PM
Good post. I also like your post on the 'Why World of Warcraft is Better Than School ' .
Posted by: Justin | September 14, 2012 at 12:29 PM