ONLINE COLLABORATIVE ASSESSMENT
December 2008
Assessment of student online collaboration

This paper examines an emerging form of assessment – the assessment of online collaboration, using tools such as blogs and wikis. It seeks to address three questions: (1) Is online collaborative assessment educationally valid? (2) What is current practice? (3) What are the strengths and weaknesses of current practice?

Bobby Elliott, Scottish Qualifications Authority
[email protected]
ASSESSING STUDENT ONLINE COLLABORATION
WHAT IS ONLINE COLLABORATIVE ASSESSMENT?

Online collaboration is a topical issue. There are two main drivers for the current interest. One is the ongoing societal change that is embracing new forms of working and socialising; the other is technological change, characterised by the emergence of “Web 2.0”. As we move from an industrial society to an information society, soft skills relating to communication, team working and collaboration are replacing the traditional skills of craft, diligence and obedience. Education, too, is changing, with the focus more on “deep learning” than on the memorisation of facts and figures.

The concurrent emergence of Web 2.0 has facilitated these changes. The original web (Web 1.0) was a passive medium, where users consumed information; the new web is the “read/write web” (Gillmor, 2004), which facilitates user interaction and makes it easy to create and contribute information. There is a wide range of collaboration tools available. The main ones are e-mail, discussion forums, blogs, wikis, social networks, VOIP and virtual worlds. Although these services vary widely in their functionality, they all provide an environment that supports communication and interaction between groups of users. Virtual worlds, such as Second Life 1, are the latest addition to the genre, and their use is expected to grow significantly within the next few years, with four in five Internet users predicted to use at least one virtual world by 2011 (de Freitas, 2008).

Online collaborative assessment is the assessment of activities that utilise these tools. Education (particularly Higher Education) is beginning to make use of these tools to support learning, and taking the first steps in assessing students’ work produced using these services. While there are clear parallels between online collaborative work and traditional group work, this new environment is sufficiently different to pose some fundamental questions about this emerging form of assessment.
VALUE OF ONLINE COLLABORATIVE ASSESSMENT

Employer demand for a more innovative, creative and collaborative workforce is well documented (see, for example, 21st Century Skills, Education & Competitiveness, 2008). Universities have responded by seeking to produce graduates who can work in teams and collaborate (Hounsell et al, 2007). The value of collaborative working goes beyond pragmatic vocational considerations. Collaborative learning is rooted in pedagogy, particularly the social constructivist theories of Vygotsky (1978) and the experiential learning theories of Kolb (1984).
“With a constructivist framework, students are seen as active collaborators in the building of knowledge. Learning takes place through interaction, existing in the transaction between student and student, student and text, and student and teacher.” (Murphy, 1994)
1 http://www.secondlife.com
Online collaborative activity fits this framework well. It is also considered a key way to improve teaching and learning:
“When asked to compare communication technologies, 52% of respondents state that online collaboration tools would make the greatest contribution in terms of improving education quality over the next five years, the top response.” (Economist Intelligence Unit, 2008)
Formative and summative use

Online collaborative activities can be used formatively or summatively. They are particularly suitable for formative assessment given the interactive nature of the medium, which makes peer and teacher feedback straightforward. For example, e-mail is an excellent way to provide one-to-one feedback, and the comment facilities in a blog make it easy to give group feedback. In fact, online collaborative activities are well suited to all seven principles of self-regulation outlined by Nicol and Macfarlane-Dick (2006), but particularly the principles relating to teacher and peer dialogue, and self-assessment.

Online collaborative assessment is also well suited to summative use. The student activities typically undertaken using Web 2.0 tools are generally authentic, being related to real-world tasks, and the skills involved in collaborative online working foster lifelong learning skills. Such activities also engage learners, which increases student motivation. This engagement with authentic activities can reduce the “disjunction” between course objectives and assessment practice noted in several recent reports (see, for example, Elton & Johnston, 2002).
Quality of online collaborative assessment

Knight (2002) identified four characteristics of assessment: validity, reliability, affordability and usability.
“In 21st century learning environments, de-contextualised drop-in-from-the-sky assessments, consisting of isolated tasks and performances will have zero validity as indices of educational attainment.” (Pellegrino, 1999)
Online collaborative assessment scores highly on most of these. It is potentially a highly valid form of assessment because it permits the learner to undertake engaging and authentic tasks that can closely match learning objectives. However, it faces the same reliability issues as traditional group work – perhaps more, because of its innovative nature. It is affordable because of its use of commonly (and usually freely) available Web 2.0 tools. And students generally enjoy using these tools and services. The issue of the reliability of collaborative online assessment is considered later in this paper.

A characteristic not mentioned by Knight, but one that has important implications for online assessment, is fairness, given the long-standing complaints from students about recognising individual effort (or the lack of it) and penalising malpractice. Online collaborative assessment appears to be no better, and perhaps worse, in this regard than traditional group work.
CURRENT USES OF ONLINE COLLABORATIVE ASSESSMENT

Online collaborative assessment is an emerging form of assessment and, as such, is not yet widely used within education. However, there are numerous, if isolated, examples of its use, particularly in Higher Education. The Scottish Qualifications Authority (SQA) is piloting the use of blogs and wikis for summative assessment in one of its vocational awards 2; blogs are frequently used in place of student logbooks in degree courses; and some post-graduate programmes are using collaborative tools for group work (for example, the Masters in E-Learning at the University of Edinburgh uses a collaborative wiki in one of its courses that contributes 50% to the final grade).

While practice is patchy, attitudes towards this innovative form of assessment are generally positive. A recent Becta report (Crook and Harrison, 2008) noted that 59% of school teachers wanted to make more use of Web 2.0 technologies in the classroom, citing its potential for collaborative learning as one of its main attractions. In my own research among colleagues in SQA 3, almost 90% of respondents wanted to see more use made of online collaborative technologies within the assessment process (Elliott, 2008). Topics relating to online collaborative assessment also feature prominently in the academic literature. In a wide-ranging review of the assessment literature undertaken by Hounsell et al (2007), the use of new technology was the third most common topic, with the assessment of groups and collaboration being the fifth most cited issue.

However, these examples should be put in context. The same Becta report noted low actual usage of Web 2.0 tools in class time: only 12% of teachers had uploaded a video, 9% had contributed to a discussion board or a blog, and 6% had edited a wiki.
In the SQA survey, over half of the respondents reported their own knowledge and skills as a barrier to the adoption of these tools for assessment, and over 90% felt that teachers’ knowledge and skills prevented greater use of these tools and services for assessment.
CURRENT ASSESSMENT PRACTICE

While there is a great deal of interest in this innovative form of assessment, its use is limited and practice is variable. The research among SQA officers reveals that some use has been made of e-mail and discussion forums for assessment purposes, but only one in four had made any use of blogs, one in five had used wikis, and none had made use of social networks, VOIP or virtual worlds. Given that this group has oversight of all summative assessment throughout Scotland in the school, college and training sectors, this is indicative of the low use made of collaborative technologies in the assessment process outwith the HE sector.

The SQA research also reveals concerns about cheating (particularly plagiarism) and difficulties in assessing online collaborative activity as barriers to adoption. The concern about marking was reflected in the Hounsell et al (2007) literature review, with queries about what and how to assess student group work appearing regularly in the literature. As part of the research for this paper, a number of marking rubrics for online collaborative activities were examined. There were significant differences in their quality. Many were standard rubrics (for marking traditional group work) with no recognition whatsoever of the online
2 PBNC Health & Safety in Care Settings (SCQF Level 5).
3 An online survey conducted in December 2008 among Qualifications Officers and Qualifications Managers within SQA; 30 officers completed the survey (out of 38 officers in total). These officers have responsibility for assessment within Scottish schools, colleges and training centres. The questionnaire and a summary of results are attached as appendices to this paper.
environment. The best rubrics reflected the unique nature of online collaborative working, and had specific criteria relating to this environment. 4 But practice varied widely. The Becta research (Crook and Harrison, 2008) identified current assessment practice as a barrier to adoption in schools, given its focus on individual work and its prioritising of reliability over validity:
“Many indicated that there was a tension between the collaborative learning encouraged by Web 2.0 and the nature of the current assessment system.” (Crook and Harrison, 2008)
STRENGTHS AND WEAKNESSES OF CURRENT PRACTICE

As previously stated, there is a great deal of interest in, and enthusiasm for, this form of assessment. This interest is uniform across all sectors (schools, colleges, training centres and universities). The links with good pedagogy are recognised, and its potential to engage learners and provide authentic assessment is generally accepted. And although current use is low, it is increasing, with many teachers and lecturers (at least numerically, if not as a proportion of the professions) experimenting with collaborative technologies for learning and, to a lesser extent, for assessment.
“It was in the area of assessment that some teachers felt that Web 2.0 had some of its greatest potential, as peer assessment and collaborative composition connected with the personalisation and skills agenda.” (Crook and Harrison, 2008).
However, examples are isolated and there is no apparent over-arching pattern to the deployment of collaborative tools – either for learning or assessment. Traditional methods of assessment prevail – even when, as in the case of paper logbooks, online alternatives appear to be superior in every respect.

A particular concern is the reliability of online collaborative assessment, especially for high stakes summative assessment. Many marking schemes simply regurgitate the rubrics used to assess traditional group work. Often, when rubrics are customised to the online environment, they use crude metrics such as the frequency or length of contributions. Sometimes no rubrics are used at all: over 50% of SQA officers reported that no marking scheme was used during the assessment of online collaborative work.

Teachers’ ability to use collaborative tools appears to be a significant barrier to wider adoption, this being cited as the major factor in the SQA research. Teachers themselves feel that “curriculum and assessment pressures reduced their opportunities to introduce Web 2.0 approaches” (Crook and Harrison, 2008). So, in spite of its strong theoretical basis and vocational relevance, practice is currently limited and, when it is used, quality is variable.
IMPROVING CURRENT PRACTICE

The main areas of weakness appear to be:
1. teacher skills in using Web 2.0 tools and services
2. curricular barriers to the adoption of online collaborative assessment
3. variability of current practice as it relates to online collaborative assessment.
4 See example rubric in the appendices to this paper.
The existing skill base of teachers and lecturers needs to be improved to embrace Web 2.0 skills; and new attitudes towards collaborative working and group assessment may need to be engendered. This training is also required by other educational professionals – over 50% of respondents to the SQA survey reported their own knowledge and skills in this area to be a barrier to the increased use of these tools for assessment.

The existing curriculum and assessment systems remain a significant source of inertia in assessment: the “persistence of traditionalism” (Elton and Johnston, 2002). The “backwash effect” of assessment (whereby teachers teach what they believe will be assessed – and little else) further entrenches existing practice and constrains innovation in assessment. And the emphasis on reliability over validity by examining bodies reinforces current practice in preference to more innovative forms of assessment.

A recurring concern among teachers and other educational professionals is the marking of online collaborative work. The importance of clear and transparent marking criteria is well documented (see, for example, Hounsell et al, 2007). But current practice is variable. The development of well-grounded criteria for assessing online group work is vital to give teachers and lecturers more confidence in their assessment of online activity. Creanor (2000) advocates a number of criteria for assessing online collaborative activity. These are:
1. Presenting new ideas.
2. Building on others' contributions.
3. Critically appraising contributions.
4. Coherently summarising discussions.
5. Introducing and integrating a relevant body of knowledge.
6. Linking theoretical discussions to own experience.
Creanor’s criteria address some of the unique aspects of online collaborative working, and represent a significant improvement over the crude application of “traditional” criteria to online activity. However, they fail to address some of the unique skills involved in online group work, particularly the quality of collaboration and the technical skills involved in structuring and presenting information. A review of a number of exemplar marking schemes revealed a range of additional criteria in current use:
7. Collaborating with other contributors effectively.
8. Using the tool's facilities to structure and present information.
9. Providing accurate, concise and clearly written contributions.
10. Summarising concepts from readings.
11. Moving discussions forward.
12. Identifying strengths in contributions.
13. Providing constructive criticism where appropriate.
14. Suggesting solutions to problems.
15. Providing links to high quality and relevant online and offline resources.
16. Using multimedia to improve the quality of information.
17. Observing expected norms of behaviour for the medium in use.
There was little consistency in the selection or application of these criteria in the exemplars reviewed. However, used consistently, a subset of these criteria could be selected to assess most online collaborative activity, and a rubric could be created with marks awarded to each criterion in proportion to its relevance to the assessment task.
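A weighted rubric of this kind can be sketched in a few lines of code. The following Python fragment is a hypothetical illustration only – the criteria, weights and marks are invented for the example and are not drawn from any of the marking schemes reviewed in this paper. It shows how per-criterion weights (reflecting relevance to the task) combine with per-criterion marks to yield an overall percentage.

```python
# Hypothetical weighted rubric for an online collaborative activity.
# Each criterion is mapped to a weight reflecting its relevance to the task;
# both criteria and weights are illustrative assumptions.
RUBRIC = {
    "presents new ideas": 3,
    "builds on others' contributions": 3,
    "collaborates effectively": 2,
    "uses the tool's facilities to structure information": 1,
    "observes expected norms of behaviour": 1,
}

def score_contribution(marks: dict[str, int], max_mark: int = 5) -> float:
    """Return a percentage score from per-criterion marks (each 0..max_mark)."""
    total_weight = sum(RUBRIC.values())
    weighted = sum(RUBRIC[c] * min(m, max_mark) for c, m in marks.items() if c in RUBRIC)
    return 100 * weighted / (total_weight * max_mark)

# Example: a student strong on ideas and collaboration, weaker on presentation.
marks = {
    "presents new ideas": 5,
    "builds on others' contributions": 4,
    "collaborates effectively": 4,
    "uses the tool's facilities to structure information": 2,
    "observes expected norms of behaviour": 5,
}
print(round(score_contribution(marks), 1))  # prints 84.0
```

Because the weights are explicit, a scheme like this makes the marking criteria transparent to students in advance, which addresses the transparency concern raised above.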
While the use of detailed criteria would improve the transparency and reliability of assessment, such an approach has its limitations. As noted by Knight (2002), “attempts to produce precise criteria lead to a proliferation of sentences and clauses, culminating in complex, hard to read documents.” And no set of criteria, however detailed, can remove the inherent subjectivity of marking:
“The research evidence makes it clear that there is a degree to which criteria cannot be unambiguously specified but are subject to the social processes by which meanings are contested and constructed.” (Greatorex, 1999)
CONCLUSION

Online assessment often shines a light on the dark corners of traditional assessment, exposing questionable practices that have been tolerated for years and which only come to light when we seek to computerise the task. This may be true of collaborative online assessment, which inherits many of the long-standing problems associated with the assessment of traditional group work. However, the problems are not insurmountable, and the benefits appear to be understood and accepted by most practitioners. Progress has been slow, and there is a well documented danger that the assessment system (indeed, the entire education system) comes adrift from the rest of society through a reluctance to modernise its practices. The adoption of collaborative approaches to learning appears to be a key trend in education, and this has to be reflected in the assessment system. This should be part of the modernisation of the assessment system that has been widely called for.
References

Creanor, L. (2000) Structuring and Animating Online Tutorials. Case studies from the OTiS e-Workshop (Heriot-Watt University and Robert Gordon University).
Crook, C. and Harrison, C. (2008) Web 2.0 Technologies for Learning at Key Stages 3 and 4: Summary Report, Becta.
de Freitas, S. (2008) Serious Virtual Worlds: A Scoping Study, JISC.
Elliott, B. (2008) Survey results available at: http://www.surveymonkey.com/sr.aspx?sm=DIH_2baU0bC_2bVIZDztauzmqfd34Rq_2fq030CfMtq92lyzE_3d (Accessed: 20 December 2008).
Elton, L. and Johnston, B. (2002) Assessment in Universities: A Critical Review of Research, LTSN Generic Centre.
Gillmor, D. (2004) We the Media, O’Reilly.
Greatorex, J. (1999) Generic Descriptors: A Health Check, Quality in Higher Education.
Hounsell, D., Klampfleitner, M., Hounsell, J., Huxham, M., Thomson, K., Blair, S. and Falchikov, N. (2007) Innovative Assessment Across the Disciplines, The Higher Education Academy.
Knight, P. T. (2002) Summative Assessment in Higher Education: Practices in Disarray, Studies in Higher Education, Volume 27 (3).
Kolb, D. A. (1984) Experiential Learning: Experience as the Source of Learning and Development, New Jersey: Prentice-Hall.
Morgan, M. (2004) Notes Towards a Rhetoric of Wiki. Paper presented at CCCC 2004, San Antonio, TX.
Murphy, S. (1994) Portfolios and Curriculum Reform: Patterns in Practice, Assessing Writing 1.
Nicol, D. and Macfarlane-Dick, D. (2006) Formative Assessment and Self-Regulated Learning, Studies in Higher Education, Volume 31 (2).
Pellegrino, J. W. (1999) The Evolution of Educational Assessment. William Angoff Memorial Lecture Series, ETS.
The Economist Intelligence Unit (2008) The Future of Higher Education: How Technology Will Shape Learning, The Economist.
Vygotsky, L. S. (1978) Mind in Society: The Development of Higher Psychological Processes, Harvard University Press.