Draft

A Framework for the Evaluation of Online Learning

Stephen Walker 2004

Contents

Abstract
Introduction
The Purpose of Evaluation
Types of Evaluation
Online Learning and Pedagogy
Characteristics of the Learning Environment
Characteristics of Adult Learners
Evaluation Frameworks: A Review of the Literature
Gaps in the Research
The Evaluation Framework
Pre-Course Evaluation
Formative Evaluation
Summative Evaluation
Bibliography
Appendix 1: Questions to Ask in the Preparation of an Evaluation (Pre-Course Evaluation; Formative Evaluation; Summative Evaluation)
Appendix 2: Evaluation and the FENTO Standards
Appendix 3: Evaluation Tools
Appendix 4: Virtual Learning Environments
Appendix 5: Blogs

Abstract

The evaluation of learning is now a standard procedure in Post-16 Education as colleges strive to meet quality standards, and to publicly demonstrate the effectiveness and cost-effectiveness of their courses. There is also a move to develop online distance learning, and to justify the expenditure on the technology that enables it. This requires new evaluation methods, as online learning and traditional face-to-face learning are fundamentally different. The evaluation community has put many evaluation frameworks forward, but most are designed for the evaluation of courses in higher education, and for the use of expert evaluators. A significant facet of online learning is that it lends itself to student-centred independent learning, where the teacher facilitates the construction of knowledge, and this constructive engagement with learning leads to learning at a deep level. This way of learning also fits the learning styles of adult learners. The intent of this paper is to provide a framework that can be used by teachers of adults in Post-16 Education to prepare for the implementation of online courses, to implement courses that facilitate online student-centred independent learning, and to formatively and summatively evaluate online teaching. The factors to be taken into account when carrying out evaluations are mapped to the FENTO and ILT Standards for teaching and supporting learning in Post-16 Education. There is a discussion of suitable pedagogies for online learning, of the characteristics of adult learners, and of the features of successful online learning packages, followed by a review of the literature on evaluation frameworks. The purpose of evaluations is outlined with an explanation of formative and summative evaluation. A discussion of gaps in the research is followed by descriptions of the factors to be included in an evaluation, and questions to ask in the preparation of an evaluation.


Introduction

The increase in access to the internet and personal computers, the move towards online delivery of services such as shopping, banking and booking holidays, and the government's intention to move local government services online, mean that many people will become familiar with online delivery and will also expect to be able to access education online. The market is driving the desire for online learning, as a new generation of computer- and Internet-literate people enters work and education. As Weller (2003) points out, "one is not struggling to convince an audience of the potential of the technology, but operating in an environment that is in a process of rapid take-up." However, the development of online learning also has the potential to exclude the socially and economically disadvantaged. Those in the lower socio-economic groups are less likely to own or have access to a computer and the Internet, and there is a limited understanding of the needs of disadvantaged learners. A further problem is the lack of suitable materials for adults returning to learning, and of materials in minority ethnic languages (Clarke, 2003). The Internet has great potential as an educational tool, particularly because it is ideal for interactive and collaborative learning, which can facilitate a deeper understanding of concepts. Colleges may want to use online courses so they can use their resources more efficiently and reach new audiences, both of which can provide increased profitability and higher numbers of enrolments. They may also want to be seen to be part of the 'e-learning revolution'.

The Purpose of Evaluation

In Post-16 Education teachers are being given more freedom to determine what they teach, to whom, and how they teach it (Reece and Walker 2003). However, with this increase in autonomy come accountability and the requirement to maintain and develop standards. This requires the monitoring and evaluation of courses, and the requirement to produce reports periodically. As with traditional learning, online learning has to be evaluated. Evaluation involves more than just assessing the progress and achievements of students; it goes further by offering a judgement on the value of the learning. Evaluation is also a collaborative process involving teachers, students, and the institution in which the learning takes place, and the information gained from the evaluation can be used to improve services, and to validate a particular package as a learning tool. It is an integral part of good professional practice. The standards for teaching and teaching support in Post-16 Education, the FENTO Standards, provide a framework for teaching and support. Online learning is included in the FENTO standards, as is evaluation. Edwards (1991) describes some of the main purposes of evaluation, the first of which is that students become more fully involved in the decision-making process. They come to understand the goals of the teachers, and it also gives them the opportunity to reflect upon their own learning and whether their objectives are being achieved. This helps to break down barriers between
students and those in authority and helps to foster a more student-centred environment. The second aspect of evaluation is that it should lead to better communication between all those involved in an educational project. It can involve the recording and exchange of information that can be passed on to other bodies in the form of reports. Third is the improvement in accountability to both the public and the learners when evaluation information is made available. There is now a climate in which public bodies are asked to justify their existence and cost-effectiveness must be demonstrated. Finally, according to Edwards, evaluation contributes to learning theory, and thereby to improved teaching and learning. Rogers (2002) suggests that there are three main areas that need to be evaluated. First are the goals and objectives. Are the goals being met, and are they the right goals? In order to be able to do this the definition of the goals needs to be very clear at the outset. There is also the question of whose goals they are – the student's or the teacher's. The main point, according to Rogers, is that the object is student learning – it is this that needs to be assessed. The second area is the teaching and learning process. The whole process should be evaluated, from planning to exposition. The main concern again should be with the learner – are they motivated and enabled to learn? This requires regular and ongoing (formative) evaluation. Third are learner attainments and other outcomes. This is at the heart of evaluation. What are students learning, and at what quality and level? Rogers suggests that the attitudinal development of students should also be assessed – do they have additional confidence and motivation? This is often missed out when there is an emphasis on increased knowledge. Unexpected outcomes also need to be noted. Rogers suggests that there will be many, and not all will be life-enhancing.
Rogers goes on to suggest that the "only fully satisfactory" mode of evaluation is summative evaluation, at the end of each stage of the learning program. The difficulty with adult learning is that there will be multiple outcomes of the new learning, and learners will not all plan to use their new learning in the same way. This means, according to Rogers, we have to find methods to measure the many ways of expressing new learning. Because of this it is suggested that the results of any summative evaluation should only be tentative, and that the learners should be involved in deciding what the goals of the learning should be, and in deciding the criteria for the evaluation.

Types of Evaluation

There are essentially two types of evaluation: formative and summative. Formative evaluation takes place during the instructional process. It allows immediate feedback to be given to both student and teacher, and revisions and improvements to be made. It needs to be timely, and utility is perhaps more important than validity (Oliver, 2000a). Lockee et al (2002) suggest that formative evaluation can be broken down into two primary categories:


instructional design issues, and interface design issues. Instructional design issues relate to whether students learned what the goals and objectives intended: Were the objectives clearly stated and measurable? Were appropriate instructional strategies chosen? Was there enough practice and feedback? Were examples provided? Did assessment methods correlate with instructional content and approaches? If these questions can be addressed then corrective measures can produce more effective learning experiences… Interface design issues relate to the strengths and weaknesses of the look and feel of the web site or materials: Was the web site easy to navigate? Was it aesthetically pleasing, as well as legible? Did each page in the site download easily? If special plug-ins were needed, were links provided to acquire them? There should also be an assessment at this point of any special needs requirements. The Disability Discrimination Act (DDA) does not directly address Internet sites, but supporting documents indicate that the DDA would cover them. Institutions that offer online courses would be expected to make these courses accessible to people with disabilities. Actions that can be taken include the provision of screen readers, and the use of the built-in accessibility functions of, for example, Microsoft applications such as Windows, Office, and Outlook; content designers should also be aware of the need to provide the option to have materials in different font sizes and background colours, and the ability of users to avoid animations and videos. Pages should be accessible without the use of a mouse or pointing device. In order that evaluation can tie in with the desirability for student-centred learning, it is necessary for formative assessment to be built in to the learning process. This requires that teachers establish learning guidelines and outcomes as well as the criteria for evaluating student performance (Palloff and Pratt 1999).
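Some of the interface design checks above lend themselves to partial automation. As an illustration only – not part of the original framework – the following Python sketch scans a page with the standard library's html.parser and flags images lacking alternative text and fixed pixel font sizes; the particular check names and rules are assumptions invented for the example:

```python
from html.parser import HTMLParser


class AccessibilityChecker(HTMLParser):
    """Collects simple accessibility warnings from an HTML page.

    Deliberately crude: a real review would also cover keyboard
    navigation, colour contrast, plug-in links, and decorative
    images (where an empty alt attribute is legitimate).
    """

    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Images should carry alternative text for screen readers.
        if tag == "img" and not attrs.get("alt"):
            self.warnings.append("image without alt text")
        # Fixed pixel sizes prevent users enlarging the text.
        style = attrs.get("style", "")
        if "font-size" in style and "px" in style:
            self.warnings.append(f"fixed pixel font size on <{tag}>")


def check_page(html: str) -> list[str]:
    """Return the list of warnings found in one page of material."""
    checker = AccessibilityChecker()
    checker.feed(html)
    return checker.warnings
```

Even a crude script like this can feed the special-needs portion of a formative evaluation checklist, leaving the evaluator to judge the issues it cannot detect.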
It is necessary to take into account a range of data from students, including assignments and other class exercises, as well as the quantity of posts and the quality of participation in online discussion. Summative evaluation takes place at the end of the instructional process. The outcomes of the course are assessed against the aims and objectives to show how well the instructional material has been learned. Summative evaluation is also used to evaluate the overall effectiveness of a learning program. Where summative evaluation is used to change aspects of a learning program it becomes formative. Formative and summative evaluations are not mutually exclusive.
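To show how the quantity and quality of discussion participation might be tabulated for formative purposes, here is a minimal Python sketch. The record format and the 'quality' proxies (average post length, replies drawn from peers) are assumptions for illustration, not measures proposed by the sources cited above:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical record format: (student, word_count, replies_received)
posts = [
    ("amy", 120, 3), ("ben", 15, 0), ("amy", 200, 1),
    ("cal", 80, 2), ("ben", 10, 0),
]


def participation_summary(posts):
    """Summarise quantity (number of posts) and rough quality proxies
    (average length, replies drawn) per student."""
    by_student = defaultdict(list)
    for student, words, replies in posts:
        by_student[student].append((words, replies))
    return {
        s: {
            "posts": len(entries),
            "avg_words": mean(w for w, _ in entries),
            "replies_drawn": sum(r for _, r in entries),
        }
        for s, entries in by_student.items()
    }
```

Such a tabulation only flags patterns (e.g. a student posting often but briefly, or never drawing replies); judging the actual quality of participation remains the teacher's task.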


Oliver (2000a) describes two main types of summative evaluation: experimental and exploratory. In an experimental model, where a control group does not receive the particular teaching intervention under evaluation, there are ethical problems surrounding the decision to deny one sub-group of students a potentially beneficial educational experience. There are also many problems with isolating independent variables from the host of cultural and social influences on a learning experience: full control is almost impossible, and therefore it is almost impossible to attribute causality. A more fruitful summative evaluation would account for any independent variables, rather than try to isolate them. Comparative evaluations are popular due to the desire to see if there is added value from a particular intervention. The problem here, in the case of evaluating online learning, is that students can quite easily share information, identities, and passwords. The ethical problem of allocating potentially beneficial resources to one group and not another is also present here, as well as the problem of materials, teachers or students changing (Oliver 2000a). A further issue is that traditional face-to-face teaching and online teaching are radically different. Online learning involves new forms of communication and collaboration, and new patterns of study and group behaviours and, as Dempster (2004) points out, these do not necessarily have counterparts in traditional face-to-face learning, so it is futile to compare an e-learning group with an existing group. If the assessment of online and traditional learning remains the same then it may be possible to make a valid comparison. A comparison may also be valid where criterion-referenced assessment is being used.
The conclusion to these issues, given by Oliver (2000a) is that while experimental and comparative evaluations can be done, they need to be carefully designed and reported in a way that acknowledges their limitations. As an exploratory model Oliver (2000a) suggests an ‘illuminative’ approach for the evaluation of online learning. It is an anthropological approach that “looks at what is important” to students. It requires observation, enquiry and explanation, and analytical methods need to be adapted pragmatically and triangulation used to improve reliability. This approach seeks to describe and interpret factors that influence learning, rather than control them, and the educational context becomes the focus of the study. It follows from this that there will be problems with the objectivity of the evaluation and its transferability. However, it is suggested that it would be useful for specific programs, and the results of an evaluation should then be critically assessed in new contexts for their applicability. This was the approach taken in the TILT (Teaching with Independent Learning Technologies) Project. In this project, which aimed to improve teaching and learner support using Information Technologies in Higher Education, elements of both illuminative and experimental approaches were used and the results triangulated. There was an assumption that the results would be situationally specific, and there was no attempt to generalise findings. It seems that within the ‘evaluation community’ there is a move towards a new philosophy of pragmatism, and


that evaluations can have different theoretical underpinnings when required by context and audience (Oliver 2000b). Sims (2001) suggests that most evaluation is ‘reactive’ in that it takes place at the end of a learning program, examining the functionality of resources and the achievement of learning outcomes once materials have been developed and implemented. He puts forward a ‘proactive’ development environment in which evaluation takes place at all stages of the development of a learning program. Developers need to “focus on the criteria on which their products and resources would be evaluated and thereby ensure that all factors associated with a successful evaluation are addressed during the design and development process.” This is not intended to replace formative and summative evaluation, but to provide an integrated framework for an online pedagogy.
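Sims's point that the criteria for a successful evaluation should be fixed during design can be made concrete: if acceptable ratings are agreed in advance, formative survey data can be checked against them mechanically as the course runs. The following Python sketch is illustrative only; the items, the 5-point scale, and the threshold are invented for the example:

```python
from statistics import mean

# Hypothetical 5-point ratings (1 = strongly disagree, 5 = strongly agree)
responses = {
    "The objectives were clearly stated": [4, 5, 3, 4, 4],
    "Feedback on tasks was timely":       [2, 3, 2, 3, 2],
}


def summarise(responses, flag_below=3.0):
    """Mean score per item, flagging items that fall below the
    criterion agreed at the design stage, so revisions can be made
    while the course is still running."""
    report = {}
    for item, scores in responses.items():
        avg = mean(scores)
        report[item] = {"mean": round(avg, 2),
                        "needs_attention": avg < flag_below}
    return report
```

The point of the sketch is the workflow, not the arithmetic: because the criterion (`flag_below`) was chosen before delivery, the formative check is proactive in Sims's sense rather than an after-the-fact judgement.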


Online Learning and Pedagogy

Definitions of online learning range from the very simple, such as "courses offered over the internet", to the pragmatic:

"There is no easy definition of online learning; you must define for yourself what online learning is for your company. You must determine what definition is right for you" (Information Technology Development Corporation 2003).

to the more encompassing:

"…a learner centred approach to education, which integrates a number of technologies to enable opportunities for activities and interaction in both asynchronous and real time modes. The model is based on blending a choice of appropriate technologies with aspects of campus-based delivery; open learning systems and distance education. The approach gives instructors the flexibility to customise learning environments to meet the needs of diverse student populations, while providing both high quality and cost effective learning" (Bates 1997).

There are other definitions of online learning, but what they all have at the core is the use of the Internet and computing technology to facilitate independent student-centred learning. The dominant approach to learning in an online environment is constructivism (Weller 2003). This theory of learning emphasises the social construction of knowledge: learners come to an understanding of concepts via their interaction with other learners and their teacher. It is a student-centred approach and places collaboration at the heart of learning. The reason why constructivism is popular in online learning is that online learning provides an ideal platform for student-centred, collaborative learning of the kind that has been put forward as a pedagogical ideal, but has not been evident in the classroom to any great extent. Reeves (1997) uses a scale of pedagogical dimensions of epistemology and philosophy to put constructivist theory into context.
With epistemology, the scale goes from 'Objectivism' to 'Constructivism', where at one extreme 'objectivism' holds that:

• Knowledge exists separate from knowing
• Reality exists regardless of the existence of sentient beings
• Humans acquire knowledge in an objective manner through the senses
• Learning consists of acquiring truth
• Learning can be measured precisely with tests.

At the other extreme, 'constructivism':

• Denies that knowledge exists outside the bodies and minds of human beings
• Reality exists, but it is individually constructed
• Humans construct knowledge subjectively based on prior experience and metacognitive processing or reflection
• Learning consists of acquiring viable assertions or strategies that meet one's objectives
• At best, learning can be estimated through observations and dialogue (Reeves 1997)

If the designers and practitioners using online learning lean towards the objectivist viewpoint then they will be primarily concerned with seeking a definitive structure of knowledge, which is comprehensive and accurate with respect to the 'truth' as they know it. Constructivists will be concerned with assuring that the online learning content reflects a wide spectrum of views from which learners can construct their own knowledge (Reeves 1997). The dimensions of pedagogical philosophy are placed on a scale ranging from 'instructivist' to 'constructivist', where at the extreme end instructivism stresses the importance of goals and objectives that sit apart from the learner. The teacher structures these into learning hierarchies, and then direct instruction is given to move the learner through the objectives. The learner is 'viewed as a passive recipient of instruction'. Constructivism, at the other extreme, emphasises the primacy of the learner's "intentions, experience and metacognitive strategies." Direct instruction is replaced with self-directed learning and exploration (Reeves 1997). Ewing and Miller (2002) state that much is unknown about the complicated relationship in a learning exchange between the social and cultural influences upon a learner and how these affect their way of thinking about and understanding their learning activities, but that a constructivist approach goes some way to bringing the two together. It allows "a freer approach to the instructional process" than a process-based approach. It also places the learner at the centre of the design process, rather than designing a set of instructional sequences that focus on learned outcomes.
They suggest that there should be a closer examination of the relationship between collaborative learning and electronic learning, and point to evidence that collaborative learning, along with "case-based instruction and a significant emphasis on student autonomy", makes a significant contribution to effective learning. They put forward a framework to identify the key elements of collaborative learning and ICT-supported learning, and to highlight the points of contact between them. The framework is discussed in the literature review. Garrison and Anderson (2003) suggest that "the dominant educational feature of [online learning] is to support asynchronous, collaborative learning." They go on to advocate a 'collaborative constructivist' view of teaching and learning, the goal of which is to "blend diversity and cohesiveness into a dynamic and intellectually challenging 'learning ecology'". It is this new 'ecology' that demands a different approach to teaching and learning, and trying to simulate traditional face-to-face approaches makes little sense. Benigno and Trentin (2000) argue that what distinguishes online education from previous forms of distance learning is that it is a social process, rather than an individual one. This is brought about by computer conferencing systems "that foster interpersonal communication and collaborative learning." Collaboration is brought about by group tasks that require students to work as a team, or to contribute elements. Weller (2003) outlines six advantages of collaboration:

• It promotes reflection
• Active learning, which is more beneficial than simply being a passive recipient of information
• Development of communication skills
• Deeper understanding
• A broader scope of knowledge can be brought to a discussion or task
• Exposure to different ideas

There are pitfalls in collaborative learning, many of which are also applicable to such learning in a traditional face-to-face environment. The key problems are: first, it reduces the capacity for independent learning, at a pace and time that suits the student. This is especially the case with adult distance learners, who will have work and family commitments. Second, the link between collaborative work and assessment: what weight is the collaborative work given compared to individual work? Some students may feel that they are being put at a disadvantage if they are in a 'weak' group. Third, implementing and negotiating collaborative working is time-consuming. Weller (2003) suggests that while collaborative learning is essential in online learning, if the technology is to be used in a meaningful way, it needs to be used in combination with individual activities, and implemented carefully. Reeves points out that while constructivism has been popular in theory there has been little evidence of it in practice in schools. In practice there tends to be a combination of instruction and construction, but the domination of didactic methods is not surprising given that the predominant assessment method is examination, and that the performance of schools is measured in public league tables based upon these exams. This is the case in what Garrison and Anderson (2003) would call the traditional learning 'ecology'; online learning is relatively new, and is still open for a 'collaborative constructivist' method to become the predominant theoretical perspective and mode of practice.
However, from the point of view of constructivists, one worrying finding from a study of Australian Vocational Education and Training providers was that "teacher-centred approaches are dominant in current offerings despite the great potential of web-based flexible learning to engage learners in problem-solving, responding to change and improvement through self-monitoring and self-reflection" (McKavanagh et al 2002). Whether this
relatively small survey can be generalised to a British population is open to question. There is also the point that online learning requires students to pass examinations and to have their learning tested; thus, as in traditional face-to-face teaching, there is a need to be pragmatic, and in the case of adults there is a need to address the individual circumstances of students – some may not respond well to self-directed learning – there are adults who would prefer to be 'taught' more formally. Hanson (1996), in research on mature students, found that adults were prepared to 'suspend their adulthood at the door of the institution and be only too willing to submit themselves to its constraints'. A caveat was that the tutor had to have something to offer which justified such an unequal partnership. It is suggested by Sims (2001) that "as online learning environments can be perceived as supporting the constructivist paradigm, adopting rigid instructivist strategies may degrade the overall effectiveness of the encounters experienced by the learners". However, Garrison and Anderson (2003), while adopting Sims's framework, point out that the design of online learning courses will reflect the pedagogical biases of their creators, and that there will be many online courses that follow an instructivist design. They suggest that an online course should be consistent with the philosophy of the designers and that it can reflect a variety of pedagogical assumptions. What is important is that every course should: be aligned with the prior experience and knowledge of the learners; provide pathways and sequencing that are coherent, clear and complete; provide opportunities for discourse; provide means by which both students and teachers can assess their learning and the expected outcomes; and clearly articulate the ways in which these outcomes are to be achieved.
This would seem to be a more pragmatic way of moving forward with the design and implementation of online courses, because while a purely constructivist pedagogy is an ideal, the everyday practicalities of teaching, the assessment requirements of a course, and time constraints mean that a mixture of instructional methods is a more suitable approach. Courses should combine elements of instruction with associated collaborative activities that explore the concepts (Weller 2003).


Characteristics of the Learning Environment

A major goal for teaching is to ensure that the learning environment is as rich as possible. It is this richness that produces the conditions to facilitate deep levels of understanding rather than just the recall of factual information. Garrison and Anderson (2003) describe two levels of processing or understanding: "surface level processing, where the student has a reproductive or rote conception of learning and a corresponding learning strategy; and deep-level processing, where the intention is to comprehend and order the significance of the information as well as integrate it with existing knowledge." Reece and Walker (2003) point out that these deep and surface concepts are linked to motivation, because it is possible to understand them in terms of the student's motivation to achieve an end result by using a 'surface or strategic approach'. The importance of this is that approaches to learning are influenced by the educational environment. Students will adapt their learning to the demands and characteristics of the course they are doing. Ramsden (1998) argues that there are three domains that influence a student's perception of and approach to learning. These are: assessment, curriculum, and teaching. Assessment – this has a powerful influence on what students see as important and how they should approach their learning. If the assessment is by examination then this can influence the student to prepare for the recall of information, at the expense of deeper understanding. A deeper understanding on behalf of the student requires assessment methods that reflect this. Curriculum – excessive demands on a student's time will encourage a surface approach to learning. Online learning can be a powerful tool in enabling the student to have greater freedom to choose the content of learning, an important condition for deep learning.
Teaching – influences the approach to learning and addresses the issues raised in the other two domains. The teacher shapes the learning environment and outcomes, and defines the goals, contents and assessment. It is the teacher that creates the conditions for deeper learning. The teacher can do this online by ensuring that learning is applied to problem-based situations, encouraging discussion and structuring reflection (Reece and Walker 2003). The strength of online learning is in its ability to influence these three domains in the formation of what Palloff and Pratt (1999) describe as “a learning community through which knowledge is imparted and meaning co-created (which) sets the stage for successful learning outcomes.”


Characteristics of Adult Learners

The California Distance Learning Project (quoted in Palloff and Pratt 1999) found that students who were successful in online learning shared certain characteristics. They:

• Are voluntarily seeking further education
• Are motivated, have higher expectations, and are more self-disciplined
• Tend to be older than the average student
• Tend to possess a more serious attitude towards their courses

There are similarities between these characteristics and those of adult learners in general. Rogers (2002) lists seven characteristics of adult learners:

1. The students define themselves as adults. As such they 'exercise their adulthood' by voluntarily attending learning programmes. Even when they are sent by employers, they choose to come rather than incur whatever consequences would follow from a decision not to. It also means that they accept responsibility for their own learning, if the subject is seen as appropriate.

2. They are in the middle of a process of growth, not at the start of a process. Adult learners are not passive individuals; they are 'actively engaged in a dynamic process', and the teacher must be aware of this in order to be able to respond to the varying demands of adult students.

3. They bring with them a package of experience and values. This can have implications for the collaborative aspects of online learning. The teacher should strive to utilise the varied experience and knowledge of all of the members of the group. This should help to bind the group together. The 'package' also determines what meanings the learner creates.

4. They come to education with intentions. Adult learners are usually more highly motivated; they want to change something about their world and see education as the means to do this. Whether this involves simply learning a subject, or is for social and personal growth, it means that they can be directed towards particular learning objectives to develop a sharper awareness of the learning task and its relevance to them. Once this is facilitated then their motivation can lead to a greater degree of self-directed learning.

5. They bring expectations about the learning process. These are based upon their experiences of school and of education since leaving school. They may lead some to seek independent learning, but others may seek to 'be taught' in a more formal structure.
It is important to ascertain these expectations before a course begins. Another aspect to be aware of is that some students may feel that there are certain

13

Stephen Walker 2004

subjects that they will ‘never learn’, and may confine themselves to the role of ‘lurker’. 6. They have competing interests Adult learners usually have a full social environment. They may be parents, have partners, workmates, friends, a full time job. These contextual and environmental elements need to be considered in an evaluation of the learning that has taken place. Their background is relevant to their learning, and their learning may not always have their full attention. 7. They already have their own set patterns of learning This emphasises the fact that all of the learners will have their own learning styles. The teacher needs to be aware of this and respond to it. In a student-centred environment the students must be given the opportunity to exercise their own way of learning. The term ‘androgogy’ has been used to describe the way that adults (as opposed to children) learn, and it is suggested by its proponents that the term should replace ‘pedagogy’ when referring to adults learning. One of the main theorists behind androgogy, Malcolm Knowles, identified six assumptions about adult learning (Knowles 1970): 1. The need to know Adults need to know why they need to learn something before they start. 1. Self-concept Adults have a self-concept of being responsible for their own lives, and need to be seen by others as being capable of self-direction. 2. Experience Adults can draw upon experience for their learning. 3. Readiness to learn Adults are motivated to learn those things that they need to know and be able to do. 4. Orientation to learning Adults are motivated to learn when it will help them in their work or in real-life situations. 5. Motivation The best motivators are internal pressures) like increased job satisfaction and self-esteem). Hanson (1996) points out, however, that there is a danger that androgogy, rather than describing the ways that adults learn, actually prescribes the ways

14

Stephen Walker 2004

that they should learn, ignoring the constraints that hinder the ability of adult learners to learn independently and use their experiences. These characteristics of online learning and online and adult learners suggest that a student-centred method of delivering learning, which allows independent learning and collaboration, is desirable when teaching adults, and that teachers should be negotiating with adult learners about their objectives and how to achieve them. However, there are a number of constraints that need to be considered when adopting this type of approach to teaching. These would include the amount of time that is available to complete a course, the outcomes required from a course in terms of its assessment by examination boards or the college or business funding the learning, and the individual learning styles of the students.


Evaluation Frameworks: A Review of the Literature

There have been a number of attempts to develop frameworks for the evaluation of online learning, to outline a theoretical basis for evaluation, and to identify which factors need to be taken into consideration. Hughes and Attwell (2002) suggest that evaluation work has been dominated by ethnographic studies, in particular ‘heavily contextualised case studies’, rather than interpretation and analysis. They go on to outline a ‘taxonomy of e-learning evaluation’, based upon a web-based literature survey, which organises the evaluation research under seven main headings:

• Case studies of specific e-training programmes
• Comparisons with traditional learning
• Tools and instruments for evaluation of e-learning
• Return on Investment reports
• Benchmarking models
• Product evaluation
• Performance evaluation

In addition, a number of papers have been published that attempt to produce models for the evaluation of online learning, both for practitioners and for academic research staff. Hughes and Attwell suggest that evaluation should be integral to e-learning and not ‘bolted on’, with both formative and summative elements, client-centred and ethical, with a range of different approaches and theoretical perspectives. Theirs is an attempt to ‘build a robust classification system’ for mapping and coding existing work into the ‘effectiveness, efficiency and economy of e-learning’. The Hughes and Attwell framework consists of five major clusters of variables that need to be taken into account:

Individual learner variables
Physical characteristics, learning history, attitude

Learning environment variables
Physical environment, organisational environment, subject environment

Contextual variables
Class and gender, political context (who is funding the learning), cultural background, geographical location

Technology variables
Hardware, software, connectivity

Pedagogic variables
Support systems, methodologies, learner autonomy, selection and recruitment, assessment, accreditation

The conclusion of Hughes and Attwell is that the evaluation of online learning is fundamentally the same as that of traditional learning, but with certain variables playing a more prominent role. These would obviously include the technology variables, and, due to the nature of online learning, those individual variables that pre-dispose a learner to be more independent and more highly motivated – characteristics that are strongly associated with adult learners, as outlined above. Pedagogic variables would also be prominent, given that online learning requires different pedagogies from traditional face-to-face learning.

An attempt to incorporate technology into the evaluation process is outlined by Wentling and Johnson (2004). They describe the conceptualisation and development of an evaluation system for online learning. They suggest that the current evaluation practices of practitioners and academics are limited to the ‘use of traditional research methods and intuitive approaches to evaluation’. They also point out, as do Hughes and Attwell, that there is a lack of systematic evaluation built on evaluation theory and practice, although their system doesn’t seem to address this issue. The system has three stages, and uses a ‘medical analogy’. The first stage is to assess ‘vital signs’, to check the current state of health of the program. The second is to carry out an ‘in-depth diagnosis’ of the results of the vital sign assessment, identifying substandard results. This is followed by the third stage of ‘program improvement planning’, to provide solutions to the problems investigated at stage two.
The vital signs identified by Wentling and Johnson, along with their respective ‘data elements’, are:

Program demand
Number of applications requested and received per semester, number of telephone contacts per semester

Student satisfaction
Results of formative assessment, questionnaires etc

Faculty satisfaction
The ratings of the ‘faculty’ with regard to technology, technical support, interaction with students and the quality of students’ work

Student retention

Student learning

Financial efficiency

What these signs and data elements suggest is that this is an evaluation system to be used at a departmental level rather than by tutors to evaluate online learning. Where this research marks an advance on others is in the development of an electronic performance support system (EPSS) to automate the data gathering; Wentling and Johnson point out that ‘beyond a few efforts in corporate settings, this technology has not been widely applied in education and training for evaluation purposes’. The problem with the support system is that limits had to be set on the variables that it could handle, therefore missing some possible responses. However, Wentling and Johnson point out that the data collection instruments were carefully developed and tested, and that the computer program is being used to assist evaluation by practitioners, not to replace them. The vital signs outlined are broadly applicable and have the advantage of being ‘specific outcome measures with minimal data requirements’. Student satisfaction, student learning, and financial efficiency tie in with Kirkpatrick’s levels 1, 2, and 4 respectively.

Donald Kirkpatrick presented a four-level model of training evaluation that has become a standard evaluation model in business and educational training. It can also be adapted for online learning, and could provide the cohesive underlying evaluation theory that is missing in many evaluation frameworks. Kirkpatrick’s levels of evaluation are:

1. Reaction
Students’ reactions are often gathered using a ‘happy sheet’ at the end of a training event (and in many training events this is the only evaluation). A positive reaction from students doesn’t necessarily imply that learning has taken place, but a negative reaction would almost certainly reduce the possibility of learning having occurred. In the case of online learning this would be part of the formative evaluation of the course.
Questionnaires can be used as part of the course, asking about the relevance of the objectives, the navigability of the software, and the perceived value and transferability of the learning (Kruse 2004).

2. Learning
This moves beyond learner satisfaction and tries to assess whether learning has actually occurred. This can involve formal and informal testing, or even self-assessment. It is desirable to have pre-testing before the training so that a baseline can be established and a meaningful comparison made.


3. Transfer of behaviour
This is a measure of the change in behaviour after a training program. Is the training being put to use in the workplace, for example? Is it demonstrated in coursework assignments? This requires evaluation to be made at intervals after the training, and, in the case of workplace training, would need to involve input from peers and managers.

4. Results
This measures the return on investment. Does the training have an effect on turnover or productivity? Does it result in more students passing exams and the course becoming more popular? This is the most difficult aspect to measure, as it is almost impossible to isolate the particular variables that may have had an effect on the learning outcomes.

The Kirkpatrick model can provide a useful guideline for the formative and summative evaluation of online courses, although level three may take more time and resources than are available to individual teachers, and may have to be carried out as part of a college-wide evaluation process. At the present time there are few entirely satisfactory measures of the return on investment in either educational or business establishments.

An attempt to provide a quantitative measure of the effectiveness of online learning programs has been made by Sonwalker (2002). He proposes a three-dimensional grid in which five ‘functional learning styles’ are on the x-axis, six media elements on the y-axis, and interactive aspects on the z-axis (on a scale from teacher-centred to learner-centred). The five learning styles are apprenticeship, incidental, inductive, deductive and discovery. The six media elements are text, graphics, audio, video, animation and simulation. The interactive elements are feedback, revision, email, discussion and bulletin. These are each given a weighting and put into a ‘summative rule’, the result of which is a Pedagogical Effectiveness Index (PEI).
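As a rough sketch (the function below is a hypothetical illustration, not Sonwalker's published code), the summative rule amounts to counting the elements a course uses in each dimension and multiplying by per-element weights of 0.068, 0.055 and 0.066:

```python
# Hypothetical sketch of Sonwalker's Pedagogical Effectiveness Index (PEI).
# Per-element weights: 5 learning styles * 0.068 = 0.34,
# 6 media elements * 0.055 = 0.33, 5 interactive elements * 0.066 = 0.33,
# so a course using every element in every dimension scores 1.0.

WEIGHTS = {
    "learning_styles": 0.068,       # apprenticeship, incidental, inductive, deductive, discovery
    "media_elements": 0.055,        # text, graphics, audio, video, animation, simulation
    "interactive_elements": 0.066,  # feedback, revision, email, discussion, bulletin
}

def pei(learning_styles: int, media_elements: int, interactive_elements: int) -> float:
    """Return the PEI for a course using the given number of elements per dimension."""
    return (learning_styles * WEIGHTS["learning_styles"]
            + media_elements * WEIGHTS["media_elements"]
            + interactive_elements * WEIGHTS["interactive_elements"])

# The worked example from the text: three learning styles, four media
# elements and two interactive elements.
print(round(pei(3, 4, 2), 3))  # 0.556
```

A fuller mix of elements pushes the score towards 1, which is the sense in which the index rewards a rich learning environment.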
The three dimensions are weighted at 0.34, 0.33 and 0.33, with learning styles each given a weight of 0.068, media elements weighted at 0.055, and interactive elements at 0.066. The theory is that a greater mix of elements will give a score closer to 1, and the closer the score is to 1, the greater the ‘pedagogical effectiveness’. (For example, a course with three learning styles, four media elements, and two interactive elements gives PEI = 3*0.068 + 4*0.055 + 2*0.066 = 0.556.) The greater mix of elements increases ‘cognitive opportunity’, leading to enhanced learning.

The problem with this is that, although it is well worked out, it gives a very narrow definition of ‘pedagogical effectiveness’, which cannot be measured purely quantitatively; feedback from students would also be necessary. It is also questionable whether each of the elements within the dimensions should be given equal weight: for example, are text-based programs equal in their effectiveness to ‘simulation’? Is a bulletin equal in its effectiveness to a discussion? What the framework does emphasise is that a rich learning environment leads to deeper learning. It may have some use as a ready reckoner for the potential effectiveness of a course, and may provide the sort of ‘factual’ and numerical data that pleases those in charge of the purse strings, but it is not a substitute for a carefully planned evaluation using a mix of qualitative and quantitative elements. However, the three dimensions and the elements within them are pointers to factors that play a part in successful online learning.

Ewing and Miller (2002) suggest that a constructivist approach to online learning with an emphasis on collaborative learning produces effective learning. Their framework tries to find relevant points of contact between collaborative learning and ICT-supported learning. The effectiveness of their framework will depend upon “the strength of the links between the pairs of features” in the table below.

Key features of e-learning support | Key features of collaborative learning
Autonomy in student learning | Learners have individual responsibility and accountability
An environment which promotes collaborative learning | Learning interaction takes place in small groups
Moving beyond knowledge transmission to include communication as a real-life skill | Communication during learning is interactive and dynamic
Promotion of personalisation and reduction of depersonalisation of learning | Learners can identify their role in the learning task
Support for learners in development of ICT and personal learning skills | Participants have a shared understanding within the learning environment

Ewing and Miller (2002)

The conclusions that they came to were that it is possible to give students freedom and choice over “the conditions and circumstances of their own learning” and, in doing so, to sustain a high level of individual responsibility and accountability for collaborative learning. “The students responded positively to collaborative learning, and asked for more of the same. However, a substantial element of the communication was ‘utilitarian and content-based’, rather than being meaningful person-to-person exchanges.” What this meant was that it was difficult to move beyond ‘knowledge transmission’ to more meaningful exchanges that promoted deeper learning. There is a need to express to students that communication is a valuable aspect of learning, and teachers need to recognise a role for personalisation in the learning event.


In the evaluation framework proposed by Benigno and Trentin (2000) the focus is on participation in group activities rather than on fully-fledged formal evaluation of learning. They take it for granted that ‘satisfactory participation’ will result in learning of the course topics. They also suggest that when running an online course “it is essential to start with a reasonably homogenous student group, especially in terms of pre-knowledge”. This may be feasible up to a point in some institutions, but tutors have to expect to cater for a range of abilities within the online group, whether these are academic abilities or competence with ICT and the learning environment. The homogeneity of the group can be ensured by entry questionnaires. These should be used anyway so that tutors know who they have in their group, and to establish the level that they are at. Whether the questionnaires are then used to weed out ‘unsuitable’ students is a policy matter for the provider.

The analysis of participation necessary to determine whether learning is occurring is done via a ‘qualitative measurement grid’. This is not a scientifically rigorous tool, but it has the advantage of being quick and easy to use, and is seen as useful by tutors. Four basic elements are taken into consideration in the grid:

• Number of messages sent by each individual
• Interactivity characteristics of the messages
• Extent to which the messages cover the topics that course experts have identified as significant
• Depth (granularity) to which the topics have been explored

The first two cover participation quality from the point of view of presence and interaction with other students; the second two do so from the point of view of the contents being studied. The tutor should also get regular progress reports from students, noting what they have done, any problems they have encountered with the coursework or with the technology, personal issues etc. At the end of the course it is suggested that students are given a questionnaire to examine their opinions on the course. The questions should cover:

• Course contents
• The educational approach used
• Organisational aspects of course activities
• Participation logistics of individual students
• Technical aspects related to use of the net and other technologies
• Performance of tutors as moderators, facilitators, activity leaders etc

Benigno and Trentin also point out that it is necessary to monitor learning in the long term, in a person’s work for example, corresponding to Kirkpatrick’s Level 3 of transfer of learning.
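A ‘qualitative measurement grid’ of the kind Benigno and Trentin describe could be kept as a simple per-student record. The sketch below is a hypothetical illustration (all field names are invented); the four grid elements map to message counts, an interactivity ratio, topic coverage and a tutor-assigned depth rating:

```python
# Hypothetical per-student record for a participation grid in the spirit of
# Benigno and Trentin (2000). The tutor fills in counts and ratings while
# reading forum messages; summary() condenses them into the four grid elements.
from dataclasses import dataclass, field

@dataclass
class ParticipationRecord:
    student: str
    messages_sent: int = 0
    replies_to_others: int = 0                         # interactivity characteristic
    topics_covered: set = field(default_factory=set)   # significant topics touched
    depth_ratings: list = field(default_factory=list)  # tutor's 1-5 rating per message

    def summary(self, significant_topics: set) -> dict:
        coverage = len(self.topics_covered & significant_topics) / len(significant_topics)
        avg_depth = (sum(self.depth_ratings) / len(self.depth_ratings)
                     if self.depth_ratings else 0.0)
        return {
            "messages": self.messages_sent,
            "interactivity": (self.replies_to_others / self.messages_sent
                              if self.messages_sent else 0.0),
            "topic_coverage": coverage,
            "average_depth": avg_depth,
        }

# Example: four messages, two of them replies, covering two of four key topics.
rec = ParticipationRecord("student_a", messages_sent=4, replies_to_others=2,
                          topics_covered={"modems", "routing"},
                          depth_ratings=[3, 4])
print(rec.summary({"modems", "routing", "dns", "security"}))
```

This stays deliberately quick and rough, matching the authors' point that the grid is a practical tutor's tool rather than a scientifically rigorous instrument.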


McKavanagh et al (2002) put forward a framework for evaluation that they claim “steers a mid-course between teacher-centred and student-directed learning approaches”. Their ‘conversational’ framework is based upon the work of Laurillard, and derives from the view that good teaching and learning is reflected in ‘deep exchanges’ between all participants. They also hold the view that constructive engagement with learning leads to changes in understanding, rather than learning at a shallow level. The evaluation involved counting the sort of exchanges that would lead to “deep, reflective and effective learning”. This was done by counting the number of exchanges between students and students, students and teachers, and students and course content, and by monitoring the application of new skills. An email survey was also carried out among teachers and module co-ordinators. The framework is intended to be administered by expert evaluators, rather than used by teachers as part of the teaching process, but its principles do highlight the capacity of online learning to provide the ‘constructive engagement’ with learning activities that is necessary for changes in understanding, and it provides examples of the sort of variables that need to be considered if an evaluation is to produce evidence of deeper learning.

Scanlon et al describe an evaluation framework that encourages a variety of methods for evaluation. They make the point that context is important in any evaluation and needs to be taken into account. The rationale behind this is that the evaluation framework should aim to evaluate the use of educational technology on a course, and not the educational technology alone.
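The exchange counting used in McKavanagh et al's conversational framework could be sketched as a simple tally over a message log. This is a hypothetical illustration; the log format and role labels are invented here:

```python
# Hypothetical sketch of exchange counting for a 'conversational' evaluation:
# each logged interaction is labelled with a (sender, receiver) pair and
# tallied by type, e.g. student-student or student-content.
from collections import Counter

def tally_exchanges(log):
    """log: iterable of (sender, receiver) role pairs,
    e.g. ('student', 'teacher') or ('student', 'content')."""
    return Counter(log)

log = [
    ("student", "student"),
    ("student", "teacher"),
    ("student", "content"),
    ("student", "student"),
]
counts = tally_exchanges(log)
print(counts[("student", "student")])  # 2
```

Raw counts like these only indicate presence and direction of exchange; judging whether an exchange is ‘deep’ still requires qualitative reading of the messages themselves.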
An illuminative approach is suggested which focuses on three aspects: “context, interactions and outcome”. These are defined as follows:

Context refers to a wide interpretation of the rationale for use of the software including the aims of its use…. Interactions refers to a consideration of ways of examining and documenting how students interact with computers and each other, focussing on the learning process. An outcome refers to a wide interpretation of the changes in students, which result as a consequence of using the program.

They also suggest methods of data collection, and which types of data need to be collected for each of the three aspects. The suggestions include interviews, questionnaires (pre- and post-course), logs of computer use, and observations. One caveat is that the use of an illuminative approach with mixed methods of data gathering means that the results are context-specific and cannot be generalised. This does not mean that the results are of no use, however; they can still inform good practice if disseminated.

Sims (2001) suggests a number of factors that need to be considered in the design and implementation of online learning. Sims believes that evaluation should be a proactive part of every stage of the development and implementation of online learning programs. Garrison and Anderson (2003) also use Sims’s work in their framework for evaluation. The first thing that needs to be established is the ‘strategic intent’ of the online learning: why are the resources or activities being placed online? By making the strategic goals explicit it is possible to establish mechanisms for measuring the achievement of those goals (Garrison and Anderson 2003).

The second element is the content of online learning courses, which can range from content predetermined by the teacher to the other end of a continuum, where content is constructed by teacher and students as the course progresses. What is important is that there is cohesiveness among the elements of the course, with a consistent style and at a level that is appropriate for the students. Achieving this becomes more problematic as the contribution of the learners increases. Another aspect of the evaluation of content is whether it can be adapted for future use. Smaller units are easier to adapt and use in different situations, and can be placed in an online repository of learning objects such as www.merlot.org.

Allen (2004) believes that there is too much online instruction that focuses on content presentation rather than the learning experience. His three primary criteria for good content are the ‘3Ms’:

Meaningful – Is it tailored to the learners? Can they understand it? What are the goals of the learners, and how do they relate to the goals the tutor has for them?

Memorable – The learning experience needs to be such that the knowledge stays with the students. A good mix of media elements and graphical content helps this.

Motivational – Do the students understand the purpose of the work? Are they getting feedback in time? Can they engage with the course materials constructively?

These elements are perhaps more important in online learning, which is often done in isolation, where boredom can set in quickly. Third, Sims suggests that interface design and level of interactivity are two of the most neglected aspects of online learning, and both need to be evaluated.
The qualities of an effective interface are ease of navigation; the use of images, sound and video where appropriate; and a design based on some kind of ‘metaphor’, such as a building or filing system. It should also be customisable by students and teachers. Interactivity (Sims’s fourth element) should be measured in terms of student-student, student-teacher, student-content, and teacher-content interaction. Garrison and Anderson suggest two other types: teacher-teacher and content-content, ‘content-content’ referring to the “logic and intelligence that is being built into various autonomous agents and will eventually give rise to a new type of content that is capable of updating itself and changing in response to interactions with teachers, learners and other content agents.”


As was outlined earlier, the forms of assessment influence learning behaviour and can define a course. The fifth aspect of an evaluation, according to Sims, is to consider the assessment activities and whether they measure the course objectives.

Student support is the sixth element of proactive evaluation, and should cover whether there is adequate support for educational factors (remedial or enrichment activities), technical support, and personal issues. Sims asks, “What support personnel and resources have been identified to ensure (students) will feel integral to the learning environment?”

The final aspect of Sims’s ‘proactive evaluation’ is a consideration of whether outcomes have been met and match the objectives, student and teacher satisfaction, results, and of course whether learning has taken place. What Sims and Garrison and Anderson have done is point out that an evaluation is a broad and complex undertaking, involving more than a mere assessment of learning outcomes and student satisfaction.

Garrison and Anderson (2003) make the point that tutors should reward participation in online learning significantly more than in traditional learning environments. Adult distance learners will have many demands upon their time, and if participation carries a high percentage of marks in an assessment then they are more likely to participate and to find the time to do so. The quality of the participation is obviously an issue, and while Garrison and Anderson suggest heuristics for measuring the quality of participation, they recognise that this is labour-intensive and so not practical for many tutors. The issue of providing qualitative measures of participation is one that is difficult for tutors to overcome. Garrison and Anderson suggest that it is possible for students to present their own evidence of meaningful participation.
This can be done at the end of a course by asking students to write a ‘reflection piece’ that quotes from their contributions. A list of heuristics could be given to students to act as guidelines. The heuristics (based upon an ‘internet based learning construction kit’ produced by Curtin University 2001) are listed below. Do the student’s postings:

• Encourage others to learn? To participate?
• Contribute regularly at each stage of the unit?
• Create a supportive and friendly environment in which to learn?
• Take the initiative in responding to other students?
• Seek to include other students in their discussions?
• Successfully overcome any private barriers to discussion?
• Demonstrate a reflective approach to online learning?
• Use online learning in novel ways to increase their own and other students’ learning?
(Curtin University 2001)
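Heuristics of this kind lend themselves to a simple self-assessment checklist that a student could complete alongside the reflection piece. The sketch below is a hypothetical illustration (the function and scoring scheme are invented; the item wording paraphrases the list above):

```python
# Hypothetical checklist built from Curtin-style participation heuristics.
# A student answers True/False per item against their own postings.
HEURISTICS = [
    "Encourage others to learn or participate",
    "Contribute regularly at each stage of the unit",
    "Create a supportive and friendly environment",
    "Take the initiative in responding to other students",
    "Seek to include other students in discussions",
    "Overcome private barriers to discussion",
    "Demonstrate a reflective approach to online learning",
    "Use online learning in novel ways to increase learning",
]

def checklist_summary(answers):
    """answers: list of booleans, one per heuristic, in order.
    Returns the items judged to be met and a simple score string."""
    met = [h for h, ok in zip(HEURISTICS, answers) if ok]
    return {"met": met, "score": f"{len(met)}/{len(HEURISTICS)}"}

print(checklist_summary([True, True, False, True, False, True, True, False])["score"])
```

The score is only a prompt for reflection; the evidential weight still lies in the quoted contributions the student selects.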


Gaps in the Research

A review of the literature suggests that work on the evaluation of online learning is skewed towards evaluating courses in Higher Education. There needs to be more research into the wider contexts of online distance learning, particularly among adults, matching the characteristics of adult learners, and adults’ learning styles, with pedagogy and the content of online learning.

The ability of online learning programs to facilitate student-centred learning requires research into online pedagogies. Teachers need to be able to adapt to new ways of teaching that are less teacher-centred and engage students in the construction of their learning. In particular, attention should be given to methods of facilitating collaborative learning, and to the social processes at work in collaborative learning.

The summative evaluation of online learning is often tainted by political considerations. More research is required into the validity of evaluations, to ensure that they reflect the learning process and are not being used to justify expenditure by the agencies that drive the research.

There is also a lack of research into the Return on Investment related to online learning. It is difficult within a business or educational environment to isolate the independent variables that would indicate the cause of any financial gains, and there appears to be no coherent methodology for measuring such gains. The difficulty in isolating independent variables is also an issue in comparing traditional learning to online learning.

There is an acknowledgement that context-bound evaluations are the most feasible models of evaluation. Transferable models and processes of evaluation are required (Hughes and Attwell 2002) that can enable good practice to be replicated and disseminated.

The effective measurement of the quality of online contributions is problematic.
Message analysis is only feasible in a research environment, due to the extensive amount of work involved in carrying it out. What is missing is a heuristic that teachers can use on an everyday basis to provide a qualitative analysis of contributions.

The psychological effects of online learning are not given sufficient attention. Students in traditional learning environments see themselves as ‘students’; this identity affects the way that they interact with their environment, their peers and the learning process. Do online students have a student identity? What are the characteristics of that identity? Assuming that developing such an identity is beneficial, how can it be developed and encouraged by tutors? How does the development of an online identity contribute to the success, or otherwise, of the student?

The development of an online identity depends upon there being a ‘community’ to which a student can belong. What are the best ways of developing online communities, and what are the social and psychological characteristics at play in an online community? An important factor in the development of an online community is a sense of ‘place’ and a welcoming environment. There is a need for research into the personalisation of online learning that will provide such a welcoming environment and a sense of place.

Traditional learning has a wealth of evaluation models, for example Kirkpatrick’s four levels of evaluation. Online learning and traditional learning are fundamentally different, so research is required into how models for evaluating traditional learning can be adapted to online learning.


The Evaluation Framework

The aim is to provide a framework for the evaluation of online learning that can be used by managers or course leaders, with factors in the evaluation process mapped to the relevant FENTO Standards (Appendix 2). The framework can be used to ensure that online learning provides for the needs of adult learners, by evaluating the extent to which a course provides opportunities for both collaborative and independent student-centred learning.

The process of evaluation has three stages. First is a pre-course or design stage, where the elements to be evaluated and their place in the course are identified. This is followed by formative and then summative evaluation. Qualitative and quantitative methods should be used for data gathering, to enable a fuller picture to be built of the learning process. There are links to resources that provide pro-formas and design methodologies to assist this, and to a range of tools and resources to help with the capture of data and the design and implementation of evaluations. These can be found in Appendices 3, 4 and 5.

The design or pre-course stage is based upon the work of Sims (2001) and Garrison and Anderson (2003), where the emphasis is upon proactive evaluation at all stages of the learning process.

Pre-Course Evaluation

The factors that should be considered in the design of a course are:

The intent of the course
There are general strategic issues that need to be addressed about why the course is being placed online and what the intended outcomes are. These outcomes need to be clear and specific, so that an assessment can be made of whether they have been achieved. More specifically, adult learners need to know why they are learning something before they start, so a clear outline of the learning objectives should be given to them. This will give them a sharper awareness of the learning task and its relevance to them, which will enhance their already high level of motivation. Adult learners could also be encouraged to discuss their own objectives for the course, to ensure that what they want is congruent with what the teacher wants.

Assessment methods
These define the course and the way in which students will structure their learning. The assessment methods should be congruent with the course content, and should be capable of adequately measuring the course objectives. It is important in online learning to reward participation; it should ideally carry a significant percentage of the marks for the overall course.

Stephen Walker 2004

Adult learners are not passive individuals; they are capable of self-direction and can actively engage in the learning process. It is therefore questionable whether strictly instructivist course content with exam- or test-based assessment is suitable for them. Assessment should contain an element of coursework that they can engage with, constructing their own knowledge to produce a deep level of understanding.

The course content – There should be a good mix of media elements to increase 'cognitive opportunity'. The style of the content should be consistent, and the content should be cohesive and at the right level for the students. It should be Meaningful, Memorable, and Motivational (Allen 2004).

Interface – The interface should be easily navigable, aesthetically pleasing, and customisable. A metaphor should be used where possible, such as a school building or filing cabinet. The interface should also address disability issues.

Interactivity – The course content must have elements that support student-to-student, student-to-teacher, and student-to-content interaction. Opportunities for collaborative work should be made available.

Support – Adult learners have competing interests, from work and family commitments for example. This can lead to time-management difficulties and problems with participation and meeting deadlines. Mechanisms should be in place to support students with educational needs (Is there provision for remedial work and enrichment activities?), technical issues, and personal issues. The students will need to know how to get this support, and when it will be available. Support should be available in ways other than via a computer.

Assessment of student needs – An additional factor to consider is the assessment of student needs, and the level the students are at both educationally and with regard to computer skills. Can any sort of pre-testing take place so that comparisons can be made after the course? What do the students understand the outcomes of the course will be – do they match the teacher's objectives for them?


Formative Evaluation

Levels 1 and 2 of Kirkpatrick's four levels of evaluation are useful here for tutors.

Reactions – What are the students' reactions? This can be ascertained by questionnaires, or 'happy sheets', administered online. Is there a facility for doing this as part of the course content, rather than administering them separately – for example, voting buttons or sections for comments? While 'happy sheets' can be seen as a superficial measure, unhappy students will probably not be engaging with the learning process. There is also the possibility of using online diaries, if the learning package facilitates this, or students could be signed up to a blogging website with instructions to enter comments regularly. This may have to be given structure in the form of heuristics, as in the 'reflection piece' mentioned above by Garrison and Anderson.

Usability – This can also be evaluated as part of the students' reactions. Usability will involve navigational issues and technical issues, such as access and download times. Observations of use would be useful as part of a summative evaluation of the course.

Learning – The assessment of students should be an integral part of the learning-teaching process. Is learning occurring? Are the students achieving their own objectives, and the objectives laid down by the teacher? How this information is gained depends upon the assessment methods that the teacher decided upon at the beginning of the course. If the objectives and the criteria for evaluating performance were clearly stated, then this should be straightforward. As with traditional face-to-face teaching there is a range of procedures for testing, from quizzes to written work, demonstrations and confidence logs. Tutors should also monitor the quantity and quality of interactions and contributions from students. Do they reflect mastery of the subject and learning at a deep level? Are they appropriate to the learning required? This can also be assessed from structured entries in diaries or blogs. Are the assessment methods appropriate?

Participation – It is important to reward participation in online learning, and to give significant marks for it. Criteria for the level of participation should be laid down at the start of the course. Participation can be monitored in a number of ways: many online learning systems have a facility for logging online presence and pages accessed; where this is not available, teachers need to find a way to count and log the participation of students. While it is important to note the quality of participation for the purposes of assessing learning, a more formal assessment of quality should take place at the end of the course for the purpose of assigning marks. Does the instructional style encourage participation?
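Where the learning system cannot log participation itself, a simple count can be kept by hand or in a spreadsheet. The sketch below is one illustrative way of tallying posts in Python; the CSV file layout (columns "student", "date", "words") and the minimum-post threshold are assumptions for the example, not features of any particular VLE.

```python
# A sketch of a quantitative participation tally, assuming discussion
# activity has been exported as a CSV file with "student", "date" and
# "words" columns (the layout and threshold are illustrative assumptions).
import csv
from collections import Counter

def participation_counts(csv_path):
    """Count the number of logged posts per student."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["student"]] += 1
    return counts

def flag_low_participation(counts, minimum=3):
    """List students whose post count falls below the agreed minimum."""
    return sorted(s for s, n in counts.items() if n < minimum)
```

A count like this covers only the quantity of participation; judging the quality of contributions still needs a qualitative reading, as noted above.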


There is also the need to ensure that an online community is being established, where appropriate. What opportunities are there for students to collaborate on work, or to discuss issues?


Summative Evaluation

Outcomes mapped to objectives – This requires a clear description of objectives at the beginning of the course. Achievement can be judged via scores on assessments, whatever form they took, and via diaries or blogs in which students are asked to reflect on their own objectives for the course. There are also the strategic intentions outlined at the development stage that require evaluation.

Support – Did the students and tutors feel that there was adequate and timely support for educational, personal and technical issues? This can be assessed using end-of-course questionnaires and interviews or tutorials.

Participation – The students' participation can be judged by an assessment of the quality of contributions. Garrison and Anderson suggest that a 'reflection piece' be used at the end of the course. Online diaries or blogs can also give pointers to participation and other issues. Where programs support logging of use, this can give a quantitative measure.

Transfer of behaviour – Transfer of behaviour and retention of knowledge are more likely to occur where there has been constructive engagement with the learning process. Is the learning demonstrated in the coursework? In the case of vocational training this has to be followed up in the workplace. This may not be feasible for teachers, but is a factor that providers need to consider.

Student satisfaction – This can be judged by an end-of-course questionnaire. A fuller picture of the students' attitudes can be gained from a reflection piece, structured diaries or blogs, and interviews.


Bibliography

Allen, M. (2004) Down with Boring e-Learning! Available at: www.learningcircuits.org/2004/jul2004/alen.htm
An interview with 'e-learning guru' Dr. Michael Allen

Bates, T. (1997) The Impact of Technological Change on Open and Distance Learning. Distance Education, 18 (1), pp 93-109
A framework for implementing technology in a university or college environment

Britain, S; Liber, O. (2004) A Framework for the Pedagogical Evaluation of e-Learning Environments. Available at: www.jisc.ac.uk/uploaded_documents/vleFullReport08.doc
A framework for evaluating Virtual Learning Environments

Benigno, V; Trentin, G. (2000) The Evaluation of Online Courses. Journal of Computer Assisted Learning, 16, pp 259-270
A framework for evaluating adult learning, focusing on the differences between traditional and online distance learning

Curtin University (2001) Internet-Based Learning Construction Kit. Available at: www.curtin.edu.au/home/allen/we3/
A resource for creating effective internet-based learning courses and units

Clarke, A. (2003) Online Learning and Social Exclusion. NIACE
Considers the nature of online learning and what needs to be done to realise the potential of online approaches to include non-traditional learners and people who encounter barriers in accessing learning

Dempster, J. (2004) Evaluating e-Learning. Available at: www.warwick.ac.uk/go/cap/resources/eguides
Guidelines and links to resources for lecturers in higher education who want to evaluate their courses

Edwards, J. (1991) Evaluation in Adult and Further Education: A Practical Handbook for Teachers and Organisers. WEA
A handbook for evaluating traditional face-to-face learning in Post-16 Education


Ewing, J; Miller, D. (2002) A Framework for Evaluating Computer Supported Collaborative Learning. Educational Technology and Society, 5 (1), pp 112-118
Outlines five features of collaborative e-learning and proposes a framework for identifying their existence in any e-learning activity

Garrison, D; Anderson, T. (2003) E-Learning in the 21st Century. RoutledgeFalmer
The authors look at the technological, pedagogical and organizational implications of e-learning in Higher Education

Hanson, A. (1996) The Search for a Separate Theory of Adult Learning: Does Anyone Really Need Andragogy? In R. Edwards, A. Hanson, P. Raggatt (Eds) Boundaries of Adult Learning (1996). Routledge
A critique of the theory of andragogy

Hughes, J; Attwell, G. (2002) A Framework for the Evaluation of e-Learning. Available at: www.theknownet.com/ict_smes_seminars/papers/Hughes_Attwell.html
A proposed framework for reporting on and classifying e-learning evaluation

Information Technology Development Corporation (2003) www.itdc.edu/webed/demo/unix-a1-dem-/WYNTK/01WTK-02.html

Kirkpatrick, D.L. (1998) Evaluating Training Programs: The Four Levels. Berrett-Koehler
Classic book on evaluating training programs. Provides examples of how to evaluate Reactions, Learning, Behaviour, and Results of training

Knowles, M. (1970) Andragogy: An Emerging Technology for Adult Learning. In M. Tight Adult Learning in Education (1983). Croom Helm

Kruse, K. (2004) Evaluating e-Learning: Introduction to the Kirkpatrick Model. Available at: www.e-learningguru.com/articles/art2_8.htm
A short article that adapts Kirkpatrick's four levels of evaluation for online learning

Lockee, B; Moore, M; Burton, J. (2002) Measuring Success: Evaluation Strategies for Distance Education. Educause Quarterly, 1, pp 20-26
Strategies for evaluating the effectiveness of Distance Education programs

McKavanagh, C; Kanes, C; Beven, F; Cunningham, A; Choy, S. (2002) Evaluation of Web-Based Flexible Learning


Available at: www.ncver.edu.au/research/proj/nr8007.pdf
A paper based upon research on the evaluation of digital technologies in the VET sector. An evaluation framework is proposed based upon Laurillard's 'conversational' theory of learning.

Oliver, M. (2000a) Evaluating Online Teaching and Learning. Information Services and Use, 20, pp 83-94. IOS Press
A review of issues in online teaching and learning

Oliver, M. (2000b) An Introduction to the Evaluation of Learning Technology. Educational Technology and Society, 3 (4). Available at: http://ifets.ieee.org/periodical/vol_4_2000/intro.html
A summary of debates within the evaluation community

Palloff, R; Pratt, K. (1999) Building Learning Communities in Cyberspace: Effective Strategies for the Online Classroom. Jossey-Bass
A practical guide to creating a virtual classroom environment

Ramsden, P. (1988) Content and Strategy: Situational Influences on Learning. In R. Schmeck (Ed.) Learning Strategies and Learning Styles (1988). Plenum

Reece, I; Walker, S. (2003) Teaching, Training, and Learning: A Practical Guide. Business Education Publishers
A general textbook for teachers and student teachers in Post-16 Education

Reeves, T. (1997) Evaluating What Really Matters in Computer-Based Education. Available at: www.educationau.edu.au/archives/CP/reeves.htm
A description of fourteen pedagogical dimensions of computer-based education

Rogers, A. (2002) Teaching Adults. Open University Press
A textbook on teaching practice for teachers of adults

Rogers, J. (2001) Adults Learning. Open University Press
A textbook on teaching practice for teachers of adults

Scanlon, E; Jones, A; Barnard, J; Thompson, J; Calder, J. (2000) Evaluating Information and Communication Technologies for Learning. Available at: http://ifets.ieee.org/periodical/vol_4_2000/scanlon.html


A description of the CIAO! evaluation framework, developed from evaluations conducted at the Open University

Shepherd, C. (2004) Evaluating Online Learning. Available at: www.fastrackconsulting.co.uk/tactix/features/evaluate/evaluate.htm
An article aimed at businesses providing online learning for their employees, outlining the reasons for carrying out evaluations

Sims, R. (2001) From Art to Alchemy: Achieving Success with Online Learning. Available at: http://it.coe.uga.edu/itforum/paper55.htm
Sims's model of proactive evaluation

Sonwalkar, N. (2002) A New Methodology for Evaluation: The Pedagogical Rating of Online Courses. Available at: www.syllabus.com/article.asp?id=5914
A method of assigning a numerical rating to online learning courses

Weller, M. (2003) Delivering Learning on the Net. RoutledgeFalmer
A comprehensive and practical resource for developing and using the internet for learning

Wentling, T; Johnson, J. (online article, accessed 2004) The Design and Development of an Evaluation System for Online Instruction. Available at: www.hre.uiuc.edu/online/eval.model.pdf
A description of the design and implementation of an evaluation system that can be used to judge online learning


Appendix 1
Questions to ask in the Preparation of an Evaluation

Pre-Course Evaluation

Course Intent
What is the strategic intent of the course?
- Why is it being delivered and developed online? E.g. to improve access to courses, to retain and attract more students, to increase revenue.
What are the intended learning outcomes?
- How have you ensured that the students know what the outcomes are?
- What are the students' objectives?

Assessment Methods
What assessment methods are being used?
- How do they match the course content?
- Can students find out about their progress?
- Is there an appeal process, and do students know how to use it?

Course Content
Is there a good mix of media elements?
- What are they?
- Is there a consistent style?
- Is it at the right level for the students?
- Is it Meaningful, Memorable, and Motivational? How?
- Can it be altered to suit student needs?
- What opportunities are there for collaborative work?

Interface
Is it easily navigable?
Does it use a metaphor?
Is it customisable?
Is it able to meet any special needs requirements?

Interactivity
What opportunities are there for interaction?
- Student to student?
- Student to teacher?
- Student to content?
- Teacher to teacher?


Appendix 1 (Cont.)

Student Support
What support is available?
- For learning issues (remedial or enrichment activities)?
- For personal issues?
- For technical issues?
- When is it available?
- How will students know how to get support?
- Is there a complaints procedure?

Assessment of Student Needs
What have you done to assess the needs of students?
- What is their level of ability?
- Are there any special needs?
- Are they IT literate enough to be able to do the course?
- Do the students' objectives match the teacher's objectives for them?


Appendix 1 (Cont.)

Formative Evaluation

Reactions to the course
What methods will you use to assess reactions?
- List them

Learning
What methods will you use to ensure that learning is occurring?
- List them

Participation
How will you encourage participation – how does the instructional style encourage it?
- How will you measure it? Quantitatively? Qualitatively?
- Is participation rewarded significantly?


Appendix 1 (Cont.)

Summative Evaluation
The summative evaluation will be a report based upon the questions asked and the data gathered in the pre-course and formative evaluation. Evidence to use in a summative report:
- Mapping outcomes to objectives – results from tests, examinations, coursework.
- Transfer of behaviour – are you able to assess this at the end of the course, from coursework for example, or does it require evidence to be gathered from a workplace after a period of time? If the latter, what provision is there for this?
- Student satisfaction – a summation of comments from the questionnaires/diaries used to assess reactions. These can be qualitative and quantitative (from responses on a Likert scale, for example).
- Quantity of participation – e.g. computer logs?
- Quality of participation – e.g. a structured 'reflective piece'?
- Quality of support – learning, personal and technical, from student questionnaires/diaries, and from direct evidence – who used support and why? Was the response timely and appropriate?
- Did the course match the strategic intent?
- Suggestions for developing the course
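The quantitative side of an end-of-course questionnaire can be condensed very simply. The sketch below is a minimal illustration, assuming responses have been transcribed as numbers on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree); the item wordings and scores are invented for the example.

```python
# A minimal sketch of summarising Likert-scale questionnaire responses.
# The item names and scores below are invented for illustration.
from statistics import mean

def summarise_likert(responses):
    """Return (mean score, number of responses) for each questionnaire item."""
    return {item: (round(mean(scores), 2), len(scores))
            for item, scores in responses.items()}

# Example: three students' answers to two items.
results = summarise_likert({
    "The course objectives were clear": [5, 4, 4],
    "Support was timely": [3, 2, 4],
})
```

A summary like this only condenses the scaled responses; open comments, diaries and reflection pieces still need a qualitative reading.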


Appendix 2
Evaluation and the FENTO Standards

Evaluation is covered by the Further Education National Training Organisation (FENTO) standards. The National Learning Network's (NLN) contextual amplification of the FENTO standards covers teaching and supporting learning and managing learning in the Information and Learning Technology (ILT) context. The table below maps the elements of this evaluation framework to the FENTO and ILT standards.

Pre-course/Development       FENTO Standards    ILT Standards
Strategic Intent             g1, g2, g3         A2, B1
Assessment Methods           a1, a2, b1, b2     F2
Course content, Interface    f1, f2             D1, D2, D3, D4
Interactivity                e2, e4             C*1, C*2, C*3, F1
Support                      a1, a2             E1, E2
Needs assessment             g1, g2, g3         A2, F2

Formative Evaluation
Reactions                    f1, f2             C*1
Learning                     f1, f2             F1, F2, F3
Participation                c1, c2             C*2, C*3, F1

Summative Evaluation
Outcomes vs Objectives       g1, g2, g3         F2, G4
Support                      b1, f1, f2         F2, F3
Participation                e2, e4             C*1, C*2, C*3, E1, E2
Student satisfaction         c1, c2             C*2, C*3, F1
Transfer of behaviour        f1, f2             F2, F3


Appendix 3
Evaluation Tools

There are a number of tools available on the web for designing and planning evaluations. There seems to be no reason to re-invent the wheel as far as this is concerned, and the websites linked below may provide some useful information and resources.

http://www.icbl.hw.ac.uk/ltdi/cookbook/
This is a resource from the Learning Technology Dissemination Initiative (LTDI) at Heriot-Watt University, Edinburgh. It is a series of 'recipes' offering practical advice on all aspects of evaluation. The links below are to other pages on their website.

http://www.icbl.hw.ac.uk/ltdi/implementing-it/app2.pdf
Some checklists, questionnaires, and log pro-formas from the LTDI. (http://www.icbl.hw.ac.uk/ltdi/index.html)

http://www.elt.ac.uk/ELT%20documents/materials/evalguide.pdf
This resource has information on the planning and rationale of evaluations, collecting and analysing data, and report writing. It is from the Embedding Learning Technologies project, whose aim is to provide materials and information for teaching and learning professionals wishing to introduce learning technologies into teaching. The home page can be found here: http://www.elt.ac.uk

http://www.ltss.bris.ac.uk/jcalt/
This is an Evaluation Toolkit from the University of Bristol. It consists of a step-by-step model of the process of designing an evaluation, together with resources or activities that support each step.

http://mime1.marc.gatech.edu/MM_Tools/evaluation.html
A collection of tools for evaluation. It includes pro-forma questionnaires and checklists, as well as guidelines for data gathering.


Appendix 3 (Cont.)

http://iet.open.ac.uk/plum/evaluation/plum.html
Developed by the Open University and the University of Hull. Guidelines and pro-formas for carrying out evaluations.

http://www.clt.uts.edu.au/contentssfs.html
From the University of Technology, Sydney. Guidelines and advice on carrying out evaluations.

http://www.gla.ac.uk/rcc/projects/tltsn/resource.html
A resource from Glasgow University. It includes example questionnaires to download.

http://www.icbl.hw.ac.uk/ltdi/implementing-it/eval.pdf
From the Institute for Computer Based Learning. A guide to methods of evaluation.


Appendix 4
Virtual Learning Environments (VLEs)

A list of some of the VLEs available. The information is taken from Britain and Liber (2004).

WebCT www.webct.com
WebCT is a commercial vendor. The main product is Campus, a course management system. There is also a product called Vista that has the course management features of Campus but also accommodates the provision of courses across an institution.

Blackboard e-Education Suite www.blackboard.com
A commercial system that consists of a Learning system, a Portal system and a Content Management system.

Granada Learnwise Version 3 www.learnwise.com
The third most widely used VLE in UK Higher and Post-16 Education. It allows multiple departments to collaborate, and is set up to provide a tutor-structured, content-centric course model.

FirstClass 7.1 www.firstclass.com
A conferencing system that puts collaboration at the centre of workflow. Less fully featured than WebCT, Blackboard or Learnwise.

Learning Activity Management System (LAMS) www.lamsinternational.com
A system for creating and managing sequences of learning activities. It is not a VLE in the traditional sense, but can be integrated with one. It is intended to facilitate the rapid authoring of learning activity sequences using a visual authoring interface. It is an open source product currently being evaluated in FE colleges in the UK by the Joint Information Systems Committee (JISC). See http://www.jisc.ac.uk/index.cfm?name=elp_lams for details of the evaluation.

COSE (Creation of Study Environments) http://cose.staffs.ac.uk
Designed to support a constructivist approach to teaching and learning. It is student-centred in that a 'course' is a group of learners to which content and activities are assigned, rather than vice versa.


Appendix 4 (Cont.)

MOODLE (Modular Object-Oriented Dynamic Learning Environment) www.moodle.org
An open source VLE. It has tools to support constructivist learning.


Appendix 5
Blogs

The term 'blog' comes from 'web log'. A web log is a personal website that allows the owner to add messages in the form of a diary. Web logs have a range of possible content and designs, depending on which provider the user signs up to. There are a number of web log providers; some require a payment, others are free with the option to buy extra functionality.

http://www.xanga.com/
A free blogging site

http://www.blogger.com/start
A free blogging site

http://jade.mcli.dist.maricopa.edu/blogshop/archives/000286.html
Information on what blogs are, and how they can be used in education.

http://www.edtechpost.ca/gems/matrix2.gif
A diagram that puts the uses of blogging in education into a matrix.

