Evaluation for Improvement

According to Reeves and Hedberg (2003), anyone who wants to develop or manage interactive learning systems (ILS) must be skilled at handling a variety of evaluation activities before making decisions. Evaluation is an important component of the process of producing an interactive learning system. For example, before producing an online learning program, a developer may consult content experts, interview teachers, and observe learners using prototype versions of the program in order to collect the evaluation data he or she needs (Reeves & Hedberg, 2003).

I agree that evaluation is a management tool that helps a team make mid-course corrections and document the successes of a program. Effective evaluative information helps program producers and implementers determine whether a project is on track and learn more effectively from reflecting on their experience. A project manager, instructional designer, or implementer of interactive learning systems is keenly interested in knowing how to do his or her work better, and evaluation provides a mechanism for stakeholders to understand what works, what does not work, and why (Kirshstein & Quiñones, 1998).

Evaluation is sometimes thought of as an intrusive requirement that takes time away from producing or implementing a program. From my point of view, evaluation is not additional work but a critical part of the process. The team behind an interactive learning system should be actively engaged in collecting the information it needs to make decisions and in interpreting and using the evaluative data; as a result, the process of gathering data can become part of the change process. I believe that the validity and usefulness of evaluation results are enhanced by how skillfully an instructional designer or project manager conducts evaluative activities such as interviewing, testing, and observing.

We have learned that effective evaluation should be planned and implemented within the context and nature of a specific program (Kromrey, Hogarty, Hess, Rendina-Gobioff, Hilbelink, & Lang, 2005). Furthermore, we should move evaluation from being a stand-alone monitoring process to being an integrated and valuable part of the planning and delivery of an interactive learning system. A variety of evaluation models have been developed. Most often, I would suggest combining several of them, that is, a multiple-methods evaluation model. Using multiple methods is not simply a matter of adding two or more methods into one solution; we need to choose approaches that fit a specific purpose. For instance, the case reported by Kromrey et al. (2005) showed that only the use of multiple measures could meet the requirements of their university's online instruction evaluations.

Online courses currently receive a great deal of attention. The complexities of Web-based learning and teaching derive from integrating technologies (such as multimedia applications and social networking) and their "use to support the new modes of teaching and learning" (Kromrey et al., 2005). In my view, an online course should be evaluated in three stages. First, at the planning stage, the evaluator works with the project manager and designers to review course documents, including course syllabi, instructional design plans, and online course content. The second stage is the implementation stage, during which the evaluator delivers surveys and interviews to faculty to collect their reflections. At the end of the course, evaluation focuses on students' responses: the evaluator delivers surveys that include both selected-response and open-ended items to students and faculty. Finally, the evaluator collects all participating students' performance grades or examination results to analyze the online course's academic impact. To analyze the collected data, the evaluator may use quantitative methods, but an evaluator should also be comfortable with, and believe in, using a wide range of appropriate methods to gather information. Most often, an interactive learning system is evaluated with both quantitative and qualitative approaches.
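As a minimal sketch of the kind of quantitative summary this final stage might produce, the Python example below tabulates selected-response (Likert-scale) survey ratings and compares examination scores between two course sections. The item wording, section names, and all numbers are invented for illustration; a real evaluation would read survey exports and gradebook records instead of the literals shown here.

# A minimal sketch of the quantitative summary step described above.
# All items, sections, and numbers are hypothetical illustrations.
from statistics import mean, stdev

# Selected-response (5-point Likert) survey items, keyed by item text.
survey_responses = {
    "The course objectives were clear.": [5, 4, 4, 5, 3, 4],
    "The online materials supported my learning.": [4, 4, 5, 3, 4, 5],
    "Feedback from the instructor was timely.": [3, 4, 4, 2, 5, 4],
}

# Final examination scores for the online section and a comparison section.
exam_scores = {
    "online_section": [88, 92, 75, 81, 95, 70, 84],
    "comparison_section": [85, 78, 80, 90, 72, 88, 79],
}

def summarize_likert(responses):
    """Report the mean rating and percent agreement (ratings of 4 or 5) per item."""
    for item, ratings in responses.items():
        agreement = sum(1 for r in ratings if r >= 4) / len(ratings)
        print(f"{item} mean={mean(ratings):.2f} agreement={agreement:.0%}")

def summarize_exams(scores):
    """Report the mean and standard deviation of exam scores for each section."""
    for section, values in scores.items():
        print(f"{section}: mean={mean(values):.1f} sd={stdev(values):.1f}")

if __name__ == "__main__":
    summarize_likert(survey_responses)
    summarize_exams(exam_scores)

Open-ended survey items and faculty interview reflections would still require qualitative analysis, which is why both kinds of approaches are combined above.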

References

Kirshstein, R., & Quiñones, S. (1998). An educator's guide to evaluating the use of technology in schools and classrooms. Washington, DC: American Institutes for Research. Available at http://www.ed.gov/pubs/EdTechGuide/index.html

Kromrey, J. D., Hogarty, K. Y., Hess, M. R., Rendina-Gobioff, G., Hilbelink, A., & Lang, T. R. (2005). A comprehensive system for the evaluation of innovative online instruction at a research university: Foundations, components and effectiveness. Paper presented at the College Teaching and Learning Conference, Lake Buena Vista, FL.

Reeves, T. C., & Hedberg, J. G. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.

