Judging Whether A Survey Is Valid And Reliable

Practice:

Get quality data into your evaluator’s hands

Key Action:

Use techniques to ensure valid and reliable data

TOOL: Judging the Validity and Reliability of a Survey

Purpose:

Surveys are commonly used to measure program outcomes. However, to yield accurate information, surveys must be both reliable and valid. The following narrative and checklist can help you determine whether surveys proposed for use in your evaluation have been tested for reliability and validity.

Note: Reliability and validity are sometimes termed “psychometric properties.” Conducting the statistical tests required to establish survey reliability and validity can be labor intensive, and individual magnet school evaluations generally do not have the time or financial resources to do thorough psychometric testing of an instrument. The most efficient alternative is to use existing evaluation instruments that provide appropriate data documenting their reliability and validity.

Instructions:

1. Read the brief definitions and examples of “validity” and “reliability.”
2. Review the checklist with your evaluator as a way to begin a discussion about the validity and reliability of the survey instruments being used in your evaluation.


Judging the Validity and Reliability of a Survey

Magnet schools are designed to produce both cognitive outcomes (improved learning in reading and mathematics) and noncognitive outcomes (increased student and staff engagement with academic content; decreased racial isolation). Some of these outcomes are commonly measured through surveys, and it is important that such surveys be both valid and reliable so that they yield credible information.

Validity: Instrument validity means that the survey or test measures what it claims to measure, and not some other concept. For example, if a survey of students claims to measure their engagement in school activities, then it should ask questions that address the extent to which they voluntarily commit time and effort to school-related endeavors. The keys here are the words “voluntarily” and “effort.” We might ask how much time students devote to extracurricular activities (i.e., do they spend more time in school than is required?) or what books they read for pleasure (i.e., is the content of those books related to the content of the curriculum?). We shouldn’t ask whether they “like” school, because we can like something (“I like having the TV on when I’m home”) without being engaged with it (“the TV is a nice background while I’m cooking dinner”).

Reliability: Instrument reliability means that the survey or test yields the same results on repeated trials. For example, a thermometer should read 212 degrees Fahrenheit every time it is placed in boiling water. Without research tools and procedures whose reliability is documented, it is not possible to draw credible conclusions from your evaluation or to make data-based decisions about how to improve your program.

Checklist for Validity and Reliability

1. Does the survey instrument include accompanying data on its validity and reliability?
2. If so, which of the following methods were used to establish these psychometric properties? (A brief computational sketch of two of these checks follows the list.)
   a. Test-retest reliability: administering the same test to the same group multiple times and obtaining similar results
   b. Factor analysis: determining whether items that are conceptually linked show similar linkages in the answers
   c. Split-half reliability: splitting the survey items into two halves and comparing the results from the two halves
   d. Other statistical techniques, such as structural equation modeling, an analytic technique that assesses the “fit” between the questions and the underlying constructs
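The reliability checks named in item 2 are normally run by your evaluator rather than by program staff, but a small illustration can make the conversation concrete. The sketch below, in Python, shows one common way to compute a test-retest correlation and a split-half reliability estimate with the Spearman-Brown correction. The simulated survey data, the number of students and items, and the odd/even item split are all assumptions made for illustration only; they are not part of this tool and are not a substitute for an evaluator’s own psychometric analysis.

```python
import numpy as np

# Hypothetical example: 20 students answer the same 10-item engagement survey twice,
# a few weeks apart. Responses are on a 1-5 scale (simulated here for illustration).
rng = np.random.default_rng(0)
time1 = rng.integers(1, 6, size=(20, 10)).astype(float)
time2 = np.clip(time1 + rng.normal(0, 0.5, size=(20, 10)), 1, 5)

# Test-retest reliability:
# correlate each student's total score at time 1 with the total score at time 2.
# Values near 1.0 suggest the survey yields similar results on repeated trials.
totals1, totals2 = time1.sum(axis=1), time2.sum(axis=1)
test_retest_r = np.corrcoef(totals1, totals2)[0, 1]

# Split-half reliability:
# split the items into two halves (here, odd- vs. even-numbered items),
# correlate the two half-scores, then apply the Spearman-Brown correction
# to estimate the reliability of the full-length instrument.
half_a = time1[:, 0::2].sum(axis=1)
half_b = time1[:, 1::2].sum(axis=1)
r_half = np.corrcoef(half_a, half_b)[0, 1]
split_half = (2 * r_half) / (1 + r_half)

print(f"Test-retest correlation: {test_retest_r:.2f}")
print(f"Split-half reliability (Spearman-Brown): {split_half:.2f}")
```

In practice, an evaluator would run these calculations on real administrations of the instrument and would typically report them alongside other reliability coefficients, such as Cronbach’s alpha, when documenting an instrument’s psychometric properties.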
