Normative Assessment

Title: Student Assessment in Higher Education: A Handbook for Assessing Performance
Authors: Allen H. Miller, Bradford W. Imrie, Kevin Cox
Citation: Cox, K., Imrie, B. W., & Miller, A. H. (1998). Student assessment in higher education: A handbook for assessing performance. London: Routledge.

Summary:

This comprehensive overview of higher education assessment features a guide to setting, marking and reviewing the coursework, assignments, tests and examinations used in higher education. In addition, the authors examine the various programs for certificates, diplomas, and first degrees, as well as higher degrees. The strong influence that assessment has on the way students approach their learning is also discussed. Truly international in focus, this book features authors with higher education experience in Australia, New Zealand, Scotland, England, Canada, Hong Kong, the USA, and Thailand.

Question: In HIED today, assessment is broadly divided into normative and formative. Provide, from your experience, illustrations of normative assessment. What are its priorities? What are its motivations?

Notes:

International Focus:
http://books.google.com/books?id=n6M9AAAAIAAJ&printsec=copyright&dq=normative+and+formative+assessment+in+higher+education&lr=
http://escholarship.bc.edu/cgi/viewcontent.cgi?article=1059&context=jtla

Normative, definition according to dictionary.com:
nor·ma·tive [nawr-muh-tiv] –adjective
1. of or pertaining to a norm, esp. an assumed norm regarded as the standard of correctness in behavior, speech, writing, etc.
2. tending or attempting to establish such a norm, esp. by the prescription of rules: normative grammar.
3. reflecting the assumption of such a norm or favoring its establishment: a normative attitude.

Definitions:

Formative: "provides students with information which will help them judge the effectiveness of their learning strategies to date. It also alerts teachers to any sections of the course or approaches to teaching where students are having difficulties and which may need further attention."

Normative/Summative: The main purpose of summative assessment is to make a judgement regarding each student's performance. Results are expressed as marks, percentages, grades, or classifications. It may also be defined as a measure of a student's performance or level of achievement at the end of a sequence of study, and it serves three main purposes.

Assessing Learners in Higher Education: Summative includes end-of-course assessment and essentially means that this is assessment which produces a measure which sums up someone's achievement and which has no other real use except as a description of what has been achieved.

Wikipedia: Norm-referenced test

A norm-referenced test is a type of test, assessment, or evaluation in which the tested individual is compared to a sample of his or her peers (referred to as a "normative sample").[1]

Other types

As an alternative to normative testing, tests can be ipsative, that is, the individual's assessment is compared with his or her own previous performance over time.[2][3] By contrast, a test is criterion-referenced when provision is made for translating the test score into a statement about the behavior to be expected of a person with that score. The same test can be used in both ways.[4] Robert Glaser originally coined the terms "norm-referenced test" and "criterion-referenced test".[5]

Standards-based education reform is based on the belief that public education should establish what every student should know and be able to do.[6] Students should be tested against a fixed yardstick, rather than against each other or sorted into a mathematical bell curve.[7] By requiring that every student pass these new, higher standards, education officials believe that all students will achieve a diploma that prepares them for success in the 21st century.[8]

Common use

Most state achievement tests are criterion-referenced. In other words, a predetermined level of acceptable performance is developed and students pass or fail by achieving or not achieving this level. Tests that set goals for students based on the average student's performance are norm-referenced tests. Tests that set goals for students based on a set standard (e.g., 80 words spelled correctly) are criterion-referenced tests.

Many college entrance exams and nationally used school tests use norm-referenced tests. The SAT, Graduate Record Examination (GRE), and Wechsler Intelligence Scale for Children (WISC) compare individual student performance to the performance of a normative sample. Test-takers cannot "fail" a norm-referenced test, as each test-taker receives a score that compares the individual to others that have taken the test, usually given as a percentile. This is useful when there is a wide range of acceptable scores that is different for each college. For example, one estimate of the average SAT score for Harvard University is 2200 out of 2400 possible; the average for Indiana University is 1650.[9]

By contrast, nearly two-thirds of US high school students will be required to pass a criterion-referenced high school graduation examination. One high fixed score is set at a level adequate for university admission whether the high school graduate is college bound or not. Each state gives its own test and sets its own passing level, with states like Massachusetts showing very high pass rates, while in Washington State even average students are failing, as well as 80 percent of some minority groups. This practice is opposed by many in the education community, such as Alfie Kohn, as unfair to groups and individuals who do not score as high as others.
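To make the contrast concrete, here is a minimal Python sketch of the two scoring approaches described above. The normative sample, the raw score, and the 80-item cut score are invented for illustration only; they are not drawn from the SAT, GRE, WISC, or any state test.

```python
from bisect import bisect_left

# Hypothetical normative sample of raw scores (invented data for illustration).
normative_sample = sorted([48, 52, 55, 60, 61, 63, 67, 70, 74, 80])

def percentile_rank(score, sample):
    """Norm-referenced: percent of the normative sample scoring below `score`."""
    below = bisect_left(sample, score)  # count of sample scores strictly less than `score`
    return 100.0 * below / len(sample)

def meets_criterion(score, cut_score=80):
    """Criterion-referenced: pass/fail against a fixed standard
    (e.g. 80 words spelled correctly; cut score assumed for illustration)."""
    return score >= cut_score

raw = 67
print(f"Percentile rank: {percentile_rank(raw, normative_sample):.0f}")  # -> 60
print("Passes criterion:", meets_criterion(raw))                         # -> False
```

The same raw score yields two very different statements: a relative standing within the sample (norm-referenced) and a yes/no judgement against a fixed standard (criterion-referenced).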

Advantages and limitations

An obvious disadvantage of norm-referenced tests is that they cannot measure the progress of the population as a whole, only where individuals fall within that whole. Thus, only measurement against a fixed goal can be used to gauge the success of an educational reform program which seeks to raise the achievement of all students against new standards which seek to assess skills beyond choosing among multiple choices. However, while this is attractive in theory, in practice the bar has often been moved in the face of excessive failure rates, and improvement sometimes occurs simply because of familiarity with, and teaching to, the same test.

With a norm-referenced test, grade level was traditionally set at the level set by the middle 50 percent of scores.[10] By contrast, the National Children's Reading Foundation believes that it is essential to assure that virtually all of our children read at or above grade level by third grade, a goal which cannot be achieved with a norm-referenced definition of grade level.[11]

Critics of criterion-referenced tests point out that judges set bookmarks around items of varying difficulty without considering whether the items actually comply with grade-level content standards or are developmentally appropriate.[12] Thus, the original 1997 sample problems published for the WASL 4th grade mathematics test contained items that were difficult for college-educated adults, or easily solved with 10th-grade-level methods such as similar triangles.[13] The difficulty level of the items themselves, as well as the cut-scores used to determine passing levels, are also changed from year to year.[14] Pass rates also vary greatly from the 4th to the 7th and 10th grade graduation tests in some states.[15] One of the faults of No Child Left Behind is that each state can choose or construct its own test, which cannot be compared to any other state's.[16] A Rand study of Kentucky results found indications of artificial inflation of pass rates which were not reflected in increasing scores on other tests, such as the NAEP or SAT, given to the same student populations over the same time.[17]

Graduation test standards are typically set at a level consistent with native-born four-year university applicants. An unusual side effect is that while colleges often admit immigrants with very strong math skills who may be deficient in English, there is no such leeway in high school graduation tests, which usually require passing all sections, including language. Thus, it is not unusual for institutions like the University of Washington to admit strong Asian American or Latino students who did not pass the writing portion of the state WASL test, but such students would not even receive a diploma once the testing requirement is in place. Although tests such as the WASL are intended as a minimal bar for high school, 27 percent of 10th graders applying for Running Start in Washington State failed the math portion of the WASL. These students applied to take college-level courses in high school, and achieve at a much higher level than average students. The same study concluded the level of difficulty was comparable to, or greater than, that of tests intended to place students already admitted to the college.[18]

A norm-referenced test has none of these problems because it does not seek to enforce any expectation of what all students should know or be able to do other than what actual students demonstrate. Present levels of performance and inequity are taken as fact, not as defects to be removed by a redesigned system. Goals of student performance are not raised every year until all are proficient. Scores are not required to show continuous improvement through Total Quality Management systems. A rank-based system only produces data which tell which students perform at an average level, which students do better, and which students do worse. This contradicts the fundamental belief, whether optimistic or simply unfounded, that all will perform at one uniformly high level in a standards-based system if enough incentives and punishments are put into place. This difference in beliefs underlies the most significant differences between a traditional and a standards-based education system.

References

1. ^ a b Assessment Guided Practices
2. ^ Assessment
3. ^ PDF presentation
4. ^ Cronbach, L. J. (1970). Essentials of psychological testing (3rd ed.). New York: Harper & Row.
5. ^ Glaser, R. (1963). Instructional technology and the measurement of learning outcomes. American Psychologist, 18, 510-522.
6. ^ Illinois Learning Standards
7. ^ Fairtest.org: Times on Testing: "criterion referenced" tests measure students against a fixed yardstick, not against each other.
8. ^ By the Numbers: Rising Student Achievement in Washington State, by Terry Bergesn: "She continues her pledge ... to ensure all students achieve a diploma that prepares them for success in the 21st century."
9. ^ About.com, "What is a Good SAT Score?" by Jay Brody, Aug 2006
10. ^ NCTM: News & Media: Assessment Issues (News Bulletin, April 2004): "by definition, half of the nation's students are below grade level at any particular moment"
11. ^ National Children's Reading Foundation website
12. ^ HOUSE BILL REPORT HB 2087: "A number of critics ... continue to assert that the mathematics WASL is not developmentally appropriate for fourth grade students." Prof. Don Orlich, Washington State University
13. ^
14. ^ "Panel lowers bar for passing parts of WASL," by Linda Shaw, Seattle Times, May 11, 2004: "A blue-ribbon panel voted unanimously yesterday to lower the passing bar in reading and math for the fourth- and seventh-grade exam, and in reading on the 10th-grade test"
15. ^ "Study: Math in 7th-grade WASL is hard," by Linda Shaw, Seattle Times, December 06, 2002: "Those of you who failed the math section ... last spring had a harder test than your counterparts in the fourth or 10th grades."
16. ^ New Jersey Department of Education: "But we already have tests in New Jersey, why have another test? Our statewide test is an assessment that only New Jersey students take. No comparisons should be made to other states, or to the nation as a whole."
17. ^ Test-Based Accountability Systems (Rand): "NAEP data are particularly important ... Taken together, these trends suggest appreciable inflation of gains on KIRIS. ..."
18. ^ Relationship of the Washington Assessment of Student Learning (WASL) and Placement Tests Used at Community and Technical Colleges, by Dave Pavelchek, Paul Stern and Dennis Olson, Social & Economic Sciences Research Center, Puget Sound Office, WSU: "The average difficulty ratings for WASL test questions fall in the middle of the range of difficulty ratings for the college placement tests."

See also

The term "normative assessment" refers to the process of comparing one test-taker to his or her peers.[1]

1. http://www.murdoch.edu.au/admin/policies/assessmentlinks.html#4

2. http://eric.ed.gov:80/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/15/8b/9c.pdf

Abstract: ACT and 127 institutions (primarily community colleges) worked together on a Partners in Progress research project comparing ASSET reading, writing, and math scores (incoming student placement tests) with CAAP reading, writing, and math scores (exiting student outcomes test). The aim was to refine the content of the related exams and establish the degree of statistical relationship between them, so that student intellectual growth might be measured between the student's point of entry and the point of exit from the institution. Administrators at Mid-Plains Community College Area (MPCCA) compared 108 pairs of ASSET and CAAP reading scores. Results indicated that the reading improvement of MPCCA students was comparable with the public two-year college normative percentages of improvement, with the majority of students achieving expected gains in their reading. For the writing test cohort, 163 matched ASSET/CAAP outcomes indicated that MPCCA students improved their writing ability at a slightly higher rate than the norm. For 162 ASSET/CAAP math outcomes, results indicated that although MPCCA had slightly more students improving at a lower rate than expected, they also had slightly more students improving at a slightly higher rate than the norm.

3. Loyola University

4. http://www.gseis.ucla.edu/heri/cirpoverview.php

5. http://books.google.com/books?id=60h0ZVgWrYoC&pg=PA54&lpg=PA54&dq=Higher+Education+Institution+Normative+Assessment&source=web&ots=L9eQ7IwpD7&sig=jhNEiPLQLpmPgxJrPBQH9GO2rE&hl=en&sa=X&oi=book_result&resnum=3&ct=result#PPA55,M1 (Pg. 54)

Normative Assessment

What is normative assessment? It answers the question, "How do I compare to others?"

"Normative assessment makes qualitative judgments about the level of achievement relative to other students.... Normative assessment can work with suppressed criteria ("impression marks") or with an itemized marking scheme. The underlying question for the assessor is not what has the student achieved but that person's achievement relative to others undertaking the same assessment. The assessor may compare the work assessed with the work of others to get a rank order. This ranking usually goes with a system of allocating marks or grades."

Citation for the above excerpt: Ashcroft, K., & Foreman-Peck, L. (1994). Managing teaching and learning in further and higher education. London: Routledge.
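As a rough sketch of the rank-then-grade process described in the excerpt, the Python snippet below ranks a hypothetical cohort by raw mark and allocates grades by position in the rank order. The marks and the grade quotas (top 20% A, next 40% B, rest C) are assumptions made purely for illustration, not anything prescribed by Ashcroft and Foreman-Peck.

```python
# Hypothetical raw marks for a small cohort (invented for illustration).
marks = {"Ana": 71, "Ben": 58, "Chi": 64, "Dev": 49, "Eve": 77}

# Rank students from highest to lowest mark: the normative question is
# "how did this student do relative to the others?", not "what was achieved?".
ranked = sorted(marks, key=marks.get, reverse=True)

def grade_for_position(position, cohort_size):
    """Allocate a grade from the rank position using assumed quotas
    (top 20% -> A, next 40% -> B, remainder -> C)."""
    fraction = (position + 1) / cohort_size
    if fraction <= 0.2:
        return "A"
    if fraction <= 0.6:
        return "B"
    return "C"

for position, name in enumerate(ranked):
    print(name, marks[name], grade_for_position(position, len(ranked)))
```

Note that the grade a student receives here depends entirely on the rest of the cohort: the same raw mark could earn a different grade in a stronger or weaker group, which is exactly the contrast with criterion-referenced marking.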

Normative assessment is used in a variety of areas in higher education:

1. The SAT and the GRE are forms of normative assessment, used for comparative purposes for entrance into different programs.

2. To gather comprehensive information on the characteristics of an institution's incoming first-year students: parental income and education, ethnicity, financial aid, attitudes, beliefs, and self-concept. (See Link #1: CIRP)

3. To identify ways an institution can improve itself. "Assessment of student learning demonstrates that the institution's students have knowledge, skills, and competencies consistent with institutional and program goals and that graduates meet appropriate higher education goals." In providing the context for this standard, Middle States goes on to say, "The systematic assessment of student learning outcomes is essential to monitoring quality and providing the information that leads to improvement. [...] The mission of the institution provides focus and direction to its outcomes assessment plan."¹

Limitations:

- When focusing on where individuals fall within a given population, this method handicaps itself by not being able to measure the progress of the whole population.

Links to normative assessment examples:

1. http://www.gseis.ucla.edu/heri/cirpoverview.php - This program (CIRP) identifies a national normative profile for the entering freshman class, and it is used to examine readiness for college, student values, and beliefs about diversity and civic engagement....

2. http://www.loyola.edu/academics/collegeofartsandsciences/documents/Assessment Plan Final Version.DOC - Loyola College's Assessment Plan

3.

4.

5.

¹ Middle States Commission on Higher Education. (2002). Characteristics of excellence in higher education: Eligibility requirements and standards for accreditation. Philadelphia, PA: Author.
