Critical Thinking Assessment

Author: Robert H. Ennis
Source: Theory Into Practice, Vol. 32, No. 3, Teaching for Higher Order Thinking (Summer 1993), pp. 179-186
Published by: Taylor & Francis, Ltd.
Stable URL: http://www.jstor.org/stable/1476699
Copyright 1993 College of Education, The Ohio State University



Although critical thinking has often been urged as a goal of education throughout most of this century (for example, John Dewey's How We Think, 1910; and the Educational Policies Commission's The Central Purpose of American Education, 1961), not a great deal has been done about it. Since the early 1980s, however, attention to critical thinking instruction has increased significantly, with some spillover to critical thinking assessment, an area that has been neglected even more than critical thinking instruction.

Partly as a result of this neglect, the picture I paint of critical thinking assessment is not all rosy, though there are some bright spots. More explicitly, my major theme is that, given our current state of knowledge, critical thinking assessment, albeit difficult to do well, is possible. Two subthemes are that (a) the difficulties and possibilities vary with the purpose of the critical thinking assessment and the format used, and (b) there are numerous traps for the unwary.

In pursuit of these themes, I consider some possible purposes in attempting to assess critical thinking, note some traps, list and comment on available critical thinking tests (none of which suit all of the purposes), and finish with suggestions for how to develop your own critical thinking assessment, including a discussion of some major formats. But first, some attention must be paid to the definition of critical thinking, because critical thinking assessment requires that we be clear about what we are trying to assess.

Robert H. Ennis is professor of education at the University of Illinois at Urbana-Champaign.

Defining Critical Thinking

The upper three levels of Bloom's taxonomy of educational objectives (analysis, synthesis, and evaluation) are often offered as a definition of critical thinking. Sometimes the next two levels (comprehension and application) are added. This conception is a good beginning, but it has problems. One is that the levels are not really hierarchical, as suggested by the theory, but rather are interdependent. For example, although synthesis and evaluation generally do require analysis, analysis generally requires synthesis and evaluation (Ennis, 1981).

More significantly, given our concern here, the three (or five) concepts are too vague to guide us in developing and judging critical thinking assessment. Consider analysis, for example. What do you assess when you test for ability to analyze? The difficulty becomes apparent when we consider the following variety of things that can be labeled "analysis": analysis of the political situation in the Middle East, analysis of a chemical substance, analysis of a word, analysis of an argument, and analysis of the opponent's weaknesses in a basketball game. What testable thing do all these activities have in common? None, except for the vague principle that it is often desirable to break things into parts.

A definition of critical thinking that I at one time endorsed is that critical thinking is the correct assessing of statements (Ennis, 1962). If I had not elaborated this definition, it would be as vague as Bloom's taxonomy. But even when elaborated, it suffers from excluding creative aspects of critical thinking, such as conceiving of alternatives, formulating hypotheses and definitions, and developing plans for experiments.


I now think the contemporary conception of critical thinking includes these things, so the "correct assessing" definition is narrower than standard usage, and thus could interfere with communication among proponents of critical thinking. The following definition seems to be more in accord with contemporary usage and thus, I hope, will minimize confusion in communication: "Critical thinking is reasonable reflective thinking focused on deciding what to believe or do." As it stands, however, this definition is also as vague as Bloom's taxonomy. It too needs elaboration. Here is an abridgment of the elaborations I have provided and defended elsewhere (Ennis, 1987, 1991, in press): In reasonably and reflectively going about deciding what to believe or do, a person characteristically needs to do most of these things (and do them interdependently):

1. Judge the credibility of sources.
2. Identify conclusions, reasons, and assumptions.
3. Judge the quality of an argument, including the acceptability of its reasons, assumptions, and evidence.
4. Develop and defend a position on an issue.
5. Ask appropriate clarifying questions.
6. Plan experiments and judge experimental designs.
7. Define terms in a way appropriate for the context.
8. Be open-minded.
9. Try to be well informed.
10. Draw conclusions when warranted, but with caution.

This interdependent list of abilities and dispositions can provide some specificity for guiding critical thinking testing. The elaborations, of which the list is an abridgment, are more thorough, but the simplicity of this list can make it useful. It can serve as a set of goals for an entire critical thinking curriculum or as a partial set of goals for some subject-matter or other instructional sequence. It can be the basis for a table of specifications for constructing a critical thinking test. (A table of specifications provides the areas that a test is supposed to assess and indicates the weighting assigned to each.)
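To make the table-of-specifications idea concrete, here is a minimal sketch in Python. The aspect names come from the list above; the weights and the 40-item total are illustrative assumptions of mine, not values given in the article.

```python
# A minimal sketch of a table of specifications for a critical thinking
# test, using the ten aspects listed above. The weights and item total
# are illustrative assumptions only; a real developer would set them to
# reflect instructional emphasis.

SPECIFICATIONS = {
    "judging credibility of sources": 0.15,
    "identifying conclusions, reasons, assumptions": 0.15,
    "judging argument quality": 0.15,
    "developing and defending a position": 0.10,
    "asking clarifying questions": 0.05,
    "planning and judging experiments": 0.10,
    "defining terms appropriately": 0.05,
    "open-mindedness": 0.10,
    "being well informed": 0.05,
    "drawing conclusions cautiously": 0.10,
}

def items_per_aspect(total_items: int) -> dict[str, int]:
    """Translate the weights into an item count for each aspect."""
    assert abs(sum(SPECIFICATIONS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return {aspect: round(weight * total_items)
            for aspect, weight in SPECIFICATIONS.items()}

if __name__ == "__main__":
    for aspect, n in items_per_aspect(40).items():
        print(f"{n:2d} items: {aspect}")
```

Such a table can then be compared against an existing test's item distribution to judge its comprehensiveness, in the spirit of the questions raised below.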

The elaboration also can be used as a guide in judging the extent to which an existing critical thinking test is comprehensive, and whether it assesses critical thinking at all. One of my chief criticisms of most existing critical thinking tests is their lack of comprehensiveness. For example, they typically fail to test for such important things as being open-minded, and many even fail to test for judging the credibility of sources. Without some defensible conception of critical thinking, judgments about tests are likely to be erratic, or worse.

Two other well-known definitions of critical thinking are McPeck's "reflective skepticism" (1981, p. 7) and Paul's "strong sense" definition (1987). Paul's definition is similar in broad outline to the definition proposed here, but emphasizes more heavily being aware of one's own assumptions and seeing things from others' points of view. However, neither of these definitions provides sufficient elaboration for developing critical thinking tests. Furthermore, McPeck's definition is negative. Critical thinking must get beyond skepticism.

Purposes of Critical Thinking Assessment

Not only must we have a defensible elaborated definition of critical thinking when selecting, criticizing, or developing a test, we must also have a clear idea of the purpose for which the test is to be used. A variety of possible purposes exist, but no one test or assessment procedure fits them all. Here are some major possible purposes, accompanied by comments:

1. Diagnosing the levels of students' critical thinking. If we are to know where to focus our instruction, we must "start with where they are" in specific aspects of critical thinking. Tests can be helpful in this respect by showing specific areas of strength and weakness (for example, ability to identify assumptions).

2. Giving students feedback about their critical thinking prowess. If students know their specific strengths and weaknesses, their attempts to improve can be better focused.

3. Motivating students to be better at critical thinking. Though frequently misused as a motivational device, tests can and do motivate students to learn the material they expect to be covered on the test. If critical thinking is omitted from tests, test batteries, or other assessment procedures, students will tend to neglect it (Smith, 1991; Shepard, 1991).

4. Informing teachers about the success of their efforts to teach students to think critically. Teachers can use tests to obtain feedback about their instruction in critical thinking.

5. Doing research about critical thinking instructional questions and issues. Without careful comparison of a variety of approaches, the difficult issues in critical thinking instruction and curriculum organization cannot be answered. But this research requires assessment, so that comparisons can be made.


6. Providing help in deciding whether a student should enter an educational program. People in some fields already use assessed critical thinking prowess to help make admissions decisions. Examples are medicine, nursing, law, and graduate school in general. The idea seems good, but the efficacy of existing efforts in selecting better critical thinkers has not been established. Research needs to be done in this area.

7. Providing information for holding schools accountable for the critical thinking prowess of their students. A currently popular purpose for testing, including critical thinking testing, is to pressure schools and teachers to "measure up" by holding them accountable for the test results of their students.

Purposes 6 and 7 typically constitute "high-stakes" testing, so called because much often depends on the results. The science reasoning section of the American College Test (ACT), much of the new Medical College Admission Test (MCAT), College Board Advanced Placement (AP) tests, the Iowa Test of Educational Development, and the analytic and logical reasoning sections of the Graduate Record Examination (GRE) and the Law School Admission Test (LSAT) are examples of high-stakes critical thinking tests.

Traps

In pursuing the above purposes, educators need to be aware of several traps, including the following:

1. Test results may be compared with norms, and the claim made that the difference, or similarity, is the result of instruction. There are usually other possible explanations of the result, such as neighborhood influences. Currently popular accountability testing invites us into this trap.

2. A pretest and a posttest may be given without comparing the class to a control group. The lack of a control group renders the pretest-to-posttest results dubious, since many things other than the instruction have happened to the students, and could account for the results. (A numeric sketch of this trap follows the list.)

3. The use of the same test for the pretest and posttest has the problem of alerting the students to the test questions. On the other hand, the use of different forms of (allegedly) the same test for pretest-posttest comparisons, given that the testing is for critical thinking, is probably worse, since different forms are actually different tests. Comparability is always suspect, since so much depends on the specific content of the test.

4. Most critical thinking tests are not comprehensive, especially those that are easiest to use, the multiple-choice tests. These tests typically miss much that is important in critical thinking.

5. Another problem in the use of (especially) multiple-choice tests lies in differences in background beliefs and assumptions between test maker and test taker. Since a critical thinker employs a grasp of the situation, different beliefs about the situation can sometimes result in justifiably different answers to test questions (see Norris & Ennis, 1989).

6. Significant results may be expected in too short a time period. Learning to think critically takes a long time. Much reflective practice with many examples in a variety of situations is required.

7. High-stakes purposes often interfere with the validity of a test. This is partly because they motivate cram schools, which teach students how to do well on the tests without the students' having the critical thinking prowess for which the test is supposedly testing. The students often learn tricks for taking the tests. This interference with validity occurs also in part because the high-stakes situation pressures the test makers to avoid taking risks with items, the answers to which might be subject to challenge. So the pressure is for them to limit their testing to multiple-choice deductive-logic items of various sorts, that is, items in which the conclusion necessarily follows from the premises (thus limiting the test's comprehensiveness and content validity). Deductive-logic items are the most immune to complaint about the keyed answer.

8. Scarce resources (indicated by low assessment budgets and overworked teachers) often lead to compromises that affect test validity. Because of the expense of, and/or teacher grading time required for, the tests necessary to assess critical thinking, many testing programs have resorted to multiple-choice tests that are arguably less valid than short answer, essay, and performance tests of critical thinking.
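Here is the numeric sketch promised in trap 2. All scores are invented; the point is only that a raw pretest-to-posttest gain can be explained largely by maturation, retesting, and other outside influences that a control group would reveal.

```python
# Invented numbers illustrating trap 2: a pretest-posttest gain
# interpreted without a control group. Hypothetical 0-100 scores.

instructed_pre, instructed_post = 52.0, 60.0  # class that received instruction
control_pre, control_post = 51.0, 57.0        # comparable class, no instruction

raw_gain = instructed_post - instructed_pre   # 8.0: looks like a large effect
control_gain = control_post - control_pre     # 6.0: maturation, retesting, etc.
instruction_effect = raw_gain - control_gain  # 2.0: the more defensible estimate

print(f"raw gain = {raw_gain}, gain without instruction = {control_gain}, "
      f"estimated effect of instruction = {instruction_effect}")
```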


Published Critical Thinking Tests

Although a number of tests incorporate critical thinking (including the high-stakes tests just mentioned), only a few have critical thinking (or some aspect of critical thinking) as their primary concern. None exist for students below fourth grade.

This dearth of critical thinking tests is unfortunate; many more are needed to fit the various situations and purposes of critical thinking testing. In Table 1, I have attempted to identify all currently available published tests that have critical thinking as their primary concern. The tests are grouped according to whether they aim at a single aspect of critical thinking or more than one aspect. The essay test is more comprehensive than the others.

It would also make sense to group the tests according to whether they are subject specific or general-content based. Subject-specific critical thinking tests assess critical thinking within one standard subject matter area, whereas general-content-based critical thinking tests use content from a variety of areas with which test takers are presumed to be already familiar. A committee of the National Academy of Education has recommended that there be a strong effort to develop subject-specific higher order thinking tests (The Nation's Report Card, 1987, p. 54). A full understanding of any subject matter area requires that the person be able to think well in that area. Regrettably, I can find no subject-specific critical thinking tests (that is, tests whose primary purpose is to assess critical thinking in a subject matter area), although parts of some tests (such as the ACT section on science reasoning) fit this criterion. So there is no subject-specific grouping in this listing of tests primarily committed to critical thinking. All of the tests listed here are general-content-based tests.

Unfortunately, the National Academy committee also recommended the neglect of general-content-based higher order thinking tests (p. 54). This is a mistake. We need general-content-based tests to check for transfer of critical thinking instruction to everyday life, regardless of whether thinking instruction is embedded in subject matter instruction or whether it is offered in a separate course or unit, or some combination of the two.

Since I am a coauthor of some of the listed tests, my conflict of interest in presenting and discussing this listing is obvious. I have tried not to let it interfere with my objectivity, but do recommend Arter and Salmon's Assessing Higher Order Thinking Skills: A Consumer's Guide (1987), which provides more extensive coverage. A general discussion of the problems, prospects, and methods of critical thinking testing can be found in Evaluating Critical Thinking (Norris & Ennis, 1989).

Since statistical information about tests can be misleading, it is important to make one's own informal judgment about the validity of the content. Persons who are seriously considering using any test should take the test and score it themselves. There is no better way to get a feel for the test's content validity. One should not depend solely on the name given to the test by the author and publisher. The following questions should be considered:

1. Is the test based on a defensible conception of critical thinking?
2. How comprehensive is its coverage of this conception?
3. Does it seem to do a good job at the level of your students?

Though these questions might seem obvious, they are often neglected.

In varying degrees, all of the listed tests can be used for the first five purposes specified earlier (all but the high-stakes purposes). Their use for high stakes is problematic for two reasons: (a) there is no security on the tests, so prospective examinees can secure copies, and (b) most of the tests are not sufficiently comprehensive to provide valid results in a high-stakes situation.

Let me elaborate on this second problem. As indicated earlier, existing multiple-choice tests do not directly and effectively test for many significant aspects of critical thinking, such as being open-minded and drawing warranted conclusions cautiously. In response to this problem, some people will hold that the various aspects of critical thinking are correlated with each other, so the lack of direct testing of specific aspects does not matter. For example, being open-minded correlates highly with judging the credibility of sources and identifying assumptions, making all of these good indicators of the others, so the argument goes.

However, when the stakes are high, people prepare for the content areas that are expected to be on the tests. Even though these content areas might correlate highly with other critical thinking aspects when the stakes are low, specific preparation for the expected aspects in order to deal with them on the tests will lower the correlations, destroying their validity as indirect measures of the missing aspects of critical thinking. The danger is to accept correlations obtained in low-stakes situations as representative of the correlations obtainable in a high-stakes situation.
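This correlation argument can be illustrated with a small simulation. Under the assumption that two aspects of critical thinking reflect a shared underlying ability, coaching that inflates only the tested aspect weakens the correlation between the two, so low-stakes correlations overstate what would hold under high-stakes preparation. The model and numbers are my own illustrative assumptions, not data from the article.

```python
# A small simulation of correlation attenuation under high-stakes prep.
# Assume two aspects (e.g., assumption identification and open-mindedness)
# share a common underlying ability; cramming boosts only the tested one.
# The model and numbers are illustrative assumptions only.

import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
ability = [random.gauss(0, 1) for _ in range(2000)]

# Low stakes: both aspects reflect ability plus independent noise.
tested = [a + random.gauss(0, 0.5) for a in ability]
untested = [a + random.gauss(0, 0.5) for a in ability]
print(f"low stakes:  r = {pearson(tested, untested):.2f}")

# High stakes: cramming adds large ability-independent gains to the
# tested aspect only, so it no longer tracks the untested aspect well.
crammed = [t + random.gauss(2.0, 1.5) for t in tested]
print(f"high stakes: r = {pearson(crammed, untested):.2f}")
```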


Table 1. An Annotated List of Critical Thinking Tests

Tests Covering More Than One Aspect of Critical Thinking

The California Critical Thinking Skills Test: College Level (1990) by P. Facione. The California Academic Press, 217 LaCruz Ave., Millbrae, CA 94030. Aimed at college students, but probably usable with advanced and gifted high school students. Incorporates interpretation, argument analysis and appraisal, deduction, mind-bender puzzles, and induction (including rudimentary statistical inference).

Cornell Critical Thinking Test, Level X (1985) by R.H. Ennis and J. Millman. Midwest Publications, PO Box 448, Pacific Grove, CA 93950. Aimed at grades 4-14. Sections on induction, credibility, observation, deduction, and assumption identification.

Cornell Critical Thinking Test, Level Z (1985) by R.H. Ennis and J. Millman. Midwest Publications, PO Box 448, Pacific Grove, CA 93950. Aimed at advanced or gifted high school students, college students, and other adults. Sections on induction, credibility, prediction and experimental planning, fallacies (especially equivocation), deduction, definition, and assumption identification.

The Ennis-Weir Critical Thinking Essay Test (1985) by R.H. Ennis and E. Weir. Midwest Publications, PO Box 448, Pacific Grove, CA 93950. Aimed at grades 7 through college. Also intended to be used as a teaching material. Incorporates getting the point, seeing the reasons and assumptions, stating one's point, offering good reasons, seeing other possibilities (including other possible explanations), and responding to and avoiding equivocation, irrelevance, circularity, reversal of an if-then (or other conditional) relationship, overgeneralization, credibility problems, and the use of emotive language to persuade.

Judgment: Deductive Logic and Assumption Recognition (1971) by E. Shaffer and J. Steiger. Instructional Objectives Exchange, PO Box 24095, Los Angeles, CA 90024. Aimed at grades 7-12. Developed as a criterion-referenced test, but without specific standards. Includes sections on deduction, assumption identification, and credibility, and distinguishes between emotionally loaded content and other content.

New Jersey Test of Reasoning Skills (1983) by V. Shipman. Institute for the Advancement of Philosophy for Children, Test Division, Montclair State College, Upper Montclair, NJ 08043. Aimed at grades 4 through college. Incorporates the syllogism (heavily represented), assumption identification, induction, good reasons, and kind and degree.

Ross Test of Higher Cognitive Processes (1976) by J.D. Ross and C.M. Ross. Academic Therapy Publications, 20 Commercial Blvd., Novato, CA 94947. Aimed at grades 4-6. Sections on verbal analogies, deduction, assumption identification, word relationships, sentence sequencing, interpreting answers to questions, information sufficiency and relevance in mathematics problems, and analysis of attributes of complex stick figures.

Test of Enquiry Skills (1979) by B.J. Fraser. Australian Council for Educational Research Limited, Frederick Street, Hawthorn, Victoria 3122, Australia. Aimed at Australian grades 7-10. Sections on using reference materials (library usage, index, and table of contents); interpreting and processing information (scales, averages, percentages, proportions, charts and tables, and graphs); and (subject-specific) thinking in science (comprehension of science reading, design of experiments, conclusions, and generalizations).

Test of Inference Ability in Reading Comprehension (1987) by L.M. Phillips and C. Patterson. Institute for Educational Research and Development, Memorial University of Newfoundland, St. John's, Newfoundland, Canada A1B 3X8. Aimed at grades 6-8. Tests for ability to infer information and interpretations from short passages. Multiple-choice version (by both authors) and constructed-response version (by Phillips only).

Watson-Glaser Critical Thinking Appraisal (1980) by G. Watson and E.M. Glaser. The Psychological Corporation, 555 Academic Court, San Antonio, TX 78204. Aimed at grade 9 through adulthood. Sections on induction, assumption identification, deduction, judging whether a conclusion follows beyond a reasonable doubt, and argument evaluation.

Tests Covering Only One Aspect of Critical Thinking

Cornell Class Reasoning Test (1964) by R.H. Ennis, W.L. Gardiner, R. Morrow, D. Paulus, and L. Ringel. Illinois Critical Thinking Project, University of Illinois, 1310 S. 6th St., Champaign, IL 61820. Aimed at grades 4-14. Tests for a variety of forms of (deductive) class reasoning.

Cornell Conditional Reasoning Test (1964) by R.H. Ennis, W. Gardiner, J. Guzzetta, R. Morrow, D. Paulus, and L. Ringel. Illinois Critical Thinking Project, University of Illinois, 1310 S. 6th St., Champaign, IL 61820. Aimed at grades 4-14. Tests for a variety of forms of (deductive) conditional reasoning.

Logical Reasoning (1955) by A. Hertzka and J.P. Guilford. Sheridan Psychological Services, PO Box 6101, Orange, CA 92667. Aimed at high school and college students and other adults. Tests for facility with class reasoning.

Test on Appraising Observations (1983) by S.P. Norris and R. King. Institute for Educational Research and Development, Memorial University of Newfoundland, St. John's, Newfoundland, Canada A1B 3X8. Aimed at grades 7-14. Tests for ability to judge the credibility of statements of observation. Multiple-choice and constructed-response versions.


A possible exception to this warning about the use of the listed tests for high-stakes situations is the critical thinking essay test, which does test more comprehensively than the others. But it is not secure. Furthermore, it is more expensive in time and/or money than multiple-choice tests to administer and score. The problem is serious in high-stakes testing. We do not yet have inexpensive critical thinking testing usable for high stakes. Research and development are needed here.

The listed multiple-choice tests can, to varying degrees, be used for the first five listed lower-stakes purposes: diagnosis, feedback, motivation, impact of teaching, and research. But discriminating judgment is necessary. For example, if a test is to be used for diagnostic purposes, it can legitimately only reveal strengths and weaknesses in aspects for which it tests. The less comprehensive the test, the less comprehensive the diagnosis.

For comprehensive assessment, unless appropriate multiple-choice tests are developed, open-ended assessment techniques are probably needed. Until the published repertoire of open-ended critical thinking tests increases considerably, and unless one uses the published essay test, or parts of other open-ended tests, such as the College Board's Advanced Placement (AP) tests, it is necessary to make your own test.

Making Your Own Test

In making your own test, it is probably better that it be at least somewhat open ended anyway, since making good multiple-choice tests is difficult and time consuming, and requires a series of revisions, tryouts, and more revisions. Suggestions for making multiple-choice critical thinking items may be found in Norris and Ennis (1989), but I do not present them here, because open-ended assessment is better adapted to do-it-yourself test makers and can be more comprehensive. Norris and Ennis also make suggestions for open-ended assessment, and the discussion here grows out of that presentation.

Multiple-choice assessment is labor intensive in the construction and revision of the tests. Open-ended assessment is labor intensive in the grading, once one has developed a knack for framing questions. One promising approach is to give a multiple-choice item, thus assuring attention to a particular aspect of critical thinking, and to ask for a brief written defense of the selected answer to the item.

As in the previous example, open-ended assessment can be fairly well structured. Or it can be much less structured, in the form of naturalistic observation of a student.

Greater structure usually means greater effort beforehand, but also greater assurance that there will be opportunities to assess specific aspects of critical thinking. Less structure generally requires greater effort during and after the observation, and gives the opportunity for more life-like situations, but provides less assurance that a broad range of specific aspects of critical thinking will be assessed. The sections that follow illustrate several types of open-ended critical thinking tests that teachers can make themselves.

Multiple Choice With Written Justification

In the Illinois Critical Thinking Project, in conjunction with the Alliance for Essential Schools in Illinois, we are currently exploring the use of the multiple-choice-plus-written-justification format. We have taken 20 items from the Cornell Critical Thinking Test, Level X, and requested a brief written justification of the student's answer to each. In the following example of an item focusing on judging the credibility of a source, the situation is the exploration of a newly discovered planet:

WHICH IS MORE BELIEVABLE? Circle one:
A. The health officer investigates further and says, "This water supply is safe to drink."
B. Several others are soldiers. One of them says, "This water supply is not safe."
C. A and B are equally believable.
YOUR REASON:

One advantage of this promising format is that specific aspects of critical thinking can be covered (including an aspect not effectively tested in existing multiple-choice tests: being appropriately cautious in the drawing of conclusions). Another advantage is that answers that differ from those in the key, if well defended, can receive full credit. Answers that differ from the key, as I noted earlier, are sometimes defensible, given that the test taker has different beliefs about the world than the test maker. We have found that high interrater consistency (.98) can be obtained if guides to scoring are carefully constructed and if the scorers are proficient in the same conception of critical thinking.

I recommend this approach to making your own test. It is fairly quick, can be comprehensive, provides forgiveness for unrefined multiple-choice items, and allows for differences in student backgrounds and interpretation of items.
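A sketch of how a scoring rule for this format might be encoded. The rule (full credit for the keyed answer with an adequate justification, and also for a well-defended non-keyed answer) follows the description above, but the function and point values are my own illustration, not the Illinois project's actual scoring guide.

```python
# A sketch of one scoring rule for the multiple-choice-plus-written-
# justification format. The rule and point values are illustrative
# assumptions, not the Illinois project's actual guide.

def score_item(answer: str, keyed: str,
               justification_adequate: bool,
               defends_alternative_well: bool) -> int:
    """Return 0-2 points for a single item."""
    if answer == keyed and justification_adequate:
        return 2  # keyed answer with a sound defense
    if answer != keyed and defends_alternative_well:
        return 2  # differing answer, but well defended: full credit
    if answer == keyed:
        return 1  # keyed answer, weak or missing defense
    return 0

# Example: a student picks B instead of the keyed A, but argues cogently
# from different background beliefs about the situation.
print(score_item("B", keyed="A",
                 justification_adequate=False,
                 defends_alternative_well=True))  # -> 2
```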


Essay Testing of Critical Thinking

Several approaches to making one's own essay test of critical thinking are viable, depending on the purpose.

High structure. The use of the argumentative essay to assess critical thinking can vary considerably in degree of structure. The Ennis-Weir Critical Thinking Essay Test is an example of a highly structured essay test. It provides an argumentative passage (a letter to an editor) with numbered paragraphs, most of which have specific built-in errors. Students are asked to appraise the thinking in each paragraph and the passage as a whole, and to defend their appraisals. A scoring guide assigns a certain number of possible points to the appraisal of each paragraph and the passage as a whole, and provides guidance for the grader. But the grader must be proficient in critical thinking in order to handle responses that differ from standard responses in varying degrees. Responses that are radically different, if well defended, receive full credit. Scoring by proficient graders takes about 6 minutes per essay.

Medium structure. Structure can be reduced by providing an argumentative passage and requesting an argumentative response to the thesis of the passage and its defense, without specifying the organization of the response. College Board AP tests use this approach. Scoring can be either holistic (one overall grade for the essay) or analytic (a grade for each of several criteria). Holistic scoring is quicker and thus less expensive. Proficient graders take roughly 1 or 2 minutes for a two-page essay. Analytic scoring gives more information and is more useful for most purposes. Proficient graders take roughly 3 to 6 minutes for a two-page essay, depending on how elaborate the criteria are.

Minimal structure. Structure can be further reduced by providing only a question to be answered or an issue to be addressed. The Illinois Critical Thinking Essay Contest uses this approach (Powers, 1989). In one year, students were asked to take and defend a position about the possible regulation of music television, a topic of great interest to students. Reduced structure gives students more freedom, but provides teachers with less assurance of diagnostic information, not a problem for the essay contest. Again, either holistic or analytic scoring is possible.

At Illinois we are also using this format for the development of the Illinois Critical Thinking Essay Test. We developed a six-factor analytic scoring system, an adaptation of scoring guides developed by the Illinois State Board of Education, and have secured high interrater consistency (.94). This approach also looks promising. Grading takes us about 5 minutes per essay for essays written in 40 minutes of class time.
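To make the holistic/analytic contrast concrete, here is a minimal sketch. The six criteria and their equal weights are invented for illustration; the article does not list the factors of the Illinois scoring system.

```python
# A minimal sketch contrasting holistic and analytic essay scoring.
# The six criteria and equal weights are invented for illustration.

ANALYTIC_CRITERIA = {
    "states a clear position": 1.0,
    "offers good reasons": 1.0,
    "considers other possibilities": 1.0,
    "uses credible evidence": 1.0,
    "avoids fallacies": 1.0,
    "draws conclusions cautiously": 1.0,
}

def analytic_score(ratings: dict[str, int]) -> float:
    """Weighted total over all criteria (each rated 0-5 here)."""
    return sum(ANALYTIC_CRITERIA[c] * r for c, r in ratings.items())

def holistic_score(overall_rating: int) -> int:
    """One overall 0-5 judgment; quicker, but less diagnostic."""
    return overall_rating

ratings = {
    "states a clear position": 4,
    "offers good reasons": 3,
    "considers other possibilities": 2,
    "uses credible evidence": 4,
    "avoids fallacies": 3,
    "draws conclusions cautiously": 2,
}
print(f"analytic total: {analytic_score(ratings):.0f}/30")  # per-criterion detail
print(f"holistic grade: {holistic_score(3)}/5")             # single judgment
```

The design tradeoff mirrors the one in the text: the analytic version costs more grading time but tells the student which criteria need work.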

Performance Assessment

Performance assessment is the most expensive of all, since it requires considerable expert time devoted to each student. It has the greatest face validity for whatever is revealed, since the situations are more realistic, possibly real-life situations. However, the greater the realism, the less assurance of comprehensiveness. In real-life situations, people generally reveal only what the situation requires, and most observable situations do not require all aspects of critical thinking. So real-life performance assessment encounters a difficulty similar to one found in multiple-choice assessment: reduced comprehensiveness. Another possible danger in performance assessment is excessive subjectivity.

The least structured performance assessment is naturalistic observation, as in a case study (Stake, 1978). Here, a trained observer takes extensive notes describing a series of events and focuses on the activities of one person or group. Interpretation is inevitable, but "rich" description is the goal.

An example of slightly more structured performance assessment is the use of a student's portfolio of work to determine graduation from high school (recommended, for example, by Sizer in Horace's Compromise, 1984). Validity of this type of assessment is not yet established. It is an attractive idea, but many problems exist, including probable lack of comprehensiveness of critical thinking assessment.

A more structured performance assessment is exemplified by an exploratory assessment effort by the National Assessment of Educational Progress (Blumberg, Epstein, MacDonald, & Mullis, 1986). A student is given a variety of materials and asked to investigate what factors affect the rate at which sugar cubes dissolve. The observer asks questions and watches to see whether the student goes about the task scientifically. In this kind of performance assessment, structure is provided by the assignment of a task, which is designed to check things of interest.

Performance assessment seems valid on the face of it. Expense, possible lack of comprehensiveness, possible excessive subjectivity, and lengthy reports are dangers.


Summary

Critical thinking testing is possible for a variety of purposes. The higher the stakes and the greater the budgetary restraints, the fewer the purposes that can be served. In particular, comprehensiveness of coverage of aspects of critical thinking is threatened in high-stakes testing.

A number of published tests focus on critical thinking. Almost all are multiple-choice tests, an advantage for efficiency and cost, but currently not for comprehensiveness. More research and development are needed.

Viable alternatives include the addition of justification requests to multiple-choice items, essay testing with varying degrees of structure, and performance assessment. All are considerably more expensive than multiple-choice testing when used on a large scale, but on a small scale, they offer a feasible alternative in terms of validity and expense. However, grading them does take more time than grading a prepackaged multiple-choice test.

Note: The author deeply appreciates the helpful comments of Michelle Commeyras, Marguerite Finken, Stephen Norris, and Amanda Shepherd.

References

Arter, J.A., & Salmon, J.R. (1987). Assessing higher order thinking skills: A consumer's guide. Portland, OR: Northwest Regional Educational Laboratory.

Blumberg, F., Epstein, M., MacDonald, W., & Mullis, I. (1986). A pilot study of higher-order thinking skills assessment techniques in science and mathematics. Princeton, NJ: National Assessment of Educational Progress.

Dewey, J. (1910). How we think. Boston: D.C. Heath.

Educational Policies Commission. (1961). The central purpose of American education. Washington, DC: National Education Association.

Ennis, R.H. (1962). A concept of critical thinking. Harvard Educational Review, 29, 128-136.

Ennis, R.H. (1981). Eight fallacies in Bloom's taxonomy. In C.J.B. Macmillan (Ed.), Philosophy of education 1980 (pp. 269-273). Bloomington, IL: Philosophy of Education Society.

Ennis, R.H. (1987). A taxonomy of critical thinking dispositions and abilities. In J. Baron & R. Sternberg (Eds.), Teaching thinking skills: Theory and practice (pp. 9-26). New York: W.H. Freeman.

Ennis, R.H. (1991). Critical thinking: A streamlined conception. Teaching Philosophy, 14(1), 5-25.

Ennis, R.H. (in press). Critical thinking. Englewood Cliffs, NJ: Prentice-Hall.

McPeck, J.E. (1981). Critical thinking and education. New York: St. Martin's Press.

The nation's report card. (1987). Cambridge, MA: National Academy of Education, Harvard Graduate School of Education.

Norris, S.P., & Ennis, R.H. (1989). Evaluating critical thinking. Pacific Grove, CA: Midwest Publications.

Paul, R.W. (1987). Dialogical thinking: Critical thought essential to the acquisition of rational knowledge and passions. In J. Baron & R. Sternberg (Eds.), Teaching thinking skills: Theory and practice (pp. 127-148). New York: W.H. Freeman.

Powers, B. (Ed.). (1989). Illinois critical thinking annual. Champaign, IL: University of Illinois College of Education.

Shepard, L.A. (1991). Will national tests improve student learning? Phi Delta Kappan, 73, 232-238.

Sizer, T. (1984). Horace's compromise. Boston: Houghton-Mifflin.

Smith, M.L. (1991). Put to the test: The effects of external testing on teachers. Educational Researcher, 20(5), 8-11.

Stake, R.E. (1978). The case study method in social inquiry. Educational Researcher, 7(2), 5-8.
