Assessing Innovation (Final Report)



1 Context and brief

In 1999, the latest version of the National Curriculum was published, including the most recent formulation for design & technology. One of the welcome additions to each of the subject areas for NC2000 was the articulation of ‘importance’ statements, in which the vision of each subject is encapsulated. The statement for design & technology reads as follows:

The importance of design and technology
Design and technology prepares pupils to participate in tomorrow's rapidly changing technologies. They learn to think and intervene creatively to improve quality of life. The subject calls for pupils to become autonomous and creative problem solvers, as individuals and members of a team. They must look for needs, wants and opportunities and respond to them by developing a range of ideas and making products and systems. They combine practical skills with an understanding of aesthetics, social and environmental issues, function and industrial practices. As they do so, they reflect on and evaluate present and past design and technology, its uses and effects. Through design and technology, all pupils can become discriminating and informed users of products, and become innovators. (DfEE 1999, p15)

At the time of publication, the DfEE, in concert with the Design & Technology Association (DATA), established a Strategy Group for design & technology, charged with the task of steering the subject through the following years. The group undertook a number of development tasks, including an externally commissioned review of the literature concerning the impact of Design & Technology (Harris & Wilson 2002), and a review of new technologies that might be encouraged to support the growth of design & technology (hereafter d&t) in the immediate future. One task – undertaken by members of the group itself – was to review the internal coherence of design & technology as presented in NC2000, with particular regard to the match between the vision statement, the Programmes of Study (PoS) and the Attainment Target (AT). It was noted that the vision statement encapsulates the need for creativity, innovation and teamwork in design & technology:
• ‘intervene creatively’
• ‘creative problem solvers’
• ‘members of a team’
• ‘become innovators’

It was also noted that whilst the PoS is less clear on these points, there is at least an implicit recognition of their importance, and the scope or flexibility to interpret these imperatives into school curricula. However, it was noted that the Attainment Target is starkly bereft of any reference to, or recognition of, these key factors. Accordingly the following recommendations were made (March 2002) to the DfES Strategy Group:

To facilitate further progress in teaching and learning in Design and Technology, which exemplifies the exciting and important rationale for the subject, the following proposals are made:


• The PoS for Design and Technology to remain unchanged at the present time.
• The Attainment Target for Design and Technology is developed and refocused to include qualitative criteria for the following fundamental pupil abilities:
  – the development of creative and innovative design proposals
  – the identification of appropriate starting points for design and technological activity (from suggested contexts?)
  – an understanding of the issues and values associated with rapid design and technological development and their impact on the citizens of the world
  – the ability to work effectively as a member of a team
• Research is carried out into effective Design and Technology practice, including appropriate assessment tools and processes, with special reference to:
  – creativity and innovation
  – the ephemeral nature of some design decisions
  – the use of ICT in creative activity
  – the relationship between process and product
• Support and exemplification materials are then developed to increase teacher confidence in teaching the following:
  – a wide range of design strategies which lead to creative, innovative and challenging work
  – how pupils can identify appropriate starting points for Design and Technology activity
  – the impact of technological developments on a global society
  – effective team work

A copy of the full report is available in appendix 01 ‘David Prest report’ on the enclosed CD-ROM. Beyond NC requirements, related problems were evident with GCSE assessments, partly through the syllabus specifications themselves (which equally lack reference to innovation, creativity and teamwork), and partly, inadvertently, through the impact of ‘league-tables’. Teachers, departments and schools are now almost as dependent upon the GCSE results as are their students, and a typical response in schools is that teachers impose ever-more rigid formulas on student project portfolios to ‘guarantee’ success. The concern of the DfES Strategy Group was that as GCSE project work portfolios become more formulaic, innovative students may be penalised by comparison with well organised, rule-following students. This has the result that - in relation to the d&t vision statement - the wrong students (or at least some of the wrong students) are rewarded with the best grades in GCSE assessments. Accordingly, for all the reasons outlined above, the Strategy Group recommended that research be undertaken to examine the extent to which – and the ways in which – innovation and teamwork might be more fully recognised and rewarded in assessment processes, particularly within GCSE. The Technology Education Research Unit (TERU) at Goldsmiths College was asked to undertake the work and develop a system of assessment that would measure and reward design innovators. The project was launched in Jan 2003 and concludes in Dec 2004. In association with
the DfES Strategy Group and the design & technology subject consultant in QCA, the following research aims were identified:

• To research the mismatch between the National Curriculum ‘Importance of design and technology’ statement, the GCSE grade descriptions, and the modes of assessment of student performance in d&t adopted in pilot schools.
• To develop strategies and suggest methodologies that would encourage a range of approaches to curriculum delivery and assessment processes within d&t from Year 6 to GCSE.
• To en-skill pilot school teachers in the use of appropriate assessment tools and processes, with special reference to how creativity and innovation are fostered, the use of ICT in creative activity, and the relationship between process and product.
• To make provision for increased teacher confidence in developing: innovative and challenging work; an understanding of the impact of technological developments on a global society; effective team-work.

2 Research design

Our approach to this project has moved through three phases: initially (phase 1) exploring performance descriptors of design innovation, then (phase 2) examining classroom practices that encourage it, and finally (phase 3) developing assessment activities that promote evidence of innovative student performance.

strand 1: creating descriptors (performance)
strand 2: creating evidence (pedagogy)
strand 3: creating activities (briefs)

A project team was established in TERU, directed by Professor Richard Kimbell, and arrangements were made to work principally in association with three LEAs that could offer advisory and other support to the project:

• Cornwall (adviser: David Prest; principal author of the initial report to the Strategy Group)
• Shropshire (adviser: Paul Clewes)
• Durham (adviser: John Chidgey)


In each LEA, these d&t advisers identified a small number of primary and secondary schools for development activities.

Phase 1 centred on collecting work from these schools, and from GCSE Awarding Bodies, and analysing it to identify categories, groupings, and ultimately descriptors of design innovation in the context of students’ work (in KS2, 3 and 4) in design & technology. Phase 1 therefore involved ‘post-event rationalising’ of work that had already been completed.

Phase 2 was substantially different, and involved ‘live’ projects in schools in all three LEAs and within KS2, 3, 4 & 5. The aim was to observe and explore the repertoire of pedagogic strategies that teachers used in order to promote team-work and innovation in students’ designing. Equally it was to involve observation of the effects of these strategies on learners’ performance. We sought to create a kind of intelligent toolbox of strategies, within which we could understand the effects of using each tool and thereby evaluate its potential for assessment purposes.

Phase 3 was planned to take the toolbox forward and systematically build assessment activities that reflect good design & technology. Specifically, we were concerned to create activities that had the effect of encouraging teamwork and rewarding design innovation. The assessment activities therefore had to pass three different kinds of test:

(i) Do the assessment activities reflect good design & technology?
(ii) Do they promote design innovation?
(iii) Do they provide good assessment data?

Overriding and interacting with all of these concerns was the belief that if we could find good solutions to the problems outlined in the brief we would – at the same time – have a powerful beneficial impact on the future development of design & technology.

3 Methodology

Phase 1

At the outset of phase 1 we consulted widely with teachers, advisers and GCSE awarding bodies to find collections of work that could be categorised on a scale of innovation from non-innovative to highly innovative. Using teachers’ understandings of these terms, we collected 4 pieces of work (2 “innovative” / 2 “pedestrian”) from each school and across the key stages, along with all the papers / briefs / background material that could help to explain to us why the teachers had made the judgements that they had of the work.


To support this process, we developed a ‘why?’ form for teachers to complete – in which we invited them to explain to us why they had chosen these particular projects as characteristic of innovative or non-innovative performance. A copy of this form is available in appendix 02 ‘Why form’ on the enclosed CD-ROM.

Teachers bring a range of work to a discussion group

Thereafter in TERU we analysed the resulting 96 pieces of work – along with the ‘why?’ forms – to identify discriminators between high and low innovators, and thereby to derive a preliminary framework for describing innovation. The following list for ‘highly innovative’ work was derived from those processes and is in priority order, the top of the list being mentioned most frequently:

different, exciting, novel, unusual, risky, bending the rules, brave, determined, marketable, professional, ‘wow’, confident, powerful, unique

By contrast, the following list was seen to be characteristic of non-innovative work – again in priority order:

controlled, focused, orderly, predictable, honest, reliable, thorough, thoughtful


Another distinguishing feature that emerged from the comparison of the two collections of work was that the projects judged to be creative/innovative were typically based on and driven by ideas. By contrast, the projects judged to be competent/adequate (but not innovative) were more typically based on the conventional steps of project management (brief / specification etc). From this process, we derived a preliminary assessment framework that scrutinised the extent to which students were able to:
• have ideas
• grow their ideas
• prove their ideas

The middle one of these – growing ideas (typically through modelling, sketching and discussing) – was seen to be the cornerstone of creative capability in design & technology. As we evolved this way of looking at project-work, we undertook a systematic analysis of the projects that we had been sent, to see whether the framework was robust in accounting for the individuality and the innovation that existed in the work.

A copy of this framework is available in appendix 03 ‘First framework’ on the enclosed CD-ROM.


The final element of phase 1 centred on a project day at QCA with the research team along with all the teachers and advisers to scrutinise a range of work and to validate the framework of descriptors that we had derived. As a culmination to phase 1, the framework enabled us to recognise and agree upon the classification of particular pieces of work on a scale of innovation and to share a vocabulary that enabled us to describe the ways in which it did (or didn’t) demonstrate such innovation.

Phase 2

In May 2003 we launched phase 2 of the project – examining the activity ‘live’ in classrooms, studios, and workshops in schools in the three LEAs. The purpose here was to get beyond the work itself (the outcome) and examine the activity and the pedagogy that enabled teachers to promote creative performance in their students. We asked teachers to run 2-day projects (two whole days – typically 10 hours) to briefs of their own devising, and to feel completely free of the normal constraints of examination assessment criteria. What do they do to encourage design innovators?

In May and June 2003, schools ran these projects with groups of students in years 3-6 (in primary schools), and with years 9, 10, and 12 in secondary schools. Members of the research team observed every minute of each of these two-day projects. Our hope was that we might be able to identify at least some of the qualities (in teachers and students) that might lie at the heart of innovative performance.

Our approach to this task was to have two observers in the classrooms: one observing the teacher in everything that he/she did – especially in relation to encouraging design innovation – and the other observing four pre-selected learners. These observations were based upon – and supported by – observation schedules, derived in part from our previous experience of observing classroom projects (eg for ESRC) and in part from the specific issues we wished to highlight concerning design innovation. The final observation data sheet allowed us to observe the interaction of groups of learners, if that happened during the 2 day project. Copies of the observation sheets for teachers, for learners, and for groups are available in appendices 04, 05 and 06 on the enclosed CD-ROM.

The outcomes of this round of trials were twofold. First we had masses of interesting work from students (and photographs of them doing it) – and this work formed the basis of our first
assessment trials using the framework derived in phase 1. And second, we had masses of observation data (a) about the kinds of things that teachers did to prompt and support design innovation, and (b) about the ways in which students reacted to those pedagogic prompts. Moreover we had this observation data in a very carefully structured (minute by minute) framework through the 2 days of activity. Accordingly we could examine the unfolding nature of the experience, both from the teachers’ perspective and from that of the students. Several things were immediately obvious:
• we could see the empowering effect of having blocks of time to work on the tasks without the constant interruption of lesson changes;
• we could see the typical distribution of time across the day and learners’ reaction to it;
• we observed the very different approaches of students – some using sketching as a prelude to modelling, and some going directly to modelling as their primary exploration device;
• we observed very different teacher styles – some operating as co-designers encouraging exploration, and some seeking to be more directive of the outcomes;
• we observed the incidence of teachers operating with the whole class, with groups, or with individuals;
• we observed the pace of the work in response to the task and teachers’ activities, classified (according to work in a former TERU project) as stationary, poddling, or motoring.

At the end of the 2 day project we collected further data directly from the learners – using a feedback sheet to enable them to tell us what worked well for them and what helped them (or didn’t help them) with their ideas during the project. We also conducted interviews with the teachers, exploring in detail all the phases of the task and all the techniques they used to support and encourage learners’ innovation. Copies of the ‘What I think’ sheet for learners and interview sheets for teachers are available in appendices 07, 08 and 09 on the enclosed CD-ROM.
The illustrations that follow are of one of these phase 2 projects. They focus on a textiles GCSE group (year 10), and the task developed by the teacher concerned a protective hat for the summer. The student in this case study evolved a ‘cooling hat’ based on a cooler block from the freezer. The idea originated in her recollection of her mother using such a block from the freezer to take the heat out of sunburn. The student took this idea and created her own ‘freezer-block’ by laminating together (with a glue-gun) two sheets of acetate, leaving a space to be filled with water. Having created the water holder, she dyed the water to look more like water and added
goldfish and other pond life cut from metal foil for added effect. The felt and fabric hat was then created to reflect the watery / fishy aesthetic of the whole. Finally she tackled the assembly problem – sewing and glueing the cooler-block into the crown of the hat. The finished piece is a prototype (created from start to finish in 10 hrs) of a hat that lives in the freezer and comes out on very hot days! It is – as the student fully recognised – merely a starting point for a real product, and one that would need a good deal of development and detailing, as well as some research into the effects of ice on heads. But as an idea and a prototype it was thought (by all who have seen it) to be an innovative and exciting starting point.

Photo-record of the cooling hat project

From these combined data we were able to gain a clear understanding of the 2 day events:
• what worked and what didn’t;
• what the students found helpful and what they didn’t;
• what the teachers were comfortable with and what they weren’t;
• what was possible in this time-scale and what wasn’t;
• which resources were most effective and which weren’t.

So, in total, we learned a huge amount about the process of undertaking tasks in these conditions. But overlaying these successes was the undeniable fact that whilst the learner outcomes from the 2 day event (their prototypes) were exciting and valuable, there was scant evidence of the learners’ design and development process. The focus of their work had been on generating a successful
outcome, and they had not been concerned (or required) to develop a portfolio explaining everything that they had done. Accordingly the best evidence that we had of the evolution of their work was based on our observation data. And this was a real problem, since no examination can realistically be based on a fully employed ‘observer’ recording what students do as they do it. It seemed necessary to find ways of structuring the activity so that, as students undertook it, they would automatically leave behind a trail of evidence that would provide some means for interpreting the evolution of the outcome and enriching our ability to make judgements about it. As a step towards this end, we noted that one part of our observation processes - the photographic record - had (in some cases) been particularly helpful in reconstructing the student’s process after the event. This was to prove an important observation.

Phase 3

With this at the forefront of our thinking, in September 2003 we launched phase 3 of the project, which involved the construction of assessment activities that would explicitly promote evidence of the process of design innovation. We went back to some of our APU thinking from the late 1980s – in which we had constructed 90 min activities structured into student-response work-books – and began to explore how the principles embedded in those APU tasks might be extended into the two-day activities that we had recently been observing.

In November 2003 we launched the first prototype of an assessment activity with a year 12 group in Cornwall. In the light of this original trial, we modified the structure through further localised trialling in London schools. The format evolved into a 6-7 hour task: two consecutive mornings of 3 to 3.5 hours. In that time, students start with a task and work through from an initial concept to the development of a prototype solution – a ‘proving’ model to show that their ideas will work. The whole 7 hours is run by the teacher – following a script that choreographs the activity through a series of sub-tasks, each of which is designed to promote evidence of students’ thinking in relation to their ideas. These ‘steps’ in the process all operate in designated spaces in a booklet. The following structure is characteristic of the activities so far developed. The task (‘light fantastic’) centres on the re-design of a light-bulb packaging box, so that (once the bulb is taken out for use) the package/box can be transformed into a lighting feature – either by itself or in association with other ‘liberated’ light-bulb package/boxes.


Copies of activity booklets and an overview of the 7 hours of activity are available in appendices 10, 11, 12, 13 and 14 on the enclosed CD-ROM.

(i) read the task to the group and (through brief Q&A) establish what is involved
(ii) explore a series of ‘idea-objects’ on an ‘inspiration table’ and in a handling collection designed to promote ideas about how boxes / packages / containers might transform into other forms and functions
(iii) put down first ideas in a designated box in the booklet
(iv) working in groups of 3, students swap their booklets and each team-mate adds ideas to the original
(v) team-mates swap again so that each team member has the ideas of the other two members
(vi) booklets return to their ‘owner’ and team members discuss the ideas generated
(vii) the teacher introduces the modelling/resource kit that can be used throughout the 2 mornings
(viii) students develop their ideas in the booklet – and/or through modelling with the resources
(ix) students stop to reflect on the user of the end product and on the context of use, before continuing with development
(x) at intervals, students are asked to pause and roll a dice – with questions on each face. The questions focus on procedural understanding, eg “how would your ideas change if you had to make 100?”, and students answer the questions in their booklet
(xi) photographs are used at approx 1 hr intervals to develop a visual story-line to illustrate the evolution of models & prototypes
(xii) at the end of the 1st morning, students and their team members reflect on the strengths and weaknesses of their evolving ideas
(xiii) the 2nd morning starts with a celebration of the work emerging from day 1. This is based on post-it labels that highlight students’ thoughts about the qualities in their ideas
(xiv) further prototype development
(xv) regular hourly photos and pauses for reflective thought on strengths and weaknesses
(xvi) final team reflections, when (in turn) team members review each other’s ideas and progress
(xvii) individually, students then ‘fast-forward’ their idea, illustrating what the product will look like when completely finished and set up in context
(xviii) students finally review their work from start to finish.


In this example, the light-bulb box has been redesigned as a tapering pentagon that fits with many more to build into a complete hanging sphere (or hemisphere standing on a surface). With a bulb suspended at the centre of the sphere, the letter cut-outs (with inset lighting film) project the letters onto the wall. The student titled the prototype ‘your name in lights’.

The workbooks were carefully designed to unfold throughout the activity, ensuring that students always have sight of the instructions for the sub-task they are currently working on and of the work they have just completed. The illustrations below show the back and front of two student booklets. The first is for ‘your name in lights’, and the photo story-line demonstrates the progress of the ideas from inception to final prototype. It is clear that the strength of this idea emerges predominantly through the medium of 3D modelling. The 2nd booklet illustrates a student who is equally comfortable developing ideas through drawings and through 3D modelling. In both cases, the booklet allows us to ‘see’ their very different modes of working in operation.


Booklet 1: the development process for ‘your name in lights’

Booklet 2: the development process for another student in the same group


These illustrations are of course drastic reductions from the original, but we trust that the reader will be able to see that the finished booklet (a folding A2 sheet) ends up packed with ideas, drawings, notes and photos. It is an immensely rich data source and a vivid record of students’ experience of 7 hours of (often) frenetic activity.

4 Factors underlying the design of the activity

There seems little doubt that the activity was successful in generating excitement in the event; moreover the work-books and the 3D outcomes provide ample evidence through which we can scrutinise a rich variety of student responses. The trials suggest that the following factors were important in creating this effectiveness.

• the script in association with the folding booklet

In line with the original APU principles (see Kimbell et al 1991, esp ch 7), the 7 hours of activity are structured to alternate between active and reflective sub-tasks. The former is typically about making a proposal, while the latter is typically about thinking through the consequences of it. Each sub-task is timed (the shortest is about 2 mins and the longest about 40 mins), with specific instructions about what is expected. For example, after the 1st session of day 1, ‘putting down’ their first tentative early ideas, the script runs as follows:

We are now going to ask you to help each other take your ideas forward. Can you pass your activity booklet to one of your team-mates. You now need to forget all about your own ideas and look at what your mate has put in box 1. Imagine you are now taking THEIR ideas forward. In box 2 put down how you would do this. You have five minutes and remember you are now working in box 2.

The boxes in the booklet are carefully sized to be non-intimidating, and the printing on the booklet is designed to fade into the background as students begin to add their own notes and sketches to it. The spaces are easy to fill up – and frequently the ideas spill over into the adjoining areas. This all adds to the sense of busy-ness in the booklet, which lends confidence to the students.

In technical terms – for formal assessment purposes – the scripts and the booklet also serve the purpose of standardisation. The activity sits somewhere between what students might see as an examination and what they will be familiar with as coursework. To overcome the idiosyncrasies of individual teachers and schools, the whole package is supplied centrally and acts to standardise the experience, the process and the focus of students’ responses.


But whilst we standardise the focus of the response (eg adding ideas to those of your team-mate), we are very careful never to specify the form of the response. We use the words ‘put down’ rather than ‘write’ or ‘draw’, and at various times we explicitly ask students to use whatever form of response seems most appropriate to them:

Look at your ideas so far (in box 4). Look at your success criteria list (in box 5). We would like you to develop your design ideas for your product so that it really works well. You can develop your ideas in any way that will be helpful to you:
• you can draw or make notes
• you can make models
• or a combination of both.
Please keep on using box 4 to jot down all your ideas, notes and sketches. If you are modelling – use box 4 to make it really clear to us what you are trying to do.

A further feature of the booklet design is its folding. It is based on an A2 sheet, but is pre-printed and folded in such a way that whilst you work in (eg) box 4, you are immediately presented with what was going on in boxes 1, 2 and 3. This works throughout the whole 7 hours – covering both sides of the sheet – and at no time do we cut off the forward-moving development process from its own immediately preceding steps. This is really important for encouraging students’ evolving design ideas. There is nothing more depressing than turning over from a full and busy page to a clean and empty one. It is a matter of maintaining the impetus behind the growing ideas – and adding to the confidence of students that they are making progress. They are unaware of the points at which the sheet is turned over – and are typically astonished at the end to look back at the mass of ‘stuff’ on both sides of the sheet. The effects of these elements of the structure are reported in section 9, ‘Reactions in schools’.

• the handling collection and inspiration table

At the start of the activity – having been presented with the task – we give each group a ‘handling collection’ of bits and pieces to handle and explore. These have been chosen as ‘idea-objects’, ie objects that are probably nothing to do with the task itself – but contain ideas that might be interesting or helpful when tackling the task. For example, when confronted with a task to redesign light-bulb packages so that the package can be transformed into a lighting feature (or a component thereof), the handling collection is full of bits and pieces that fold, bend, clip together, stretch, and generally allow for various transformations. In addition, the inspiration table is a kind of collective display for the whole group, comprising bigger things than will fit into the handling boxes – or things that we only had one of.
The groups we were testing were typically 21 students, ie seven groups of 3, requiring 7 handling boxes as well as the central inspiration table. We believe that the practicality of handling and fiddling with the bits and pieces was important to get the activity going.


• the team-work

There is a mass of literature concerning the importance of team-work, both in teaching/learning situations and in designing situations. But for assessment purposes there is a pathological fear of using the massive support that it provides to students, because of the association with ‘cheating’. We were determined to overcome this problem, and arrived at a solution involving groups of three students – using the group explicitly to support and enrich the individual work of the team members. It is this individual work that is then assessed.

The group is invoked at various points in the activity. Critically it is used to spur and enrich the first ideas – then at the end of the 1st morning to reflect on the progress of the group members and how they might best move forward in the 2nd morning. And it is used again at the end of the 2nd morning, for individuals to reflect on the work of each of the other two individuals in their group. In total the group time is less than 30 mins in the 7 hours, but it is worth its weight in gold, both in getting the creative activity up-and-running and in sharpening the critical appraisal of the students. Furthermore the working atmosphere in the room is relaxed – with lots of informal chatting within and between groups.

• the modelling resources

We are well aware that conventional practice in schools is to constrain early design ideas to pencil and paper sketches. We take the view that this ritual is unhelpfully restrictive, particularly for those students (commonplace in design & technology) who may be more comfortable with touching and manipulating things as a way of developing their thoughts. Accordingly, the activity offers students a range of materials with which to explore their ideas. There is nothing exotic about these materials. They are what might be described as soft-modelling resources.
Card, plasticine, string, plastic tube, aluminium foil, paper, lolly sticks, pipe cleaners, etc, along with a wide range of ways of cutting them and fastening them together: staples, eyelets, string, Blu-Tack, tape, etc. These resources are introduced after the 'early ideas' sessions (individual and group) but before the main time-block for developing ideas on the 1st morning. Some students chose to use modelling to develop their ideas from very early on. Others chose to stay with sketching in the booklet and only model later. Neither is right or wrong.


• the photo story-line
We mentioned earlier the importance of the booklet design for maintaining the impetus behind students' growing ideas – adding to their sense of confidence and the sense that they are making progress. A key part of this is the digital photos that are taken approximately every hour through the activity, growing into a 'story-line' of 6 or 7 images that are pasted down the spine of the booklet. To take these photos we used a digital camera, taking shots onto a memory card, and a small printer that would take the cards directly (ie with no need for a computer). At a given time we would tell the students that we would be coming round to photograph where they had got to with their work. We took a photo of every student's work, and immediately printed the photos out as 'thumbnails' that were given straight back to the students for pasting into their booklets. The whole process takes perhaps 10 mins, and one of its strengths is that students lose the fear of cutting up or adapting a preliminary concept model into a developing model. The photos ensure that the original model is not lost. We developed this photo story-line for assessment purposes, to make sure that we had some visual record of where students had been along the development path towards their prototypes. But we had not realised at the outset that the story-line would have an additional, quite different, value. As the 1st thumbnail image goes into the booklet, students merely note it. As the 2nd image goes in (an hour later), they note the change: the growth from the 1st image. By the time the 3rd image is taken, they are typically looking forward to it. The images have become a motivational element that reflects back to students the progress they are making. What started as a recording device for assessment purposes grew into a motivational aid with developmental value.
• the red pen and blue pen
One of the challenges in conventional design practice in schools is to establish the idea that reflective / evaluative thinking is not something that only happens at the end of the project – it is essential throughout the development process. We developed two strategies for this. In the middle of the 1st morning we asked students to stop what they were doing, to pick up a red pen, and to make notes and jottings identifying what they saw as the strengths and weaknesses of the work so far – anywhere in their booklet. The benefit of this is twofold: it acts as a pause-for-thought in the (often frantic) development process, and, at the end of the activity, it allows us to see at a glance each student's reflective comments on the work at a moment in time. The same process is then operated on the 2nd morning – but using a blue pen to distinguish it. The difference between red pen and blue pen comments is frequently one of sophistication – the blue pen comments coming from deeper into the development process.


• the dice
A completely different approach to encouraging pauses for thought and reflective comment is the dice. This is a hexagonal prism about 150mm long and with a nominal diameter of approx 40mm. On each of the six faces is printed a different question, and at intervals the script requires students to stop work, roll the dice, and answer the question that comes to the top. It is a form of randomised questioning that prompts students to think beyond the current frame of reference of their ideas. Questions such as:
- how would your idea change if you had to make 100 of them?
- what bits of your ideas are risky and which play safe?
The answers to these questions are of course personalised to the individual ideas of the students – and reflect where they are in the process – but they still provide interesting comparable data between students. To that extent, they operate as a set of examination questions that are embedded within students' tasks and related to their evolving prototypes.

• fast-forwarding
After the 1st trials of the activity, we became aware that students were typically working flat out on their prototypes up to the very last moment. It was also often apparent (from subsequent discussion with students) that their prototype building had given them new insights into the task and their solution, but that these more sophisticated thoughts were typically not evident in the final form of the prototype. Accordingly we constructed a final step in the activity that invited students to 'fast-forward' in their minds to a point at which their ideas and their solutions were fully complete and installed in context. We invited them to describe for us (in any way they chose – typically drawings and notes) how it would work in this finally developed form. This was an opportunity to stand back from the prototypes on which they had been working, to think about refinements, and to give us a fully worked-out description of the final version.
It proved to be a valuable supplement to the activity – with the fast-forward box frequently taking the ideas a significant step forward.


By the end of June 2003, we were confident that we had established the basis of a short, sharp 'innovation activity' that was not strictly an examination, nor strictly a piece of coursework. Rather it sat between the two – a hybrid – that is sufficiently centrally controlled (by the teacher script and the student booklet) to be equivalently administered across schools, but at the same time sufficiently open, dynamic and encouraging of students' ideas to reveal genuinely creative responses.

5. The conceptual framework of design & technology

In this project we used a model of the design & development process that was first developed for the Assessment of Performance Unit (DES 1991). That model describes the process of design & development as a continual iteration of ideas in the mind with expressions of those ideas in the external world.

…the act of expression is a crucial part of the development of thinking… the concrete expression of ideas not only clarifies them for us, but moreover it enables us to confront the details and consequences of them …. Cognitive modelling by itself - manipulating ideas purely in the mind's eye - has severe limitations when it comes to complex ideas….. It is through externalised modelling techniques that such complex ideas can be expressed and clarified, thus supporting the next stage of cognitive modelling. It is our contention that this inter-relationship between modelling ideas in the mind, and modelling ideas in reality is the cornerstone of capability in design and technology. Choosing the most appropriate form of modelling involves thinking not only about what the idea is that needs to be expressed, but equally about how the modelling is supposed to help. ….discussion (verbal modelling) …diagramatic, or computer simulated … graphic ... 3D models… There are many ways of modelling ideas and each has its advantages and


disadvantages. Accordingly pupils need a rich awareness of the diversity of possibilities for modelling to enable them to grapple with the particular requirements of their task. (Kimbell et al 1991 p20-21)

This model underpinned our development of the booklet-based activities. So (for example) the sequencing of sub-tasks is very deliberate, each reflective sub-task being followed by an active sub-task, supporting the continuous iteration that we believe is central to effective designing. This process is illustrated below.

[Figure: the iterative model – procedural interventions in the booklet; the inspiration table & handling collection providing a 'cloud' of ideas; learners' thoughts and ideas; learners' expressions and outcomes.]

At the outset of the activity it is important that learners find a starting point from which their ideas can grow. We used the 'inspiration table' and the 'handling collection' to provide a 'cloud' of idea-objects, ie objects that contain interesting or novel ideas that might be helpful in the task. Learners may or may not pick up on one or more of them.

After putting down their very early (first) ideas, the booklets are passed around the team of three, generating, sharing and extending the ideas. The starting point for each individual is pushed forward and enriched.

[Figure: the model extended – the modelling resources and team ideas provide a further 'cloud' of opportunities alongside the procedural interventions.]

Further into the activity, the modelling resources provide another ‘cloud’ of opportunities for expressing and exploring ideas. Learners may or may not pick up on one or more of these opportunities.

[Figure: the full sequence of procedural interventions – team ideas, photo story-line, red pen, team reflections, post-it valuing ideas, blue pen, fast forward – as learners' thoughts and ideas become more sophisticated and their expressions and outcomes become more detailed and richer.]

Each succeeding procedural intervention has the effect of pushing ideas forward, but without prescribing what those ideas should be or in what form they should be represented. The interventions are designed to enable learners to enrich their own ideas, and to express them in their own ways.

We have created an assessment activity that is very tightly controlled by the administrator, using the script, the booklet, the handling collection, the modelling resources etc. But learners' reaction to it, reported in section 9, is that they feel a great sense of freedom in developing 'their own' ideas and making 'their own' products. The procedural framework is the secret to this. It is rich in support systems, creating fertile ground for learners' independent ideas to take root and flourish. There are important issues here that go well beyond design & technology and impact across the whole curriculum; or at least the whole of the curriculum that purports to support procedural learning. Design & technology stands out as one of the richest areas of the curriculum for developing learners' procedural capability; in our case concerning the processes of design & development. It has long been argued that in priority terms, the acquisition of knowledge and skill is secondary to


the development of procedural capability. HM Staff Inspector Hicks described it in the following terms:

Teaching facts is one thing: teaching pupils in such a way that they can apply facts is another; but providing learning opportunities which encourage pupils to use information naturally when handling uncertainty, in a manner which results in capability, is a challenge of a different kind. (Hicks 1983)

This is the challenge that we have responded to in these assessment activities. The model that we have outlined above, and that guided the evolution of our activity-based assessments, is centred on the primacy of developing learners’ procedural capability in design & technology. And in the process we are seeking to promote innovation in their responses. This section of the report has attempted to explain how this arises and why we believe learners interpret what might be seen as the constraints of the activity as freedom to operate autonomously.

6. Developing the assessment framework

The challenge for our assessment framework was to find a way to celebrate learners' emerging ideas and to reward those learners who are able to grow their ideas into innovative design solutions. Our starting point for this began in phase 1 – in two ways. First, through the identification of the words associated with innovative performance and the recognition that ideas lie at the heart of this process. Second, we identified the fact that 'ideas' need to be seen in terms of
• having them
• growing them
• proving them.
As early as August 2003, we were developing this framework for analysing portfolios that we had been sent by schools or by examination bodies. A copy of this form is available in appendix 03 'First framework' on the enclosed CD-ROM. The development process for the framework thereafter operated as follows. Initially, the proposed framework was used to examine samples of learners' work to see what qualities it enabled us to recognise and reward – and equally those qualities that it did not acknowledge. This resulted in two things: (i) a 'score' for the work, and (ii) a list of doubts and uncertainties about


qualities that needed to be re-organised within the framework or added to it. This generated a further draft, and the process was repeated with new samples of work. By the summer of 2004, we had arrived at a final version of the assessment form, and it is shown in full below. This is the form that was used by the team to score all the pieces of work produced by students within this research study. A number of features of the form are worthy of a brief description.


A copy of this form is available in appendix 15 'Final rubric' on the enclosed CD-ROM.

First, there is an order to the assessment process that starts with making an overview judgement (in grey) about the whole piece of work, and progressively works down into the detail.


The initial overview judgement is (broadly speaking) where the marker notes whether this piece of work is a 'wow' or a 'yawn', or somewhere in between. Initially this judgement is on a 4-point scale, but thereafter it is refined so that (for example) if the work is judged to be a 'wow' (ie a 4), the marker is required to say whether it is a REAL WOW (ie the top of level 4), a solid middling 'wow', or only just a 'wow'. So the 4-point scale becomes a 12-point scale. Starting with this holistic judgement is a technique that we have systematically adopted in all our research projects since APU data confirmed our 'belief through practice' that such holistic judgements are not only easier to make but also more reliable than any atomised form of judgement. The holistic judgement creates an overview frame of reference which can then be teased apart to examine elements of detail.

The detail in this case concerns learners' ability to have ideas, grow ideas and prove ideas. These judgements are teased out as step 2 in the process. Once again, an initial judgement on a 4-point scale is refined into a 12-point judgement – supported by criteria that have been drawn from detailed examination of students' work. Having ideas is seen as 'sparkiness', rewarding learners for the quality and quantity of the raw material of ideas that they throw into the melting pot. These ideas may arise at the start of the activity, in the middle of development, or towards the end of the activity. In any case we reward those scripts that demonstrate a rich supply of interesting ideas, regardless of whether or not they are grown into innovative prototypes.
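As a purely illustrative sketch of the arithmetic behind this banded scoring (the labels for the two middle bands below are our own invention, not the project's terminology), the refinement of a 4-band judgement into a 12-point scale might be expressed as:

```python
# Illustrative sketch of the holistic marking scale: a first judgement
# places work in one of four bands ('yawn' .. 'wow'), then a refinement
# (just / solid / real) splits each band into three, giving 12 points.
# The middle band labels are invented for illustration.
BANDS = {"yawn": 1, "middling": 2, "good": 3, "wow": 4}
REFINEMENTS = {"just": 1, "solid": 2, "real": 3}  # bottom / middle / top of band

def twelve_point_score(band: str, refinement: str) -> int:
    """Map a (band, refinement) judgement onto the 1-12 scale."""
    return (BANDS[band] - 1) * 3 + REFINEMENTS[refinement]

print(twelve_point_score("wow", "real"))   # a REAL WOW -> 12
print(twelve_point_score("wow", "just"))   # only just a 'wow' -> 10
print(twelve_point_score("yawn", "just"))  # bottom of the scale -> 1
```

The point of the two-step structure is that the coarse holistic band is decided first, and the finer discrimination happens only within that band.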

Growing ideas is seen as the heartland of capability in design & technology, and it finally emerged in two forms:


• growing ideas through modelling (eg notes/sketches/3D/photos). This is seen as a kind of horse-power driving the development process forward. Are the ideas growing, transforming and (hopefully) improving? Or, conversely, are they static, unmoving, going nowhere?

• growing ideas through optimising. This is a more subtle aspect of growth, and concerns learners' ability to see (and control) the complexity in their ideas, so as to keep the project on the road. Some students are well aware of the complexities that they are consciously managing. Others are driven before the storm of their ideas – unable to control events or even to see the complexities that exist in their ideas.

Proving ideas is about criticality and thoughtfulness. In the midst of a development process it is easy to be carried away by the excitement and challenge of new ideas, but good designers can stand back from their work and view it dispassionately. Some students are autonomously self-critical, others only reflect when they are required to (eg when 'red pen' interventions are demanded), and others are unable to be thoughtfully self-aware at all in their development process.

Having made these judgements in step 2, we found it useful also to note the content areas of work that students had engaged with in their project. This was step 3 of the assessment process. The three groupings that we used for this are work that is dominantly aesthetic (A), work that is dominantly concerned with users (U), and work that is dominantly technical (T). An individual student's work may of course not be dominantly one or other of these, but may be balanced between two or even all three. In most cases, however, we found that work was either in one area or (at most) two. We were not concerned to make quality judgements about these categories, but merely to note what areas of content learners were principally grappling with in their work. Whilst this tells us something about learners and their performance, it also tells us a good deal about the task that is set. Some tasks are more susceptible to (eg) aesthetic or technical development than others. Ideally, tasks would be neutral in this respect – allowing equally for all three forms of development.
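A minimal sketch of how such step-3 content-area codes might be tallied across a sample of scripts, for instance to check whether a given task skews aesthetic or technical (the script identifiers and tags below are entirely invented for illustration):

```python
from collections import Counter

# Invented sample: each assessed script is tagged with one or (at most)
# two dominant content areas - A (aesthetic), U (users), T (technical).
tagged_work = {
    "script_01": ["A"],
    "script_02": ["U", "T"],
    "script_03": ["T"],
    "script_04": ["A", "U"],
}

# Tally how often each content area dominates across the sample.
area_counts = Counter(area for areas in tagged_work.values() for area in areas)
print(dict(area_counts))  # {'A': 2, 'U': 2, 'T': 2}
```

A task that was neutral in the sense described above would produce a roughly even spread across A, U and T.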

7. Awarding Bodies Pilot

On the strength of this work, the d&t consultant at QCA requested that we present to the GCSE Awarding Bodies (Edexcel, AQA, OCR, WJEC) to gauge their reaction to the approach. In the light of that presentation – in June 2003 – they all committed their organisations to development trials in schools where their senior moderators were teaching. Accordingly we moved into an 'Awarding Bodies' pilot stage (the AB pilot) with eight more schools, two attached to each awarding body, distributed across England and Wales.


In this AB pilot, we invited the eight teachers to develop their own tasks that could be the focus of the activity. We created a task framework for this – so that all tasks had common features and could therefore all be embedded into our common student-response booklet. We were then (in TERU) able to print standardised booklets embedding the individualised tasks for each of the eight schools. All the activities therefore had a standard script (and hence time-scale), and all learners in the AB pilot had a common experience of the process. They 'felt' the same. The only difference lay in the specifics of the task being undertaken. The tasks were designed to cover the breadth of design & technology, and reflected the personal specialist areas of the senior moderators in the AB pilot schools:
• textiles
• graphics
• systems & control
• product design
• food

The AB tasks
• "Bend-it-like-Beckham": a moving shop window display based on Euro 2004
• "Transform": textile fashion items, eg hats into bags
• "A day on the beach": a multi-function / transforming product, eg table into body/board
• "FruiTease": a tea or coffee based fruit drink with an edible stirrer
• "Gimme the money": a collapsing and expanding charity collection box
• "Re-construct": fashion garments from charity shop clothes
• "Body storage": small containers to hold keys/money/mobile while clubbing
• "Bacchus": wine glass packaging for transportation

In addition to creating the tasks, in order to run them schools had to have available the modelling materials, the photography set-up and the handling collections that launch the projects. To support this, from TERU we arranged a supply system for the modelling materials (via Cornwall LEA Designing & Making centre) and the photography kit (via TAG software). But the teachers themselves had to build their own handling collections as inspiration / starting points for the activity. The teachers took this task very seriously, and the collections were invariably excellent stimulus resources to kick-start the activities. By Jan 2004, the AB pilot was ready to run, and we observed every one of these activities. Most had been completed by Easter 2004.


It was clear however that one-off tasks – of various kinds – would be only partially helpful to us in understanding how the activity operates as a diagnostic tool for design innovation. It would be far more helpful if we had a 2nd test of all the students in these AB schools that could be used to standardise the data. Accordingly we asked all the AB schools to additionally run the 'light fantastic' project in May / June 2004, with exactly the same group of learners as had completed the 1st task. Most of the schools agreed, and before the summer we therefore had completed booklets from most of the AB pilot students for two different test activities:
• their own teacher's task
• our 'light fantastic' task

8. Supplementary pilot studies

The majority of the research objectives that we had agreed with DfES and QCA were covered by the work undertaken in our early pilot activities and the AB pilot. However, we were aware that in two areas in particular we needed to undertake further development. The first concerned the use of the booklet / activities at KS2, and specifically in years 5/6. We proposed to try the 'light fantastic' task (with booklets, scripts, handling collection, and all the attendant organisational arrangements) and see what happened when it was run in a normal year 5/6 classroom with approx 30 youngsters. The second concerned a development of group-work such that the group was no longer merely a support for individual designing. Rather, we wanted to see what happened when the group became a coherent design group, working together to achieve a group outcome. To do this, we designed a modified form of the light fantastic task and ran it with a year 10 group.

KS2 pilot

As we explained in section 3, the phase 1 and phase 2 work of this project was conducted throughout within KS2 as well as in KS3 and 4. We drew upon many examples of design & technology projects in years 3-6, and we discussed the ideas and the developing assessment framework with teachers of those years. However in the summer of 2004 we decided


that the 'light fantastic' activity that we had developed primarily for year 10 students should be the focus for a year 5/6 activity. It was clear that some modifications were required, given the age and the design & technology experience of the children with whom we proposed to work. Nevertheless the task remained the same: focused on transformation, and involving the two functions of creating a package for a light bulb which can then become part or all of the lighting feature.

• the teacher script in association with the folding booklet
A number of minor modifications to the language in both the script and the worksheet were made in order to make them more accessible and manageable for 10 and 11 year olds. The font size of the task box was also increased.

• the handling collection and inspiration table
The handling collection for each group and the inspiration table remained unchanged. However, the time taken to contextualise the children into the task was much longer, although very worthwhile. The items within the inspiration collection were firmly anchored into the world around them.

• the team arrangements, the modelling resources, and the photo story-line
These remained exactly the same.

• the red pen and blue pen
We were aware from our phase 2 explorations at KS2 that the concept of 'carpet time' was well established and an important opportunity for reflective thought. We built on this, developing carpet time into the use of red and blue pens, encouraging children to gather their thoughts.

• the dice
The dice was substituted with 'bubbles' printed onto the booklet (and scripted), posing questions to encourage children to think around their task. The bubbles performed the same reflective function as the dice and were used in the same way, to evaluate and extend the children's work throughout the activity.


With these modifications the activity was run with year 5&6 groups. The reactions in school are noted in section 9, and the performance analysis of the children’s work is reported in section 11.

Group-work pilot

We initially explored two different approaches to structuring a group-work assessment activity, both having a major focus on the learners working as a group: the first aimed at assessing the individual within the group, and the second at assessing the group as a whole. All of this group-work pilot was done with year 10 groups. Our intention was to use the assessment rubric that had been developed for the individual activities and to customise the 'light fantastic' activity so that we retained a level of comparability between group and individual work. The activity structures were developed in parallel, as indicated in the table below. In both structures the outcome was to be a group response. The major differences between the two were as follows:
• the structure on the left produced a single outcome, with the learners working as a group, with some individual sub-task development along the way
• the structure on the right resulted in a range of outcomes, planned and agreed by the group, each part of the range being developed by an individual.
In the event, it was decided that the latter activity structure was too close to the individual activities (the 'light fantastic' structure) and that our focus within the group-work pilot should be on the group-assessed task.

The two activity structures, box by box of the booklet (Ind = completed in an individual booklet; Grp = completed in the group booklet):

Box | Group Assessed | Individual Assessed
1 | Early ideas (Ind) | Early ideas (Ind)
2 | Early ideas (Ind) | Early ideas (Ind)
3 | Early ideas (Ind) | Early ideas (Ind)
4 | Group discussion & ideas: client, need; group decision (Grp) | Group discussion: range; group decision; photo for ind. sheets (Grp)
5 | Development box (Grp) | Development box (Ind)
6 | Photo (Grp) | Photo (Ind)
7 | Photo (Grp) | Photo (Ind)
8 | Photo, copied to individual (Grp) | Photo (Ind)
9 | Sub-task one: thumbs up and down (Ind) | Thumbs up and down (Ind)
10 | Sub-task two: thumbs up and down (Ind) | Thumbs up and down (Ind)
11 | Sub-task three: thumbs up and down (Ind) | Thumbs up and down (Ind)
12 | Day 2: group discussion and decision on sub-task roles (Grp) | Day 2: discussion of ideas + boxes 9 to 11; group decision (Grp)
13 | Identify sub-task (Grp) | What will you do next (Ind)
14 | Development space (as box 5) (Ind) | Development space (as box 5) (Ind)
15 | Photo (Ind) | Photo (Ind)
16 | Photo (Ind) | Photo (Ind)
17 | Group sharing of sub-tasks (Grp) | Photo (Ind)
18 | Photo (Grp) | Photo (Ind)
19 | Photo (Grp) | Partner's thoughts (Ind)
20 | Group evaluation (Grp) | Partner's thoughts (Ind)
21 | Group fast-forward (Grp) | Fast forward (Ind)
22 | Group persuasive argument (Grp) | How group comments have been acted on (Ind)
23 | Audit (Grp) | Persuasive argument (Grp)

We drew on our previous experience of developing group-work assessment activities for the South African Department of Education (Kimbell et al 2000) and for the Royal Society of Arts (Stables et al 2003). Based on groups made up of three mixed-ability learners, we modified the activity booklet such that part of the activity was developed and recorded through a group booklet, and part through a collection of individual booklets. The task itself was also modified to prioritise three aspects of the brief that could be focused as sub-tasks, the responsibility for which could be allocated to individuals within the group. We were aware that, for all to play an effective part in a group, it was essential for each to have something of their own to bring to the table and to be given individual space to develop their own ideas before sharing these with the team. Hence the activity began with the early ideas development with which all tasks started, which led to a team review and brainstorm to identify which ideas would be taken forward. This was followed by an extensive time working as a group to develop early ideas. The first morning concluded with individual reflection – not on each other's ideas, but on the group's ideas, using the sub-tasks as the main criteria for evaluation. At the start of the second morning the group reviewed the points made through individual evaluations, decided what now needed to be developed, and allocated sub-tasks to individuals to move things forward. Each individual was then given time to work on their own focus, all of which was then brought back to the group for final development. Most other features of the activity – the photographic record of modelling, the 'dice questions', the 'fast-forwarding' of the final development – were retained. The feature that was abandoned was the use of 'post-its' to refocus at the start of day two, this being replaced by the group review of evaluations.
Having detailed the activity structure through the booklets, the administrator’s script was then modified to match.


The group activity was piloted in a single school – an inner-city, non-selective, Roman Catholic girls' school. The learners were year 10 d&t students who worked in six teams of three. The groups were of mixed ability, established by their regular class teacher. The work that resulted was assessed using an identical rubric to all other phase 3 work, but the work of each group (comprising one group-work booklet and three individual booklets) was assessed as a complete unit. This enabled a corporate profile of marks to be provided for each group. The reactions in school are noted in section 9, and the performance analysis of the children's work is reported in section 11.

9. Reactions in schools

By the summer of 2004 we had run trials in 12 schools with approx 390 students, approx 100 of whom had completed two activities, one of which ('light fantastic') could be used for standardising / benchmarking purposes right across the sample.

9.1 The reactions of teachers

The following comments are from teachers involved in the trials, initially concerning the activity as a whole.
• Students enjoyed the challenge. Great atmosphere in the room. Students were totally engaged for the majority of the time. Loved the photographs!
• Students were developing products that were more interesting and creative than those that materialise from the brief, specification, research, ideas, development route!
• Through a highly focused activity, students had the opportunity to experiment with a concept that may or may not work. Given permission to take a risk and not to be too worried about a quality outcome.
• One of the remarks that I recall from reading the self-assessment sheet was "it shows what I can do in a positive way" – this was written by a pupil who is a school phobic and finds school work difficult.
• Able to use their imagination (have a free hand).
• Students realising how much they can do in a short time.
• The set task appealed to their imagination. The whole process is "pacey" and nothing becomes overworked or laboured – the quick response time sharpened pupils' decisions and hence pushed achievement/attainment. Taking a chance / being risky – as it is a prototype, it matters but we can learn from the process. The end product is not just the key aspect of this task – it is how the task is undertaken. The exchange/evaluation of ideas in the initial part orientated pupils to be aware that other people's opinions could help and, more importantly, that they function as a team. Pupils felt a range of emotions during the project – apprehension, edginess, risk, excitement, familiarity, but also a sense of achievement and pride. The photos spurred them to work at a pace and also gave a sense of achievement. The task is divided into two mornings – pupils have a gap to reflect on the previous day's thinking.


The following comments relate to specific aspects of the activity structure discussed in section 4.

The script in association with the folding booklet
• The layout and the way it (the worksheet) folds so that the pupils can see what they have done is great.
• It is a safe structure to work with. The language is accessible to all pupils; considerable thought has been given to the wording and phrases.
• Well-paced format.

The handling collection and inspiration table
• The inspiration table was a really successful way to motivate / excite the students by showing them, and getting them to think about, the ‘wow’ factor within a successful product.

The team-work
• The pupils think the support team (their group of 3) is very helpful in achieving their task and a lot more critical commentary is going on than is recorded.
• It demonstrates the advantages of listening to others and of team work.

The photo story-line
• Students enjoyed the challenge. Great atmosphere in the room. Students were totally engaged for the majority of the time. Loved the photographs!
• Photo time-line to show development.

9.2 The reactions of students

From the students involved, the message was equally clear. At the end of each activity we asked students to complete an evaluation form in which we identified all the sub-components of the activity (eg the photo story-line). On a Likert scale of 4-1 (very helpful, helpful, unhelpful, very unhelpful) we asked them to rate each of these components. The chart below presents all the data from the year 10 groups, split by gender. A copy of this form is available in appendix 16 ‘evaluation’ on the enclosed CD-ROM.
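In essence, this analysis averages the 4-1 ratings per activity feature, split by gender. A minimal sketch of that summary step (the feature names and responses below are invented for illustration, not the project's data):

```python
# Summarise 4-1 Likert ratings (very helpful=4 ... very unhelpful=1)
# per (gender, feature) pair. All data here is invented.
from collections import defaultdict

responses = [
    # (gender, feature, rating)
    ("girl", "photo story-line", 4), ("girl", "photo story-line", 3),
    ("boy",  "photo story-line", 3), ("boy",  "dice questions",   2),
    ("girl", "dice questions",   2), ("boy",  "dice questions",   3),
]

ratings = defaultdict(list)
for gender, feature, rating in responses:
    ratings[(gender, feature)].append(rating)

# Mean rating per (gender, feature) combination
means = {key: sum(vals) / len(vals) for key, vals in ratings.items()}
print(means[("girl", "photo story-line")])  # 3.5
```

The same per-group means are what the chart below plots, one bar per feature per gender.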


[Chart: helpfulness of activity structures – mean Likert ratings (0-4) for year 10 boys and year 10 girls]

For girls, it is clear that the most popular features of the activity are the group generation of ideas, the photo story-line and the use of modelling resources (all approx 3.5 on the Likert scale), closely followed by the group evaluation of ideas (3.4). Overall they agree – or strongly agree – that it is a good way of assessing their capability in design & technology (3.5).

For boys, six features rank at almost the same level of helpful/very helpful: the handling collection, the group generation of ideas, the photo story-line, the modelling resources, the booklet space for sketches/notes, and helping the group with ideas (all approx 3.2). Again they think it is a good way of assessing their capability in design & technology.

The category that stands out most obviously for not being favoured by students is the dice and its questions. The reaction to the dice questions is consistent: they are disliked by many students, who see the questions as irrelevant to their immediate concerns and an interruption to the development process. They scored 2.38 on the Likert scale, with almost as many students seeing them as very unhelpful as saw them as very helpful. But they are liked by teachers, who can see the value of the evidence that is collected as a result.

The free response part of the evaluation form invited students to comment more openly on the activity, and the following comments were characteristic.

Hull
AB112 - Had other people to give their opinions on your ideas
AB114 - Helping my group with different ideas
AB117 - The sheet was helpful, as it helped you know what you had to do
AB119 - Being able to use my own ideas how I wanted to
AB1111 - These have been the best two days in school
AB1117 - Its easier to understand when you try it out for yourself
AB1119 - The pictures help, cos I could always look back and see how good I'd done


Leicester
AB214 - You had more time to get on with it
AB215 - Working in groups but having the ability to work independently
AB216 - You can make things you didn't expect
AB218 - The regular photos
AB219 - Working in groups helped for inspiration with ideas
AB2110 - Its fun designing
AB2111 - Using different materials
AB2116 - I could show people my ability

Staffordshire
AB226 - Good layout of worksheets
AB228 - You got to show your ability
AB2210 - Starting from scratch to make you think

Wales
AB314 - Using your own ideas; making models instead of drawing - they work better
AB315 - It was fun to design something and actually make it
AB319 - Seeing your idea develop
AB3110 - Only having a limited time to work
AB3111 - We got to try out different things, using our ideas
AB323 - Being able to make something by myself and what I wanted to make
AB324 - I realised that I could do more than I thought I could
AB326 - The photos was the best way to show the steps, which I think was good

Berkshire
AB411 - I liked getting my imagination go wild
AB4110 - It was a very good way of finding how many skills you have
AB4112 - I learned how to make a lot more stuff

Northumberland
AB423 - It made you think about how you could do things and with different materials
AB4214 - It is more laid back than a test - which is better

The teachers have further commented on the extent to which the experience has affected their own classroom practice.
• This trial has had a real effect on my teaching. It has reinforced things I do, reminded me of things that I have done, and prodded me to think of things I have never done.
• My PGCE student is completely ‘gob smacked’ with the method of working and is implementing many of the principles in the trial in his teaching. His lessons are showing real pace and focus.
• We found the project to be a very rewarding experience – we have had time to reflect and it will enrich our learning style considerably.

9.3 The reactions from the group-work pilot

The chart below shows the comparison between the phase 3 Y10 girls who undertook the individual activity and those engaged in the group-work pilot. The chart does not include those aspects where there was agreement between the two groups, and it has been organised to descend from the highest to the lowest scoring activity feature, as ranked by those engaged in the individual assessment. It is clear that the group response is generally less favourable than the individual, with the exception of using the group to generate ideas, and the value of the dice questions. For the former, this is a very interesting piece of data when linked to the distinctly higher average mark for ‘having ideas’ that was seen in the group-work pilot (see section 11). We have shown above that students rate very highly the opportunity provided by a supportive group to critique and explore ideas for the individual. It would seem that, in the group-work pilot, there was far more explicit interaction in respect of generating and developing ideas, and that this has really been appreciated. However, the downside is that the greatest variance between the individual and group-work evaluations is in relation to whether the activity allowed individuals to show what they could do. For those involved in a group-work response there appears to be considerable doubt about the extent to which they feel their abilities have been able to show through.

[Chart: mean Likert ratings of activity features – all girls, individual average v group average]

It is interesting to note, however, that those groups who felt the activity had allowed them to show what they could do (as individuals) were also those groups who were most positive generally about the activity. As examples, group 4 recorded a mean of 3.33 or above for 9 out of 14 aspects of the activity, whilst group 3 recorded 7 out of 14. The final aspect of the activity that the group-work learners valued more highly was the dice questions. This is intriguing because, as has been mentioned earlier, this was the least popular of all features for individual learners and, while this is still the case for the group-workers, it does appear to have been seen as more worthwhile. While there is no clear evidence to explain this, one might speculate that the dice questions took on more value when it was possible to engage in dialogue as a result of being presented with them – and, through the interaction, genuinely use them to critique the work in hand.

10. Deriving assessment data

We describe here two aspects of the marking and moderating process. First, as a research team, we decided to mark all the work ourselves. However, for reasons of transferability of the system into other settings, we also decided that it would be helpful to explore how we might train others to undertake the marking process. Both are explained here.

(i) Team marking

We decided on a two-marker system in which a 1st marker and a 2nd marker would then agree a moderated mark. This moderated mark would subsequently be used for all analysis. Choosing markers from the team was not a random process. We were aware of the potential impact on markers of the different ways that they might understand or ‘read’ a piece of work if they had themselves watched the project running. So in each marker pair, we sought to have one marker who had observed the project and one who had not. This ideal state was not always achievable, but in most cases it was.

Our approach was to read and scan the whole script – from start to finish – and then work through the three steps of the assessment process. At the completion of a school group, the scripts were re-ordered and passed to the 2nd marker. Initially each script could take as long as 15 mins to understand and mark, but with experience this reduced to a manageable level at which a group of 21 could be assessed in approx 2 hrs. During the marking process we were alert to the need to identify good exemplars of particular qualities (for subsequent training purposes) and to identify any elements in the work that were not picked out for attention by the assessment form itself. After about 3 weeks, we had a complete data set, with two mark profiles for each student (test 1 and test 2), both of which were moderated and agreed by the team.
These mark profiles were entered into a spreadsheet along with all the background data that we held for each student (eg school / gender / KS3 SAT scores for Maths/English/Science and predicted GCSE grades for d&t).


(ii) Marker training

The teachers and Awarding Body officers who had taken part in the AB pilot agreed to take part in the marker training exercise. Thereafter the process was broadly as outlined below:
• We selected 12 pieces of work from the sample, covering the full range of performance levels, with three pieces judged by the team to be at each of the 4 broad holistic levels (3 one-ers; 3 two-ers; 3 three-ers; 3 four-ers).
• We prepared a training schedule that involved introducing the assessment form and all the qualities that are identified within it.
• We identified exemplar pieces of work (sometimes whole worksheets, and sometimes elements of worksheets) that would enable markers to pinpoint the qualities we were concerned with and ‘level’ the performance against these exemplar bench-marks.
• We put the markers into groups of 3 and invited them to mark work individually and then moderate it in discussion to thrash out any disagreement.
• We compared their moderated marks with the research team’s moderated marks and asked the markers to complete a questionnaire identifying the easy and difficult things about the marking process.

The following table shows the variance between the markers’ moderated marks and the team’s moderated marks, first for the holistic judgement and then for the four detailed qualities. All the marks are on a 12-point scale, and on average the markers varied from the team by 1.6 marks for holistic (the best fit) and by up to 2.3 marks for having ideas and for optimising ideas (the worst fits). Of the detailed qualities, modelling was the best fit, with a variance of only 1.7, and this probably reflects the power of the booklet to capture this evidence and give a rich picture of the growth of the idea.

Holistic: 1.6
having: 2.3
growing/modelling: 1.7
growing/optimising: 2.3
proving: 2.0
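The variance figures above are, in effect, mean absolute differences between the trainee markers' moderated marks and the research team's moderated marks. A small sketch of that calculation (the marks shown are invented, not drawn from the trial):

```python
# Mean absolute difference between trainee markers' moderated marks and
# the research team's moderated marks (12-point scale). Marks invented.
def mean_abs_variance(marker_marks, team_marks):
    diffs = [abs(m - t) for m, t in zip(marker_marks, team_marks)]
    return sum(diffs) / len(diffs)

marker = [6, 9, 3, 11]  # hypothetical trainee moderated marks
team   = [7, 8, 5, 10]  # hypothetical team moderated marks
print(mean_abs_variance(marker, team))  # 1.25
```

Computed this way for each quality heading, the figure answers the practical question: by how many marks, on average, would a trained marker's result differ from the team's.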

In the evaluative questionnaire, we asked teachers to comment on the ease of identifying the evidence for assessment, and then (separately) the ease of making judgements about that evidence. Both were done on a 4-point Likert scale against the question of whether it was “easy” to do those things (strongly agree=4; agree=3; disagree=2; strongly disagree=1). The mean figures across all the markers and all assessment headings are as follows, and these figures again highlight the power of the booklet in providing evidence for assessment. In four of the five areas, teachers agreed or strongly agreed that it was easy to identify the evidence for assessment. Growing/modelling emerged as having the highest confidence level (3.6) and proving the lowest (2.9).

Holistic: 3.1
having: 3.1
growing/modelling: 3.6
growing/optimising: 3.2
proving: 2.9

It is one thing, however, to be able to identify the evidence, and a different thing to be able to value it accurately, and the following table illustrates teachers’ confidence in their ability to do that. On the same Likert scale, all the areas emerge at around 3 (ie markers agree that it was easy to make judgements). Once again growing/modelling emerges as the area of greatest confidence, and proving as the area of least confidence.

Holistic: 2.9
having: 2.9
growing/modelling: 3.2
growing/optimising: 3.0
proving: 2.7

Teachers’ comments add contextual colour to these somewhat bald numbers.

Easy things about the assessment process:
• Good *** photographs - time story (in most cases)
• 'wow' factor
• Modelling - if all pics were taken effectively and in sufficient numbers
• Identifying the levels was straightforward (ie 1-4), but the sub-divisions were more difficult to decide upon

Difficult things about the assessment process:
• Ensuring that all forms of evidence are fully recognised and valued
• In my gang, the sub-divisions differed between the three members almost always and were resolved by discussion / persuasion. If marking independently there may be larger discrepancies in final marks
• Rewarding several pieces of evidence which were combined; poor English or poor sketching; (ie) no commentary from teacher or pupil

At the end of this one-day marker training and evaluation exercise, we felt justified in taking the view that markers could be trained to undertake this assessment process. The exercise highlighted those areas that need particular attention and exemplification, and equally it identified where we had been very successful with the materials as supplied on the day. On balance we were satisfied that, even in this short training exercise, we had achieved an acceptable level of marker reliability.

11. Findings in the data

In total we have approx 390 booklet responses from students, the vast majority being in year 10, the principal target for this research. The spread of holistic marks across this whole sample is shown below. The centre of gravity of the sample (38%) is between 4 and 5. At the limits, 25% of the sample scored within our lowest band of marks (1, 2, 3) and 8% achieved our top band of marks (10, 11, 12).

[Chart: distribution of year 10 holistic scores across the 12-point scale, with a polynomial trend-line]

This distribution is reflected in the polynomial trend-line, which shows the general tendency of the performance levels to be ‘bottom-heavy’, with progressively smaller percentages of the sample able to achieve the upper levels. This, we believe, reflects the current general performance of students in design & technology. Design innovation has not received the attention that it deserves, and this is one of the reasons why this project was established. Equally, however, we believe that if teachers’ awareness can be raised in this area, and if appropriate approaches are developed in design & technology curricula, then we would see a shift in this trend in the performance data, with the ‘bulge’ of performance moving up to show a more ‘normal’ distribution.
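The band percentages quoted above can be derived directly from the list of holistic marks. A sketch of that calculation, using an invented mark list rather than the project data:

```python
# Percentage of a holistic-mark sample (1-12 scale) falling within a
# band, eg the lowest band (1-3) or top band (10-12). Marks invented.
def band_share(marks, lo, hi):
    return round(100 * sum(lo <= m <= hi for m in marks) / len(marks))

marks = [1, 2, 4, 4, 5, 5, 6, 7, 8, 11]
print(band_share(marks, 1, 3))    # 20 (lowest band)
print(band_share(marks, 4, 5))    # 40 (centre of gravity)
print(band_share(marks, 10, 12))  # 10 (top band)
```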


However, these bald numbers disguise as much as they reveal, and in the following pages we present a series of analyses that illuminate what we believe may be interesting features in the data.

11.1 Context & task effects

Performance assessment in design & technology is typically open to criticism through what the literature terms ‘context effect’. This effect tends to reduce the likelihood that performance on one task will predict performance on a different task in a different context. This is clearly a matter of some importance for an assessment that is based on one test activity, and is the reason why, in our research design, we had as many schools as possible undertaking two very different activities. Does student performance on these two tasks remain constant? Does performance vary enormously across tasks? In short, are our activities measuring students’ generalised capability in design innovation, or are they measuring something very much more limited: students’ capability in design innovation in task X or task Y? To answer this question we undertook an analysis of students’ holistic performance on their two test activities, with the following results. On a 12-point holistic scale, across the whole sample, the mean difference between test 1 and test 2 was +/- 1.6. On average, a student scoring 9 on test 1 would score between 7.4 and 10.6 on test 2, and a student scoring 3 on test 1 would score between 1.4 and 4.6 on test 2. This does indicate a level of consistent discrimination in the range of performance of individual students.


Another way of describing these data is to note the variance between performance on test one and test two – reporting the percentage of cases where marks differ by 0 (ie an identical score on both tests) or by 1, 2, 3 or more marks. The percentages are as follows:

In 23% of cases the marks are identical
In a further 30% of cases there is a difference of 1 mark
In a further 26% of cases there is a difference of 2 marks
In a further 9% of cases there is a difference of 3 marks
In a further 5% of cases there is a difference of 4 marks
In a further 6% of cases there is a difference of 5 marks

In 80% of cases, the scores on the two tests varied by 2 marks or less on a 12-point scale. In only 11% of cases did marks vary by 4 marks or more.

The data tell us more than this, however, for the consistency from test 1 to test 2 varies from school to school. The most consistent schools had a mean distribution of +/- 1 across the pupil sample in the school, whilst the least consistent had a mean distribution of +/- 2.1. It would seem that some schools are more effective than others at developing a transferable culture of design innovation such that students can repeatedly demonstrate this capability. Nonetheless, the general consistency of the data from test 1 to test 2 does suggest that in the vast majority of cases, design innovation is a sufficiently generalised quality of capability that it can consistently be diagnosed through our assessment activities.
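The test-to-test comparison reduces to tallying absolute mark differences across paired scripts. A sketch of that tally (the mark pairs are invented, not the sample's):

```python
# Tally absolute differences between test 1 and test 2 holistic marks
# across paired scripts (12-point scale). The mark pairs are invented.
from collections import Counter

pairs = [(9, 8), (3, 3), (6, 4), (5, 7), (10, 10), (2, 5)]
abs_diffs = [abs(a - b) for a, b in pairs]

distribution = Counter(abs_diffs)           # {0: 2, 1: 1, 2: 2, 3: 1}
mean_abs = sum(abs_diffs) / len(abs_diffs)  # mean absolute difference
within_2 = sum(d <= 2 for d in abs_diffs) / len(abs_diffs)
```

`distribution` gives the percentage-style breakdown reported above once normalised, and `within_2` the proportion of cases varying by 2 marks or less.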

11.2 Performance against predicted GCSE grades

Schools were asked to supply information concerning the predicted performance of students in their up-coming GCSE in design & technology. Across the sample, these predictions ranged from A*-D. These scores were then set against the performance scores in our activities to see whether the performances were predicted. Rather than examining the whole sample, we believed that it would be important to examine the extremes. If on our test activities students had achieved a very high score (10, 11, or 12) would this performance have been expected from the predicted grade? Equally if students were predicted to do very well or very poorly in their design & technology GCSE, how did they perform in our test activities?


In the sample for which we had all the appropriate data, there were four very high scores: three 10s and one 11. Those four students were predicted to achieve the following:
• 10 predicted A
• 10 predicted A
• 11 predicted C
• 10 predicted C

So two of the four most innovative students in our sample were predicted grade C, and none of them was predicted an A*. How, then, did the students perform who were predicted to get A*? Seven students were predicted an A* in their design & technology GCSE, and their performance in our test activities was as follows: 6, 7, 7, 7, 7, 6, 6. In short, they all performed at the middle of the scale, neither particularly innovative nor non-innovative in their designing. Finally we examined the group of students who were predicted to perform most poorly in their design & technology GCSE. Five students were predicted a D, and they performed as follows: 4, 2, 3, 2, 1. In short, their poor predicted performance at GCSE is reflected in equally poor performance in our test activities.

These are of course all small samples, and it would be dangerous to generalise too far. But they do at least indicate that high GCSE predicted scores are not based on (ie do not predict) high levels of design innovation. The most innovative students are not predicted the best marks, and those predicted the best marks instead show performance in design innovation that could best be described as ‘adequate’. Predicted poor performance at GCSE is, however, a better predictor of low-level, inadequate design innovation. These findings suggest that the fears of the DfES Strategy Group, whose concerns initiated this research, are well founded.

11.3 Performance against “general ability”

The concept of ‘general ability’ is a tortured one in the literature on assessment. Some argue that it does not exist, and others that it is unavoidably present, in the sense that the ‘brightness’ of a student (however that is defined) will have an effect across a wide range of capabilities. In order to investigate this factor, we used KS3 SAT scores for English, Maths and Science. These were combined into a single score and used as our ‘general ability’ measure. This was then divided into three broad ‘ability’ bands: high, mid and low. Each band contained approximately 45 students, and for each we calculated the average holistic score from our test activities and noted the range of scores. They are as follows:


High ability: mean holistic = 6.5, range = 3-12
Mid ability: mean holistic = 5.3, range = 2-11
Low ability: mean holistic = 4.5, range = 1-10

Another way of looking at the range data is to see how many students in the sub-groups achieve the lowest scores (1, 2, 3) and the highest scores (10, 11, 12):

High ability: 4 lowest and 7 highest
Mid ability: 5 lowest and 2 highest
Low ability: 9 lowest and 1 highest

Two things are evident in these data. First, ‘general ability’ is not a determinant of design innovation, since the lowest ability band has a student performing at the highest level, and the highest ability band has several students performing at the lowest level. Second, there is nonetheless a general trend such that performance follows the ‘general ability’ spectrum: on average, performance is higher in the high ability band and lower in the low ability band.
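The banding step described above can be sketched as follows; the combined-score values, the student records and the equal-size band rule here are all illustrative assumptions, not the project's data:

```python
# Combine KS3 SAT scores into a single 'general ability' score, split
# the cohort into three bands (high/mid/low), and average holistic
# marks per band. All student records here are invented.
def bands(students, n_bands=3):
    ranked = sorted(students, key=lambda s: s["sat_total"], reverse=True)
    size = -(-len(ranked) // n_bands)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

students = [
    {"sat_total": 21, "holistic": 8}, {"sat_total": 18, "holistic": 5},
    {"sat_total": 15, "holistic": 6}, {"sat_total": 12, "holistic": 3},
    {"sat_total": 10, "holistic": 4}, {"sat_total": 9, "holistic": 2},
]

for label, grp in zip(("high", "mid", "low"), bands(students)):
    print(label, sum(s["holistic"] for s in grp) / len(grp))
# high 6.5 / mid 4.5 / low 3.0
```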

11.4 Performance against “general ability” and gender

A somewhat different perspective is thrown on these data by examining the gender composition of the groups. The mid and low ability groups are almost evenly distributed by gender:

Mid ability = 23 girls, 21 boys
Low ability = 22 girls, 18 boys

But the high ability group is very disproportionately female:

High ability = 36 girls, 13 boys

This suggests that our sample is heavily skewed by the over-representation of high ability girls, and this makes direct gender comparisons somewhat suspect. However, if we examine gender performance levels within these ability groups, it is clear that the girls outperform the boys in all areas of the assessment:
• high ability girls outperform high ability boys by approx 2 holistic marks (7 v 5)
• mid ability girls outperform mid ability boys by approx 1.5 holistic marks (6 v 4.5)
• low ability girls outperform low ability boys by approx 1 holistic mark (5 v 4)


[Charts: girls’ and boys’ mean performance (holistic, having, modelling, optimising, proving) by ability band (high, mid, low)]
These strongly gendered performance levels become somewhat less skewed in the data from year 12/13. This is from only one school group of 19 students (7 girls, 12 boys) and is therefore only indicative of a possible realignment in gender performance levels (i) as students mature and (ii) as students opt in to design & technology at AS/A level.

[Chart: year 12/13 mean performance by gender (holistic, having, modelling, optimising, proving)]

Here the holistic difference is reduced to 1.1 (girls 8.0: boys 6.9), and in ‘growing/modelling’ in particular their performance is almost equivalent (girls 7.3: boys 7.1). It is interesting to examine these ability/gender effects across the various tasks in the test activities that were developed for the Awarding Bodies pilot. The reader will recall that these tasks span right across the spectrum of design & technology:
• textiles
• graphics
• systems & control
• product design
• food

Light Fantastic is used here as the standardising task, as it was taken by far more students than any of the other tasks. The other AB tasks were each completed by only one school, and where the gender groups were unbalanced, as they frequently were, these data may be based on one or two students only.

[Charts: boys’ and girls’ mean performance (holistic, having, modelling, optimising, proving) by task – Light Fantastic, Reconstruct, Gimme, Fruitese, Body storage, Bend-it, Beach]
Nonetheless the differences are interesting. Generally the girls outperform the boys, and significantly so in the Light Fantastic activity, for which the data is much more secure. The one boy in the product design ‘body storage’ project significantly under-performed the mean score for the girls; in the systems & control task ‘bend-it-like-Beckham’ the girls again significantly outperformed the boys, as they did in the product design task ‘a day on the beach’. But in two of the tasks the data moves the other way. In the textiles task ‘reconstruct’, the boys outperform the girls – particularly in ‘having’ ideas and ‘growing/modelling’ ideas. Equally, in ‘Gimme the money’, a graphics/product design task, the boys outperform the girls, particularly in ‘having’, ‘growing/modelling’ and optimising ideas. It would seem that – within these very small samples – there is evidence that the traditional gender strengths and weaknesses are being dismantled: girls out-performing boys in systems & control and some product design tasks, while boys outperform girls in textiles and graphics/product tasks. The poorest performance by both girls and boys is in the food task, but whether this is a feature of the task or of students’ performance is difficult to say.

11.5 Summary data from the group-work pilot

The first matter of interest in respect of group-work was how well the learners had performed in comparison to those undertaking the activity on an individual basis. While it was not possible to make a direct comparison, we compared the group results with the closest cohort in respect of the overarching activity – that is, those who had undertaken Light Fantastic. As the group-work was undertaken in a girls’ school, the comparison was only with girls’ responses. The numbers involved were quite different (51 individual responses and 17 in group-work) and so the results are only indicative. Nonetheless it is interesting to note that in all respects other than ‘optimising’ the group-work responses were marginally higher, and in the case of having ideas, considerably higher.

[Chart: average performance, girls individual (n=51) v group (n=17), Light Fantastic Y10 phase 3 – holistic, having, growing/modelling, growing/optimising, proving]


Looking specifically within the group-work sample, we explored the data to see whether there was a link between performance and general ability, predicted GCSE D&T grades, or the ‘creativity ranking’ of the girls provided by the teacher. The table below shows a summary of the background data and the performance data, with the list ordered from the group with the highest holistic mark to the one with the lowest. What is clear is that general ability does not explain the performance levels. It should be noted that, in putting the groups together, the teacher was asked to ensure they were mixed ability – and the evenness of the CAT scores shows that this was done well. GCSE predicted grades are somewhat better at predicting group performance – but are well out for group 3.

Group 2: general ability (mean CAT score) 112, creativity ranking (mean) 5, predicted GCSE (mean point score) 3.3, holistic 9, ‘having’ 11, ‘growing – modelling’ 8, ‘growing – optimizing’ 7, ‘proving’ 8
Group 4: general ability 111, creativity ranking 6.7, predicted GCSE 2.7, holistic 7, ‘having’ 6, ‘growing – modelling’ 7, ‘growing – optimizing’ 6, ‘proving’ 9
Group 1: general ability 117, creativity ranking 10, predicted GCSE 2.3, holistic 5, ‘having’ 8, ‘growing – modelling’ 5, ‘growing – optimizing’ 4, ‘proving’ 4
Group 5: general ability 112, creativity ranking 12, predicted GCSE 2.3, holistic 5, ‘having’ 7, ‘growing – modelling’ 6, ‘growing – optimizing’ 5, ‘proving’ 4
Group 3: general ability 114, creativity ranking 13, predicted GCSE 3, holistic 4, ‘having’ 5, ‘growing – modelling’ 4, ‘growing – optimizing’ 3, ‘proving’ 3
Group 6: general ability 105, creativity ranking 9, predicted GCSE 2.5, holistic 2, ‘having’ 3, ‘growing – modelling’ 2, ‘growing – optimizing’ 2, ‘proving’ 1

By far the most startling (and encouraging) data is the almost direct link between the creativity rankings and group performance. The teacher was asked to provide a ranking order for the students, based on his perception of their creativity. The groups therefore have a mean creativity ranking, and in all cases but one (group 6) this ranking reflects their group holistic scores. Where the ability is mixed, but the group includes more than one individual with a high creativity ranking (eg group 2 contained those ranked 1st, 4th and 10th), this has had a very positive effect on performance. Where an individual with a high ranking has been mixed with those with low rankings, they appear to have been pulled down. This is indicated in group 5, where the rankings are 2nd, 16th and 18th. It is also worth commenting on group 6 – a group where one learner was away, which left two girls who didn’t know each other well and who didn’t seem to be able to interact effectively.

11.6 Individual performance profiles

This section of the findings examines the performance of students not from a summary perspective (using averages, graphs and trends) but by looking into individual performances. It is at this deeper, personal level that patterns of working become clearer and enable us to say more about the nature of design innovation.

Integrated performance

At the outset it is important to note that performance across our assessment framework is distinctly not perceived to be a sequential linear process. Students do not start with ‘having’ ideas, then move to ‘growing’ them, and finally end up ‘proving’ them. Rather, they have ideas all the way through, develop them continually, and seek to prove them all the time. The following examples demonstrate two extreme cases of this integrated performance.

• having ideas towards the end of the activity (in fast forward box 22)
While “having ideas” may be seen as an appropriate focus at the start of an activity, many students continue to come up with new ideas throughout the whole process, even as they ‘fast forward’ their proposals in the final stages of the activity. In the example below, the idea for a retaining lid is not apparent as a feature in any of this student’s previous modelling, photos or sketches.

• proving ideas at the start of the activity (in early ideas boxes 1-3)
Similarly, “proving ideas” is something we might expect to look for at the end of a task. While less obvious, there are students who have sufficiently well developed ideas at the start of the activity to begin to consider strengths and weaknesses when we ask them for their early ideas (box 2).


Students also frequently return to their earliest ideas to clarify strengths and weaknesses (box 1).

These examples serve to illustrate the integrated nature of D&T performance. Having ideas, Growing ideas and Proving ideas are valuable performance indicators, but it is important to remember that they do not necessarily manifest themselves in students’ work in a linear pattern.

Exemplifying high level performance (level 4) The following script exemplifies high, level 4 performance (scoring 10, 11 & 12 on the 12 point scale) in the Light Fantastic activity, where students were asked to design packaging for a light bulb that minimises waste by transforming into a lighting feature, rather than being thrown away when the light bulb is taken out and used. We have split the evidence from the workbook into 4 periods of activity: starting points, developing ideas, modelling ideas, and reflection. While these correspond to having, growing, optimising and proving ideas, they are not an exclusive match, and it is possible to observe students having ideas, for example, in all phases of the activity.
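The marks reported for these scripts follow a regular arithmetic on the 12-point scale: each level spans three points, read as low, mid and high (so 10, 11 and 12 are low, mid and high level 4; 4, 5 and 6 are low, mid and high level 2). A minimal sketch of that mapping, inferred from the marks quoted in this section (the function name is ours, not part of the project's instruments):

```python
import math

def level_and_band(score):
    """Map a mark on the 12-point scale to its level (1-4) and band (low/mid/high)."""
    if not 1 <= score <= 12:
        raise ValueError("score must be between 1 and 12")
    level = math.ceil(score / 3)              # three points per level
    band = ("low", "mid", "high")[(score - 1) % 3]
    return level, band

# e.g. level_and_band(12) -> (4, "high"); level_and_band(5) -> (2, "mid")
```

This is only a reading aid for the moderated marks that follow; the project's own assessors worked holistically rather than mechanically.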


Level 4 starting point This response builds from a slow, unimpressive start as the student gathers ideas from her team mates...

Level 4 developing ideas Taking ideas from both team-mates, this student synthesises and reflects, using a confident range of notes and informal sketching to develop her own personal response to the task. At this stage she is playing with a range of hazy ideas, referring to “dangly pieces” and asking general questions about “how to hold it up?”.

As her understanding and vision develop, her later sketches become more formal and specific including nets, dimensions and construction notes.


Level 4 modelling ideas It is clear from this photo story-line that modelling began after ideas had been explored graphically (although this is not always the case). This student develops her sketched ideas initially in rapid modelling materials, before moving on to more representative materials in the second session.

Level 4 reflecting on ideas This student has a sophisticated grip on her developing ideas, she reflects on her progress independently in notes around her sketches and responds to her team-mates’ reflections throughout the activity.


Level 4 Overall After a slow start, this response grows significantly as a collection of ideas is synthesised into an exciting solution which is considered from a number of different perspectives.

moderated marks:
• holistic 12 (high 4)
• having 11 (mid 4)
• growing 12 (high 4)
• synthesising 12 (high 4)
• proving 11 (mid 4)

A copy of this script is available in appendices 17 and 18 on the enclosed CD-ROM


Exemplifying performance at level 2 By contrast with the script discussed above, the following script exemplifies level 2 performance in the Light Fantastic activity (scoring 4, 5 & 6 on the 12 point scale).

Level 2 starting point This response also builds from a slow start. There are fewer ideas to play with, and these are expressed as non-specific, generalised requirements, using notes with hardly any images.

Level 2 developing ideas This student also takes ideas from both of her team-mates and synthesises these into her own idea for a decorated box. The proposals lack detail and (more significantly) reflection, even when we demand it in “red and green pen” as part of the activity structure. This student does not reflect on her ideas and this clearly inhibits the development of the work.

Level 2 Modelling ideas This student has explored a range of materials but with little evidence of purpose. The modelling indicates a fascination with the materials themselves.

Level 2 Reflecting on ideas There is little evidence that this student has a real grip on the consequences of her modelling. She does pick up on comments from her team-mates but is unable to communicate how she would develop her ideas to meet the original challenge she identified.

Level 2 overall After a slow start there is some growth, stimulated by the modelling resources; however, there is limited grip on the possibilities and consequences of the proposals.

moderated marks:
• holistic 4 (low 2)
• having 4 (low 2)
• growing 5 (mid 2)
• synthesising 4 (low 2)
• proving 5 (mid 2)

A copy of this script is available in appendices 19 and 20 on the enclosed CD-ROM

Exemplifying individual qualities within the work


• having ideas This piece of work demonstrates a very sparky quality that provides all kinds of interesting starting points – and which grows somewhat – but which is never successfully optimised and developed into a convincing prototype. The dominant idea is clearly the ‘leaf and stalk’, which is supported by a further technical idea for joining them with a coil in the wire stalk. Another idea exists in the various shapes and colours of the ‘leaves’ and in a rudimentary support system (a block of foam). Many ideas, but never fully developed and resolved.

• growing ideas: modelling This piece of work was developed almost exclusively through modelling in paper and card. Within minutes of the start of the activity – and with only the crudest drawn image in the booklet – this student (male) went to a sewing machine and stitched together two pieces of paper with a zig-zag stitch. Thereafter the prototype grew and grew through a series of ideas that emerged as he tried to make the ‘flower’ fold out (to be a lighting feature) and fold in (to be a protective case for the bulb).

• growing ideas: synthesising/optimising This piece of work reached almost its fully developed form very early on in the activity. It is a clever design that is essentially a rectangular box that – viewed from the front and the side – has an hour-glass effect. Having achieved this in paper, the student relentlessly pursued the idea to make it work properly in her eyes. She altered the proportions of the curvature, she printed the net from a CAD package, and then printed it again onto a card sheet that was preprinted with a coloured pattern. She made absolutely sure that it would work. And it did.

• proving ideas This piece of work demonstrates several qualities, the most obvious being the evolution of her modelling of the flower that emerges out of the cylindrical container. Its strength in ‘growing’, however, is very largely dependent upon her critical self-analysis as she goes along. At every stage she pauses to review what has happened and how she might move on. The emerging prototype is a triumph for her modelling powers – but just as importantly for her ability to reflect on what she has achieved and make good decisions about what to do next.


12. Issues and recommendations In earlier sections of this report we have described the project, outlined our approach to assessing design innovation, discussed the pilot activities in schools, explored the resulting performance data and presented some snapshots of characteristic performance. In this section we propose to stand outside the strict confines of our methodology and our data, to explore some of the issues that we believe have been raised by this work, and which have significance for design & technology curriculum and assessment. We have developed these issues into specific recommendations.

12.1 Autonomy and constraint

There is a real counter-intuitive conundrum within this project. Students are effectively frogmarched (by the teacher script) through a series of steps that are tightly timed and within which they have to put their thoughts in delineated sections of a pre-printed worksheet. At first glance it might be thought to be a bit like painting-by-numbers. And yet the comments of both teachers and students all talk of the freedom that students had to develop their own ideas. Why is it that such a tightly structured and choreographed experience can feel like freedom? We have described in section 5 the conceptual framework within which we believe design & technology operates. The choreographing of the activity can be seen as the spine of the framework, and it is really important to recognise that students’ ideas and expressions are not constrained within this spine. Rather they iterate through it and are enhanced and supported by it. It is a bit like jazz with a rigid 12 or 16 bar rhythm. Within that tight structure, the most outlandish improvisation can be liberated. So too with our activity booklets and the teacher script. By taking away from students the need to think about how they will organise and present their work, they are empowered to concentrate on the ideas that drive their designing. Recommendation 12.1 Particularly at KS3, it is essential that teachers develop the ability to structure activities tightly whilst at the same time leaving wide open the opportunities for students to develop their own ideas and their own prototypes. We recommend that teachers be supported to develop this understanding into classroom practice. See also recommendation 12.11 ‘dissemination’.

12.2

The power of modelling.

Many (probably a majority) of the students in our sample have chosen to do most of their development work through progressively more sophisticated models – supported by (not led by) sketching. This is NOT the normal approach in schools, which assumes that drawing must precede modelling. However, our approach chimes very closely with practice in the commercial world of design innovation. Myerson (2001) talks to IDEO designers about the centrality of modelling: we build lots and lots of imperfect prototypes not because we think we’ve got the right answer, but to get responses from buyers and users. Then we can fix their complaints. We’re into multiple realisations of what the future can be. ‘Faking the future’ describes the rough and ready IDEO formula of building lots of crude prototypes... Kelley describes this as ‘fast fearless prototyping’ (Myerson 2001)

In our activities we have liberated students to create ‘multiple realisations of what the future can be’ through ‘fast, fearless, prototyping’. The current fixation of examinations in design & technology is to reward the absolute opposite of this; ie slow, painful, beautifully rendered nonsense (with pretty borders).

Recommendation 12.2 We strongly recommend that teachers be encouraged to see ‘modelling’ as one kind of ‘making’; as prototyping; as provisional; as a means of learners thinking through their ideas. This kind of making needs to be understood as very different from the kinds of quality manufacturing that are appropriate once the ideas have all been fully resolved. In schools, and specifically within GCSE specifications and syllabuses, this distinction needs to be made very explicit, so that teachers encourage their students to take advantage of modelling throughout their designing activities; from the very haziest first ideas to the most refined and detailed prototype. See also recommendations 12.8 ‘GCSE assessment criteria’, 12.10 ‘New GCSE examinations’, and 12.11 ‘Dissemination’.

12.3

The power of photographs

A key element in the effectiveness of the booklet-based activity that we created for this project was the story-line of digital images that revealed students’ unfolding design and development process. Whilst these images were initially designed as a recording device for assessment purposes, it soon became clear that they also had a motivating, developmental power. Three issues arise from this: • Regular snap-shots of work in progress operate as a very powerful motivator for students, who can see the progress that they are making. • If students have a solid photographic record of a model they are working on, they are more willing subsequently to take it apart and develop it into a further stage. They have not ‘lost’ the original, but rather have gained by taking another step forward.


• Teachers and examiners can subsequently more easily reconstruct the process that students have been through – based on the ‘story-line’ of images. Recommendation 12.3 For all these reasons we recommend that digital story-lines should be encouraged as normal project practice for design & technology in schools. See also recommendation 12.11 ‘Dissemination’.

12.4

The power of ideas

This is the centre-ground on which we have built this whole project. It is abundantly clear, in a significant number of pieces of work, that students’ responses are full of interesting, unusual, risky ideas that they are struggling to grow into new products and systems. Moreover, students frequently report that the freedom to develop their own ideas is a pleasant change from normal practice. Back in 2002, in the DATA conference keynote, Kimbell analysed five student qualities that might shape the future of performance assessment in design & technology and pointed to the central importance of two of them. I have two nominations for you… without which design can be effective but lifeless; can be adequate but unexciting. I suggest to you that the central qualities that we should be assessing if we value design innovation are the ability to be playful in restructuring the world and the ability to spark ideas. Where do you find these qualities in the current assessment regime? Nowhere. (Kimbell 2002)

It is one thing to recognise these qualities when they arise, but quite another to deliberately promote them, and we may now have taken a step forward with this. We have developed a structure for activities that deliberately promotes learners’ ideation and makes it accessible for assessment. Recommendation 12.4 We recommend that teachers be encouraged to see the development of students’ ideation as a key part of their responsibility – just as important as the development of tool and material skills & knowledge. See also recommendations 12.8 ‘GCSE assessment criteria’ and 12.11 ‘Dissemination’.

12.5

Concerning pace & time

In our assessment activities, teachers noted that students made far more progress far more quickly than would normally be the case in design & technology lessons. Several commented that students achieved more in 6 hours than in their 20 hour coursework projects. This is attributable in particular to two features: first, the blocks of time that we gave students to work in (6 hrs in 2 consecutive mornings), and second, the pace at which we drove forward the activity – with the booklet and teacher script. This latter feature depended on a series of sub-tasks, tied coherently into the development process, each with particular space in the booklet and time in the script, and all designed to force the iteration between ideas and action that we have outlined in section 5.

Recommendation 12.5 We recommend that the approach we have developed through structured, paced, sub-tasks within activities be developed and encouraged in design & technology in schools. This approach brings benefits in any setting, but the evidence suggests that it is even more beneficial when working in bigger blocks of time. Therefore, we also recommend that, wherever possible, design & technology activities be conducted in half-day blocks of time. See also recommendation 12.11 ‘Dissemination’
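The kind of pacing described here can be expressed as a simple running schedule of timed sub-tasks. The sketch below is purely illustrative: the sub-task names and durations are our own assumptions, not the actual timings of the project's teacher script.

```python
def schedule(subtasks, start_minute=0):
    """Turn (name, duration-in-minutes) pairs into (start, end, name) time slots."""
    slots, t = [], start_minute
    for name, minutes in subtasks:
        slots.append((t, t + minutes, name))
        t += minutes
    return slots

# Hypothetical sub-tasks only; a real teacher script would pace similar steps.
morning = [("early ideas", 15), ("team swap & review", 10),
           ("develop chosen idea", 45), ("photograph the model", 5)]
for start, end, name in schedule(morning):
    print(f"{start:3d}-{end:3d} min  {name}")
```

Representing the script this way makes the trade-off explicit: every minute given to one sub-task is taken from another, which is what forces the rapid iteration between ideas and action.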

12.6

Concerning group-work

In the development of our assessment activities in phase 3 of this project, we have exploited the supportive power of group-work in two ways: • as a device to support individual performance (idea generation & review by peers) • to explore how group performance (peers collaborating) differs from individual performance In all our work we have found group-work to be immensely helpful. It was enjoyed and appreciated by students and its virtues were recognised by teachers. Recommendation 12.6 We recommend that teachers be encouraged to see the development of group-work as a natural and helpful part of students’ designing. This has repercussions for assessment particularly at GCSE. See also recommendation 12.8 ‘GCSE assessment criteria’, 12.10 ‘New GCSE examinations’ and 12.11 ‘Dissemination’.

12.7

The development of new activities

The early work in phase 3 of this project centred on developing and piloting the ‘light fantastic’ task and structure. Thereafter, in the ‘Awarding Bodies pilot’, teachers developed a series of tasks – from across the spectrum of design & technology – and we embedded them within the common structure that we had developed for ‘light fantastic’. This structure makes use of the issues we have raised above in recommendations 12.1-12.5. Teachers need to see these two features as separate: (i) the task, and (ii) the activity structure, and the message of this project is that the resulting design & technology activities can operate effectively across key stages from year 5/6 to year 12/13.

Recommendation 12.7 Given an understanding of how these two features (tasks and structures) operate for learners, teachers across key stages 2-5 can build a rich variety of tasks and activities across the design & technology spectrum. We recommend that teachers be encouraged to create their own tasks and build their own activities, making use of this rich toolbox of activity components. See also recommendation 12.11 ‘Dissemination’

12.8

Concerning the NC Attainment Target and GCSE assessment criteria

The assessment framework that we evolved for this project – first from an examination of students’ work and thereafter in association with the emerging assessment activities – centres on ideas (having them, growing them and proving them). We have demonstrated that these are key features of design innovation, and that they can reliably be assessed. We also recognise the importance of these issues for formative assessment in the classroom, and for locally designed assessment schemes. Recommendation 12.8 We recommend that the NC attainment target and GCSE assessment criteria be revised to take account of the criteria that we have developed within our assessment process. We also recommend that teachers be encouraged to see these qualities as central to formative assessment of design activities in schools. See also recommendations 12.10 ‘New GCSE examinations’ and 12.11 ‘Dissemination’.

12.9

Concerning CAD and new technologies

Within the activities we developed for this project, students were able to use any of the modelling and development tools available in their normal classroom/studio/workshop setting. On several occasions, this included the use of computer systems and specifically CAD facilities. In these settings we have several examples of students making use of the technology. This happened naturally in the group-work pilot as well as in individually focused activities. However it proved difficult to drive the use of technology through the activity, and where we sought to do this, students largely ignored our efforts. Where students see the use of technology as appropriate and helpful, they will use it. Where they don’t, they won’t.


Recommendation 12.9 The use of new technologies could be an important facilitator in design portfolios, but only when students see the value of the technology for exploring and developing their ideas. We recommend that developments in digital portfolios take this message to heart. The technology does not lead the process, it facilitates it, and only when students see it as helpful. See also recommendation 12.11 ‘Dissemination’.



12.10

Concerning new GCSE examinations

This has been a research project to explore possibilities for the redesign of assessment in design & technology so as to reward design innovators. At one level, therefore, at the conclusion of this research it will be for Awarding Bodies to take forward this work into a development phase with GCSE examinations. Recommendation 12.10 We recommend that QCA/DfES encourage Awarding Bodies to take forward this work. Furthermore, we recommend that, if and when one or more of the Awarding Bodies takes up the ideas for national assessment purposes, QCA/DfES support the venture with funding that would enable the research team to monitor, support and evaluate the initiative.

12.11

Concerning the wider dissemination of this work

This project started out with a focus on assessment. However, in the process of developing assessment activities, we have evolved a series of techniques for fostering design innovation based on our developing understanding of the problem as we tackled it. It is important that these techniques and understandings be fully disseminated to design & technology teachers, advisers, teacher trainers, examiners, and policy makers. Recommendation 12.11 We recommend that DfES / QCA support the publication of print-based and web-based dissemination materials that can bring the messages of this research and development project to the design & technology community. As a first step in this process, we strongly recommend the publication of a customised version of this report and its distribution to schools, LEAs, Teacher Training Institutions, DATA, NAAIDT, Ofsted, the Specialist Schools Trust, GCSE Awarding Bodies and any other agencies with responsibility for curriculum and assessment in design & technology.


12.12 A long-term vision for design & technology

Most of the issues we have raised in this section, and the recommendations that flow from them, are based on our perception of the current condition of design & technology in schools and on our understanding of its assessment practices. These are, however, subject to change and development. In the long term, we envisage a time when teachers will not need to be supported to build structured, pacey activities, because such activities will be the stuff of common good practice. Equally, we envisage a time when students will not need to be supported in modelling their ideas, or in taking photos of their emerging work – because they will see for themselves that it is helpful and therefore needs to be done. In short, we envisage a time when procedural autonomy for teachers and students will reach the point at which these recommendations become redundant. But we are not there yet.

Our work on this project began with the recognition that the ‘importance of d&t’ statement in NC2000 substantially raises our expectations for learners in design & technology. It claims the centrality of procedural autonomy for learners – and the need for them to focus this autonomy towards creative intervention to improve the quality of life. And in the final key phrase, it cites the role of design & technology as the vehicle through which learners can become innovators. We entirely endorse this statement, and our work has been (and remains) to find ways of supporting teachers in the achievement of this great vision.

The importance of design and technology Design and technology prepares pupils to participate in tomorrow's rapidly changing technologies. They learn to think and intervene creatively to improve quality of life. The subject calls for pupils to become autonomous and creative problem solvers, as individuals and members of a team. They must look for needs, wants and opportunities and respond to them by developing a range of ideas and making products and systems. They combine practical skills with an understanding of aesthetics, social and environmental issues, function and industrial practices. As they do so, they reflect on and evaluate present and past design and technology, its uses and effects. Through design and technology, all pupils can become discriminating and informed users of products, and become innovators. (DfEE 1999 p15)


Bibliography

Department for Education and Employment (DfEE) 1999 Design and technology: The National Curriculum for England. Department for Education & Employment (DfEE) and the Qualifications and Curriculum Authority (QCA), London.

Harris M & Wilson V 2002 Designs on the Curriculum? A review of the literature on the impact of design & technology in schools in England. Research Report 401, Department for Education & Skills (DfES), London.

Hicks G 1983 “Another step forward for design & technology” in APU Newsletter No 4, Autumn. Department of Education & Science, London.

Kimbell R & Stables K 2000 South Africa: Technology Education Project: an Evaluation Report to Department for International Development (DFID), pp 26. Goldsmiths College, London, UK. ISBN 0 902 986 66 X.

Kimbell R 2002 “Assessing design innovation: the famous five and the terrible two”. The 1st John Eggleston Memorial Lecture at the DATA International Research Conference, 2-5th July 2002. Proceedings, Ed. Norman E, pp 19-28. DATA, Wellesbourne.

Kimbell R, Stables K, Wheeler T, Wozniak A & Kelly V 1991 The Assessment of Performance in Design & Technology: the final report of the APU design & technology project. School Examinations & Assessment Council (SEAC) and the Central Office of Information (COI) D/010/B/91 for HMSO, London.

Myerson J 2001 IDEO: Masters of Innovation. Laurence King, London.

Stables K, Bain J, Rogers M & Kimbell R 2003 Researching Assessment Approaches: Final Report, pp 48. Goldsmiths University of London.
