Teachers in Scotland are presently witnessing the phased publication of a series of draft assessment benchmarks. These are linked to the call, in last year’s OECD review of Scottish education, to simplify the narrative of the curriculum. The first benchmarks, for literacy and numeracy, were published in August 2016 (https://tinyurl.com/zjtogmb). They have been followed by draft benchmarks in a range of subjects, such as Science (see https://tinyurl.com/zaj4s93), Expressive Arts and Social Studies, with more to follow for each curriculum area by the end of the year. Each set of benchmarks comprises around 50 pages of text, with groups of Experiences and Outcomes (Es & Os) listed alongside sets of benchmarks related to the applicable outcomes. If the early drafts are any indication, we can expect to see around 4000 benchmarks covering the whole curriculum. The example below, from the draft Third Level Social Studies benchmarks, provides a flavour of this new approach.

People, past events and societies

Experiences and Outcomes:

I can use my knowledge of a historical period to interpret the evidence and present an informed view. SOC 3-01a

I can make links between my current and previous studies, and show my understanding of how people and events have contributed to the development of the Scottish nation. SOC 3-02a

I can explain why a group of people from beyond Scotland settled here in the past and discuss the impact they have had on the life and culture of Scotland. SOC 3-03a

I can explain the similarities and differences between the lifestyles, values and attitudes of people in the past by comparing Scotland with a society in Europe or elsewhere. SOC 3-04a

I can describe the factors contributing to a major social, political or economic change in the past and can assess the impact on people’s lives. SOC 3-05a

I can discuss the motives of those involved in a significant turning point in the past and assess the consequences it had then and since. SOC 3-06a

Through researching, I can identify possible causes of a past conflict and report on the impact it has had on the lives of people at that time. SOC 3-06b

Benchmarks:

·       Evaluates a range of primary and secondary sources of evidence to present valid conclusions about a historical period.

·       Draws on previous work to provide a detailed explanation of how people and events have contributed to the development of the Scottish nation.

·       Provides reasons why a group of people from beyond Scotland settled here.

·       Describes the impacts immigrants have had on the life and culture of Scotland.

·       Provides an account, with some explanation, of how and why society has developed in different ways, comparing Scotland with another society in Europe or elsewhere.

·       Describes factors which contributed to a major social, political or economic change in the past.

·       Draws reasoned conclusions about the impact on people’s lives of a major social, political or economic change in the past.

·       Draws reasoned conclusions about the motives of those involved in a significant turning point or event in history.

·       Provides a justified view of the impact of this significant historical event.

·       Identifies possible causes of a past conflict, using research methods.

·       Presents, in any appropriate form, the impact of the conflict on the lives of people at that time.

It is immediately clear that the benchmarks add a new layer to the existing specification of Curriculum for Excellence. This is difficult to reconcile with the stated desire to simplify the narrative of the curriculum. It is thus hardly surprising that the benchmarks have been met with considerable scepticism by teachers on social media. This week also saw the publication of a considered yet highly critical response from a group of STEM Learned Societies (see https://t.co/B3mDnglL9B). So what exactly is happening here, when a call to simplify the curriculum is met with a further spiral of specification (Wolf, 1995)? And what is wrong with this approach in any case?

Attempts to specify curriculum and assessment in detailed ways are not new. It is around a hundred years since Bobbitt published his taxonomy of educational objectives. More recently in the United Kingdom, we have seen the emergence of the competency-based model that has underpinned vocational qualifications such as those produced by NCVQ in England and Scotvec in Scotland. Related to this has been the genesis and subsequent development of national curricula: from 1988, England’s National Curriculum set out attainment targets, expressed as lists of detailed outcomes, arrayed into hierarchical levels. Subsequent curriculum developments worldwide (for example, Scotland’s 5-14 curriculum, New Zealand’s 1993 Curriculum Framework, CfE in Scotland) have exhibited similar thinking.

This approach has an instinctive appeal to those concerned with measuring attainment and tracking a school’s effectiveness, as it provides a superficially neat way of categorising and measuring learning. It also attracted some support (especially in its early days) from some educationists. For example, Nash talked of enabling learners “to have a sense of direction through planned and well-defined learning targets which are in turn based on defined criteria in terms of knowledge, skills and understanding” (Nash, in Burke, 1995, p.162). Gilbert Jessup, the architect of the GNVQ competency-based model, stated that “statements of competence set clear goals for education and training programmes” and that “explicit standards of performance … bring a rigour to assessment which has seldom been present in workplace assessment in the past” (Jessup, 1991, p.39). Jessup saw little difference between the competency-based model for vocational education and the emerging models of outcomes-based national curriculum, predicting that the National Curriculum would “result in more individual and small group project work, and less class teaching” (Jessup, 1991, p.78). Subsequent experience has, of course, demonstrated quite the opposite.

So what are the problems associated with this approach? I list some of the well-documented issues here, focusing on a generic critique of the model rather than on a detailed analysis of specific benchmarks or subjects. Further posts on this blog will look at some of the subject areas, such as social studies and science, offering a more finely focused critique of particular sets of outcomes.

  • The approach is complex, jargon-ridden and lends itself to bureaucracy. This criticism was levelled at the NCVQ model by Hyland (1994, p.13), who described it as “labyrinthine” in complexity and entirely “esoteric”, and who argued that, as a consequence, it had proven unwieldy and difficult to access for both students and assessors. Such issues have certainly been evident in Scotland, in the creeping development of time-consuming, bureaucratic processes and the subsequent exhortations for schools to reduce bureaucracy.
  • Specification of learning in this way has been shown to narrow learning, reducing the focus of lessons to what has to be assessed. Critics of this approach such as Hyland (1994) and Kelly (2004) were quick to point out that far from encouraging learner autonomy and flexibility in learning, the model inhibits it because of the prescriptive nature of many of the outcomes. Recent research in New Zealand (Ormond, 2016) indicates that specification of assessment standards has seriously narrowed the scope of the curriculum. Ormond provides an example of the Vietnam War, where some teachers omitted to teach the role of the USA in the war, while still meeting the requirements of the assessment standard.
  • Where assessment standards/benchmarks are too specific, they reduce teacher autonomy by filling lessons with assessment tasks and associated teaching to the test. Teaching thus becomes assessment-driven, which in turn places great pressure on both teachers and students to perform, that is, to meet the demands of the test. The effects of such performativity are well documented in the research: they include stress for students and teachers, pressure to fabricate school image and manipulate statistics, and even downright cheating (see Priestley, Biesta & Robinson, 2015, chapter 5).
  • Focusing on ticking off benchmarks encourages an instrumental approach to curriculum development. Our research in Scotland documented instances of strategic compliance – box-ticking – with the Es & Os (e.g. see Priestley & Minty, 2013). There is a tendency to visit an area of learning only until enough evidence has been gathered that it has been covered, and then to move on to another required area. This is not an educational approach designed to build deep understanding or to construct cross-curricular links; instead, it atomises learning.
  • There are philosophical arguments about whether it is ethical, in a modern democracy, to define in detail what young people should become. The assessment benchmarks can be framed as narrow behaviourist statements of performance, which mould people to behave in particular ways; as such, they can be seen as being about training (at best) or indoctrination (at worst), rather than education (see Kelly, 2004).

The above objections to tightly specified assessment criteria suggest that it is extremely unwise for Scotland to take Curriculum for Excellence in this direction, which moves the practical curriculum yet further from the aspirational goals set out in the early documentation. Such specification clearly has political appeal, offering the (arguably spurious) opportunity to track achievement; moreover, it can be framed as a response to those teachers who have long decried the Es & Os for being too vague. Nevertheless, this spiral of specification is dangerous, and Scotland would do well to learn from the prior history of curriculum reform. A salutary example lies in the GNVQ model: initially this was specified as Units, Elements and Performance criteria; later specifications added range statements and evidence indicators, as curriculum designers engaged in a Holy Grail quest for total clarity. The result was anything but clear; instead, teachers experienced all of the issues outlined above, as courses became increasingly complex, bureaucratic and difficult to teach.

References

Burke, J. (ed.) (1995). Outcomes, Learning and the Curriculum: Implications for NVQs, GNVQs and other qualifications. London: Falmer Press.

Hyland, T. (1994). Competence, Education and NVQs: Dissenting Perspectives. London: Cassell.

Kelly, A.V. (2004). The Curriculum: theory and practice, 5th edition. London: Sage.

Jessup, G. (1991). Outcomes: NVQs and the Emerging Model of Education and Training. London: Falmer Press.

Ormond, B.M. (2016, in press). Curriculum decisions – the challenges of teacher autonomy over knowledge selection for history. Journal of Curriculum Studies. (http://dx.doi.org/10.1080/00220272.2016.1149225).

Priestley, M., Biesta, G. & Robinson, S. (2015). Teacher Agency: An Ecological Approach. London: Bloomsbury Academic.

Priestley, M. & Minty, S. (2013). Curriculum for Excellence: ‘A brilliant idea, but…’. Scottish Educational Review, 45 (1), 39-52.

Wolf, A. (1995). Competence-Based Assessment. Buckingham: Open University Press.

7 thoughts on “The endless quest for the Holy Grail of educational specification: Scotland’s new assessment benchmarks”

  1. And SLT want us to update spreadsheets every month with SALs as well! We had to spend a whole INSET day showing non-English/Maths staff how to use SALs in Literacy/Numeracy … so we now have to plan using Es & Os, get success criteria with SALs and Benchmarks … staff turnover going through the roof! 8-(

  2. Your blog is thought-provoking and challenging. I’m philosophically against the atomisation, as you put it, of learning, and I’m sad to see that what we appear to be doing is some kind of simulacrum of knowledge acquisition. I agree that the initial drivers of confident individuals and successful learners seem to have been lost.
    Purely pragmatically, though, we as SLT are under pressure to be able to speak to our learners’ current achievements and to produce evidence to support this. CTs are expected to work with pupils to use these benchmarks or the progression pathways to create individualised next steps for each pupil. What should we be using if we don’t use the benchmarks? Or, how should we be collecting data on our children’s achievements? It’s such a quagmire.

    1. We can assess formatively without benchmarks or performance criteria. And summative data on pupils/cohorts and evaluative data on school performance can be collected by periodic testing that is designed for the task. The benchmarks sound good in theory, but it is their effects that are the problem.

  3. This excellent analysis reminds me again that, while there must be guidance at national level regarding the curriculum, attempts nationally to prescribe ever more tightly what gets taught are indeed counterproductive, because they lead to the loss of an overall vision (and CfE did have a reasonable vision at the beginning) and to the reduction of the teacher’s role to that of a kind of technician.
