Some commentary on History, progression and the Social Studies benchmarks

The following is a guest post from Dr Joe Smith, University of Stirling, on the assessment of History using the new benchmarks.

Curriculum for Excellence is presently being equipped with ‘benchmarks’ to clarify what a child at each ‘level’ might be expected to know and do.  In terms of history, this means that Education Scotland have addressed the messy question of progression in historical understanding.  This blog post explores some of the problems with the proposals. (NB. Some of the arguments here are similar to those I raised in The Curriculum Journal, 2016.)

….

There exist several models for progression in history education, but all are based on the uncontroversial premise that ‘getting better’ means something other than ‘knowing more’.  There is, after all, a literally infinite amount that one might know about the past, and so to say ‘I know more history than you’ is really to say ‘I know a tiny bit more about a tiny sliver of the past than you do’.  There is no totality of historical knowledge against which our knowledge can be cross-checked, and this awkward fact means that we cannot assess children on how much they know, because we, all of us, know very little.

Instead, progression in history refers not to a more complete understanding of the past, but to a more sophisticated one.  For example, Shemilt (1983) produced one of the first workable models of how progression in history might be conceived. He argued that children moved through levels of understanding through which the complexity of the past was slowly realised:

  • Level One – There is a story of the past which can be learned. Things happened because they happened.
  • Level Two – There is a single simple story of the past which is easy to learn. Evidence which doesn’t fit this story is wrong. The past could never have been other than how it is.
  • Level Three – There is an appreciation that accounts of the past necessarily differ.
  • Level Four – There is a recognition that there is no single story about the past and the nature of the narrative depends on the questions one asks.

Shemilt’s is by no means a perfect model, but it demonstrates how measuring progression requires an assessment of how children are thinking about the past. If teachers must measure children, then they must assess the sophistication of the child’s thinking as revealed through their written and spoken responses. They cannot, and must not, simply put a tick or cross against what the child knows (or is perceived not to know).

The approach to progression seen in the new CfE benchmarks contains none of this sophistication.  The benchmarks are problematic in at least four ways.

  • They are, in many cases, so vague that they are devoid of meaning
  • They assess ‘knowledge’ that is pointless
  • They are startlingly undemanding
  • They ask children to behave in a way which is fundamentally unhistorical

In the following paragraphs I deal with each issue in turn, drawing upon examples from the benchmarks.

Vague and Meaningless

The second level benchmark says that a child,

‘Researches a historical event using both primary and secondary sources of evidence.’

This is nothing more or less than a description of the discipline of history. It is possible to achieve this ‘benchmark’ at every level from lower primary to a university dissertation. What, specifically, does a child have to do to show that they have met this?  How independent do they have to be? What are they meant to produce at the end of it? How can you even assess the process of researching something? Historians research so that they can produce an account of the past – we assess the quality of an account informed by research, not the act of research itself.

Pointless Knowledge

Recognises the difference between primary and secondary sources of evidence.

The concept of ‘primary’ or ‘secondary’ is not inherent in a source: whether a piece of evidence can properly be called ‘primary’ or ‘secondary’ depends entirely on the questions that are being asked of it.  A school textbook is a secondary source for the events it describes, but a primary source to an historian of education. In any case, it is not even a useful distinction to be able to make. Being able to label ‘X’ as a primary source is of no practical use to children. In fact, it encourages formulaic thinking along the lines of ‘X is a good source because it is primary’, which is actively unhelpful to the child’s development of historical understanding.

Undemanding

At the second level, a child of eleven and a half…

Describes and discusses at least three similarities and differences between their own life and life in a past society.

I am pretty certain that my five-year-old child could meet this benchmark, yet this is the benchmark for a child on the verge of secondary school.  Apart from anything else, this kind of ‘spot the difference’ activity is not particularly historical because it is devoid of explanation: ‘I put clothes in the washing machine, they use a mangle’ is not really historical thinking.  Whereas,

‘Before the electrification of homes people needed to do their washing on a mangle. This took a lot of time. Since electrification, we have washing machines which means that we spend less time washing clothes’

contains elements of causation and change.

Unhistorical

Contributes two or more points to the discussion (in any form) as to why people and events from the past were important.

The qualities of importance or unimportance (or, more properly, significance) are not inherent in a particular historical topic, but are imputed by the person talking about the topic. The phrasing of this benchmark presupposes that Event or Person X was ‘important’ and expects children to say why that is the case.  The idea that we get to decide for children which actors in the past were and were not significant is deeply troubling. Instead, the expectation should be that children can disagree about why an event is significant (or even whether it was significant at all), rather than assuming it is important and asking them to tell us why. A better benchmark would be something like ‘Can choose a historical event and say why they think it should be remembered’.

So where has the problem come from?

The fundamental unsuitability of these benchmarks stems from the fact that they are based on the Experiences and Outcomes document, which was itself wholly unfit for purpose.  I have written about its shortcomings at length (Smith, 2016), but the basic problem is that the Es and Os were never intended to be used as the basis of a progression model. They address two incompatible functions – prescribing content and defining procedural knowledge.  When these two functions are translated into ‘benchmarks’, curious things start to happen. For example, the ‘E and O’ SOC 2-04a reads, ‘I can compare and contrast a society in the past with my own and contribute to a discussion of the similarities and differences’.  This is a worthwhile activity for children to undertake – it asks children to appreciate change and continuity over historical time. However, when it is uncritically turned into a ‘benchmark’, the activity loses its value, as eleven-year-olds are asked to ‘Describe and discuss at least three similarities and differences between their own life and life in a past society.’

Another example is SOC 2-06a which reads, ‘I can discuss why people and events from a particular time in the past were important.’ In this phrasing, discussion is the thing that the child does – the child is writing or talking discursively. However, in the benchmarks the active verb ‘to discuss’ morphs into the passive noun ‘a discussion’ to which the child now contributes.  In the process, any semblance of historical thinking is lost.

So what is to be done?

The great pity here is that there already exists a document which might be used as the basis for more effective benchmarking – the 2015 Significant Aspects of Learning (SALs) document.  Up until very recently, advice from Education Scotland was for teachers to defer to the SALs in planning the learning of their classes, not to the Es and Os.  The 2015 SALs are predicated on an assumption that historical understanding is conceptual understanding; it is not a matter of knowing more.

  • understanding the place, history, heritage and culture of Scotland and appreciating local and national heritage within the world
  • developing an understanding of the world by learning about how people live today and in the past
  • becoming aware of change, cause and effect, sequence and chronology
  • locating, exploring and linking periods, people, events and features in time and place

By using the SALs, it is much easier to conceive of progression. For example, we have a well-established model for assessing progression in children’s understanding of causation, which derives ultimately from Shemilt:

  1. It was always going to happen, hence we cannot explain causation
  2. It was caused by one thing
  3. It was caused by many things
  4. It was caused by many things and we can categorise and prioritise these things
  5. It was caused by many factors which were interlinked and interdependent.

Everyone involved in education in Scotland wants its system to remain among the best in the world, but this means having a clear idea of what ‘the best’ looks like.  Progression models are always simplifications of cognitive development, but they are underpinned by a disciplinary understanding of what ‘more sophisticated thinking’ looks like. If we reduce progression to a series of performative tasks, then teachers will inevitably teach to these tasks.  Instead we should be empowering teachers by demonstrating our aspirations for our children and trusting teachers’ professionalism to deliver on them.

The endless quest for the Holy Grail of educational specification: Scotland’s new assessment benchmarks

Teachers in Scotland are presently witnessing the phased publication of a series of draft assessment benchmarks. These are linked to the call in last year’s OECD review of Scottish education to simplify the narrative of the curriculum. The first benchmarks, for literacy and numeracy, were published in August 2016 (https://tinyurl.com/zjtogmb). They have subsequently been followed by draft benchmarks in a range of subjects such as Science (see https://tinyurl.com/zaj4s93), Expressive Arts and Social Studies, with more to follow for each curriculum area by the end of the year. Each set of benchmarks comprises around 50 pages of text, with groups of Experiences and Outcomes (Es & Os) listed alongside sets of benchmarks related to the applicable outcomes. If early drafts are any indication, we can expect to see around 4000 benchmarks covering the whole curriculum. The example below, from the draft Third Level Social Studies benchmarks, provides a flavour of this new approach.

People, past events and societies

Experiences and Outcomes:

  • I can use my knowledge of a historical period to interpret the evidence and present an informed view. (SOC 3-01a)
  • I can make links between my current and previous studies, and show my understanding of how people and events have contributed to the development of the Scottish nation. (SOC 3-02a)
  • I can explain why a group of people from beyond Scotland settled here in the past and discuss the impact they have had on the life and culture of Scotland. (SOC 3-03a)
  • I can explain the similarities and differences between the lifestyles, values and attitudes of people in the past by comparing Scotland with a society in Europe or elsewhere. (SOC 3-04a)
  • I can describe the factors contributing to a major social, political or economic change in the past and can assess the impact on people’s lives. (SOC 3-05a)
  • I can discuss the motives of those involved in a significant turning point in the past and assess the consequences it had then and since. (SOC 3-06a)
  • Through researching, I can identify possible causes of a past conflict and report on the impact it has had on the lives of people at that time. (SOC 3-06b)

Benchmarks:

  • Evaluates a range of primary and secondary sources of evidence, to present valid conclusions about a historical period.
  • Draws on previous work to provide a detail explanation of how people and events have contributed to the development of the Scottish nation.
  • Provides reasons why a group of people from beyond Scotland settled here.
  • Describes the impacts immigrants have had on life and culture of Scotland.
  • Provides an account with some explanation as to how and why society has developed in different ways comparing Scotland to another society in Europe or elsewhere.
  • Describes factors which contributed to a major social, economic or social change in the past.
  • Draws reasoned conclusions about the impact on people’s lives of a major social economic or social change in the past.
  • Draws reasoned conclusions about the motives of those involved in a significant turning point or event in history.
  • Provides a justifies view of the impact of this significant historical event.
  • Identifies possible causes of past conflict, using research methods.
  • Presents in any appropriate form on the impact of people at that time.

It is immediately clear that the benchmarks add a new layer to the existing specification of Curriculum for Excellence. This is difficult to reconcile with the stated desire to simplify the narrative of the curriculum. It is thus hardly surprising that the benchmarks have been met with considerable scepticism by teachers on social media, and this week saw the publication of a thoughtful and considered, yet highly critical response from a group of STEM Learned Societies (see https://t.co/B3mDnglL9B). So what exactly is happening here, when a call to simplify the curriculum is met with a further spiral of specification (Wolf, 1995)? And what is wrong with this approach in any case?

Attempts to specify curriculum and assessment in detailed ways are not new. It is around a hundred years since Bobbitt published his taxonomy of educational objectives. More recently in the United Kingdom, we have seen the emergence of the competency-based model that has underpinned vocational qualifications such as those produced by NCVQ in England and Scotvec in Scotland. Related to this has been the genesis and subsequent development of national curricula: from 1988, England’s National Curriculum set out attainment targets, expressed as lists of detailed outcomes, arrayed into hierarchical levels. Subsequent worldwide curriculum developments (for example, Scotland’s 5-14 curriculum, New Zealand’s 1993 Curriculum Framework, CfE in Scotland) have exhibited similar thinking. This approach has an instinctive appeal to those concerned with measuring attainment and tracking a school’s effectiveness. It provides a superficially neat way of categorising and measuring learning. The approach also attracted some support (especially in its early days) from some educationists. For example, Nash has talked of enabling learners “to have a sense of direction through planned and well-defined learning targets which are in turn based on defined criteria in terms of knowledge, skills and understanding” (Nash, in Burke, 1995, p.162). Gilbert Jessup, the architect of the GNVQ competency-based model, stated that “statements of competence set clear goals for education and training programmes” and that “explicit standards of performance … bring a rigour to assessment which has seldom been present in workplace assessment in the past” (Jessup, 1991, p.39). Jessup saw little difference between the competency-based model for vocational education and the emerging models of outcomes-based national curriculum, predicting that the National Curriculum would “result in more individual and small group project work, and less class teaching” (Jessup, 1991, p.78). Subsequent experience has of course demonstrated quite the opposite effect.

So what are the problems associated with this approach? I list some of the well-documented issues here, focusing on a generic critique of the model rather than on a detailed analysis of specific benchmarks/subjects. Further posts on this blog will look at some of the subject areas, such as social studies and science, offering a more finely focused critique of particular sets of outcomes.

  • The approach is complex, jargon-ridden and lends itself to bureaucracy. This criticism was levelled at the NCVQ model by Hyland, who described it as “labyrinthine” in complexity and entirely “esoteric”, and, as a consequence, unwieldy and difficult to access for both students and assessors (Hyland, 1994, p.13). Such issues have certainly been evident in Scotland in the creeping development of time-consuming, bureaucratic processes, and the subsequent exhortations for schools to reduce bureaucracy.
  • Specification of learning in this way has been shown to narrow learning, reducing the focus of lessons to what has to be assessed. Critics of this approach such as Hyland (1994) and Kelly (2004) were quick to point out that far from encouraging learner autonomy and flexibility in learning, the model inhibits it because of the prescriptive nature of many of the outcomes. Recent research in New Zealand (Ormond, 2016) indicates that specification of assessment standards has seriously narrowed the scope of the curriculum. Ormond provides an example of the Vietnam War, where some teachers omitted to teach the role of the USA in the war, while still meeting the requirements of the assessment standard.
  • Where assessment standards/benchmarks are too specific, they reduce teacher autonomy by filling lessons with assessment tasks and associated teaching to the test. Teaching thus becomes assessment-driven. In turn, this places great pressure on both teachers and students to perform – to meet the demands of the test. Performativity has been well documented in the research. Its effects include stress on students and teachers, pressure to fabricate school image and manipulate statistics, and even downright cheating (see Priestley, Biesta & Robinson, 2015, chapter 5).
  • Focusing on ticking off benchmarks encourages an instrumental approach to curriculum development. Our research in Scotland documented instances of strategic compliance – box-ticking – with the Es & Os (e.g. see Priestley & Minty, 2013). There is a tendency to visit an area of learning only until enough evidence has been gathered that it has been covered, and then to move on to another required area. This is not an educational approach designed to build deep understanding or construct cross-curricular links. Instead it atomises learning.
  • There are philosophical arguments about whether it is ethical in a modern democracy to define in detail what young people should become. The assessment benchmarks can be framed as narrow behaviourist statements of performance, which mould people to behave in particular ways – as such, they can be seen as being more about training (at best) and indoctrination (at worst), rather than as educational (see Kelly, 2004).

The above objections to tightly specified assessment criteria suggest that it is extremely unwise for Scotland to take Curriculum for Excellence in this direction, which moves the practical curriculum yet further from the aspirational goals set out in early documentation. It is clear that such specification has political appeal, offering the (arguably spurious) opportunity to track achievement; moreover, it can be framed as a response to those teachers who have long decried the Es & Os for being too vague. Nevertheless, this spiral of specification is dangerous, and Scotland would do well to learn from the prior history of curriculum reform. A salutary example lies in the GNVQ model: initially this was specified as Units, Elements and Performance criteria; later specification added range statements and evidence indicators, as curriculum designers engaged in a Holy Grail quest to achieve total clarity. The result was anything but clear; instead teachers experienced all of the issues outlined above, as courses became increasingly complex, bureaucratic and difficult to teach.

References

Burke, J. (ed.) (1995). Outcomes, Learning and the Curriculum: Implications for NVQs, GNVQs and other qualifications. London: Falmer Press.

Hyland, T. (1994). Competence, Education and NVQs: Dissenting Perspectives. London: Cassell.

Kelly, A.V. (2004). The Curriculum: theory and practice, 5th edition. London: Sage.

Jessup, G. (1991). Outcomes: NVQs and the Emerging Model of Education and Training. London: Falmer Press.

Ormond, B.M. (2016, in press). Curriculum decisions – the challenges of teacher autonomy over knowledge selection for history. Journal of Curriculum Studies. (http://dx.doi.org/10.1080/00220272.2016.1149225).

Priestley, M., Biesta, G. & Robinson, S. (2015). Teacher Agency: An Ecological Approach. London: Bloomsbury Academic.

Priestley, M. & Minty, S. (2013). Curriculum for Excellence: ‘A brilliant idea, but. . .’. Scottish Educational Review, 45 (1), 39-52.

Wolf, A. (1995). Competence-Based Assessment. Buckingham: Open University Press.

Scotland’s review of educational governance: where should it take us?

The recent publication of the document Empowering teachers, parents and communities to achieve excellence and equity in education: A Governance Review (http://www.gov.scot/Publications/2016/09/1251) is potentially the most significant watershed moment in the history of Curriculum for Excellence (and we seem to have had a few of these recently!). Despite the usual self-congratulatory rhetoric [1], this publication represents a significant recognition that things need to change. For the first time, the government is seriously posing the question of what is required to successfully enact the curriculum, as opposed to simply shoehorning the new curriculum into existing structures. Radical change to Scotland’s educational governance could be on the cards. But what does it all mean? And where should we be heading if we are serious about supporting genuine change in schooling in Scotland?

While reading the document, I was struck by the following, paraphrased from an OECD publication Governing Education in a Complex World (https://www.oecd.org/edu/governing-education-in-a-complex-world-9789264255364-en.htm)

Successful systems, however, are those where governance and accountability are inclusive, adaptable and flexible. Roles and responsibilities across the system must be clear and aligned; teachers, practitioners, schools, early learning and childcare settings and system leaders should collaborate across effective networks to improve outcomes; parents and communities require to be engaged; and funding and decision making should be transparent. (p.4)

These seem to be admirable principles. I offer some thoughts here about issues that should be taken into account if we are indeed serious about updating the current governance system to meet the needs of a modern system of school education.

  • Form should follow function. It is good to see mention of new structures (for example regional support organisations), rather than an acceptance that existing structures (e.g. local authorities and Education Scotland) will simply take on new functions. We should be clear by now that prevailing cultural norms in organisations can preclude them from buying into radically new ways of thinking about education – and after all, CfE was supposed to be exactly that. It is therefore vital that the review should first and foremost consider what the function of the new strengthened ‘middle’ tier of the system (or meso-level – see https://tinyurl.com/h5kr5tk) should be. In my view, it should not be about producing reams of additional guidance, as has largely been the function of Education Scotland’s curriculum development endeavours. Nor should it be about mirroring the inspection process, as has largely been the case within local authority quality improvement procedures – we have an inspectorate to do that. Instead we need to develop a view of the middle as being about support – expert advice and hands-on leadership – for curriculum development. This in turn will allow us to develop the structures to achieve these goals. The review document talks about regional organisations and local clusters. These seem to be logical developments, but will not reach their potential if: 1) existing organisations (with their agendas and power structures) are left in place; and 2) there continues to be a lack of clarity about the proper function of meso-level structures. Thus, clarity of function must precede reform of governance.
  • Autonomy is not the same as agency. I partly welcome the calls to devolve decision-making to schools. Subsidiarity is a worthy goal; however, it comes with many dangers. We should avoid simplistic talk about empowering schools. Research (e.g. http://rms.stir.ac.uk/converis-stirling/publication/18567) suggests that simply granting autonomy to schools is problematic. Schools do not necessarily have the expertise to develop the curriculum. Autonomy can easily lead to the reproduction of habitual forms of practice – going with the flow, recycling old solutions, etc. – rather than genuine innovation. As we have argued in our recent book, teacher agency is important, and this requires a number of conditions:
    1. Skilled and knowledgeable teachers who have a wide repertoire of responses, upon which to draw, and who are able to work from foundational educational principles.
    2. Genuinely different future imaginaries about what is possible. We have found that the role of external agents in stimulating new thinking is vital here (e.g. see http://hdl.handle.net/1893/24179).
    3. Access to resources to support curriculum development. Time is a key issue here, but cognitive resources (e.g. from research) are crucial. Here, again, the role of external agents is important – especially to facilitate access to new ways of thinking and doing, but also to act as leaders of a curriculum development process.
    4. A comprehensive understanding of the system features that enable and constrain curriculum development.

The key point here is that schools and teachers may lack the capacity to work in this way. Local clusters of schools working together are helpful, but there still needs to be an infrastructure to support such working. Regional structures can provide this, for example by providing leaders for curriculum development processes. We need, therefore, as part of this process, to think not only about establishing the structures, but also about building system capacity – a cadre of expert teachers, for instance, who can work in their own schools and spend part of the week supporting colleagues in other schools. The large numbers of teachers currently undertaking funded Master’s programmes are an obvious pool for this.

  • Accountability should serve rather than drive curriculum development. All too often, we have seen curriculum development derailed by subservience to accountability processes. This can take the form of risk aversion (as a barrier to innovation), strategic compliance with new policy, or, worse still, game playing (see https://tinyurl.com/hprbyll). The governance review poses some questions about accountability. Scotland should heed the message currently being disseminated in Wales, by Graham Donaldson amongst others – ‘let’s get the curriculum right, then worry about accountability’. Nevertheless, there are some principles that can be considered now, as we develop new forms of governance. One is that, as stated above, we need to be absolutely clear about the function of the ‘middle’. If it is reduced to accountability, it will continue to shape school practices in unhelpful ways, as we have seen in recent years. We need to be clear about what attainment data can, and should not, be used for. And we need to think carefully about the ways in which accountability mechanisms impact on school practices. How are self-evaluation frameworks like HGIOS being used in schools: instrumentally (in a tick-box fashion) or developmentally? And should we be moving away from the notion of external inspection as a putatively ‘objective’ process, towards inspections that accept the contextual nature of schools, and which involve teachers much more as peer and self-assessors in the inspection process? Apart from anything else, this is excellent professional development for teachers.

Finally, John Swinney states in the foreword to the review document that “This governance review offers an opportunity to build on the best of Scottish education and to take part in a positive and open debate”. All teachers should be contributing to this debate.

Footnote

  1. For example, the unnecessary statement on page 5 that “Delivering Excellence and Equity in Scottish Education, builds on an impressive track record of improvements and reforms which have been driven forward across education and children’s services in recent years.”

A Statement for Practitioners: how useful is the new CfE guidance?

The publication this week of the much-awaited clarification of CfE raises more questions than answers for me (for the guidance documents, see https://education.gov.scot/improvement/Pages/CfE-delivery-plan.aspx). I felt a little dubious that the OECD’s call for a bold new approach and a simplified narrative was being addressed in such a short timescale, with a deadline of the start of the new school year. It seems to me that this is a complex and difficult undertaking that requires careful and critical reflection over a more sustained period of time. The guidance has now arrived; the question is – does it achieve its putative aims of clarifying and simplifying?

In some ways, it undoubtedly does. I am heartened to see a reinforcement of the message that bureaucracy should be reduced. This is now a consistent message from the government and its agencies, and it is one which local authorities and schools should heed. It is good to see a clear steer that assessment should not be driven by a process of ticking off Es & Os, but that instead these should inform planning; this is, after all, what they were originally designed for, only becoming identified as assessment standards in later documentation. Early guidance (for example the cover paper accompanying the draft Es & Os) demonstrated a sensitivity towards the dangers of assessment driving the curriculum, stating clearly that the outcomes ‘are not designed as assessment criteria in their own right’ (CfE overarching cover paper, 2007). Similarly, BTC3 stated:

The curriculum must be designed around the experiences and outcomes. They should be used to identify essential content, key skills and experiences. These should then be used to establish progression for learners by setting out the main elements which differentiate performance as learners progress within, and through, the levels. (BTC3 summary paper, 2008)

Of course, subsequent developments revealed a shift in emphasis. BTC5 (2010) not only characterised the Es & Os as standards for assessment, but went so far as to define a standard as ‘something against which we measure performance’. It is good to see a return here to the thinking that underpinned the early days of CfE.

Despite these encouraging messages, I am left disappointed by the guidance. First, there are inconsistencies in the document. For example, the first part talks about the primacy of the Es & Os and the benchmarks, and does not mention the purposes, principles and values that should underpin curriculum development; these are subsequently highlighted as key messages in the appendix, which offers a different focus. Significant Aspects of Learning are not mentioned in the first part of the document, but are then linked explicitly to the benchmarks in the appendix.

Second, the guidance seeks to simplify, but then adds a new layer of complexity – the benchmarks – which may drive assessment in the same tick-box fashion as did the Es & Os. This is a good example of the curricular phenomenon described as a spiral of specification by Alison Wolf (1995). It may be that a perceived additional clarity provided by the benchmarks will be welcomed by many teachers, who find the Es & Os vague and unhelpful. My view is that providing teachers with hundreds of detailed assessment criteria will simply continue to encourage bureaucratic box-ticking and convergent approaches to learning – and cause a concomitant increase in teacher workload. The following examples (Literacy: Listening and Talking) provide an illustration of the complexity and detail involved:

  • Contributes regularly in group discussions or when working collaboratively, offering relevant ideas, knowledge or opinions with supporting evidence.
  • Responds appropriately to the views of others developing or adapting own thinking.
  • Builds on the contributions of others, for example, by asking or answering questions, clarifying or summarising points, supporting or challenging opinions or ideas.
  • Applies verbal and non-verbal techniques appropriately to enhance communication, for example, eye contact, body language, pace, tone, emphasis and/or some rhetorical devices.
  • Uses appropriate register for purpose and audience.
  • Identifies features of spoken language and gives an appropriate explanation of the effect they have on the listener, for example, body language, gesture, pace, tone, emphasis and/or rhetorical device

These benchmarks form half a page out of the 48 pages of the Literacy benchmarks – we have similar statements for Numeracy and are promised a similar level of detail for all subject areas by Christmas. This does not look like a simplified narrative to me. The OECD exhorted Scotland to be bold in its CfE reforms. A bold approach would have been to abolish the Es & Os altogether. I believe these have greatly contributed to the bureaucracy afflicting schools, by encouraging audit approaches to curriculum development. While they remain (or are replaced by detailed benchmarks), Ministers’ calls for teachers to reduce bureaucracy will be ineffective; the main causes of bureaucracy are structural, and the Es and Os are a significant (although not the only) factor here. Other countries such as Ireland are moving away from this multi-level, over-complex approach to defining learning outcomes. And as the OECD review stated clearly, “How clearly aligned can be a curriculum that is both about four capacities, on the one hand, and about extensive Experiences and Outcomes, on the other?”. For an analysis which has informed the Irish approach, see http://dspace.stir.ac.uk/handle/1893/23225.

As illustrated by the example above about assessment standards, CfE has often seemed to be an ever-shifting kaleidoscope of terminology and concepts. The Statement for Practitioners appears to continue this tradition, adding yet more guidance while apparently stating in the appendix that much of the existing documentation of CfE is still current. This does not seem to me to be in keeping with the call for a new narrative. It may be that subsequent guidance will move us towards a new and simplified narrative; at present I do not see this happening. The existing and often contradictory documentation that has emerged over time needs to be replaced by a single set of detailed CfE guidance that offers a clear and consistent message about the curriculum and its development processes. It might have been better to resist the short-term calls for the brief clarification represented by this document, and to spend the time working on a more radical overhaul of the existing guidance. Of course this can still happen, but it requires a boldness that is not evident in this week’s publication of the Statement for Practitioners.

Reference

Wolf, A. (1995). Competence-Based Assessment. Buckingham: Open University Press.

Delivering excellence and equity – some commentary on Scotland’s national education plan

The post-election period in 2016 has heralded a flurry of activity from the re-elected SNP and the appointment of a new Cabinet Secretary for Education. The next year or two will see policy development that puts education at the forefront of the nation’s priorities, all underpinned by the stated goal of closing the ‘attainment gap’ between those who have traditionally achieved well in education, and those who have not. The publication of Delivering Excellence and Equity in Scottish Education: a Delivery Plan for Scotland, coming as it does on the heels of the OECD review of Curriculum for Excellence (CfE), marks an important turning point in Scottish education policy. There is much to be welcomed in this plan, but in my view there are also tensions and ambiguities in the document that will need acknowledging and addressing as the plan is implemented.

The positives

It is clear that the plan contains some positive and constructive ideas to improve Scottish education, including in places an acknowledgement that some of the developments associated with CfE have been unhelpful. Positive aspects include the following points.

  • A new emphasis on the importance of research in informing and underpinning the development of educational policy and practice. The forthcoming development of a national research strategy is well overdue, and it is to be hoped that this will provide a clear picture of how existing research can be utilised and applied, where the gaps lie, and how practitioners and policymakers might engage more constructively with research.
  • A restated commitment to root out overly bureaucratic practices around the new curriculum. There are strong statements here about the importance of not using the Experiences & Outcomes for assessment purposes ‘or in a tick-box way’ (p.8) – although, intriguingly, there is a cryptic statement that ‘this is the exclusive role of the Significant Aspects of Learning’ (p.7); care will be needed to ensure that these do not simply replace the Es & Os as the audit tool for curriculum development, as seems to be the case already in some schools I have visited.
  • A decluttering of the curriculum. This has become a real problem for many schools, despite the clear aspiration in the early days of CfE to declutter (a sense of déjà vu here!). I note that, while this is a worthy aspiration, it is unlikely to occur unless we develop systematic approaches for deriving content from curricular purposes (see https://mrpriestley.wordpress.com/2012/06/13/curriculum-for-excellence-and-the-question-of-knowledge/).
  • A rewriting of key policy guidance, and a recognition that there has been a proliferation of documentation associated with CfE. This new narrative will, I am sure, be welcomed by teachers, provided that it is clear and coherent.
  • The establishment of expert groups, including a panel of teachers to advise on workload issues. I am heartened here that the government is seeking to listen to those who bring specific expertise, whether of research or the day-to-day lived realities of working in schools. None of these perspectives provide a full picture on their own, but in combination they might allow for a better understanding of how policy is enacted, and the system dynamics that shape its enactment.
  • A review of governance. Presumably this has been announced with a view to creating the infrastructure for curriculum development – the strengthening of the middle that the OECD review called for. It has long been my view that the existing middle – the meso-level constituted by Education Scotland and the local authorities – has focused on the wrong sort of leadership for developing the curriculum. I hope that the review will identify that what is required is not the large-scale proliferation of guidance, nor the strengthening of micro-management of schools through accountability mechanisms – as has tended to be the case in recent years. Instead, I hope to see a recognition that the development of expertise – particularly the capacity to lead curriculum development – is badly needed. In a conversation with the celebrated American educationist Michael Apple this week, he described curriculum development as a lost art. He is right, and we need to rediscover it. The proposal in the plan to establish local networks of champions (presumably teachers with suitable experience and higher degrees in education) is a welcome part of this.

Tensions and ambiguities

Despite these positive messages, I also see some problems (and potentially some bear traps) in the plan. The document is an odd mixture of the vague and the highly specific; it fails to set out processes for some of its aspirational goals, but states, for example, that ‘in the first round of Read, Write, Count gift bags will be gifted to families of P2 and P3 children in November 2016’ (p.18). In particular, I would like to see more details on the following issues.

  • While the new focus on the importance of research is to be welcomed, I would like to have seen some detail about how new research might be generated, and how it might be used subsequently. To be fair, this may well be articulated more clearly in the forthcoming research strategy, but it would have been good to see some recognition of these issues at this stage. Related to this, the document does what many policy documents have done in recent years – it makes large claims without citing the supporting research evidence. It is important that future policy states clearly which research is being used to underpin developments (and also, in some cases, which research is not being used, where contradictory evidence exists). Without this reference to evidence, it can be difficult to separate out claims which are genuinely rooted in evidence from those which are more spurious – and this document contains a few of the latter in my view, including the bold (and unevidenced) claim that ‘Scotland’s children and young people are now much more confident, resilient and motivated to learn’ (p.7) as a result of CfE.
  • The document shows little understanding that curriculum is a multi-layered field with different practices in each layer (see https://mrpriestley.wordpress.com/2015/11/22/mind-the-gap-curriculum-development-through-critical-collaborative-professional-enquiry-part-one/). This is compounded by the continual use of the metaphor ‘delivery’ (see https://mrpriestley.wordpress.com/2013/04/15/milkmen-or-educators-cfe-and-the-language-of-delivery/). In my view an awareness of this sort of language is important, as language frames our understanding of issues, and subsequently shapes our responses. Education is not a product (I fail to see how we can deliver digital literacy [p.6]); it is instead a process, and we need to get into the habit of thinking about how we structure pedagogic relations to develop people’s capacities. As I have written elsewhere in my blog, curriculum making in schools should be about the development of practices that are fit-for-purpose to realise the aims of the curriculum; it should not be a process of product placement or box-ticking. This document seems to recognise this, but persists with language that runs the risk of undermining its aspirations. To counter this we need to move from an outcomes approach to a process approach for developing the curriculum (see https://mrpriestley.wordpress.com/2014/11/17/approaches-to-school-based-curriculum-development/).
  • More serious is a major tension between the stated desire to increase school/teacher autonomy and an apparent tightening of accountability procedures (inspection, use of attainment data, etc.). While I recognise the need to make intelligent use of data to inform decision-making, this document does not seem to recognise the well-documented dangers of performativity in the system. We know that accountability systems tend to produce perverse incentives, which can in turn lead to bureaucracy, intensification of workload, practices that can be difficult to justify in educational terms, and a disabling of teachers’ agency (for an analysis of these tensions, see http://hdl.handle.net/1893/20761). I hope that I am wrong, but my fear is that the apparently single-minded focus on attainment, which has emerged from the attainment challenge and the National Improvement Framework, will prove to be self-defeating, undermining some of the worthy aspirations in this document.

I conclude with reference to an issue which, while seemingly trivial, caused me some irritation when reading the plan. This is the repeated assertion that ‘the OECD has applauded the boldness of our approach’ (p.7). In fact, while the OECD mentioned a couple of times that Scotland has bold aspirations, its explicit emphasis was more on Scotland’s need to adopt bold[er] approaches. I may be splitting hairs here; however, my concern is that, historically, such self-congratulatory rhetoric has tended to stand in the way of reform by obscuring the need for change (many will remember the repeated assertion in the early years of CfE that it was just good practice, already in evidence in most schools).

Nevertheless, let us end on a positive note. This document has much to commend it; it contains many useful and constructive ideas for improving Scottish education; and provided we are cognisant of the pitfalls and tensions, it offers ample scope for addressing many of the concerns with current practices.

Note

The delivery plan can be found at http://www.gov.scot/Publications/2016/06/3853/downloads

Further reflections on the OECD review: strengthening ‘the middle’.

The OECD report Improving Schools in Scotland (see https://tinyurl.com/j3vce6g) may have heralded a ‘watershed moment’ (p.16) in the development of CfE, but all seems to have gone quiet on this front in the months since its publication. As I stated in my December post on building a new narrative for CfE:

The report, originally set up to provide an external evaluation of Curriculum for Excellence (CfE), offered a broad and mainly complimentary commentary on the health of Scotland’s school system. It also offered a critique, including a range of insightful and helpful recommendations for improving the curriculum. These covered issues such as the need to build capacity for practitioner engagement with curricular issues, advice about simplification of curriculum guidance, and the need to make better use of the expertise residing in Scotland’s research community. (https://tinyurl.com/jko29vk)

I believe that we need to follow the advice of the OECD team now, acting ‘boldly’ (p.10) to seize this ‘watershed moment’ (p.10). One area worthy of immediate attention is the recommendation to strengthen ‘the middle’ (p.98). But what is the middle? What is its function? And how should it be strengthened? The first question is easy – the OECD clearly described the middle as the local authority infrastructure that governs Scottish schools. I would go further and add Education Scotland to the current mix comprising the middle. A similar function is performed by the four regional consortia of schools in Wales. The key insight here is that these organisations have middle- or meso-level functions, being situated between the curriculum policymakers (the government) and the curriculum enactors (schools and teachers). This raises further questions for me about the functions of the middle.

In order to explore this question, it is useful to examine the idea that the curriculum operates differently at different levels or layers of the system, that these levels have different functions and, logically, that curriculum development practices should reflect this. It is worth bearing in mind here that the curriculum is mediated through the prior beliefs, values and priorities of actors at each level, and because each level will generate different beliefs, values and priorities, what is evidently good for actors at one level may not seem so for those at another. Thus, for example, we see tensions arising when accountability demands emanating from the meso-level of the system do not accord closely with the desires and values of teachers (see http://hdl.handle.net/1893/20761).

So what might be the function of each level?

  • At the macro-level of policy, the curriculum can be usefully conceptualised as a set of big ideas – a statement of intent – for framing professional practice. Early CfE policy took this view explicitly, ‘looking at the curriculum differently’ (see http://www.gov.scot/resource/doc/98764/0023924.pdf). In this view, the proper function of the macro level is to formulate frameworks that provide intellectual resources for the development of curriculum in schools. It is not to prescribe in detail what should be taught and how it should be taught.
  • At the meso-level of policy development, curricular practices should largely operate in terms of facilitation of and support for professional practice in schools. In my view, a particular problem with the development of CfE to date has been a conflation of these levels and their functions, particularly at the meso-level, where a great deal of activity has taken the form of the production of documentation that reinterprets macro-level policy, leading to the development of long chains of dissemination and increasing the risk of a ‘Chinese Whispers’ effect, where emerging practices lose connection with the principles guiding the high-level policy.
  • At a micro-level of the educational institution, the curriculum relates to educational practices that are developed to fit the big ideas of the curriculum. This requires both informed professional judgement and flexibility in interpretation to meet local needs. I have written regularly on this blog about the processes required for this type of engagement (e.g. https://tinyurl.com/jnkgyug).

For an excellent and highly useful overview of these ideas and their applicability for developing classroom practices, see the handbook Curriculum in Development, produced by the Netherlands Institute for Curriculum Development (http://www.slo.nl/downloads/2009/curriculum-in-development.pdf/). Please note that I have adapted their conception of the meso-level slightly to fit the Scottish context, where there is a clearly defined curriculum development layer that sits between government and schools.

There are a number of implications that emerge once we start looking at curriculum development in this way. Foremost amongst these is the need to strengthen the meso-level, as identified by the OECD. Part of this strengthening can be achieved by reconfiguring its function. Many teachers will fondly remember the days when the local authorities provided advisory services related to curriculum development, a function largely replaced by a quality assurance or inspectorial role. I believe that there now needs to be a reconceptualisation of the role of the middle, with a renewed emphasis on the provision of high-quality leadership for curriculum development in schools, and a move away from the accountability and curriculum reinterpretation roles more commonly found in today’s meso-level organisations. This might take the form, for example, of hands-on leadership of curriculum development processes such as collaborative professional enquiry (see http://hdl.handle.net/1893/22518). This in turn comes with the implication that we urgently need to build the capacity within the system for such leadership; surely a worthy role for the Scottish College for Educational Leadership. And it may also come with the further implication that the current ‘middle’ is simply not fit for purpose, and needs to be replaced by a different model, perhaps along the lines of the Welsh regional consortia.

Should we welcome Scotland’s National Improvement Framework (part two)?

January 2016 sees the publication of the revised National Improvement Framework (NIF) document (http://www.gov.scot/Publications/2016/01/8314). The redrafted document has emerged from extensive discussions between the Scottish Government/Education Scotland and a range of stakeholders, including a SurveyMonkey questionnaire and a series of stakeholder events. As such, it is good to see some of the criticisms of the previous draft being addressed. In particular, I welcome the removal of the rather contrived reflective ‘I’ statements, and the inclusion of a table on page 22, which goes some way towards saying how data will be used at each level of the education system. Despite this, the document remains, in my view, problematic for a number of reasons. As explained in my previous post on this issue (https://mrpriestley.wordpress.com/2015/11/08/should-we-welcome-scotlands-national-improvement-framework/), there is no intrinsic problem with data per se, or with the act of collecting system data. Nor would I disagree with the rather aspirational aims of the Framework: to enable our children to have the best start in life; to tackle the significant inequalities in Scottish society; and to improve the life chances of children, young people and families at risk. These are worthy aims for any society. The issue lies rather in the question of whether what is being proposed here will be effective in addressing the aims of the framework. Added to this is a further question around the potential of an assessment-driven framework to undermine the goals and practices of Curriculum for Excellence – and this has to be a real worry in the light of the experience of the Queensland New Basics curriculum, which has to all intents and purposes been destroyed by the introduction of the NAPLAN high-stakes assessment system (e.g. see Lingard & McGregor, 2013; Thompson & Harbaugh, 2013). So where does the NIF stand on these issues? I have three fundamental concerns about what is being proposed in the latest draft.

First, the Framework continues to make some quite bold and unsupported statements about the successful implementation of CfE, despite contradictory evidence from research and from the OECD review. Thus, for example, the Framework states that “Curriculum for Excellence is now embedded in Scottish schools” and makes the claim that there has been a “deeper shift in understanding amongst Scottish educators of how children learn”. Such claims justify continued inaction in the face of the recommendations of the OECD report for better implementation; if we are getting it right, then what is the incentive to change? They are also difficult to justify in the light of a lack of substantive research evidence relating to the curriculum. I note here that the most significant large-scale research project on CfE suggested a more partial implementation, particularly in secondary schools (see Priestley & Minty, 2013). The more recent OECD review appears to concur with this standpoint, as illustrated by the following extended extract:

The main study of implementation of CfE is a 2011 investigation, using an online survey and in-depth interviews in one of the 32 local authorities (by Mark Priestley and colleagues). Their data suggest “a widespread engagement by teachers with CfE in respect of pedagogy, assessment and provision [curricular models]” and CfE’s ideas are welcomed by the profession. However, there are “highly variable rates of progress, both between and within schools”. Primary schools had made more progress with whole-school, topic-based and project-based interdisciplinary approaches than secondary schools where the curriculum remained too often conventionally subject-based. “A range of new practices was being embraced by teachers, and their adoption was being enhanced by a move towards collaborative and collegial working”, but the researchers were not convinced that teachers were engaging deeply with CfE’s underpinning ideas in their work together. (p.120)

The OECD reviewers add that “in general, system-level respondents were in agreement with Priestley’s and Minty’s research finding (2013) that implementation progress in secondary schools was less impressive – a problem” (p.121). I note here that the NIF draws frequently on the OECD review to justify its strategy – but does not refer to this more negative evidence in relation to CfE.

A second issue relates to the purpose of the NIF, as a strategic approach to raising attainment and closing the gap. I mentioned at the beginning of this post that there is nothing intrinsically wrong with the collection of data, and I welcomed the inclusion in this version of a table which states how data will be used at each level. However, one is left with an overall sense of something lacking here. I remain unconvinced that this document has a coherent strategy for translating the collection of system data into action to address these issues. The Framework appears to rest on the somewhat naïve assumption that the collection of data is some sort of magic bullet that will solve the attainment and equity problems in Scotland at a stroke. For example, it states:

We are clear that the new Framework is for the benefit of Scotland’s children. It will provide a level of robust, consistent and transparent data across Scotland that we have never had before. (p.5)

To be fair, it is a high-level document, and the detail will need to be worked out at each successive layer. Moreover, there is recognition that such an approach comes with problems, with the authors of the NIF stating:

We do not underestimate the challenge that presents. It requires very careful balancing of the need for appropriate data and evaluation at every level in the education system, whilst maintaining the principle that informaton [sic] is used effectively to drive improvement in the learning experiences of individual children and young people. (p.5)

However, there is no detail in the Framework about what these problems might be or how they might be mitigated. The Framework does not consider (or even acknowledge) the problem of performativity, let alone what might be done to prevent it. Performativity is a well-documented and heavily researched phenomenon with enormous potential to shape the everyday lives of teachers in schools, colleges and universities. It is a pressure to perform in particular ways, most notably in terms defined and measured by external actors. All teachers in Scotland are already familiar with pressures to perform to the test in the face of a need to please the local authority or the inspector. Under pressure to ensure that pupils achieve the kind of grades that give their school the desired position in league tables, the needs of the school potentially trump the educational needs of children and young people. As Michael Apple has argued, school systems have been subject in recent years to a “subtle shift in emphasis … from student needs to student performance, and from what the school does for the student to what the student does for the school” (Apple, 2001, p.413). It is easy to see how the introduction of NIF assessment systems might increase performativity in the Scottish system, potentially derailing CfE and narrowing the curriculum. It is worth repeating here Knoke and Kuklinski’s (1982, cited by Emirbayer & Goodwin, 1994, p.1418) suggestion that “the structure of relations among actors and the location of individual actors in the network have important behavioural, perceptual, and attitudinal consequences both for the individual and the system as a whole”. The NIF does not consider this sort of issue.

A third concern is the almost total absence of reference to the role of the research community. A key recommendation of the OECD report is a greater involvement of Scotland’s research community in the development of policy and practice. This is a continual theme throughout the review, for example:

As regards research, we propose as one of our recommendations that the research community can make a clear contribution in helping to innovate schools as learning environments, especially in secondary schools in deprived areas. (p.19)

It is notable that the NIF makes no mention of research, and does not even acknowledge university Schools of Education as partners (those listed are national government, local authorities, schools, parents, children and young people, partners, teachers and other staff employed in education [p.4]). There is no mention of establishing a research agenda to monitor and evaluate this important initiative, apart from some general references to evaluation. This is not out of step with previous policy, as there has been a tendency to disregard the role of researchers in recent years. But it is disappointing that the NIF continues this trend despite the recommendations of the OECD (and notwithstanding its selective use elsewhere of evidence from the OECD review). Researchers can play a key role in informing policy and developing practice in schools. They bring a different perspective on educational issues to practitioners and policymakers. They bring a knowledge and understanding of research across the field of education, and often an expertise in methodologies for innovation. They can act as critical colleagues to support collaborative professional enquiry. And they can compile literature reviews to inform the development of practice and conduct original research as the development unfolds. As articulated by the OECD:

A strong research and evaluation system requires researchers, those with specialist analytical capacities, policy-makers and practitioners to work together. We believe that strong relationships with the evaluation and research communities and/or with independent and non-government agencies working at some arm’s length from political decision-making would benefit Scotland’s education system. The need for objectivity and credibility derived from independent sources was also stressed in the 2013 OECD review of evaluation and assessment. (p.23)

The NIF says a lot about partnership. Let’s up the game and do it properly.

References

  • Apple, M.W. (2001). Comparing neo-liberal projects and inequality in education. Comparative Education, 37, 409-423.
  • Emirbayer, M. & Goodwin, J. (1994). Network analysis, culture and the problem of agency. The American Journal of Sociology, 99, 1411-1454.
  • Lingard, B. & McGregor, G. (2013). High-Stakes Assessment and New Curricula: A Queensland Case of Competing Tensions in Curriculum Development. In: M. Priestley and G. Biesta (eds.), Reinventing the Curriculum: New Trends in Curriculum Policy and Practice. London: Bloomsbury Academic, pp. 207-228.
  • OECD (2015). Improving Schools in Scotland: An OECD Perspective. OECD.
  • Priestley, M. & Minty, S. (2013). Curriculum for Excellence: ‘a brilliant idea, but..’. Scottish Educational Review, 45, 39-52.
  • Thompson, G. & Harbaugh, A.G. (2013) A preliminary analysis of teacher perceptions of the effects of NAPLAN on pedagogy and curriculum. The Australian Educational Researcher, 40 (3), pp. 299-314.