The civic consequences of discipline and control in schooling

While out for a run on our local hills recently, two separate and apparently unconnected occurrences got me thinking about how education should – and can – make a difference to people’s lives; and how it often doesn’t, or at least does not in ways that are constructive. The first occurrence was a story – one that shocked me – told to my running partner earlier by one of her undergraduate students. The story was about school uniform practices in a high school in England, where female students are subject to specific gendered rules regarding uniform (e.g. no coloured bras), and to enforcement procedures that left me astounded. In this school, female students are routinely stopped by teachers (including male staff) and asked to pull back blazers and jumpers to show that their bra is not coloured. There are rules about tight trousers for girls (incidentally, girls are not permitted to have pockets on their trousers, unlike boys). To enforce these rules, female students are subjected to the Ten Pence test: they are required to insert a coin down the waistband of their trousers; if it does not fall down the leg and land on the floor, the trousers are deemed to be too tight.

Quite apart from the obvious question as to how senior staff at this school have the time to devise such procedures, the practice raises some profoundly troubling issues. First, the gendered nature of such practices is disturbing, although I suspect widely prevalent in schools in the UK. Linked to this are potential issues of voyeurism – [male] staff waiting to observe publicly whether a bra is coloured or a coin falls through a pair of trousers – which run the risk of fetishising school uniform and objectifying its [female] wearers. Secondly, I wonder whether some schools have little sense of their educative mission, if this is what consumes them, becoming instead primarily places to discipline young people into accepting subordinate and passive places in society.

The second occurrence relates to a bag of dog excrement. As my running partner was telling me about the above, we spotted a dog poo bag close to the track – an increasingly common sight in the hills. This one was full, had been neatly tied off, and then dumped on the hillside. I have no problem with people using these bags to collect their dogs’ excrement; indeed, dog waste is a major problem in many public areas. However, there are a number of issues to unpack here. First, the offending item was on an open hillside, around two miles from the nearest road; so why bother? Secondly, someone had clearly gone to the trouble of picking up their dog’s waste and bagging it; so presumably someone who cared about the issue – or alternatively someone deeply socialised into the notion that all dog poo needs to be bagged, regardless of context. Third, the bag had been unceremoniously slung off the track into the grass; so presumably also someone who did not understand the relative biodegradability of dog poo and plastic, or someone who had not been similarly socialised by a rule about taking the bags back to a place of disposal. I speculate here, but the problem is sufficiently ubiquitous – hanging the bags in trees seems to be the other option for disposal – that one wonders why people can be so rule-bound that they do not question whether the rule is actually appropriate in any given situation, and why they seem unable to take longer-term responsibility for disposal.

As I suggested in the opening paragraph, these are unconnected occurrences: the first is about control and discipline in schools, and the second about a distorted sense of civic responsibility. Nevertheless, education has effects, and I would argue that the latter is perhaps influenced by a tendency for schooling to discipline and control rather than educate. The dog poo bag seems to be a metaphor for a commonplace rule-bound and unreflexive way of living, and this takes us back to the first story, which is fundamentally about the school as a medium for a narrow socialisation of young people, rather than as a genuinely educative space. We see, in the UK at least, a continued emphasis on old-fashioned school uniforms, including fancy blazers (replete with coloured braid) that would seem more at home in an Enid Blyton novel than in a modern learning environment. This continued focus on how one looks and conforms (as opposed to how one is taught to think critically) seems to me to be the antithesis of education. It combines with a modern focus in schools on often purposeless activity, where what one learns and why one learns it have come to be less important than how one learns, and the skills that one develops from the process. As Jenny Reeves has recently written in a powerful article in Scottish Educational Review, we have thrown out ‘the baby of educational purpose along with the bathwater of curricular content’.

I am not arguing here for a return to traditional notions of subject-driven schooling, as exemplified by some current developments in England. I am, however, suggesting that schools need to become more focused on educating young people to be critical and reflexive adults, and this does require a knowledge-rich (as opposed to subject-driven) and balanced curriculum, with its roots in the big question of what education is for. Gert Biesta’s work on the purposes of education is entirely relevant here. Biesta has argued for an appropriate balance between three overlapping and sometimes competing functions of education: 1) qualification (the development of skills and knowledge, not the piece of paper that accredits the process); 2) socialisation (into our present societies); and 3) subjectification (the growth of each person into the unique individual that they can become, given the right nurturing). Such a view of education is about enabling individuals to become tolerant members of society, whilst developing the faculties required for critical engagement with that society, potentially leading to challenge of structural inequality and social change. The anecdotes that started this blog post suggest that we are a long way from this balanced view of schooling as education.

CfE and PISA: ‘holding our nerve’

The publication this week of the triennial PISA results has produced the usual phenomenon of the PISA shock in various countries. In the UK, England has maintained its position relative to other countries, and this is a source of disappointment to a government that staked its reputation on improving its performance. In Wales, indifferent performance and a disappointing set of results in science are a source of concern, but the political message is, in the words of one headteacher, to ‘hold our nerve’ and see through the current curricular reforms. In Scotland, a dip in performance relative to England is more difficult to stomach, and has raised inevitable questions about whether the decline is due to Curriculum for Excellence. According to Professor Lindsay Paterson, a long-time critic of CfE, the decline in Scotland’s relative and absolute performance on PISA is ‘shocking’. Professor Paterson states that the pupils tested in the current PISA round have been entirely educated under CfE, suggesting that CfE is the problem.

I have some sympathy with some of his well-known criticisms of CfE. I have consistently been on record as supporting the broad general direction represented by the curriculum – local flexibility, student-centred approaches and teacher autonomy – but would agree with him in his critique of the lack of attention to knowledge within CfE. To my mind, a progressive curriculum should not preclude, in John Dewey’s words, the learning of ‘the accumulated wisdom of the ages’; it should not mean that teachers neglect issues of knowledge. I too regularly hear educators telling me that ‘we do not need to teach knowledge anymore because pupils can google what they need to know’, and that ‘education is all about skills now’. In my view, a curriculum should be knowledge-rich, and this entails teachers posing the right questions in their curriculum design about what knowledge is of most worth.

Nevertheless, I would disagree with the notion that CfE is to blame for the decline in PISA scores experienced in Scotland. This seems to be a simplistic explanation, which ignores the complexity of educational reform and the multi-layered terrain of education in Scotland. Instead I would point to what the OECD’s Andreas Schleicher stated on BBC news on 6 December 2016 – that Scotland needs to move from an intended curriculum to an implemented curriculum. While I have warned for some years about CfE’s weaknesses in the area of knowledge and the tensions within its structure, the problem with CfE is not CfE per se; I would argue that the original vision for the curriculum was sound, with its basis in a set of clear purposes – attributes and capabilities – to be developed by education. Indeed, CfE sought to provide in Scotland exactly the sort of rich educational experience that is evident in highly performing education systems such as Singapore and Finland, and is typical in many ways of curriculum policy in many such countries.

Instead, the problem lies in its enactment – its translation from policy to practice, as clearly indicated by the OECD in its 2015 review of CfE. There are various issues here, all of which complicate the translation of policy into practice. They include:

  • The specification of curricular content as detailed learning outcomes, which have encouraged audit approaches and strategic compliance with CfE, rather than full engagement with its principles. The new benchmarks offer more of the same, and will require a great deal of care in their implementation if we are to avoid a continuation of such approaches.
  • A lack of clarity in CfE guidance about processes for curriculum development, and the sheer volume of CfE documentation; the latter issue has contributed to a lack of clarity amongst schools and teachers, especially as the key messages have often been obscured, and in some cases changed over time. Again, the OECD identified this issue in their call for a simplified narrative.
  • The persistence of accountability mechanisms that have acted counter to the spirit of CfE, often encouraging performativity in schools and its accompanying bureaucracy.
  • A teaching workforce that, while highly dedicated and technically skilled, has often struggled to make sense of a new and different curriculum, in the absence of sustained programmes to engage them with its principles and develop theories of knowledge that are consonant with this approach. I continue to meet teachers who admit to being baffled by CfE. A further, related issue is cultural: implicit teacher philosophies about education do not always sit easily with CfE, and the lack of adequate spaces for sense-making does not allow this issue to be readily addressed.
  • A paucity of craft knowledge around curriculum development across the system – what Michael Apple has described as a ‘lost art’.

The net result has been an incomplete (at best) enactment of CfE, and a tendency to address new curricular problems through existing practices and assumptions. This was evident in our research in 2011 and 2012, and I have seen little evidence since, in my extensive work with teachers, that the situation has improved. Thus the issue with declining scores in PISA is, in my opinion, likely to be due to a failure to enact CfE adequately, rather than being a problem with CfE as a curricular approach.

So how do we address this? A good starting point is the OECD review, which provides legitimation for a revision of CfE in its call to be bold. In all of this we need to remember that the curriculum should not be set in stone as a sabre-tooth curriculum; instead it should be subject to regular review, and such a process should not be framed as a climb-down or U-turn by policy makers, but simply as part of the normal process of updating the curriculum to adapt to changing societal needs. This means, in my view, a rationalisation of existing documentation to provide the simplified narrative called for by the OECD. It requires the establishment of a strengthened middle – a mid-system leadership stratum that provides support and facilitation for curriculum development, using tried and tested methods of teacher and curriculum development such as collaborative professional enquiry. Above all, we should ‘hold our nerve’ with CfE, and enact it fully in the spirit of its original aspirations (avoiding the political temptation to ape the testing regimes so familiar in England). The CfE model is much admired around the world – but we need to make it a reality in our schools.

Computing Science in the BGE

I have recently been told by Computing teachers that the new benchmarks for the subject are problematic in certain respects; leaving aside the question of whether this highly specified approach is appropriate, there are issues with coherence. The following post, by Greg Michaelson, Professor of Computer Science at Heriot-Watt University, explores this issue, and is part of a wider series of commentaries on the new benchmarks.

The whole world is now utterly dependent on computing-based technologies for all aspects of social and economic organisation. As computer use burgeons year on year, so does the shortage of skilled computer professionals competent to build and maintain the resilient and sustainable systems we have come to take for granted.

Thus, it is both baffling and dispiriting that Scottish Computing education seems to have been in a near-permanent state of crisis these last few years, most markedly in secondary schools. The number of schools offering Computing has fallen, Computing teaching posts lie vacant, the number of students taking SQA qualifications in Computing is declining, and Computing teacher training places remain unfilled as graduates can command better salaries elsewhere.

The roots of this malaise are complex and well rehearsed, in a tiresome multi-dimensional cycle of blame involving government, local authorities, higher education and employers. Nonetheless, there has been broad agreement across the sector that the Curriculum for Excellence (CfE) in Computing has not met the needs of any stakeholders, most markedly Scotland’s school children. In particular, it is characterised by an inadequate separation between broad ICT skills, which are essential for all citizens, and Computer Science, an academic discipline in its own right, concerned with all aspects of computation, with programming at its heart.

Thus, I think that the draft revision to the Computing Science BGE Experiences and Outcomes (Es and Os) is to be welcomed. Its authors have clearly listened to the critics of the status quo, in particular taking seriously the forceful and detailed proposals of an independent team of leading Scottish academics and practitioners.

First of all, the Es and Os have been separated into three well-characterised significant aspects of learning (SALs): “Understanding the world through computational thinking”, “Understanding and analysing computing technology”, and “Designing, building and testing computing solutions”. It is particularly pleasing that Computational Thinking (CT) is foregrounded in its own SAL, as the key problem-solving approach. It is also pleasing that the underpinning technologies, both hardware and software, have equal weight to CT and programming. This emphasis on computing leading to mechanical, repeatable, predictable procedures is, I think, central to understanding programming, which is usually practised at levels far abstracted from the underlying silicon.

Secondly, the SALS are well sequenced. Each starts at the Early stage with practical activity grounded in the students’ own experiences. These are used to draw out increasingly structured, differentiated and abstracted concerns as the SALs progress.

Thirdly, the SALs are strongly complementary, especially in the core stream leading to programming competences. Their formulation offers sustained opportunities for integrative learning activities throughout all stages, linking the abstract exploration of a problem, to tease out possible solutions, with their practical realisation in a concrete programming notation on a concrete platform.

The SALs are well served by the accompanying Benchmarks: Technologies document, which provides considerable concrete guidance for evidence-based assessment of competences. It also fleshes out the more general nature of the Es and Os. In particular, the synergies across SALs are well laid out.

However, the devil is in the detail. My main concern is that CT is a new and ill-defined discipline, meaning different things to different people. For example, the early stages follow Papert’s pedagogy of structured bricolage in a microworld, whereas the later stages draw on contemporary characterisations of CT as iterative activities of decomposition, pattern identification, abstraction and algorithm construction.

I think that it is vital to enunciate an explicit pedagogy of CT, as a well-defined discipline that all Computing Science teachers can deploy consistently. Scotland is lucky to enjoy considerable research into, and application of, CT, and I hope that this can be drawn on in elaborating a unitary approach, backed by systematic support and exemplification material.
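To make the four activities mentioned above concrete, here is a hypothetical classroom-style sketch (the task and function name are my own illustration, not drawn from the benchmarks) that traces decomposition, pattern identification, abstraction and algorithm construction through a small Python program for the problem of finding the most frequent word in a text:

```python
# A hypothetical illustration of the four CT activities named above,
# applied to the problem: "find the most frequent word in a text".

def most_frequent_word(text):
    # Decomposition: break the task into smaller steps --
    # normalise the text, split it into words, count, pick the maximum.
    words = text.lower().split()

    # Pattern identification: tallying occurrences is the same pattern
    # whatever the items are, so a dictionary of counts will serve.
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1

    # Abstraction: details irrelevant to the core question
    # (punctuation, tie-breaking) are deliberately ignored here.
    # Algorithm construction: select the word with the highest tally.
    return max(counts, key=counts.get)

print(most_frequent_word("the cat sat on the mat"))  # prints: the
```

A unitary pedagogy might revisit a problem like this at successive stages, each time at a greater level of abstraction.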

My second concern is that, while there is a strong emphasis on procedural and computational aspects of problem solving, little consideration is given to informational aspects. I think that teasing out the information structures that underlie problem domains is an essential component of CT. Indeed, exploring the information relevant to solving a problem offers excellent opportunities for pre-algorithmic CT, which may be easily based in motivational everyday activities.

Here, I can’t help feeling that an integrative cross-curricular opportunity has been lost. The separate Digital Literacy area focuses on information and problem solving in its “Using digital products and services in a variety of contexts to achieve a purposeful outcome” SAL, in particular at stage two. Indeed, the whole of this SAL would not be out of place in the Computing Science area.

Thirdly, I think that there are pedagogical and practical problems in moving from a block-based language at the first stage to a technical language at the second. This is a transition where students could easily come adrift and will need careful finessing.

Block-based languages, like Scratch or Snap!, are programmed by assembling graphical elements representing entities and operations to solve problems, typically in a simple microworld of moving and interacting avatars. They are an excellent starting point for exploring basic concepts of sequence and repetition. However, they tend to offer impoverished general programming constructs and are unsuitable for constructing new microworlds, the ultimate goal of programming.

In contrast, technical languages are textually based. The choices are often constrained subsets of industrial languages, like Python or Java, which enable steady scaling up to full-strength programming. However, getting the same effects as those achieved easily in block-based languages requires considerable skill in programming to configure components from code libraries.
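The size of this transition can be made concrete. The sketch below is purely illustrative (the scenario and function name are my own, not from the benchmarks): it renders a single snapped-together Scratch script – repeat 10 [move 20 steps; turn 36 degrees] – in plain textual Python, where the position and heading that the blocks manage invisibly must now be handled explicitly by the learner:

```python
import math

# The Scratch script "repeat 10 [ move 20 steps ; turn 36 degrees ]"
# traced in textual form: the sprite's position and heading, which the
# blocks maintain invisibly, become explicit state for the learner.
def trace_turtle(steps=10, distance=20.0, turn=36.0):
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):                               # "repeat 10"
        x += distance * math.cos(math.radians(heading))  # "move 20 steps"
        y += distance * math.sin(math.radians(heading))
        heading = (heading + turn) % 360.0               # "turn 36 degrees"
        path.append((x, y))
    return path

path = trace_turtle()
# Ten turns of 36 degrees make a full circle, so the sprite traces a
# decagon and finishes (to within floating-point error) where it began.
```

What is a single block in Scratch becomes imports, state, arithmetic and loop syntax in text – precisely the gap where students could come adrift.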

Fourthly, there is no recognition that the activities of constructing web pages and databases at stages three and four also require CT based problem solving and programming, so another integrative opportunity may be lost.

Finally, there is considerable overlap between stage four and the National 5 curriculum. No doubt this will be revisited once the new BGE beds in, but it is certainly worth considering soon.

I think that these limitations are again reflected in the Benchmarks: Technologies document, in particular an overemphasis on procedural and structural aspects of CT at the expense of information. I find it sad that the sole data type to be explored by stage four is the number, which will lead to a repetition of dreary old problem solving based around arithmetic and counting.

Aspects of the Benchmarks: Technologies document seem problematic as a basis for assessment. While benchmarks for the core CT/programming competences have a strong emphasis on seeking evidence for the understanding of concepts, the other competences depend far too much on rote learning of “facts” about computing. Furthermore, the web and database benchmarks are extremely vague, defined simply as building something, without having spelled out what might constitute evidence for competence from the crucial design, usability or performance perspectives.

Nonetheless, despite all my carping, I think that the new Computing Science BGE is a substantial improvement on CfE mark one, and that this refreshing reboot deserves to flourish.


Some commentary on the new CfE Science benchmarks

The following is a guest post from Dr Laura Colucci-Gray, University of Aberdeen. It is somewhat longer than the usual blog post, but has been posted here in full because of the nuanced way it explores the new Science benchmarks.

The newly released assessment benchmarks – as they are presented in the draft consultation document – aim to set out clearly what learners ‘need to know and be able to do’, moving from Early Level through to Fourth Level. Quoting from the draft document, the benchmarks should be used to ‘monitor progress towards achievement’ and to ‘provide guidance, in a single, key resource to support teachers’ professional judgement’. In line with the expectation of developing successful learners, one of the hallmarks of the Curriculum for Excellence, students’ learning is presented here as a kind of advancement, a striding forward towards a clear goal.

I must recognise that such statements make an impact on me, as a reader, through their neatness and apparent simplicity. Three aspects appear to be of greatest importance:

  1. Specified outcomes, or benchmarks, which operate as a proxy for the learning process;
  2. Progression, implying the existence of a gradient or spectrum along which learning gains can be evidenced;
  3. Teachers’ professional judgement, perhaps the most human and least predictable aspect of all, but one which is firmly situated at the end of the learning process, after both outcomes (1) and progression (2) have been clearly and fully specified.

I wish to take a closer look at the validity and feasibility of such a plan, its potentially contested relationships with the overall aims of the Scottish Curriculum, and the implications this may have for science education.

The Scottish Curriculum, as it was first analysed in the lucid work of Priestley and Humes (2010), is fundamentally a hybrid model, seeking to combine socio-political and economic drivers[1]. In many respects, it seems to combine earlier attempts to make the science curriculum more relevant for students, seeking to engage them as ‘active contributors’ while developing the basic literacy and skills of scientifically informed citizens. Yet the design and guidelines for implementation have already generated concerns and criticism from science education researchers in Scotland. Bryce and Day (2013) argued for further clarity, alluding to the inevitable risks of the hybrid condition producing a confounding of purposes and professional confusion[2]. A hybrid condition brings with it a form of dualism: which way to turn? Towards critical competencies one way, or towards skills and knowledge for work the other? It is hard not to recognise that a curriculum built around the design of ‘learning experiences’ which are supposed to lead towards specified ‘outcomes’ contains an in-built direction of travel, relentlessly moving from the variability of people’s experience to the singularity of results, from diversity to sameness and from openness to closure. So, let’s take a closer look at what happens during such progression, and at what the expected ‘learning gains’ might be, when the learning outcomes are turned into benchmarks.

Progression as simplification

I noted that in the letter signed by the learned societies, a concern was expressed about the level of detail at which the outcomes – for particular subjects – were being described. One such subject area is Planet Earth. As newly formed human beings, we join the biodiversity and the web of interdependences amongst living and non-living things on the planet. This concept is first captured in the benchmarks for Early Level science, as follows:

I have observed living things in the environment over time and am becoming aware of how they depend on each other. (SCN 0-01a)

  • Explores and sorts objects as living, non-living or once living.
  • Describes characteristics of living things and how they depend on each other, for example young dependent on parents.

Relationships of dependence are foregrounded here, referring to a broader set of caring, feeding and exchanging actions within supportive or unsupportive relationships. Then, when we reach First Level, the language almost suddenly changes, becoming directive and specific, moving from ‘explores and describes’ to ‘explains and uses’.


I can explore examples of food chains and show an appreciation of how animals and plants depend on each other for food. (SCN 1-02a)

  • Explains that the sun is the main source of energy.
  • Explains that energy can be taken in by green plants to provide the major source of food for all living things.
  • Uses the terms ‘producer’ and ‘consumer’ correctly.
  • Uses vocabulary correctly including ‘predator’, ‘hunter’, ‘prey’ and ‘hunted’.
  • Uses and constructs a simple food chain showing energy flow.

This learning outcome at First Level is supposed to be clear and simple, building progressively from the previous one at Early Level. In this passage, we note that the emphasis on the broader set of relational dependencies and interdependencies of the early level has been reduced to the more recognisable concept of the ‘food chain’, whereby relationships are instrumental to material exchanges and living organisms are given the stultified roles of producers and consumers. What is more, an additional layer of meaning is supposedly provided by the introduction of ecological concepts, as a means to extend the production and consumption model. So, in the second bullet point before last, producers and consumers turn into prey and predators. Not only does the prey-predator model apply to a very specific category of beings, carnivorous animals, but in the effort to simplify content, scientific ideas are being distorted to suit a particular view of the world. Mechanistic models of the biosphere – exemplified by the consumption chain – seem to be preferred, although these ideas have long been overtaken by modern biology. Current thinking favours metaphors such as the web, the tree and the hand, along with relational models of symbiosis, mutualism and cooperation.

Progression as determinism

Another example comes from Fourth Level science where, at the level of the generic skills intended to develop scientifically literate citizens, we find:

  • Demonstrates understanding of the impact of science on society.
  • Discusses the moral and ethical implications of some scientific developments.

Later, in the more specific outcomes, we cannot fail to notice a certain determinism in fact-finding related, for example, to agricultural production:

  • Uses information about essential plant nutrients to design a fertiliser;
  • States possible impacts of the use of fertilisers, for example, eutrophication and algal blooms.

The use of fertilisers is controversial; eutrophication and algal blooms are not simply the effects of fertilisers in the water but the consequences of mass food production for commercial purposes. At no point is such contestation hinted at, or even supposed. While we cannot make assumptions without hearing directly from the authors of the document, it is the language of the specific outcomes which is of concern here. Words are used carefully to delineate factual knowledge but shy away from any critical appraisal of the surrounding cultural and social context.

Progression as Anthropocentrism

As science develops within a cultural, historical and social context, the language of its expression carries connotations about the ways in which human societies – at different points in history – have looked at the bio-physical world. As was mentioned earlier, in relation to the history of curriculum innovation in science education, the teaching of science, too, needs to be framed within the cultural and value narratives of the time.

The extract below – from Third Level science – focuses on viruses and microbes. We recognise almost immediately the underlying violent frame, carried by words such as ‘defence’, ‘barriers’ and the ‘breaching’ of barriers. A particularly Westernised view of the world, which locates human beings – and their bodies – in direct and adversarial contrast with the living world upon which they depend, is forcefully transferred through the power structures of the curriculum.

I have explored how the body defends itself against disease and can describe how vaccines can provide protection. (SCN 3-13c)

  • Explains how microbes, for example, bacteria and viruses, can cause disease and infection.
  • Describes the barriers to infection as a first line of defence, for example, skin, mucus and stomach acids.
  • States how the immune system protects the body against disease if the first line of defence is breached, for example, white blood cells and production of antibodies.
  • Explores and explains how vaccinations can protect individuals and populations from disease.

Arguably, one might suggest that the factual phrasing of learning outcomes precludes the possibility of disclosing alternative cultural frameworks. Here I refer to the heritage of pupils holding alternative views of the living world, such as more animistic or religious views. As we know from many years of research in science education, learning progression in science comes from the opportunity to navigate alternative understandings and to recognise how scientific concepts are defined through ongoing dialogue, within a community of researchers and learners. Science education pedagogies should thus focus more on the elicitation of such alternative understandings, as opposed to the fast delivery of the ‘right answer’. That said, looking at the next example from First Level:


By investigating forces on toys and other objects, I can predict the effect on the shape or motion of objects. (SCN 1-07a)

  • Uses vocabulary to describe forces, for example, pushing, pulling, stretching, squashing and twisting.
  • Demonstrates understanding of how a force can make an object change speed, direction or shape.
  • Investigates balanced forces and can explain that if a push and pull are equal then there is no change in movement.
  • Investigates how shape is linked to motion and stability.

… the benchmark document is strict and ‘forceful’ in its prescription of how children are supposed to talk in physics, even at the basic level of describing the motion of objects in front of them. The words that are used are also forceful in themselves – pushing, squashing… – not dissimilar from the metaphors of power highlighted above. And finally, forces – second bullet point above – can make an object change shape… forces are being anthropomorphised, extracted from the wider bio-physical system of interactions and interdependencies. If another view of the world were adopted – one which focused on a variety of experiences and alternative value-frameworks, one which let students play and explore, through bodies, hearts and minds – we might have been able to see other words coming through in relation to forces: supporting, coming together, balancing or attracting…


So, returning to three key aspects emphasised by the document, I would like to share some of the concerns which have now emerged.


1&2. Turning outcomes into benchmarks and using benchmarks as a proxy for learning. In the setting out of the learning process as a movement from experiences to outcomes, and from outcomes to benchmarks, I observed progressive reduction, a narrowing of linguistic frames and a selective use of value-frameworks. As opposed to the meaning of progressive as reformist or broad-minded – how CfE wished to be known to the world – I see stultification and closure to dialogue. Institutionalised reductionism is built into a machine for knowledge delivery. The progression comes to feel like an inexorable, unavoidable rush, forward-tracking learning to inevitable conclusions.

(I am reminded at this point of the popular Seventies film The Cassandra Crossing, with Sophia Loren… has anybody seen it? The train carrying a spy with a contagious illness is made to rush blindly towards an old, crumbling bridge, while Sophia Loren attends to the illness-stricken passengers, kept in the back carriage, in a desperate attempt to prove to the authorities that they can recover from the bug…)


  3. Teachers’ professional judgement. Within the simplified picture of the benchmarks document we are then expected to find the voices and judgement of the teachers, who are instrumental in the certification of learning. Yet within a scenario of progressive reduction, what are teachers expected to do? Is there any room for them to exert their professional judgement? When all they are asked to do is lead pupils towards specific ‘benchmarks’ which are simplified and stripped of any personal signification, what is the teacher’s role meant to be? I am perplexed. Are teachers like the passionate and professionally trained Sophia Loren, working her socks off at the back of the train to treat the ill passengers… or more like the old-fashioned tram-controllers, stamping the tickets of the people who come on board? Neither metaphor seems to satisfy here…


So where to next? Addressing the trajectory….

Science, like all other subjects, is a body of knowledge which has accumulated over time but which has also progressively changed over time. Behind science as a body of disciplinary knowledge there are people – the scientists – as well as policy-makers, citizens and tax-payers, including the merchants and the merchandise. Indeed, since the last energy transition, from coal to oil, we have witnessed an explosion of knowledge thanks to the power of technology and computing machines. Consequently, science has changed dramatically from being the craft of a single individual to the interconnected activities of interdisciplinary teams operating within an extended web of public and private funding. Such transformations have two important implications. First, as science is increasingly enmeshed with political and economic agendas, the public is called to interrogate the allocation of funding and the ethical dimension of new ventures. New terms such as post-normal science, citizen science and even DIY science point to hybrid forms of knowledge sharing and knowledge forming, calling for the inclusion of different voices, participation, democratisation of science and critical appraisal. Secondly, if the participation of the public is harnessed, evoked or even feared, education is called to the task of preparing citizens for ethical, public dialogue, moving from knowledge to complex dialogical competencies which are linguistic, social, imaginative and creative.

In this open-ended and contested scenario, which progresses through debate and through the radical uncertainties arising from the new frontiers of science, the teacher has a key role in preparing young people to interrogate the knowledge we need and to elaborate their own models of living. I find some notable parallels with the comments made by Dr Joe Smith in an earlier blog post about history: “progression in history refers not to a more complete understanding of the past (of which most of us know very little), but a more sophisticated one”. Clearly this business of complexification is tricky for science education. It is well known that scientific terms – like food chain or food web – are specific, retaining the root of everyday language but encompassing a singular and precise meaning defined by the discipline. So we can see how the preoccupation with specification arises and how it can be justified and legitimised. However, we can also recognise a contradiction here. If, on the one hand, a simpler set of benchmarks carries the hope of freeing teachers from the task of sifting out exuberant content, on the other hand, the specified nature of the content demands a critical interrogation of the selections that have been made, recovering the motives, purposes and value-frameworks that accompany any form of knowledge.

What hopes and what possibilities?

Research in science education has repeatedly pointed to the problem of naïve views and perceptions of science, resistant to change and held by both teachers and students at different levels of education. Learning and teaching science is equated with a protocol which, through the right sequence of steps, will lead to the right answer. Much has been contributed by science education researchers in terms of pedagogies to address such problems. My own research, conducted with colleagues in international contexts, has shown that Scottish teachers are interested in innovation, often taking risks in the implementation of creative pedagogies in science. However, where the leadership ethos in the school is not supportive of such attempts, and students are preoccupied with attainment and performance, the result is a progressive reinforcement of transmissive pedagogy and old-fashioned beliefs (Gray et al., 2016[3]).

An important element of innovation and hope in the newly published set of benchmarks, however, lies in the emphasis on play as a form of scientific inquiry and discovery. Recent understandings of cognition as an embodied process point to play as the first and fundamental process of sense-making. The engagement with spaces, objects and materials can be paralleled to what happens during a scientific investigation and, for this reason, it can provide the first point of access for young children into science. Most importantly, however, it is a process which sustains the development of analogical and metaphorical language, supporting increasing levels of conceptual thinking and abstraction. For this reason, I would like to see more emphasis on play and imaginative play throughout the curriculum and in the draft benchmark document. I would like to see further opportunities for students at third and fourth level to be sensitised to alternative value-frameworks and to grapple with the ambiguities and the uncertainties which characterise a genuine scientific investigation!

Similarly, I wish to see teachers as animators of playful interactions. Students and teachers of science can come together as a team of inquirers and interpreters of the ways in which science and technology shape our actions in society. The science laboratory is the wider world and the classroom can afford a space of possibilities, in which we are all actors… in an unfolding play.

(This contribution to the blog has benefitted from ongoing conversations with colleagues in the School of Education at Aberdeen, Dr Kirsten Darling and Dr Donald Gray, and from a long-standing affiliation with the Interdisciplinary Research Institute on Sustainability, based at Turin University.)


[1] Priestley, M. & Humes, W. (2010). The development of Scotland’s Curriculum for Excellence: amnesia and déjà vu. Oxford Review of Education, 36(3), 345-361.

[2] Day, S. & Bryce, T. (2013). Curriculum for Excellence science: vision or confusion? Scottish Educational Review, 45(1), 53-67.



Some commentary on History, progression and the Social Studies benchmarks

The following is a guest post from Dr Joe Smith, University of Stirling, on the assessment of History using the new benchmarks.

Curriculum for Excellence is presently being equipped with ‘benchmarks’ to clarify what a child at each ‘level’ might be expected to know and do.  In terms of history, this means that Education Scotland have addressed the messy question of progression in historical understanding.  This blog post explores some of the problems with the proposals. (NB. Some of the arguments here are similar to those I raised in The Curriculum Journal in 2016.)


There exist several models for progression in history education, but all are based on the uncontroversial premise that ‘getting better’ means something other than ‘knowing more’.  There is, after all, a literally infinite amount that one might know about the past, and so to say ‘I know more history than you’ is to say ‘I know a tiny bit more about a tiny sliver of the past than you do’.  There is no totality of historical knowledge against which our knowledge can be cross-checked, and this awkward fact means that we cannot assess children based on how much they know, because we, all of us, know very little.

Instead, progression in history refers not to a more complete understanding of the past, but a more sophisticated one.  For example, Shemilt (1983) produced one of the first workable models about how progression in history might be conceived. He argued that children moved through levels of understanding through which the complexity of the past was slowly realised:

  • Level One – There is a story of the past which can be learned. Things happened because they happened.
  • Level Two – There is a single simple story of the past which is easy to learn. Evidence which doesn’t fit this story is wrong. The past could never have been other than how it is.
  • Level Three – There is an appreciation that accounts of the past necessarily differ.
  • Level Four – There is a recognition that there is no single story about the past and the nature of the narrative depends on the questions one asks.

Shemilt’s is by no means a perfect model, but it demonstrates how measuring progression requires an assessment of how children are thinking about the past. If teachers must measure children, then they must assess the sophistication of the child’s thinking as revealed through their written and spoken responses. They cannot, and must not, simply put a tick or cross against what the child knows (or is perceived not to know).

The approach to progression seen in the new CfE benchmarks contains none of this sophistication.  The benchmarks are problematic in at least four ways.

  • They are, in many cases, so vague that they are devoid of meaning
  • They assess ‘knowledge’ that is pointless
  • They are startlingly undemanding
  • They ask children to behave in a way which is fundamentally unhistorical

In the following paragraphs I deal with each issue in turn, drawing upon examples from the benchmarks.

Vague and Meaningless

The second level benchmark says that a child,

‘Researches a historical event using both primary and secondary sources of evidence.’

This is nothing more or less than a description of the discipline of history. It is possible to achieve this ‘benchmark’ at every level from lower primary to university dissertation. What, specifically, does a child have to do to say that they have met it?  How independent do they have to be? What are they meant to produce at the end? How can one even assess the process of researching something? Historians research so that they can produce an account of the past – we assess the quality of an account informed by research, not the act of research itself.

Pointless Knowledge

‘Recognises the difference between primary and secondary sources of evidence.’

The concept of ‘primary’ or ‘secondary’ is not inherent in a source: whether a piece of evidence can properly be called ‘primary’ or ‘secondary’ depends entirely on the questions that are being asked of it.  A school textbook is a secondary source about the events it describes, but a primary source to a historian of education. In any case, it is not even a useful distinction to be able to make. Being able to label ‘X’ as a primary source is of no practical use to children. In fact, it encourages formulaic thinking along the lines of ‘X is a good source because it is primary’, which is actively unhelpful to the child’s development of historical understanding.


At the second level, a child of eleven and a half…

Describes and discusses at least three similarities and differences between their own life and life in a past society.

I am pretty certain that my five-year-old child could meet this benchmark, yet this is the benchmark for a child on the verge of secondary school.  Apart from anything else, this kind of ‘spot the difference’ activity is not particularly historical because it is devoid of explanation: ‘I put clothes in the washing machine, they use a mangle’ is not really historical thinking.  Whereas,

‘Before the electrification of homes people needed to do their washing on a mangle. This took a lot of time.   Since electrification, we have washing machines which means that we spend less time washing clothes’

contains elements of causation and change.


Contributes two or more points to the discussion (in any form) as to why people and events from the past were important.

The qualities of important or unimportant (or, more properly, significance) are not inherent in a particular historical topic, but imputed by the person talking about the topic. The phrasing of this benchmark presupposes that Event or Person X was ‘important’ and expects children to say why that is the case.  The idea that we get to decide for children which actors in the past were and were not significant is deeply troubling. Instead, the expectation should be that children can disagree about why an event is significant (or even whether it was significant at all), rather than assuming it is important and asking them to tell us why. A better benchmark would be something like ‘Can choose a historical event and say why they think it should be remembered’.

So where has the problem come from?

The fundamental unsuitability of these benchmarks stems from the fact that they are based on the Experiences and Outcomes document, which was itself wholly unfit for purpose.  I have written about its shortcomings at length (Smith, 2016), but the basic problem is that the Es and Os were never intended to be used as the basis of a progression model. They address two incompatible functions – prescribing content and defining procedural knowledge.  When these two functions are translated into ‘benchmarks’, curious things start to happen. For example, the ‘E and O’ SOC 2-04a reads, ‘I can compare and contrast a society in the past with my own and contribute to a discussion of the similarities and differences’.  This is a worthwhile activity for children to undertake – it asks children to appreciate change and continuity over historical time. However, when it is uncritically turned into a ‘benchmark’, a worthwhile activity loses its value, as eleven-year-olds are asked to ‘Describe and discuss at least three similarities and differences between their own life and life in a past society.’

Another example is SOC 2-06a which reads, ‘I can discuss why people and events from a particular time in the past were important.’ In this phrasing, discussion is the thing that the child does – the child is writing or talking discursively. However, in the benchmarks the active verb ‘to discuss’ morphs into the passive noun ‘a discussion’ to which the child now contributes.  In the process, any semblance of historical thinking is lost.

So what is to be done?

The great pity here is that there already exists a document which might be used as the basis for more effective benchmarking – the 2015 Significant Aspects of Learning (SALs) document.  Up until very recently, advice from Education Scotland was for teachers to defer to the SALs in planning the learning of their classes, not to the Es and Os.  The 2015 SALs are predicated on an assumption that historical understanding is conceptual understanding; it is not a matter of knowing more.

  • understanding the place, history, heritage and culture of Scotland and appreciating local and national heritage within the world
  • developing an understanding of the world by learning about how people live today and in the past
  • becoming aware of change, cause and effect, sequence and chronology
  • locating, exploring and linking periods, people, events and features in time and place

By using the SALs it is much easier to conceive progression. For example, we have a well-established model for assessing progression in children’s understanding of causation which derives ultimately from Shemilt.

  1. It was always going to happen, hence we cannot explain causation
  2. It was caused by one thing
  3. It was caused by many things
  4. It was caused by many things and we can categorise and prioritise these things
  5. It was caused by many factors which were interlinked and interdependent.

Everyone involved in education in Scotland wants its system to remain among the best in the world, but this means having a clear idea of what ‘the best’ looks like.  Progression models are always simplifications of cognitive development, but they are underpinned by a disciplinary understanding of what more sophisticated thinking looks like. If we reduce progression to a series of performative tasks, then teachers will inevitably teach to these tasks.  Instead we should be empowering teachers by demonstrating our aspirations for our children and trusting teachers’ professionalism to deliver on them.

The endless quest for the Holy Grail of educational specification: Scotland’s new assessment benchmarks

Teachers in Scotland are presently witnessing the phased publication of a series of draft assessment benchmarks. These are linked to the call, in last year’s OECD review of Scottish education, to simplify the narrative of the curriculum. The first benchmarks, for literacy and numeracy, were published in August 2016. They have subsequently been followed by draft benchmarks in a range of subjects such as Science, Expressive Arts and Social Studies, with more to follow for each curriculum area by the end of the year. Each set of benchmarks comprises around 50 pages of text, with groups of Experiences and Outcomes (Es & Os) listed alongside sets of benchmarks related to the applicable outcomes. If early drafts are any indication, we can expect to see around 4000 benchmarks covering the whole curriculum. The example below, from the draft Third Level Social Studies benchmarks, provides a flavour of this new approach.

People, past events and societies  

I can use my knowledge of a historical period to interpret the evidence and present an informed view.             SOC 3-01a

I can make links between my current and previous studies, and show my understanding of how people and events have contributed to the development of the Scottish nation.

SOC 3-02a


I can explain why a group of people from beyond Scotland settled here in the past and discuss the impact they have had on the life and culture of Scotland.                     SOC 3-03a


I can explain the similarities and differences between the lifestyles, values and attitudes of people in the past by comparing Scotland with a society in Europe or elsewhere.

SOC 3-04a


I can describe the factors contributing to a major social, political or economic change in the past and can assess the impact on people’s lives.

SOC 3-05a

I can discuss the motives of those involved in a significant turning point in the past and assess the consequences it had then and since.            SOC 3-06a

Through researching, I can identify possible causes of a past conflict and report on the impact it has had on the lives of people at that time.      SOC 3-06b


·       Evaluates a range of primary and secondary sources of evidence, to present valid conclusions about a historical period.

·       Draws on previous work to provide a detailed explanation of how people and events have contributed to the development of the Scottish nation.

·       Provides reasons why a group of people from beyond Scotland settled here.

·       Describes the impacts immigrants have had on the life and culture of Scotland.

·       Provides an account with some explanation as to how and why society has developed in different ways comparing Scotland to another society in Europe or elsewhere.

·       Describes factors which contributed to a major social, political or economic change in the past.

·       Draws reasoned conclusions about the impact on people’s lives of a major social, political or economic change in the past.

·       Draws reasoned conclusions about the motives of those involved in a significant turning point or event in history.

·       Provides a justified view of the impact of this significant historical event.

·       Identifies possible causes of past conflict, using research methods.

·       Presents, in any appropriate form, on the impact on the lives of people at that time.

It is immediately clear that the benchmarks add a new layer to the existing specification of Curriculum for Excellence. This is difficult to reconcile with the stated desire to simplify the narrative of the curriculum. It is thus hardly surprising that the benchmarks have been met with considerable scepticism by teachers on social media, and this week saw the publication of a thoughtful and considered, yet highly critical, response from a group of STEM learned societies. So what exactly is happening here, when a call to simplify the curriculum is met with a further spiral of specification (Wolf, 1995)? And what is wrong with this approach in any case?

Attempts to specify curriculum and assessment in detailed ways are not new. It is around a hundred years since Bobbitt published his taxonomy of educational objectives. More recently in the United Kingdom, we have seen the emergence of the competency-based model that has underpinned vocational qualifications such as those produced by NCVQ in England and Scotvec in Scotland. Related to this has been the genesis and subsequent development of national curricula: from 1988, England’s National Curriculum set out attainment targets, expressed as lists of detailed outcomes, arrayed into hierarchical levels. Subsequent worldwide curriculum developments (for example, Scotland’s 5-14 curriculum, New Zealand’s 1993 Curriculum Framework, CfE in Scotland) have exhibited similar thinking. This approach has an instinctive appeal to those concerned with measuring attainment and tracking a school’s effectiveness. It provides a superficially neat way of categorising and measuring learning. The approach also attracted some support (especially in its early days) from some educationists. For example, Nash talked of enabling learners “to have a sense of direction through planned and well-defined learning targets which are in turn based on defined criteria in terms of knowledge, skills and understanding” (Nash, in Burke, 1995, p.162). Gilbert Jessup, the architect of the GNVQ competency-based model, stated that “statements of competence set clear goals for education and training programmes” and that “explicit standards of performance … bring a rigour to assessment which has seldom been present in workplace assessment in the past” (Jessup, 1991, p.39). Jessup saw little difference between the competency-based model for vocational education and the emerging models of outcomes-based national curriculum, predicting that the National Curriculum would “result in more individual and small group project work, and less class teaching” (Jessup, 1991, p.78). Subsequent experience has, of course, demonstrated quite the opposite effect.

So what are the problems associated with this approach? I list some of the well-documented issues here, focusing on a generic critique of the model rather than on a detailed analysis of specific benchmarks/subjects. Further posts on this blog will look at some of the subject areas, such as social studies and science, offering a more finely focused critique of particular sets of outcomes.

  • The approach is complex, jargon-ridden and lends itself to bureaucracy. This criticism was levelled at the NCVQ model by Hyland who said that the model was “labyrinthine” in complexity and entirely “esoteric,” and as a consequence of all these factors, the model has proven to be unwieldy and difficult to access for both students and assessors (Hyland, 1994, p.13). Such issues have certainly been evident in Scotland in the creeping development of time-consuming, bureaucratic processes, and the subsequent exhortations for schools to reduce bureaucracy.
  • Specification of learning in this way has been shown to narrow learning, reducing the focus of lessons to what has to be assessed. Critics of this approach such as Hyland (1994) and Kelly (2004) were quick to point out that far from encouraging learner autonomy and flexibility in learning, the model inhibits it because of the prescriptive nature of many of the outcomes. Recent research in New Zealand (Ormond, 2016) indicates that specification of assessment standards has seriously narrowed the scope of the curriculum. Ormond provides an example of the Vietnam War, where some teachers omitted to teach the role of the USA in the war, while still meeting the requirements of the assessment standard.
  • Where assessment standards/benchmarks are too specific, they reduce teacher autonomy by filling lessons with assessment tasks and associated teaching to the test. Teaching thus becomes assessment-driven. In turn, this places great pressure on both teachers and students to perform – to meet the demands of the test. Performativity has been well documented in the research. Its effects include stress on students and teachers, pressure to fabricate school image and manipulate statistics, and even downright cheating (see Priestley, Biesta & Robinson, 2015, chapter 5).
  • Focusing on ticking off benchmarks encourages an instrumental approach to curriculum development. Our research in Scotland documented instances of strategic compliance – box-ticking – with the Es & Os (e.g. see Priestley & Minty, 2013). There is a tendency to visit an area of learning only until enough evidence has been gathered that it has been covered, then to move on to another required area. This is not an educational approach designed to build deep understanding or construct cross-curricular links. Instead, it atomises learning.
  • There are philosophical arguments about whether it is ethical in a modern democracy to define in detail what young people should become. The assessment benchmarks can be framed as narrow behaviourist statements of performance, which mould people to behave in particular ways – as such, they can be seen as being more about training (at best) and indoctrination (at worst), rather than as educational (see Kelly, 2004).

The above objections to tightly specified assessment criteria suggest that it is extremely unwise for Scotland to take Curriculum for Excellence in this direction, which moves the practical curriculum yet further from the aspirational goals set out in early documentation. It is clear that such specification has political appeal, offering the (arguably spurious) opportunity to track achievement; moreover, it can be framed as a response to those teachers who have long decried the Es & Os as too vague. Nevertheless, this spiral of specification is dangerous, and Scotland would do well to learn from the prior history of curriculum reform. A salutary example lies in the GNVQ model: initially this was specified as units, elements and performance criteria; later specification added range statements and evidence indicators, as curriculum designers engaged in a Holy Grail quest to achieve total clarity. The result was anything but clear; instead teachers experienced all of the issues outlined above, as courses became increasingly complex, bureaucratic and difficult to teach.


Burke, J. (ed.) (1995). Outcomes, Learning and the Curriculum: Implications for NVQs, GNVQs and other qualifications. London: Falmer Press.

Hyland, T. (1994). Competence, Education and NVQs: Dissenting Perspectives. London: Cassell.

Kelly, A.V. (2004). The Curriculum: theory and practice, 5th edition. London: Sage.

Jessup, G. (1991). Outcomes: NVQs and the Emerging Model of Education and Training. London: Falmer Press.

Ormond, B.M. (2016, in press). Curriculum decisions – the challenges of teacher autonomy over knowledge selection for history. Journal of Curriculum Studies.

Priestley, M., Biesta, G. & Robinson, S. (2015). Teacher Agency: An Ecological Approach. London: Bloomsbury Academic.

Priestley, M. & Minty, S. (2013). Curriculum for Excellence: ‘A brilliant idea, but. . .’. Scottish Educational Review, 45 (1), 39-52.

Wolf, A. (1995). Competence-Based Assessment. Buckingham: Open University Press.

Scotland’s review of educational governance: where should it take us?

The recent publication of the document Empowering teachers, parents and communities to achieve excellence and equity in education: A Governance Review is potentially the most significant watershed moment in the history of Curriculum for Excellence (and we seem to have had a few of these recently!). Despite the usual self-congratulatory rhetoric, this publication represents a significant recognition that things need to change. For the first time, the government is seriously posing the question of what is required to successfully enact the curriculum, as opposed to simply shoehorning the new curriculum into existing structures. Radical change to Scotland’s educational governance could be on the cards. But what does it all mean? And where should we be heading if we are serious about supporting genuine change in schooling in Scotland?

While reading the document, I was struck by the following, paraphrased from the OECD publication Governing Education in a Complex World:

Successful systems, however, are those where governance and accountability are inclusive, adaptable and flexible. Roles and responsibilities across the system must be clear and aligned; teachers, practitioners, schools, early learning and childcare settings and system leaders should collaborate across effective networks to improve outcomes; parents and communities require to be engaged; and funding and decision making should be transparent. (p.4)

These seem to be admirable principles. I offer some thoughts here about issues that should be taken into account if we are indeed serious about updating the current governance system to meet the needs of a modern system of school education.

  • Form should follow function. It is good to see mention of new structures (for example, regional support organisations), rather than an acceptance that existing structures (e.g. local authorities and Education Scotland) will simply take on new functions. We should be clear by now that prevailing cultural norms in organisations can preclude them from buying into radically new ways of thinking about education – and, after all, CfE was supposed to be exactly that. It is therefore vital that the review should first and foremost consider what the function of the new, strengthened ‘middle’ tier of the system (or meso-level) should be. In my view, it should not be about producing reams of additional guidance, as has largely been the function of Education Scotland’s curriculum development endeavours. Nor should it be about mirroring the inspection process, as has largely been the case within local authority quality improvement procedures – we have an inspectorate to do that. Instead we need to develop a view of the middle as being about support – expert advice and hands-on leadership – for curriculum development. This in turn will allow us to develop the structures to achieve these goals. The review document talks about regional organisations and local clusters. These seem to be logical developments, but they will not reach their potential if: 1] existing organisations (with their agendas and power structures) are left in place; and 2] there continues to be a lack of clarity about the proper function of meso-level structures. Thus, clarity of function must precede reform of governance.
  • Autonomy is not the same as agency. I partly welcome the calls to devolve decision-making to schools. Subsidiarity is a worthy goal; however, it comes with many dangers. We should avoid simplistic talk about empowering schools. Research suggests that simply granting autonomy to schools is problematic. Schools do not necessarily have the expertise to develop the curriculum. Autonomy can easily lead to the reproduction of habitual forms of practice – going with the flow, recycling old solutions, etc. – rather than genuine innovation. As we have argued in our recent book, teacher agency is important, and this requires a number of conditions:
    1. Skilled and knowledgeable teachers who have a wide repertoire of responses upon which to draw, and who are able to work from foundational educational principles.
    2. Genuinely different future imaginaries about what is possible. We have found that the role of external agents in stimulating new thinking is vital here.
    3. Access to resources to support curriculum development. Time is a key issue here, but cognitive resources (e.g. from research) are crucial. Here, again, the role of external agents is important – especially to facilitate access to new ways of thinking and doing, but also to act as leaders of a curriculum development process.
    4. A comprehensive understanding of the system features that enable and constrain curriculum development.

The key point here is that schools and teachers may lack the capacity to work in this way. Local clusters of schools working together are helpful, but there still needs to be an infrastructure to support such working. Regional structures can provide this, for example by making leaders available for curriculum development processes. We therefore need, as part of this process, to think not only about establishing the structure, but also about building system capacity – a cadre of expert teachers, for instance, who can work in their own schools and spend part of the week supporting colleagues in other schools. The large numbers of teachers currently undertaking funded Master’s programmes are an obvious pool for this.

  • Accountability should serve rather than drive curriculum development. All too often, we have seen curriculum development derailed by subservience to accountability processes. This can take the form of risk aversion (as a barrier to innovation), strategic compliance with new policy, or, worse still, game playing. The governance review poses some questions about accountability. Scotland should heed the message currently being disseminated in Wales, by Graham Donaldson amongst others – ‘let’s get the curriculum right, then worry about accountability’. Nevertheless, there are some principles that can be considered now, as we develop new forms of governance. One is that, as stated above, we need to be absolutely clear about the function of the ‘middle’. If it is reduced to accountability, it will continue to shape school practices in unhelpful ways, as we have seen in recent years. We need to be clear about what attainment data can (and should not) be used for. And we need to think carefully about the ways in which accountability mechanisms impact on school practices. How are self-evaluation frameworks such as HGIOS being used in schools: instrumentally (in a tick-box fashion) or developmentally? And should we be moving away from the notion of external inspection as a putatively ‘objective’ process, towards inspections that accept the contextual nature of schools, and which involve teachers much more as peer and self-assessors in the inspection process? Apart from anything else, this is excellent professional development for teachers.

Finally, John Swinney states in the foreword to the review document that “This governance review offers an opportunity to build on the best of Scottish education and to take part in a positive and open debate”. All teachers should be contributing to this debate.


  1. For example, the unnecessary statement on page 5 that “Delivering Excellence and Equity in Scottish Education, builds on an impressive track record of improvements and reforms which have been driven forward across education and children’s services in recent years.”