Some thoughts on evidence and curriculum evaluation

“The RSE is of the view that high quality evaluation of CfE must relate to the aims of the reforms and the criticisms to which the reforms have been subjected. It should seek to identify what is different about CfE and consider the extent to which the distinctive and innovative features of the reforms are being reflected in classroom practice. It must provide evidence on what is and what is not working and why this is so, and evidence-based proposals for improving matters. It must use a variety of kinds of data in order to address the variety of criteria that might be used to evaluate CfE; all such data must be rigorously analysed. Given that CfE represents an evolving process of educational reform, there should be a commitment to an on-going and sustainable programme of evaluation.”

The above quotation is taken from a recent Royal Society of Edinburgh briefing paper, released to set out the Society’s view in advance of the forthcoming OECD review of the curriculum (see http://www.royalsoced.org.uk/cms/files/advice-papers/2014/AP14_13.pdf). In the paper, the Society bemoans the “absence of a systematic strategy” for evaluating CfE to date, and suggests that the diverse range of practices and local variability associated with CfE would make it difficult to conduct such an evaluation. Moreover, they point to the lack of baseline data for conducting the OECD review.

I have a great deal of sympathy with these views. CfE has not undergone a systematic and independent evaluation of the sort that, for example, was commissioned by the New Zealand government in the wake of their similar curricular reforms (see http://hdl.handle.net/2292/16381). Too often I have heard talk of inspection evidence and local authority audits constituting evaluation. This is highly contestable: they are certainly not independent; they are conducted for particular purposes that may be antithetical to the sorts of evaluation proposed by the RSE; and the performativity engendered by inspections is well documented (put bluntly, many schools are adept at showing inspection teams what they want to see).

This situation has been exacerbated by two additional issues. The first is a default tendency within the educational system (at all levels) to justify the success of CfE, rather than a healthier inclination to subject it to critical scrutiny. This has been evident, for example, in the feel-good approach to implementation that characterised much of the early LTS curriculum development, as well as in the understandable reluctance of schools and local authorities to air their dirty washing in public (for a notable and honourable exception see https://www.tes.co.uk/article.aspx?storycode=6296430). The second problem lies in a lack of academic research by university-based researchers into the implementation of CfE. Again there are exceptions, for example our work on teachers and curriculum development (e.g. see http://hdl.handle.net/1893/7075), and the work of Malcolm Thorburn and colleagues at Edinburgh University on outdoor and physical education (e.g. see http://tinyurl.com/kxnzcp4). But these remain exceptions. It is unfair to blame university researchers for this dearth of research; the comparative lack of research council and government money to undertake such work has made the environment difficult for many. However, we cannot escape the conclusion that there are alarming gaps in our knowledge about the new curriculum and its effects. I have added a page to this blog that provides a fairly comprehensive list of publications related to CfE (see https://mrpriestley.wordpress.com/academic-publications-related-to-cfe/).

So what should we be evaluating? A useful framing is provided by Thijs and van den Akker in their excellent overview ‘Curriculum in Development’ (see http://www.slo.nl/downloads/2009/curriculum-in-development.pdf/, p.9; incidentally, I would recommend this as essential reading for all schools undertaking school-based curriculum development). The authors talk about different levels of curriculum:

  • Supra: the supranational discourses that frame national curriculum policy. These are often generated by organisations such as the OECD and World Bank, having their origins in economic rather than educational thinking.
  • Macro: the high-level policies produced by governments. In the case of CfE, these documents are primarily the 2004 and 2006 publications that set the direction for CfE.
  • Meso: these are the policy development documents which recontextualise and operationalise the often abstract macro level policies. In the case of CfE, there has been no shortage of such guidance, from the BTC series to later briefings.
  • Micro/Nano: the day-to-day interactions at school and classroom level as the curriculum is enacted into practice by teachers.

In utilising the above framing, I would stress the following: curriculum operates at multiple levels of policy and practice; it is not simply statements of intent, but also the interactions between teachers and teachers, and teachers and pupils, as well as the in-between processes put in place by Education Scotland and local authorities. This framing allows us to think more systematically about evaluation. At the macro-level it allows us to ask questions about the coherence of policy and the types of purposes set out. At the meso-level it allows us to inquire into the processes that exist to facilitate the enactment of the curriculum, the micro-political issues which shape these processes, and the fitness for purpose of development materials. At the micro-level, it allows us to judge whether emerging practices are fit for purpose. Of course we need evidence for making such judgments. And as the OECD review approaches, I remain unconvinced that we have sufficient evidence to do so.
