The JISC Curriculum Design Programme meeting was held in Nottingham last week. As a new addition to the PiP team, I found the event a useful way of appreciating the wider aims of the Programme and its projects. The principal theme of the meeting was "sustaining and embedding changes to curriculum design practices and processes", and to this end we were treated to presentations, discussions and activities based on these themes. Since facilitating and sustaining institutional change is arguably the most problematic aim for projects, the sessions on change were, for me at least, among the most interesting, partly because widespread organisational change tends to be complex and in some circumstances wholly intractable.
Paul Bailey and Peter Chatterton presented some useful thoughts on the process of managing change, and Stephen Brown, who facilitated discussions and activities on the topic of sustaining change, prompted some meaningful scrutiny of project outputs and outcomes and how they might align with institutional priorities. The content of the meeting has been well documented elsewhere by Sheila MacNeill and Helen Beetham, and I have no wish to reinvent the wheel. Perhaps most interesting, however, was a topic that wasn't on the meeting agenda: evaluation, and the varying approaches projects intend to take towards it.
The PiP project has now entered an intensive phase of evaluation, and I was therefore keen to explore the evaluation approaches that other projects intend to adopt. The Marketplace Activity, in which projects presented posters on project outcomes and technical systems, provided an ideal opportunity for a series of informative discussions with other project team members. As a researcher who has in the past emphasised measurable research, my view has traditionally been that evaluation must attempt to adhere to the core criteria of research. This generally entails the following:
- Evaluations should exhibit universality (i.e. our ability to subject research to replication and to yield similar results);
- Evaluations should demonstrate control (i.e. our ability to minimise the influence of factors that could compromise universality); and
- Evaluations should be measurable (i.e. our ability to record and/or observe the phenomenon we purport to be investigating).
Yet the bewildering thing about the JISC Curriculum Design Programme is that many projects are on the brink of evaluating phenomena which evade satisfactory evaluation. To be sure, tangible project deliverables, such as the PiP online CC approval system currently under development, or improvements to business processes, can be subjected to the criteria of research; but what about the principal themes of our meeting in Nottingham, i.e. institutional change and sustainability? Can organisational change ever be satisfactorily measured? How can control possibly be exerted when the variables involved in facilitating change in HE are so multifarious? And even if the necessary control could be exerted, how could the methodology be replicated when HE institutions differ in so many important respects?
These questions are largely rhetorical, and no one attending the meeting (that I spoke to) had satisfactory answers, at least not yet. But I was nevertheless cheered to discuss the conceptual issues of this conundrum with Paul Bartholomew (Project Manager of T-SPARC) and Rachel Harris (Inspire Research), both of whom acknowledged that the starting point for these aspects of project evaluation needs to be different: using alternative and informal forms of evidence to support the evaluation process. Where possible I would like to see the PiP project making use of robust and sound evaluative techniques, and for many things I have no doubt we shall, but the acceptance of alternative forms of evidence may in the end be all that is available to measure the largely intangible phenomenon of institutional change.