Third set of principles
The other thing to remember is that even if you’re leading the evaluation, it’s not your evaluation. One thing you don’t want to create is an “us and them” division within a project, where teachers provide data for the researchers. Education research should be designed with the end user in mind, and the end users are educators; they know best what they need to know. And everyone in the project is bound to have a good idea about research questions (I refrained from saying “better idea”, but that’s probably true too). So the research questions, survey design and sources of data all need to be collaboratively created, with practitioners and (if they’re interested) students. If there are other practitioners who want to contribute to the evaluation and to writing the report and any papers coming out of the project, they have a right to do that too, and should be included. I know of some projects (none I’ve been involved with, thankfully) where the academics have simply gone to ground with the results and, months later, have a paper published, without offering anyone else within the project the opportunity to be involved and get a publication out of it. Which isn’t on. The AMORES project brought some of the schoolchildren along to the final conference. This shouldn’t really be exceptional, but it still is. Arguably it’s the learners who are the rationale for doing all of the research in the first place. (A competing argument is that it’s our mortgage lenders who are the rationale for doing it, but that’s another post entirely.)
So .. #3 evaluation design should be egalitarian, inclusive, participative.
Now would probably be a good time to mention ethics, as it brings together all of the principles we’ve discussed so far.
Obviously everyone who takes part in the project needs to be protected. Everyone taking part has the right to anonymity, so I usually get students to adopt a pseudonym for all interactions. There’s a piece of paper somewhere that matches pseudonym to real name (in case a student forgets and needs to look it up), but that never goes online and never leaves the classroom. Protecting the identities of staff is also important if that’s what they want, as is acknowledging their participation if that’s what they want instead. Just remember to ask which it is. But ethics is really the underlying reason why you want the evaluation to be useful (you’re obliged ethically to put something back into the sector if you’re taking time and resources from it) and to be egalitarian (everyone deserves a chance to be published and to have a creative input to the process).
So #4 Be ethical
The fifth set of principles is possibly the most difficult to put in place. Up to now, every principle we’ve put in place has led to a whole set of different data, from different sources, that just happen to be around, contributed by, and perhaps analysed by, a lot of different people. At this stage, it could be seen as a bit of a mess.
However, that’s where the skill of the evaluator comes into its own. It’s taking these disparate sets of data and looking for commonalities, differences, comparisons, and even single case studies that stand out and elucidate an area on their own. The strength of having such disparate sets of data is that they are:
#5.1 eclectic, multimodal, methodologically mixed
However, it’s still necessary to put a minimal (remember: light touch) but more robust evaluation in place at the core, in the form of a survey or questionnaire. This needs to contain a pre- and post-test and be open to quantitative analysis (some people only take numbers seriously). This runs against the ideas of being aligned with practice and opportunistic, since it imposes a minimum level of participation, but as long as it’s not too onerous, I don’t think it’s too much to ask. Usually, though, this is the bit that takes the most struggle to get done.
So .. #5.2 quantitative comparative analysis, demanding only a minimum of imposed involvement from practitioners to complete, provides an essential safeguard that ensures the robustness of the research
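To make #5.2 a little more concrete, here’s a minimal sketch of the kind of pre/post comparison such a core survey makes possible. The file name, column names and the choice of a paired t-test are illustrative assumptions on my part, not the actual AMORES or BIM Hub instruments or analysis.

```python
# A minimal sketch of a pre/post comparison on the core survey data.
# File and column names are illustrative assumptions only.
import pandas as pd
from scipy import stats

# Hypothetical file: one row per pseudonymous student, with matched
# pre- and post-test scores from the core questionnaire.
scores = pd.read_csv("core_survey_scores.csv")  # columns: pseudonym, pre_score, post_score

# Paired comparison: did scores change between the pre- and post-test?
t_stat, p_value = stats.ttest_rel(scores["pre_score"], scores["post_score"])

print(f"Mean pre-test score:  {scores['pre_score'].mean():.2f}")
print(f"Mean post-test score: {scores['post_score'].mean():.2f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

If the scores aren’t the sort of thing a t-test suits, a non-parametric alternative such as scipy.stats.wilcoxon can be swapped in the same way; the point is simply that a matched pre/post design gives you some numbers to compare.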
However, this is not the only robust aspect. Even though the remainder of the data are opportunistic, they are so wide-ranging that they will inevitably provide qualitative data in sufficient quantity, and with sufficient triangulation, for this to be an effective evaluation in itself. It’s just good to have some numbers in there too.
Making the best of these elements, post hoc, is the most difficult aspect of this style of evaluation, and requires a bit of time just sifting through everything and working out what it is you’ve actually got. Allow a week without actually getting anything concrete done. It’s OK, it’s just part of the process. It requires the evaluator to synthesise the findings from each set of data, and therefore to be
#5.3 flexible, creative, patient
As Douglas Adams once said (though he was quoting Gene Fowler) “Writing is easy. All you do is stare at a blank sheet of paper until drops of blood form on your forehead.”
Finally, the outputs. Both the BIM Hub project and the AMORES project produced the same two sets of evaluation reports. Given the projects’ aims of being both useful and methodologically robust, I think having the outputs in these two forms is essential.
Typically these two forms are:
A “how to” guide – the AMORES one is at this link:
The BIM Hub one is here:
http://bim-hub.lboro.ac.uk/guidance-notes/introduction/
Both of these summarise the key points of learning from the project, in a form that lets practitioners adopt this learning and incorporate it into their own practice.
However, backing up these documents are fuller evaluation reports, detailing the data and analysis, showing how these points of learning were arrived at, and providing the evidential basis for making the claims. It isn’t essential that people read these, but they do provide the authority for the statements made in the summary documents.
Finally, both projects also include visual materials that contribute to the evidence. In the BIM Hub project, these are recordings of the meetings the students held, showing how their abilities developed over time. For the AMORES project, there are dozens of examples of the students’ digital artefacts. In short, when you’re publishing the evaluation you also want to reassure your audience that you haven’t just made the whole thing up.
i.e. the final principle: generate artefacts during the project so that, at the end, you can show that it is a real project, with real students, doing real stuff