On gobbledygook

One of the tasks for participants in the First Steps in Learning and Teaching course that I’m helping to convene at Brookes is to create an annotated bibliography. The discussion that has spun out from this has been fascinating. One of the participants drew particular attention to this question from Cathy Davidson: “if ‘research paper’ is a category that invites, even requires, linguistic and syntactic gobbledygook?”

Academic gobbledygook is one of my main bugbears. There are many different things that can make a piece of writing impenetrable to readers, some of which are genuinely difficult to avoid. One comes from the idea of threshold concepts: steps in learning that change your perspective on a field. These can be particularly difficult to teach, not least because, once you grasp them, it’s very difficult to see things the way you used to see them, and therefore to communicate with people who haven’t yet acquired that mindset. The concepts, and the language used to describe them, become so intrinsic to the way you see the world that talking with people who haven’t crossed that threshold becomes particularly difficult.

Another problem is that within a particular discipline some terminology is used so regularly that you’re no longer aware it’s specialist terminology at all. I used to research subcultures in virtual worlds, and I once ranted to a friend because an editor had asked me to explain the words “furry” and “steampunk” in a paper; I felt I was being asked to talk down to my audience. Rather than commiserating, my friend expressed doubts about my sanity, suggesting I’d probably got too far into my own little world, because those definitely aren’t words that everyone would recognise. Terminology is useful when it encapsulates a range of ideas that would otherwise take a long time to unpack, but we always need to be prepared to unpack it for our audience. When running a session on technology enhanced learning I always ask people to flag when I’ve used a term or abbreviation they don’t know.

What does annoy me in those situations, though, is people who act outraged when you use a phrase they don’t know. I was at one workshop on PDAs (that dates it) where someone in the audience got very belligerent with the person running it because she hadn’t said what PDA stood for. It was in the title of the session. Why attend if you don’t know what it is? Or spend ten seconds googling it, or work it out from context? A lack of shared terminology is a failing of two people, not just one. It’s not as if spelling it out helps anyway: does anyone know what DVD stands for? Does knowing that help you watch a movie?

I think the thing we can guard against most easily in the fight against gobbledygook is the tendency to pack ideas together so densely that the sentence structure, and the conceptual trail of the argument, become difficult to follow. Sometimes this is sloppiness, but it is often because the writer is unsure about the validity, or even the academicness, of the subject they’re writing about. The temptation is to dress it up with fancy words, or cram two previously unmatched ideas together and expect the audience to work out for themselves what those two concepts actually mean when blended. When I next come across an example of that I’ll add it to the comments below.

Being a reviewer for a few journals has helped me enormously here in being prepared to go through something a colleague or submitter has written and say “I don’t understand what this means”. I could probably have a go at working it out, or forming that synthesis in my head myself, but that’s not something a reader should have to do. That’s the author’s job.

Another thing we do is place so many caveats and double-negatives around a statement that we avoid actually making it. It’s not inconceivable to doubt that the absence of double-negatives can lead to the lessening of the propensity to reduce the failings in sensemaking of many statements.

Gobbledygook is simply poor communication. However, there’s a form of academic writing that starts with a phrase like “it is not inconceivable that” and then uses this as an axiom in an argument. The master of this is Greenfield, who’s built a whole new career out of making statements like “children are on social media so much, it is not inconceivable that it will have an effect on them” and then working up to “given that there is such an effect, what can we do to limit it?” (I’m paraphrasing, but that’s how the reasoning works.) The discussion then subtly glosses over the fact that the original statement is totally unsupported and a bit weaselly. At no point is there actually any lying going on; it’s just a subtle misdirection that sounds like a valid argument. At this point we’ve got past gobbledygook and are into the realm of bullshit, which I will post about next time.

 

For great examples of bullshit, one of my favourite books is Fashionable Nonsense by Alan Sokal and Jean Bricmont. I think if more people were primed to detect it, and had a little less patience with it, we would all contribute to making academic writing a bit more accessible.

 

On escape, enhancement and exceptionalism

Last week I did a guest spot in a lecture at (although whether it was really “at” is a moot point) Surrey University. The students’ regular lecturer is Lee Campbell and we share an interest in the idea of embodiment online, particularly its role in performance (although Lee knows way more about the performance angle). His module is actually on Digital Performance.

I used this as an opportunity to bring together three different (though linked) strands of what I’ve been researching and writing about: digital identity, online spaces and embodiment. The linking theme of all of these is that we aren’t just physical beings; our interactions with technology are so integrated with our sense of selves, spaces and bodies that the technology is an extension of us. Understanding who we are and (more relevant to using TEL) how we act online can be informed by adopting this perspective. And by “act” I mean it in both senses, or rather that an essential part of acting (as in performing an action) is acting (as in performing).

The question about whether I was actually “at” Surrey Uni is a debatable one because I was at home, on the sofa. My image and voice were on a screen in a classroom there through the auspices of Skype. But where I actually was was kind of my topic.

To put the presentation together, I pulled in bits from three earlier presentations. The one on identity was re-used from my SOLSTICE keynote from a couple of years ago, where I contrasted the development of my identity in the physical world (cue pictures of me as a baby, in toddler’s clothing, in a Baggies strip, and as a 15-year-old nerd) with four stages of my avatar (standard start-up form, first bought skin, more textured skin, sitting in a home). While describing this I said that my identity up until 15 was purely an offline one because … because the internet hadn’t been invented.

I’d never actually mentioned that in a presentation before, but of course, for students now, that’s quite a novel idea. OK there was ARPANET, but I was nearly 30 before anyone I knew had actually been online.

That aside led to my being positioned with a weird sort of authority “as someone who was around before the internet”. The question I was asked, though, was “as someone who was around before the internet, what do you think about the use of virtual worlds and games as a means of escapism?”

Working in virtual worlds, it’s a question I’ve been asked before. The games-based learning field leads into it too. Recently at a conference I was asked whether we should be encouraging students to play games.

My answer is always a bit inconsistent, because I’m not sure which assumptions are leading people to ask the question, so I try to answer both of the likely ones simultaneously.

The first assumption is that there’s something wrong with escapism. There isn’t. I’ve spent a large proportion of my life escaping to some fictional world or other. A central thesis of Saler’s “As If” (“As If: Modern Enchantment and the Literary Prehistory of Virtual Reality” – an exceptional book, I assume, from the small amount I read before I lost my copy – or maybe it escaped) is that fantastical worlds are made so complete precisely so that they can function as places to escape to. And those sorts of fictional worlds have been around for a long time. Games have been with us for thousands of years. I’m sure the reason our ancestors sat around a few pebbles and lines drawn in the sand was to escape from the daily pressures of hunting down bison or evading sabre-toothed tigers (editorial caveat: I’m not an anthropologist).

But this isn’t the sort of escape that the gaming naysayers are worried about. If someone is curled up in an armchair with a Trollope (I’m thinking Anthony but Joanna would do), that is A Good Escape, but if they’re firing away at the Covenant in the latest Halo, that’s a Bad Escape. No coherent reason is given for distinguishing between the two.

The other assumption is that escapism is happening when it isn’t. This applies particularly when people are contesting virtual worlds. The name Second Life doesn’t help in this argument, in that it implies something other than normal life. My response to these people is that the online world is an authentic experience for those who take part in it; that the relationships built up there are as real as those in the physical world to those who want them to be; and that this extension of our bodies, lives, selves and spaces into the virtual is a real change that society has experienced. Ignoring it is the real escape.

So … as someone who was born before the Internet my position is: “If you haven’t fought off the Reapers in Mass Effect then you need to ask yourself the question, “why not? What are you trying to escape from?””

The reason for the double standard is, of course, that technology is still seen by many as exceptional to the human experience, not an integral part of it. It’s something that’s added on to what we are, and that we are losing ourselves in. It has at its root a technophobia or even neophobia: technology itself is seen as other, and any change towards incorporating it is a move away from some imaginary golden-age ideal of what being human is.

This issue has cropped up in conversations at work about the idea of Technology Enhanced Learning, with reference to beliefs, anxieties and miscommunications about the idea of Enhancement. Some people see the TEL agenda as being one of trying to impose some sort of technocratic imperialism on them. Others feel that they are being singled out for enhancement because they aren’t performing.

This is difficult to appreciate, and to respond to, for someone who just gets excited about new tech. For me, change (with or without technology) is an essentially human thing. The first caveman who picked up a hand axe must have been excited about it; they’re such undeniably cool things. But even then there must have been other people who looked on them with suspicion and carried on hitting things with ordinary shaped rocks.

I tried to encapsulate these various positions into a single matrix (because simplifying things is how I deal with them). This is what I came up with:

|  | Change is for all | Change is imposed on the disadvantaged |
| Technological change is an essential part of the human condition | Extropianism | Technological dystopianism / class warfare |
| Technology is dehumanising | Technophobia | The eugenics debate |

These could be expanded with some examples, but that’s for another time. For a little self-test, here’s an item taken from today’s headlines.

http://www.telegraph.co.uk/news/science/science-news/12025316/Humans-will-be-irrevocably-altered-by-genetic-editing-warn-scientists-ahead-of-summit.html

Now I admit there are ethical issues, and we need to engage our brains before making a decision, but isn’t the initial emotional instinctive reaction to the idea of GM babies “woah, cool”?  Isn’t that the most human response?


On evaluation design pt 2.

Third set of principles

The other thing to remember is that even if you’re leading the evaluation, it’s not your evaluation. One thing you don’t want to create is an “us and them” division within a project, where teachers provide data for the researchers. Education research should be designed with its end users in mind – educators – and they know better than anyone what they need to find out. And everyone in the project is bound to have a good idea about research questions (I refrained from saying “better idea”, but that’s probably true too). So the research questions, survey design and sources of data all need to be collaboratively created, with practitioners and (if they’re interested) students. If there are other practitioners who want to contribute to the evaluation and to the writing of the report and any papers coming out of the project, they have a right to do that, and should be included. I know of some projects (none I’ve been involved with, thankfully) where academics have simply gone to ground with the results and months later have a paper published, without offering anyone else within the project the opportunity to be involved and get a publication out of it. Which isn’t on. The AMORES project brought some of the schoolchildren along to the final conference. This shouldn’t really be exceptional, but it still is. Arguably it’s the learners who are the rationale behind doing all of the research in the first place. (A competing argument is that it’s our mortgage lenders who are the rationale for doing it, but that’s another post entirely.)

So .. #3 evaluation design should be egalitarian, inclusive, participative.

Now would probably be a good time to mention ethics, as it brings together all of the principles we’ve discussed so far.

Obviously everyone who takes part in the project needs to be protected. Everyone has the right to anonymity if they are taking part, so usually I get students to adopt a pseudonym for all interactions. There’s a piece of paper somewhere that matches pseudonym to real name (in case a student forgets and needs to look it up), but that never goes online and never leaves the classroom. Protecting the identities of staff is also important if that’s what they want, but so is acknowledging their participation if that’s what they want instead. Just remember to ask which it is. But ethics is really the underlying reason why you want the evaluation to be useful (you’re obliged ethically to put something back into the sector if you’re taking time and resources from it) and to be egalitarian (everyone deserves a chance to be published and to have a creative input to the process).

So #4 Be ethical

The fifth set of principles is possibly the most difficult to put in place. Every principle so far has led to a whole set of different data, from different sources, that just happen to be around, contributed by, and perhaps analysed by, a lot of different people. At this stage, it could be seen to be a bit of a mess.

However, that’s where the skill of the evaluator comes into its own. It’s taking these disparate sets of data and looking for commonalities, differences, comparisons, and even single case studies that stand out and elucidate an area on their own. The strength of having such disparate sets of data is that they are:

#5.1 eclectic, multimodal, mixed methodologically,

However, it’s still necessary to put a minimal (remember, light touch) but more robust evaluation in place at the core, in the form of a survey or questionnaire. This needs to contain a pre- and post-test and be open to quantitative analysis (some people only take numbers seriously). This runs against the ideas of being aligned with practice and opportunistic, as it’s an imposed minimum level of participation, but as long as it’s not too onerous I don’t think it’s too much to ask. Usually, though, this is the bit that takes the most struggle to get done.

So .. #5.2 quantitative comparative analysis, demanding only a minimal imposed involvement from practitioners, provides an essential safeguard to ensure the robustness of the research
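To make that concrete, here’s a minimal sketch of the sort of paired pre/post comparison I have in mind. It isn’t taken from any actual project: the file name and column names (pre_score, post_score) are hypothetical, and the choice of a Wilcoxon signed-rank test is just one sensible option for ordinal, Likert-derived scores, not the only one.

```python
# Minimal sketch of a paired pre/post comparison (hypothetical file and columns).
import pandas as pd
from scipy import stats

responses = pd.read_csv("pre_post_scores.csv")  # one row per student (hypothetical)

# Only students who completed both the pre- and post-test can be compared,
# otherwise the analysis isn't genuinely paired.
paired = responses.dropna(subset=["pre_score", "post_score"])

# Scores built from Likert items are ordinal, so a non-parametric paired test
# (Wilcoxon signed-rank) is a safer default than a paired t-test.
statistic, p_value = stats.wilcoxon(paired["pre_score"], paired["post_score"])

print(f"n = {len(paired)}")
print(f"median pre  = {paired['pre_score'].median()}")
print(f"median post = {paired['post_score'].median()}")
print(f"Wilcoxon statistic = {statistic:.2f}, p = {p_value:.3f}")
```

Even something this small satisfies the “some people only take numbers seriously” requirement without asking much more of practitioners than handing out the survey twice.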

However, this is not the only robust aspect. Even though the remainder of the data are opportunistic, they are so wide-ranging that they will inevitably provide qualitative data in sufficient quantity (and sufficiently triangulated) that this would in itself be an effective evaluation. It’s just good to have some numbers in there too.

Making the best of these elements, post hoc, is the most difficult aspect of this style of evaluation, and requires a bit of time just sifting through everything and working out what it is you’ve actually got. Allow a week without actually getting anything concrete done. It’s OK, it’s just part of the process. It requires the evaluator to synthesise the findings from each set of data, and therefore to be

#5.3 flexible, creative, patient

As Douglas Adams once said (though he was quoting Gene Fowler) “Writing is easy. All you do is stare at a blank sheet of paper until drops of blood form on your forehead.”


Finally, the outputs. Both the BIM Hub project and the AMORES project have the same two sets of evaluation reports. Given the aims of the project – to be both useful and methodologically robust – I think having the outputs in these two forms is essential.

Typically these two forms are:

A “how to” guide – the AMORES one is at this link:

http://www.amores-project.eu/news/why-the-amores-teaching-methodology-is-the-secret-ingredient-to-teaching-literature

The BIM Hub one is here:

http://bim-hub.lboro.ac.uk/guidance-notes/introduction/

Both of these summarise the key points of learning from the project, in a form that lets practitioners adopt this learning and incorporate it into their own practice.

However, backing up these documents are fuller evaluation reports detailing the data and analysis, showing how these points of learning were arrived at, and providing the evidential basis for the claims made. People don’t have to read these, but they do provide the authority for the statements made in the summary documents.

Finally, both projects also include visual materials that contribute to the evidence. In the BIM Hub project, these are recordings of the meetings the students held, showing how their abilities developed over time. For the AMORES project there are dozens of examples of the students’ digital artefacts. In short, when you’re publishing the evaluation you also want to reassure your audience that you haven’t just made the whole thing up.

i.e. the final principle: generate artefacts during the project so that at the end you can show that it is a real project, with real students, doing real stuff.


On evaluation design pt 1.

Some thoughts on my approach to evaluation design

I’ve just finished another internal evaluation of a project. This time it’s the AMORES project (http://www.amores-project.eu/). Reflecting on the evaluation, and on the similarities with the previous evaluation I did, led me to some realisations about the sort of evaluations I conduct, how they are designed, and what their essential elements are. I thought I’d collect these together into a couple of blog posts, mainly so that the next time I design one I can remember the best of what I did before.

I should specify that I’m discussing particularly internal evaluation. For those not familiar with educational projects, most of them have two evaluation strands. One is the external evaluation; this is conducted by someone outside of the project who examines how well the project functioned, whether it met its goals or not, how well communications worked within it, and so on. It’s part of the Quality Assurance, compliance and accountability process.

The internal evaluation asks questions of the learners, teachers and anyone else involved with the educational aspects to identify good practice, look for tips that can be passed on, and encapsulate the overall experience for the learners and educators. In short, it’s there to answer the research questions addressed by the project.

There’s a good deal of overlap between the two, but they are essentially different things, and should be done by different people. You merge the two at your peril, as part of the external evaluation is to address the success of the internal evaluation. And you do really need both to be done.

I’ve been the internal evaluator on 13 education projects now, and the last two (the other one was the BIM Hub project, http://bim-hub.lboro.ac.uk/) were very similar in evaluation design; I think I’ve now cracked the essential elements of what an internal evaluation should look like.

Part of the issue with being an internal evaluator is that, even though you’re part of the project team, you’re not (usually) one of the teachers. And teachers on projects have their own agenda, which is to teach (obviously) and, quite rightly, this takes precedence over all the analysis, research and general nosiness that a researcher wants to conduct.

For this reason, an evaluation design needs to be as unobtrusive as possible. Most education activities generate a lot of data in themselves – artefacts, essays, recordings of teaching sessions – all of which can be used without placing any additional burden on the learners or teachers. Sometimes the evaluation can drive some of the learning activities. For example, you need students’ perceptions of their learning, so you set a reflective essay as an assignment. You need something to disseminate, so you set students the task of creating a video about their experiences, which can also serve as evaluation data. When we’ve done this, not only has it proved to be a very useful set of data, it has also been an excellent learning opportunity for the students. Teaching generates a lot of data already, too, such as grades, results of literacy testing, pupil premium figures and tracer studies. As long as the institution releases the data, this is material you can use with no impact on the learners or teachers.

So here’s the first set of criteria. Evaluations must be:

Unobtrusive, opportunistic, aligned with teaching practice

The second set of criteria is about having an evaluation that actually makes sense. There’s no point gathering a set of data that is more than you can deal with (having said that, every project I’ve done has). The data you collect also have to be targeted towards finding out something that will be of use to other practitioners once you’ve finished the project (I’ll come to outputs later). The RUFDATA approach is a good one here. There’s also no point trying to gather so much data that no one will look at the surveys you’re distributing, or complete them if they start. For survey length, the principles that seem to work are:

Quantitative questions – no more than one page (and use a 5-point Likert scale, obviously – anything else looks ridiculous – but add “don’t know” and “N/A” as options too; there’s a sketch of how these can be handled after the next point).

Free text questions – well, no one wants to write an essay, and if it’s on paper you’ll have to transcribe the answers at some point anyway. As far as numbers go, a good rule of thumb is that if it’s a number you’d see in a movie title, it’s OK. So seven, or a dozen, or even 13, is fine. More than that is pushing it (and if you’re going to ask 451 or 1138 questions then full marks for movie trivia, but minus several million for being a smart arse). The point of the movie title rule is that if you see your research questions as characters in the narrative you’re going to weave, you don’t want to overcrowd your story. You want all your questions to be Yul Brynners rather than Brad Dexters.
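As promised above, here’s a minimal sketch of how responses to a single 5-point Likert question, with “don’t know” and “N/A” options, could be coded. The response labels here are hypothetical; the point is simply that the two extra options get counted but are kept out of the numeric scale, so they don’t distort any averages.

```python
# Minimal sketch of scoring one 5-point Likert item (hypothetical labels).
from collections import Counter

LIKERT_SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
    # Deliberately unscored: counted, but excluded from numeric summaries.
    "Don't know": None,
    "N/A": None,
}

def summarise(responses):
    """Count every option and average only the scored (1-5) responses."""
    counts = Counter(responses)
    scored = [LIKERT_SCALE[r] for r in responses if LIKERT_SCALE.get(r) is not None]
    mean = sum(scored) / len(scored) if scored else None
    return counts, mean

# Made-up answers to a single question, just to show the output.
answers = ["Agree", "Strongly agree", "Don't know", "Agree", "Disagree", "N/A"]
counts, mean = summarise(answers)
print(counts)
print(f"Mean of scored responses: {mean:.2f}")
```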

So: useful, targeted, light touch, practicable

A third set of principles is based around the question of whose research it is anyway, which will be covered when we reconvene in the next post.


Joining Brookes

As I’ve just started working as a Senior Lecturer in OCSLD (the Oxford Centre for Staff and Learning Development) at Oxford Brookes I felt it was time to start blogging regularly (not trying to blog, do or do not, there is no try). As there’s not much yet to report about my work there, this first one is more about where I’m at, and why the change, and what changes are likely to be coming up.

I’ve posted a lot on my Facebook account about these changes using the metaphor of The New 52. Having reached 52 years old this year, this feels like a mid-way point in my life (optimistic, right?), so out with the Old 52 and in with the New. Mainstream comics fans will recognise the metaphor: a few years ago DC Comics rebooted its whole line under the banner of the New 52, and every comic started again from #1 – even the ones that had been going for 73 years. After four years of working as a “freelance academic” I wanted a reboot: a proper job in which I had a team of people to work with, a platform from which to share my ideas, and some regularity of income, so I could focus on work rather than being distracted by the need to find more work, or by chasing clients in order to get paid for work I’d completed.

Consultancy work is excellent in many ways – you meet loads of people and get to travel to some interesting places – but an academic career really survives on publishing research, and as a self-employed consultant any time you spend writing is a financial cost. It’s one thing to write for free; it’s another for it to personally cost hundreds or thousands of pounds to take the time out to write a book chapter.

It’s not that I worked entirely on my own before the Brookes job came up – the people on the AMORES project are an excellent team, for example – but they’re distributed across Europe and we only meet four times a year. My daily interactions are predominantly with my cats. And it’s ironic, I know, or perhaps hypocritical, to work mainly (at the moment) on how best to facilitate online teamworking and then bemoan the fact that I work with people entirely online.

I would never argue that face-to-face collaboration can be entirely substituted by online interactions. I am enough of an extrovert to need contact with people – to feel motivated by talking to others – but also enough of an introvert to need to work on my own after a couple of days of interaction. A 0.6 FTE post is therefore ideal for me. I’ll still be teaching at Worcester until January, and I’ll still be working for a few clients.

However, anyone familiar with the New 52 metaphor will know that any reboot is preceded by a Crisis (deliberately capitalised cf. https://en.wikipedia.org/wiki/Crisis_%28DC_Comics%29) .

My flashpoint was the double whammy of my brother-in-law dying (the day before my 52nd birthday) and then my father dying two weeks later. The sheer admin involved is overwhelming, but of course it’s the emotional fallout that really has the impact.

September and October were the key months for me in AMORES (I’m leading the evaluation workpackage, and as everyone who will listen to me knows, without any evaluation there IS no project) and I’d planned for the majority of my commitment to be done before I started at Brookes. It didn’t get done. I’m still contracted to work on AMORES until the end of November, but the original intention was that this would be some small bits of dissemination, not writing the whole thing. So my family, my wife’s family and my AMORES commitments all collided with my new job, in the collision of parallel worlds you’d expect from a capital-C Crisis.

However, it was a good test of how accommodating my employer (errm, still got a bit of a culture shock at that word) would be in allowing me time to respond to all the various intruding realities. Last week saw the completion of the AMORES report (180, count ’em, 180 pages long), the spreading of my father’s ashes (which ended up a farcical rather than solemn occasion), and everything is now sort of coming together on an even keel.

My main roles are to help implement the TEL framework here, and support the development of the OCSLD’s online presence. I’m also co-tutoring on a couple of modules. More things will emerge as I’m here longer. I will just keep stepping up and try to make myself useful. Make myself useful (no try, remember).

I’ve got an ID card and a car park pass, but then I have four other sets of those from clients from when I was self-employed. What is exciting is my own desk, an office key, my own stapler and desk tidy, and TWO hole punches.

Just in time for my honeymoon in Vietnam starting on Saturday (for two weeks). Then a conference in Dubrovnik (one week). So more testing of OCSLD’s patience.

I will be at ECEL http://academic-conferences.org/ecel/ecel2015/ecel15-home.htm presenting the hot-off-the-press findings on AMORES. If you’re there then please come up and say hello.

Is it wrong to discriminate against assholes?

I’ve just read this: http://www.economist.com/news/international/21645759-boys-are-being-outclassed-girls-both-school-and-university-and-thegap which makes interesting reading. It does raise a question for me though: is this really a problem? The generalisations it makes about the sexes (which the article admits are generalisations) are that on the whole girls are hard-working, ambitious and studious, whereas boys are lazy, aggressive and in awe of a cult of masculinity which states that being academic is effeminate (to put it mildly).

These are generalisations though. Many boys aren’t into a macho culture (or are so socially inept they are oblivious to the fact that there is one – author disclosure: that was the category I was in), and many girls are pretty dreadful.

The implicit undercurrent in the article is that this is a problem in education that we need to fix. Boys are underachieving compared to girls, but only when you take them as a whole. I’d be very interested in seeing a study in which the divisor isn’t boy/girl but asshole/non-asshole (explanation: I’d say someone who buys into macho [or macha] stereotypes is behaving pretty horrendously and therefore fulfils this definition).

The gender gap could actually just be because a bigger percentage of boys than girls are assholes. If we try this new divisor we might see no gender gap at all – just a system that is, quite reasonably, making it more difficult for people to get on because they’re complete knobheads, which they possibly deserve, and rewarding the good kids, because that’s what they deserve.

Continuing this line of reasoning, therefore, we shouldn’t look for different ways within the system to appeal to the knobheads out there; we should actually try to make social changes to reduce the number of knobheads in the education system – combat the culture of masculinity that says being studious is effeminate (or even argue that effeminacy is not such a bad thing).

Having said that, attempting a range of different learning approaches is (intrinsically and self-evidently) A Good Thing, so in trying to appeal to the less able or less interested students we’d be making education better for everyone. A lot of children are disaffected because they are bored, which is also a very rational and intelligent response. That’s down to us. But maybe the gender divide in education is down to them.

Thoughts?

The role of the teacher in learning environments

I was recently asked on Facebook by my friend Di: “I keep mulling around the importance of the practitioner as the key defining resource in learning… feeling very ‘anti’ commodification of education blah, blah again – is this anything you would be interested in? do you hit on anything like this? Would be very cool to look at how manipulation of new learning environments can be linked back to the centrality of the teacher in the educational process? Possibly? Maybe? Has it been done?”

It’s a good question. In a debate about VLEs (or, as I prefer, the US phrase LMS, since it seems more honest to refer to them as a way to “manage” learning) on an ALT mailing list, someone said that a lot of the problems with Blackboard spring from it being designed by technologists rather than pedagogues. Probably true, but the thing that indicates that teachers are still at the centre of the learning experience is that, no matter how bad the platform, a good teacher can produce a good learning experience, and no matter how great the platform, a bad teacher will create a bad one. I’ll concede the tech helps one way or another, though. This is why criticisms of PowerPoint cheese me off. PowerPoint is a passive tool: if you use it well it’s great. It’s just that too many people (and I could probably own up to this myself) use it as a prop to avoid thinking too much about the learning experience, so they produce crap. And then they switch to Prezi because they think this will make them look hip, and for a while people are impressed, until they realise it’s the same dull presentation, just with added motion sickness.

But Di’s central implicit point, I’m sure, is this: is it fair to say that new learning environments have been exploited to commodify education? The answer, unfortunately, is “yes”. I’ve been at meetings where people who should know better talk about putting stuff online so they can bring in thousands of students really cheaply. I’ve just finished a study which indicates that, outside of a few rare contexts where they can work, MOOCs do not have a sustainable business model. Students want education that is free, but they also want education that is valued, and that means qualifications. But for qualifications to mean anything they need robust assessment, and (with a few specific exceptions) that means human intervention. Which is expensive. That’s a circle you can’t square. Some people like to study for its own sake, and some people like to create learning materials for their own sake. In those scenarios, MOOCs FTW. Otherwise, no way.

There is a school of thought that you can make the materials able to be worked through on their own, that people can read them and just pay to pass a test, and that this will bring education to the masses and everyone will be able to make money from it by stacking education high and selling it cheap. This is the commodification that Di’s concerned about. It’s been the model in the private sector for decades. (Reality check – the division I’m about to describe is a generalisation; I’ve seen good and bad practice on both sides, but my experience is that it does represent the positions of the two sectors on the whole.) I’ve sat in presentations by private sector companies that think e-learning and computer-based training are synonymous. If you read the magazine “E-learning” there are loads of adverts in there about how “Content is king” (usually accompanied by a picture of Elvis).

There’s an Oxford Union debate between Diana Laurillard and some private sector representatives about whether e-learning is effective or not, and the majority of it is taken up with talking at cross-purposes about what the word e-learning means. If you’ve been working in the field of e-learning in HE for the last 20 years, you’ll think that e-learning is about forums, social media, wikis, annotated artefacts, virtual worlds and webinars. The actual content isn’t king; it’s a serf, in the background, called upon when needed, produced when needed, borrowed, shared, but ultimately of little value. That’s where the confusion in the Oxford debate arose. In the private sector there was some backlash against e-learning, with some commentators saying it didn’t work because people need to be connected – they need tutors and contact with other learners – and their definition of e-learning didn’t include those things. In HE learning environments, that’s the heart of what e-learning is. That’s what worries me about those conversations where people talk about pushing out large amounts of content and letting learners work through it on their own. It’s a retrograde step.

That’s not to say it never works. I went through a computer-based training package last month on data protection. There were bits of video and some MCQ tests, which were automatically marked, and then you had another chance if you got them wrong. I learnt quite a bit, so it worked. People have taught themselves by reading books, watching TV shows and working through MOOCs, but these are limited in subject matter, and/or limited to the particular types of learners who can learn like that. Some things need to be talked about to be understood; some things aren’t about being understood at all, but about being able to work with ambiguity; and some people just need the extrinsic motivation of being part of a learning set to get there. So we really need two terms for e-learning environments: one for environments that are just content, self-paced, and about large numbers of students learning facts; and one for environments that are still about tutor-student and student-student interactions, and are participative and experiential and enable contextual and applicable learning. We could differentiate them with qualifying adjectives – let’s call them crap e-learning and potentially good e-learning. Let’s hope that, with the buzz around MOOCs kicking off pound signs in the eyes of senior managers, the one doesn’t get conflated with the other.