Trickle up

There's a phenomenon in media studies called trickle up: the idea that cultural themes, ideas and creativity start on a smaller scale, get adopted by larger and more prominent media outlets, and then become mainstream. The tendency probably happens more nowadays because more people are creating and sharing their work globally. Advertising agencies look at what DIY animators are doing and think "that looks rather good" and copy it, or better yet employ the animators in their campaigns; film companies see character designs from cartoonists and rip them off; blogs become well-known and get turned into novels and then films. It's an important concept, I think, because so many of our cultural assumptions run the other way round: media create ideas, which then get copied by the masses. Trickle up is far more common than trickle down as far as I can see.

This blog post has been prompted by two different conversations I've had about two examples. The first came out of a tweet from my friend Sarah Sherman, who at a conference tweeted, "anyone remember the Swap Shop logo?", to which I replied that I was more of a TISWAS person myself. When I was at school there were definitely two camps: you were either a TISWAS person or a Swap Shop one. When TISWAS started it was a bit staid, one person, then two people at a desk, but it then opened out into a studio of messy kids shouting at each other, Phantom Flan Flingers, cages, gunge. If there was ever an animal segment, the animal would escape and the hosts would have to run round the studio trying to catch it, slipping on bits of flan. It was anarchic, surreal, raucous and a bad influence. Swap Shop, on the other hand, was quiet, reserved and based around the idea of exchanging material goods. The children were well-behaved and asked the guests serious questions. It was dull, worthy and set out to improve the audience. I could see why either could appeal to an audience, but for me TISWAS was full of the kind of people I wanted to hang out with; Swap Shop was full of the sort of people I resorted to hanging out with.

Where the trickle up example comes in, though, is that TISWAS started several years before Swap Shop, but on ATV, back in the days when ITV had a lot of regional programming. It was the first example of the model of an entire Saturday morning show built around segments of cartoons and so on. The BBC copied the idea, gave it the swapping twist, and made it national. Fair enough. Most TV entails stealing a format, changing it around slightly, then presenting it as your own.

Where the trickle up phenomenon becomes less acceptable, though, is in the way that the larger media outlets often create a revisionist history which pretends the previous versions didn't exist. In a recent TV celebration, the BBC did a programme called "It all started with Swap Shop", as if the precursor didn't exist, and as if they could get away with the retcon because not everyone had seen the original. That kind of revisionist history of how things were created underplays the role of the smaller outlets and the less well-known creators in order to take all the credit. It's this side of it that annoys me, not least because all too often people who aren't familiar with the origins fall for it and fail to recognise where the true origins lie.

The same has been happening on the BBC with the 50th Dr Who celebrations. Bringing back Dr Who was a difficult process for Russell T Davies, I'm sure; there were a lot of people at the BBC who needed to be convinced of its relevance to a modern audience. But all the documentaries I've seen about it (for example, the Culture Show one last week) make out that they were working from a blank slate, that there was no idea whether it would work if it was brought up to date. And yet there were six years of Dr Who, remodelled for a modern audience, keeping the essential elements but changing the tone and the direction, in the Big Finish audio material, which had been going since 1999. The nu Who exactly matches that direction and timbre, and uses many of the writers (and some of the scripts), so it's no coincidence. And yet the influential role they've had in the 21st century regeneration has been written out of the history. In fact, even before the 9th Doctor appeared, there was a TV discussion between a journalist and Sylveste McCoy (oh that dates me, spelling his name without the "r") where the arrogant twat of a journalist was saying how it couldn't work, because it was so old-fashioned in its essence, to which SM replied, "well it's been working fine for years on audio". Hopefully that segment is still around somewhere because it would be great to see him held up to public ridicule now.

So — I'm actually relieved I didn't pursue a career in media. I can imagine how galling it would be to have your contribution overlooked and written out of history, and for someone else to take the credit for your innovation. In this job, on both the education side and the academic side, people seem (on the whole) scrupulously fair about attributing ideas. The trickle up thing happens just as much: the larger organisations trawl the practice of teachers in the classroom for ideas, or ask for them to be contributed to reports, but the origin is always acknowledged. In fact there was an excellent JISC programme called LEX which focused on drawing ideas from students themselves, and integrating those into practice. Wherever possible those were credited too. If only the media industry had the same ethos.


Writing blogs

The post I just wrote is actually the first time I’ve posted something as a project requirement (as opposed to writing one for no reason). All of the project members are expected to write one, and most are new to the process so I produced a list of tips for them:

Keep it conversational and informal.
The blog should read like a stream of consciousness, written as if it's whatever comes into your head. In fact that's the best way to write it, but then review it for structure and style. It should still be readable and make sense.
The idea is to give your personal perspective, but include something factual about the project: what you've done, but also what you feel about what you've done. Ideally it should prompt more discussion, so people need to 1) have something solid about professional practice to comment on but 2) be given "permission", so to speak, to provide their own personal perspective. If it's just a flat report of what you've done it won't engage, but if it contains nothing concrete then it's of no value.
Write about disagreements, problems, and so on, but remember that this is a project blog, so overall it's best to maintain a positive tone and not be critical of colleagues or institutions.
Don't worry about length. A paragraph is about the shortest you can do; a page (e.g. 400 words) is about the longest. It's more important to post regularly than to write a lot each time.
Add the tag bim-hub, and as many others as you find useful.
I’ve tried to strike the right balance with the post I’ve just done as an example.

Anyone else got any tips for my colleagues on the project?

Starting the BIM-Hub project

I've recently started working on a new project – this one is at Loughborough University. It's taken a while to get involved; unlike my other projects this one is actually salaried – I'm an employee! – so the contract inevitably takes longer to set up than with other clients. Also, September and October were very busy with previous commitments, mainly with the Open University and CSIR Meraka, which meant I could really only get into it once I was back from leave I'd booked way back before we even got the funding allocated. Still … the 4th November finally came round and at last I could get down to working on it properly, rather than odd bits here and there squeezed between other things.

What's great, for a start, is that I'd already worked at one of the collaborating partners and with the other – the project is with Coventry University and Ryerson University. It's also a follow-up to a project that the PI and I had already completed, written up and reflected on: the Creating a Better Built Environment project. So often you start on something and need to spend a while getting a handle on everything; this time I already know most of the issues and how to evaluate them. The danger, though, is that there's a tendency to think "business as usual" – every new project, even a second iteration of a running project, throws up new things.

The first thing to get underway was the evaluation of the learning so far – although it's an 18-month project, that really only contains one academic year, so there's only one shot at everything. By the time I came on board the students were almost at the end of their first semester, so I wanted to start getting feedback on their experiences straight away.

It's always a dilemma what to go for when gathering student experiences. Obviously you survey them; that generates lots of numerical data, which always gives you something to analyse, and is the only stuff some people look at, so getting all those numbers makes everyone on the project feel secure. Immediately though we hit an impasse – 5-point or 4-point Likert scales for responses? I'm firmly on the 5-point side of the argument, but others on the team were on the 4-point side. I'm not at all convinced by the argument on the other side (in fact, if I'm asked to fill in a 4-point scale I either draw a fifth point in the middle and tick that, or refuse to fill it in). However, luckily on the team we've got a few lateral thinkers, one of whom suggested we do both, then analyse the differences. So, not only a compromise, but also another spin-off research question which we can publish on. Win-win.
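For what it's worth, here's a rough sketch (in Python, with invented numbers – none of this is our actual data or analysis plan) of the kind of comparison I have in mind: whether the spread of answers to the same question differs between the 4-point and 5-point versions, and in particular whether removing the midpoint pushes people towards agreement or disagreement.

```python
# Illustrative only: comparing responses to the same question asked on a
# 4-point and a 5-point Likert scale. All the numbers below are made up.
from collections import Counter

# 5-point group (1 = strongly disagree ... 5 = strongly agree)
five_point = [5, 4, 4, 3, 5, 2, 4, 3, 3, 4, 5, 1, 4, 3, 4]
# 4-point group (same question, no neutral midpoint)
four_point = [4, 3, 3, 2, 4, 2, 3, 3, 4, 3, 4, 1, 3, 3]

def proportions(responses, scale_max):
    """Proportion of respondents choosing each scale point."""
    counts = Counter(responses)
    total = len(responses)
    return {point: counts.get(point, 0) / total for point in range(1, scale_max + 1)}

def polarity(responses, scale_max):
    """Share of responses above and below the scale's centre."""
    midpoint = (1 + scale_max) / 2
    agree = sum(r > midpoint for r in responses) / len(responses)
    disagree = sum(r < midpoint for r in responses) / len(responses)
    return agree, disagree

print("5-point distribution:", proportions(five_point, 5))
print("4-point distribution:", proportions(four_point, 4))
print("5-point agree/disagree:", polarity(five_point, 5))
print("4-point agree/disagree:", polarity(four_point, 4))
```

If the real responses show the 4-point group piling up on one side of the centre while the 5-point group mostly sits in the middle, that's the spin-off finding right there.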

The dilemma with getting the qualitative feedback is interviews or focus groups. On the last project we interviewed the teams separately, got quite different responses from each team, and the ability to do comparative analyses between the different groups proved really useful. However, lots and lots of interviews are not only time-consuming to conduct (and we're trying to limit the impact on the students) but also a real pain to transcribe (and that's my job). The project plan calls for focus groups (if in doubt, always check back with the project plan – a really obvious thing to do but frequently forgotten), but I'm hoping to do one or two interviews too.

So far I've done two, one at Coventry f2f and one at Ryerson via GoToMeeting. Both went well: the Coventry lot needed a bit of prompting at first but soon got very talkative; the Ryerson lot needed no prompting, but audio problems meant I couldn't always hear what they said – in fact my voice coming over their speakers was all I could hear at times. However, I got a great range of data – the best you could hope for, really, in that some of what they said confirmed what we got last time, some of it was new, and between the groups there was some stuff they shared and some that was different. Of the new stuff, the Coventry students said that the chance to do virtual teamworking felt more like the real thing because they were working with external people. That's not something I'd thought of before. We think of the issues and skills of virtual teamworking as the issues of being at a distance, or cultural (or timezone) differences, or institutional differences, but the outward-facing aspects of the project were also something they found a challenge (not in the sense of it being difficult, but in the sense of it being something they had to address and found to be a valuable experience).

What was also reassuring was that the answer to my last question ("how do you feel about being part of a research project?") was a very positive one for both groups. We so often hear that, in the age of the "student as customer" (grrr), students want to be cosseted and spoon-fed – and won't take on any risks because they want value for money. Both the British and Canadian students were bewildered that this should even be an issue. In Coventry I got puzzled looks and the answer "well we volunteered for it", and in Ryerson it was a jubilant "we're pioneers". Reassuring that educational research is not meeting any flak from the student end. Perhaps we can start being a bit less hesitant about doing it.

New courses in Second Life

I'm at a meeting of the Virtual Worlds in Education Roundtable (twice – I mean I have two avatars there) and the subject is what's new this academic year in virtual worlds. For the first time in a long time I'm not involved in any teaching inworld, which is a disappointment, and it's quite worrying that this might imply a downward trend in their use in education. Some of the others around the table are doing stuff though, and this one looks very interesting … I'll paste the entire course info here in case anyone else is interested.

++++++++++++++++++++++++++++++++++++++++++++++++++++++

This fall a new kind of course will be taught by 15 institutions of higher learning. The courses are all connected on the theme of feminism and technology, and the general public is welcome to participate through independent collaborative groups. This is an invitation to join a discussion group in Second Life, which will meet on Sundays at 2pm to discuss the weekly themes of the course.

I hope you can join us, and if you know someone who might be interested, please forward this message.  More information is below.

WHAT:

A discussion group revolving around the FemTechNet Distributed Open Collaborative Course on feminism and new technologies. Please see the press release for this collaborative course below. More information can be found on the website: http://femtechnet.newschool.edu/docc2013/

DATE:  September 29 – December 8

WHEN:

Sundays at 2pm Eastern Daylight/Standard Time (11am Pacific, 7pm GMT). Check the Minerva OSU Calendar for cancellations or date changes: http://elliebrewster.com/minerva/minerva-calendar/

WHERE:

The discussions will be held in the virtual world Second Life, in the Ohio State virtual classroom, Minerva OSU. To find Minerva OSU: Simply type Minerva OSU into the Second Life address bar.

If you are new to virtual worlds and would like to join an orientation session in the week preceding the first meeting, please contact Ellie Brewster

If you are exploring in Second Life and need help, please IM Ellie Brewster.

TOPICS:

With the approval of the group, we will follow the weekly video dialogues that accompany the course (schedule of videos is here: http://femtechnet.newschool.edu/video-dialogues-topics-schedule/ ). Other suggestions for topics are welcome.

WHO CAN JOIN:

Anyone can join the discussion; however, there will be a weekly limit of 35. If you cannot gain access to the classroom, it will be because the room is full. If you are interested in leading a second discussion group at a different time, please let me know.

HOW TO JOIN:

For inclusion in the e-mailing list (no more than one e-mail per week), and membership in the Second Life group (necessary for admission), please send a request to this address with DOCC Mailing List in the subject line. Please include your avatar name.

For Immediate Release

Feminist Digital Initiative Challenges Universities’ Race for MOOCs

Columbus, OH, August 21, 2013: FemTechNet, a network of feminist scholars and educators, is launching a new model for online learning at 15 higher education institutions this fall. The DOCC, or Distributed Open Collaborative Course, is a new approach to collaborative learning and an alternative to MOOCs, the massive open online course model that proponents claim will radicalize twenty-first century higher education.

The DOCC model is not based on centralized pedagogy by a single “expert” faculty, nor on the economic interests of a particular institution. Instead, the DOCC recognizes, and is built on, expertise distributed among participants in diverse institutional contexts. The organization of a DOCC emphasizes learning collaboratively in a digital age and avoids reproducing pedagogical techniques that conceive of the student as a passive listener. A DOCC allows for the active participation of all kinds of learners and for the extension of classroom experience beyond the walls, physical or virtual, of a single institution. FemTechNet’s first DOCC course, “Dialogues in Feminism and Technology,” will launch fall 2013.

The participating institutions range from small liberal arts colleges to major research institutions. They include: Bowling Green University, Brown University, California Polytechnic State University, Colby-Sawyer College, CUNY, Macaulay Honors College and Lehman College (CUNY), The New School, Ohio State University, Ontario College of Art and Design, Pennsylvania State University, Pitzer College, Rutgers University, University of California San Diego, University of Illinois Urbana-Champaign and Yale University.

DOCC participants, both online and in residence, are part of individualized “NODAL courses” within the network. Each institution’s faculty configures its own course within its specific educational setting. Both faculty and students will share ideas, resources, and assignments as a feminist network: the faculty as they develop curricula and deliver the course in real time; the students as they work collaboratively with faculty and each other.

At Ohio State, the course will be taught in the Department of Women's, Gender, and Sexuality Studies by Dr. Christine (Cricket) Keating. The course, "Gender, Media, and New Technologies," will be offered at the undergraduate level. Keating is a recipient of the 2011 Alumni Award for Distinguished Teaching. This course takes as its starting point the following questions: How are gender identities constituted in technologically mediated environments? How have cyberfeminists used technology to build coalitions and unite people across diverse contexts? How are the "do it yourself" and "do it with others" ethics in technology cultures central to feminist politics? Juxtaposing theoretical considerations and case studies, course topics include: identity and subjectivity; technological activism; gender, race and sexualities; place; labor; ethics; and the transformative potentials of new technologies. The course itself is part of a cutting-edge experiment in education, culture, and technology. It is a "nodal" course within a Distributed Open Collaborative Course (DOCC). In this course, we will collaborate with students and professors across the U.S. and Canada to investigate issues of gender, race, and techno-culture.

These dialogues are also anchored by video curriculum produced by FemTechNet. “Dialogues on Feminism and Technology” are currently twelve recorded video dialogues featuring pairs of scholars and artists from around the world who think and reimagine technology through a feminist lens. Participants in the DOCC — indeed, anyone with a connection to the web — can access the video dialogues, and are invited to discuss them by means of blogs, voicethreads and other electronic media. Even as the course takes place, students and teachers can plug in and join the conversation.  Through the exchanges and participants’ input, course content for the DOCC will continue to grow. From this process emerges a dynamic and self-reflective educational model.

 

Debunking educational myths

It's a strange experience, but I think I have fallen in love with an academic paper. It is this one: http://www.tandfonline.com/doi/full/10.1080/00461520.2013.804395#.UiIooj-_h5J "Do Learners Really Know Best? Urban Legends in Education". The title refers to its debunking of the idea of student-led education, but it also totally blows away two other myths I completely object to in education: that of the digital native and that of learning styles. Student-led learning is also at the heart of the idea that teaching adults is so radically different from teaching children that it deserves a different name, one that goes under the putrid buzzword of "andragogy". The paper opposes the idea that people really know how they need to learn and that, if they are adults, we should hand over those choices to them. The arguments against the ideas of digital natives and learning styles are well documented, but this paper summarises them neatly. Every teacher educator everywhere should be made to read it before they stand up in front of a group of trainee teachers.

Prensky talked about the net generation as if they spoke technology as a native language, and previous generations could at best learn to use the tech as a second language. It really caught on for a while, but as more evidence has come in, the premise has shown three big flaws: 1) that you can generalise like that – plenty of the younger generation struggle with technology, or don't like it, and plenty of the older generation don't; 2) that you can generalise about the tech – someone may be a whiz at manipulating images but hopeless at expressing themselves on Twitter; and 3) that it matters. Because even if learners are using the tech in new ways, it shouldn't necessarily lead how you teach them, since that might not be conducive to effective teaching.

I've just finished a book bringing together a variety of case studies on student-centred, practical learning. The evidence is that it doesn't matter if you're in a middle school in Chicago, a university in Nairobi, or a high school in Chile: activity-led, student-centred learning works better than subject-centred. The difference is that student-centred is not student-led. Often, if you give students what they want, you end up with them sitting there being spoon-fed information, which isn't really an effective way to learn. The problem with being customer-driven in education is that educators are handing over the direction of education to people who aren't experts in how best to learn. We are. Or should be.

The problem with the digital natives idea is that it's one of those concepts that makes people feel like they've got a handle on changes that are happening, so it becomes very popular. But the reality is both more complicated (people are far more varied than anything that can be put in a box) and simpler (deep down, people don't change that much anyway). The digital natives thing caught on in education because we could see our students working differently from how we did (multitasking, or rather switch-tasking, meshing technologies), so we tried to emulate that in how we taught. The problem is that the assumption that "this is how kids learn now so we should support it" skipped the step of finding out whether they actually learnt well doing that. The answer is they don't. The other problem is that crazy paranoid people like Susan Greenfield made a bit of a career out of warning everyone that our brains are plastic and kids' brains are being screwed up by being online. Until everyone spotted she'd seriously lost the plot, it got a lot of people worried. I think there's a lot of truth in the idea that we need to keep students motivated, because the usual droney, text-based, abstract approach (nicknamed "mortarboarding") isn't successful – but then, the truth is, it wasn't for our generation either. The reality is, our brains evolved over hundreds of thousands of years; 15 years of the internet is going to have no impact on how they work.

The danger is that as a researcher, there is a pressure on you to develop models that keep the pigeon-holing going, because they’re the only things that get attention. I’ve done it myself. Extended Activity Theory. Progressive presence. Fourth Places. All buzzwords that oversimplify things. The fact that they haven’t made me rich and famous isn’t through lack of my desire to sell out my principles, it’s just that no-one’s noticed them yet. You need something that can fit on one slide of PowerPoint if you want to make a name for yourself. I suppose you could do it, and cover it with loads of caveats, but as the models get passed on and popularised, the caveats get shed. Look at how people have bastardised the Myers-Briggs stuff over the years.

Social presence and bots

[Image: cog]

One of the issues with MOOCs, and with the whole mass of OER in general, is that if you have thousands of people looking at the materials, who's going to give you the individual steer through them that many learners need? Bots are one of the things that may help with this. Bots, or companion agents, or AI tutors – they can be called any of these things (but NOT avatars; avatars are specifically online representations of humans, so don't get them mixed up) – are standalone programs. They can be purely text-based, but these days are usually a head-and-shoulders or even a 3D representation (in which case they are embodied companion agents). In virtual worlds they are indistinguishable from avatars, until you start to talk with them. Even then I've run workshops where one or more of the attendees have had long and increasingly frustrated conversations with a bot. There is a sort of intellectual arms race between humans and bots, framed by the Turing test. The idea is that a person will try to work out, by having a conversation, whether something is human or computer driven (a process called turing, i.e. they ture, they are turing, they have tured – actually only I call it that, but I'm trying to get it taken up by everyone else and eventually adopted by the OED). Although the programs are getting better, people are getting better at turing, so the bar is rising faster than the programmers can match. At the moment.
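To give a flavour of what I mean by a purely text-based bot, here's a deliberately crude sketch in Python – entirely made up, and nothing like the system Elzware are actually building – of the canned pattern-matching approach the simplest bots use, and that people doing the turing quickly learn to see through.

```python
# A deliberately crude, purely text-based bot: keyword matching plus a fallback.
# This is an illustrative toy, not how any production companion agent works.
import re

RULES = [
    (re.compile(r"\b(hello|hi|hey)\b", re.I), "Hello! What would you like to work on today?"),
    (re.compile(r"\bhelp\b", re.I), "I can point you to the course materials. Which topic is giving you trouble?"),
    (re.compile(r"\bare you (a )?(bot|human|real)\b", re.I), "Does it matter, as long as I'm useful?"),
]
FALLBACK = "Interesting. Tell me more about that."

def reply(message: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

if __name__ == "__main__":
    print("Type 'quit' to stop.")
    while True:
        text = input("> ")
        if text.strip().lower() == "quit":
            break
        print(reply(text))
```

The giveaway, of course, is the fallback line: ask it the same thing twice and it answers identically, which is usually the point at which the human works out what they're talking to.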

In the work I've been doing with avatars, there's a strong link between the affinity people feel with their avatar and their perception of how effective their learning is. In the project I've been doing with Ravensbourne College and Elzware, I started with the same hypothesis: if the learner feels more affinity with the bot that's leading them through the content, will they experience their learning as more effective?

[Image: emo]

We're not at that stage yet, but in the first phases – since the ethos of the project is user-centred design – we began with a series of workshops to identify which of a series of bot designs the learners would feel a greater affinity towards, and why.

The students selected a bot design that was not anthropomorphic, though it narrowly beat one that was. The reasons for this were various, but came down to the following:

Bots that were realistic and too anthropomorphic were too creepy and too distracting.

Bots that were cartoony and too anthropomorphic weren't creepy but were still distracting.

Bots that were realistic but not anthropomorphic were just right.

Bots that were cartoony and not anthropomorphic were unengaging.

[Image: goop]

"Realistic" in this sense is a very specific usage, meaning engaging the breadth and/or depth of the senses, in the way that people like Naimark and Steuer use the term. So it could be 3D rendering, a higher number of bits, more detail and so on. It also means behavioural realism, and it was this aspect – having a personality (and not necessarily a pleasant one) – that students felt made the "realistic" but non-anthropomorphic bots the best tutors for them.

We still haven’t been able to put this to the test – the actual I in the AI is still being worked on, but we have hopefully put in place a design that will make the bot something the students want to learn from.

Latest book published

Just heard on the grapevine (not from the publisher or anything helpful like that) that my latest book on virtual worlds, Experiential Learning in Virtual Worlds, has just been published. https://www.interdisciplinarypress.net/online-store/digital-humanities/experiential-learning-in-virtual-worlds

[Image: book cover]

Looks good, doesn't it? It's sort of a hybrid book, in that it's largely a collection of chapters by a range of authors, edited by Greg Withnail and me (tempted to say Withnail and I, but that would be grammatically incorrect). I've got a few chapters in there though: the introduction, which cannibalises a bit more of my PhD; a chapter I wrote with Anna Peachey on the various reasons why students hate Second Life (again adapted from my PhD); and finally a chapter on the various futures of virtual worlds, including a short description of a potential augmented reality classroom. I've deliberately included something that's almost impossible in that description, as a sort of test to see which bit people pick up on.

Although the book is £25, the introduction is downloadable for free. What I've tried to do with the introduction is write it as a proper academic paper, covering a specific subject (in this case, how notions of reality influence learning in virtual worlds) but using the chapters in the book as my literature sources. The aim was to kill two birds with one stone: both introduce the chapters and provide something new to the debate. It was prompted by an argument between Greg and me about whether we should permit the authors to use the phrase "real world" to describe the physical world – my position being that this relegates virtual world activity to a secondary status of "not real", when for a lot of people it can seem every bit as real, and Greg pointing out that this is just not how people talk: for most people the physical world is the real world. In the end I went along with Greg and we just let the authors do their own thing, but I wanted to raise this as an area that is problematic to some extent.

Most of the book chapters were actually relevant to this argument (the one or two that weren’t were more looking at the technology) and so it became an interesting task to pull together what the other authors had to say about how reality is perceived in a virtual world setting. I came to these conclusions:

  1. Presence and embodiment are key to effective experiential learning, but do not always occur.
  2. Immersion is fostered by the open navigable space of virtual worlds, in balance with appropriate learning design (covered in more depth in an upcoming book).
  3. To be effective for learning, not everything has to be perceived as real, but it is more effective if all participants agree on which parts are real and which are not. (Actually that probably applies to life in general, in the physical world too.)
  4. In some cases, it is the non-real aspects that have value for learning. (In short, the people who complain that virtual worlds are not real are completely missing the point.)

Anyway, take a look and see if you feel like shelling out for the whole thing.

MOOC schmook

The discussion about MOOCs is raging again and various alarmist mutterings are occurring. I've even heard the phrase "paradigm shift" cropping up a few times. I've argued a few times that the panic they're creating is unnecessary, because people are seeing MOOCs as the equivalent of courses when really they're not: they're the equivalent of textbooks. The worry is that this could mean a transformation of HE, in that people could just attend a MOOC, then go along to be assessed and get a qualification without ever attending an actual lecture in an actual university.

I’ve got some news for you. People have been doing that for centuries.

Well, not attending a MOOC and getting a qualification, but sitting an exam without attending a taught course. I sat plenty of exams as an undergraduate where I hadn't learnt anything from the lectures, but had to make sense of the subject independently from a textbook. I've had lectures that were simply regurgitating textbooks (though in fairness those were usually written by the lecturer). There was no tailoring to the student base, no extra explanatory stuff if you were struggling, no sense of gaining insights from having a person standing there talking. There was no teaching.

The only advantage of attending the course delivered by the university, rather than staying at home reading the textbooks, was that the lecturers knew exactly what was going to be on the exam, so you had to be there to find out what the syllabus was. Or, in the case of my housemate at uni, have someone to send out to the lectures (i.e. me) so you could copy their notes when they got home (or, by the second year, she had the brainwave of buying lots of carbon paper "for me", so she didn't even need to do that).

So really, paying fees to the university to attend those lectures would have been a waste of money, since we could just have been given a syllabus and a reading list. This was when education was free, though. The people who deserved the money for me passing those exams were the authors (thank you, Richard Feynman) and, in the case of my computing assignments, the second and third years who offered me advice. Really I was only paying (or rather the state was only paying) for the university to accredit me, not to teach me. For those courses. Other ones were taught properly, I should add.

So where's the harm in acknowledging that's how HE has always worked, and allowing more people access to the role of providing content, and to be reimbursed for it? Just as Kindle allows more people the opportunity to write and sell books directly. Education is a mixture of content, teaching, assessment and accreditation. The last two probably have to be provided by the same institution, but the rest could be distributed. If you need to know something about, for example, quantum mechanics, join a MOOC (or watch some YouTube videos, or read a book) about it. Need some help? Sign up with someone with a good reputation for teaching it, and if they're good at it, they'll put together a learning set on the subject. Feel you know enough? Sign up for an exam and be assessed. Accrue enough assessments, get a degree.

In reality, things probably won't change that much. To be accredited you have to learn the right stuff to pass that particular exam, and universities will probably keep that close to their chest so you have to sign up for their course. Practical exams need equipment that only universities can afford to provide. Also, the business model for MOOCs doesn't really support them as standalone things. The only economic rationale for them that I can see is as a loss leader. If you like the MOOC but want to know more, then sign up for some tuition, and then sign up for the degree, or (perhaps, if education does become more disaggregated) to be assessed and accredited at the end. Certainly there's no way to make money directly from MOOCs, since not only are they free, but the content is immediately rippable once it's made public. Two colleagues I spoke to last week were expressing shock at a MOOC's content being replicated in its entirety within a week or two and used to create another two MOOCs elsewhere. That seems to me to be perfectly appropriate. The learning isn't happening when the content is being read; it's happening elsewhere, in the communication between learners, or between learners and tutors. The content should be free because, essentially, it is the part of the process that has the least value.

Oh, and as for the idea that it's a cheap, and therefore affordable and accessible, format for all those who don't have access to HE: Martin Smith at Strathclyde points out that for the learners who don't normally have access to HE, self-directed learning is not going to be that easy. There is a set of skills that you acquire by being formally taught, and that you need in order to get the most from materials. This is where Sugata Mitra's idea of Self-Organised Learning falls down. Yes, you can go so far with self-organised learning, and some remarkable people are effectively self-taught, but it's a difficult skill for most to learn, and no amount of other learners, or intelligent tutors/agents/bots, is going to fill that gap.

A second blog

I've set up a second blog to capture all the stuff I want to say about things other than work, prompted mainly by all the street art I saw in Brazil. I don't want to clog up the work-related posts with other things, so thought a secondary blog would be the best route. There will also be posts about neither non-formal art (about which I actually know nothing, I'm just interested) nor elearning, but they'll probably be randomly distributed across the two. It's at markchilds2.wordpress.com

Good and bad interface design

There's a growing tendency in user interfaces to move towards a "design aesthetic" rather than something that actually works for the user. You know the sort of thing I mean. Metro, for example, which has made a pig's ear of using my Xbox 360, and by most accounts has done the same for Windows. More and more functions are added, with the useful stuff buried deeper and deeper and more and more difficult to find. Instead of functionality, the interface is replaced with stuff that "looks good", as if that's more important than being able to use it.

I know why it happens. I went to journalism school for two years, and in that time we were taught a lot of the normative practices of journalism, a few old saws that got passed down from generation to generation. One of these was "people first, events second, ideas third". This particular pearl of wisdom is why, whenever you see an article about some amazing scientific discovery, the article focuses on the life of the scientist making it. The reasoning is that people won't be drawn in if you talk about the discovery, only if you talk about the person. This reasoning is why Horizon is far worse than it used to be, because you have a whole swathe of bollocks to sit through before you actually learn anything. There's also a tendency to make vague generalisations about the subject matter first. I have a rule that if a documentary hasn't told me anything new by seven minutes in, I turn it off. Pretty consistently this seems to work: seven minutes of waffle, then bam, some interesting fact. It's as if they believe that if they shock us with information too early on it will damage our systems or something.

The thing is, there is no evidence for this as a rule. In fact, if you ask anyone in the audience, they would put the relative importance of people, events and ideas in the reverse order. It's just that someone once made this up, and in a profession where people are desperate for a clue about how to do it well, people cling to it as a fact. It's also why we have the concept of "learning styles" in education and "digital natives" in elearning.

Designers seem to work from a similar set of principles that have just been pulled out of <edit>thin air</edit>. Resistance to the introduction of a newer interface, which is "cleaner" or "more aesthetic" or "GUI-driven", is just dismissed as the user not liking change. Well, to some extent, sticking with what exists is important. The whole point of interfaces is that they become transparent through frequent use, and this supports a sense of immersion. Mess with them and suddenly they become visible again, and therefore less usable. You have to be really sure something is an improvement before you mess with it.

What is tricky, too, is showing why the new one is worse, because so often the upgrade is done without any warning, so it's not possible to make a comparison. However, the BBC iPlayer has had both the new version and the old version running side by side for a while, so it's possible to screen-grab both and demonstrate why the new one is so poor. So here goes.

This is the landing page for the old iPlayer.

[Image: old iPlayer landing page]

You can see immediately several radio programmes to listen to, in a variety of categories. If you see one you like you can click on it, and within a few seconds you're listening to something. So for me, The Unbelievable Truth would do it if I hadn't already heard it. So … click on that, and done.

If you don’t see something, you can click on Favourites and see things you’ve previously tagged as things you’re interested in. It looks like this:

[Image: old iPlayer Favourites page]

Ah, OK – heard all of those, so go deeper into the website, which you can do by scrolling down. In theory people don't like doing this, but where is the evidence?

[Image: old iPlayer landing page, scrolled down to show top selections by category]

What’s great about this is that you can see the top selections from a variety of categories, which might lead you in a direction you hadn’t otherwise considered. Nothing there takes my fancy so I’ll head onto comedy and select that.

[Image: old iPlayer comedy page]

Well, that should be enough choice. Round the Horne is pretty bona. If not, though, click on "show all comedy" and you have the entire list in alphabetical order.

[Image: old iPlayer full comedy list, in alphabetical order]

So there you have it. Nice, straightforward and fast.

Here’s the landing page of the new interface:

[Image: new iPlayer landing page]

You can immediately see the problem. Someone with a "design aesthetic" has been let loose. There is a lot of empty space which contains no information and seems to be there just to look good. There are no links to actual programmes. We are forced to select a search strategy to find a programme, two of the options for which are meaningless. I mean, who cares what station or what time of day it's on?

So after a completely pointless and confusing click on "categories" we get to this:

[Image: new iPlayer categories page]

And as you can see STILL NO LINKS TO PROGRAMMES. It’s another superfluous click on comedy to get to:

[Image: new iPlayer comedy page]

We can scroll down to see programmes but they’re not in any type of order. The only real advantage of the new interface is that it enables filtering by sub-genre. It needs one more click to get the alphabetical list:

[Image: new iPlayer alphabetical comedy list]

Which, for some reason, also includes programmes that aren't available. Is anyone actually thinking this through at all?

And yet with all the extra clicks, this is meant to be “simpler” … the assumptions seem to be that we are children who like big clear pictures with plenty of colour and not too much information at once. Reading is too hard for us. We know exactly what we want to search for (the opportunities for serendipitously discovering stuff are eliminated) and we have time to randomly click on things to discover content. None of these things are true and I resent the implication.

For the moment both are running side by side, and this is fine. Maybe some people prefer the new version. But anyone who lets designers loose on their interface ought to give people the option. For example, the latest version of Firestorm (a virtual worlds viewer) has the option to switch between Firestorm (a GUI-driven interface) and Phoenix (a text-driven one). Not all of us rate aesthetics above speed of access to information, and not everyone needs bright colours, or curved edges, or little animations in their interfaces. In fact, they're distracting and annoying.

Why the rant, I suspect you're wondering? It's because I can see the online world becoming less and less usable as a result of designers being let loose on things, and either not consulting, or deliberately ignoring, the user feedback, as if we're too uneducated in "design" to know what we want. I had a huge argument with a colleague who said that a change had been made to something he'd been working on because people prefer GUI to text. "No they don't," I replied. He just said that "yes they do". My response: "Maybe most people prefer it, but by saying 'people' you're implying that all do, and I know that's not true because I don't." The result? He completely ignored the point I was making, possibly because I wasn't a designer and so wasn't capable of making a proper judgment about what I liked. Unfortunately, if no-one creating interfaces listens, the online world will become less usable. I no longer access videos on my Xbox, because the user interface is messed up. I use Twitter much less because the interface is unwieldy. WordPress is another good example. WTF does that w in a circle mean, really? Could they not put "menu" there or something? I was using WordPress for months before I realised I could access my Reader or Freshly Pressed by clicking on it. Bit by bit I can see the gradual disenfranchisement of the user as control over how the online world is accessed is ceded to "designers", and I'd quite like it to stop.