New courses in Second Life

I’m at a meeting of the Virtual Worlds in Education Roundtable (twice – I mean I have two avatars there) and the subject is what’s new this academic year in virtual worlds. For the first time in a long time I’m not involved in any teaching inworld, which is a disappointment, and it’s quite worrying that this might imply a downward trend in their use in education. Some of the others around the table are doing stuff though, and this one looks very interesting … I’ll paste the entire course info here in case anyone is interested.

++++++++++++++++++++++++++++++++++++++++++++++++++++++

This fall a new kind of course will be taught by 15 institutions of higher learning. The courses are all connected by the theme of feminism and technology, and the general public is welcome to participate through independent collaborative groups. This is an invitation to join a discussion group in Second Life, which will meet on Sundays at 2pm to discuss the weekly themes of the course.

I hope you can join us, and if you know someone who might be interested, please forward this message.  More information is below.

WHAT:

A discussion group revolving around the FemTechNet Distributed Open Collaborative Course on feminism and new technologies. Please see the press release for this collaborative course below. More information can be found on the website: http://femtechnet.newschool.edu/docc2013/

DATE:  September 29 – December 8

WHEN:

Sundays at 2pm Eastern Daylight/Standard Time (11am Pacific, 7pm GMT). Check the Minerva OSU Calendar for cancellations or date changes: http://elliebrewster.com/minerva/minerva-calendar/

WHERE:

The discussions will be held in the virtual world Second Life, in the Ohio State virtual classroom, Minerva OSU. To find Minerva OSU: Simply type Minerva OSU into the Second Life address bar.

If you are new to virtual worlds and would like to join an orientation session in the week preceding the first meeting, please contact Ellie Brewster.

If you are exploring in Second Life and need help, please IM Ellie Brewster.

TOPICS:

With the approval of the group, we will follow the weekly video dialogues that accompany the course (schedule of videos is here: http://femtechnet.newschool.edu/video-dialogues-topics-schedule/ ). Other suggestions for topics are welcome.

WHO CAN JOIN:

Anyone can join the discussion; however, there will be a weekly limit of 35. If you cannot gain access to the classroom, it will be because the room is full. If you are interested in leading a second discussion group at a different time, please let me know.

HOW TO JOIN:

For inclusion in the e-mailing list (no more than one e-mail per week), and membership in the Second Life group (necessary for admission), please send a request to this address with “DOCC Mailing List” in the subject line. Please include your avatar name.

For Immediate Release

Feminist Digital Initiative Challenges Universities’ Race for MOOCs

Columbus, OH, August 21, 2013: FemTechNet, a network of feminist scholars and educators, is launching a new model for online learning at 15 higher education institutions this fall. The DOCC, or Distributed Open Collaborative Course, is a new approach to collaborative learning and an alternative to MOOCs, the massive open online course model that proponents claim will radicalize twenty-first century higher education.

The DOCC model is not based on centralized pedagogy by a single “expert” faculty, nor on the economic interests of a particular institution. Instead, the DOCC recognizes, and is built on, expertise distributed among participants in diverse institutional contexts. The organization of a DOCC emphasizes learning collaboratively in a digital age and avoids reproducing pedagogical techniques that conceive of the student as a passive listener. A DOCC allows for the active participation of all kinds of learners and for the extension of classroom experience beyond the walls, physical or virtual, of a single institution. FemTechNet’s first DOCC course, “Dialogues in Feminism and Technology,” will launch fall 2013.

The participating institutions range from small liberal arts colleges to major research institutions. They include: Bowling Green University, Brown University, California Polytechnic State University, Colby-Sawyer College, CUNY, Macaulay Honors College and Lehman College (CUNY), The New School, Ohio State University, Ontario College of Art and Design, Pennsylvania State University, Pitzer College, Rutgers University, University of California San Diego, University of Illinois Urbana-Champaign and Yale University.

DOCC participants, both online and in residence, are part of individualized “NODAL courses” within the network. Each institution’s faculty configures its own course within its specific educational setting. Both faculty and students will share ideas, resources, and assignments as a feminist network: the faculty as they develop curricula and deliver the course in real time; the students as they work collaboratively with faculty and each other.

At Ohio State, the course will be taught in Women’s, Gender, and Sexuality Studies by Dr. Christine (Cricket) Keating. The course, “Gender, Media, and New Technologies,” will be offered at the undergraduate level. Keating is a recipient of the 2011 Alumni Award for Distinguished Teaching. This course takes as its starting point the following questions: How are gender identities constituted in technologically mediated environments? How have cyberfeminists used technology to build coalitions and unite people across diverse contexts? How are the “do it yourself” and “do it with others” ethics in technology cultures central to feminist politics? Juxtaposing theoretical considerations and case studies, course topics include: identity and subjectivity; technological activism; gender, race and sexualities; place; labor; ethics; and the transformative potentials of new technologies. The course itself is part of a cutting-edge experiment in education, culture, and technology. It is a “nodal” course within a Distributed Open Collaborative Course (DOCC). In this course, we will collaborate with students and professors across the U.S. and Canada to investigate issues of gender, race, and techno-culture.

These dialogues are also anchored by a video curriculum produced by FemTechNet. “Dialogues on Feminism and Technology” currently comprises twelve recorded video dialogues featuring pairs of scholars and artists from around the world who think about and reimagine technology through a feminist lens. Participants in the DOCC — indeed, anyone with a connection to the web — can access the video dialogues, and are invited to discuss them by means of blogs, voicethreads and other electronic media. Even as the course takes place, students and teachers can plug in and join the conversation. Through these exchanges and participants’ input, course content for the DOCC will continue to grow. From this process emerges a dynamic and self-reflective educational model.

 

Debunking educational myths

It’s a strange experience, but I think I have fallen in love with an academic paper. It is this one: http://www.tandfonline.com/doi/full/10.1080/00461520.2013.804395#.UiIooj-_h5J “Do Learners Really Know Best? Urban Legends in Education”. The title refers to its debunking of the idea of student-led education, but it also totally blows away two other myths I completely object to in education: that of the digital native and that of learning styles. Student-led learning is also at the heart of the idea that teaching adults is so radically different from teaching children that it deserves a different name, one that goes under the putrid buzzword of “andragogy”. The paper opposes the idea that people really know how they need to learn, and that, if they are adults, we should hand over those choices to them. The arguments against the ideas of digital natives and learning styles are well-documented, but this paper summarises them neatly. Every teacher educator everywhere should be made to read this before they stand up in front of a group of trainee teachers.

Prensky talked about the net generation as if they had a native language, and as if previous generations could at best only learn to use the tech as a second language. It really caught on for a while, but as more evidence has come in, the premise has shown three big flaws: 1) that you can generalise like that: a lot of the younger generation struggle with technology, or don’t like it, and plenty of the older ones don’t; 2) that you can generalise about the tech: someone may be a whiz at manipulating images, but totally blow at expressing themselves on Twitter; and 3) that it matters. Because even if learners are using the tech in new ways, it shouldn’t necessarily lead how you teach them, since this might not be conducive to effective teaching.

I’ve just finished a book bringing together a variety of case studies on student-centred, practical learning. The evidence is that it doesn’t matter if you’re in a middle school in Chicago, a university in Nairobi, or a high school in Chile: activity-led, student-centred learning works better than subject-centred. The difference is that student-centred is not student-led. Often, if you give students what they want, you end up with them sitting there being spoon-fed information, which isn’t really an effective way to learn. The problem with being customer-driven in education is that educators are handing over the direction of education to people who aren’t experts in how best to learn. We are. Or should be.

The problem with the digital natives idea is that it’s one of those concepts that makes people feel like they’ve got a handle on changes that are happening, and so it becomes very popular. But the reality is both more complicated (people are far more varied than anything that can be put in a box) and simpler (deep down, people don’t change that much anyway). The digital natives thing caught on in education because we could see our students working differently than we did (multitasking, or rather switch-tasking, meshing technologies), so we tried to emulate that in how we taught. The problem is that the assumption that “this is how kids learn now so we should support it” skipped the step of finding out if they actually learnt well doing that. The answer is they don’t. The other problem is that crazy paranoid people like Susan Greenfield made a bit of a career out of warning everyone that our brains are plastic and kids’ brains are being screwed up by being online. Until everyone spotted she’d seriously lost the plot, it got a lot of people worried. I think there’s a lot of truth in the idea that we need to keep students motivated because the usual droney, text-based, abstract approach (nicknamed “mortarboarding”) isn’t successful, but then, the truth is, it wasn’t for our generation either. The reality is, our brains evolved over hundreds of thousands of years. Fifteen years of the internet is going to have no impact on how they work.

The danger is that, as a researcher, there is pressure on you to develop models that keep the pigeon-holing going, because they’re the only things that get attention. I’ve done it myself. Extended Activity Theory. Progressive presence. Fourth Places. All buzzwords that oversimplify things. The fact that they haven’t made me rich and famous isn’t through lack of desire to sell out my principles, it’s just that no-one’s noticed them yet. You need something that can fit on a single PowerPoint slide if you want to make a name for yourself. I suppose you could do it and cover it with loads of caveats, but as the models get passed on and popularised, the caveats get shed. Look at how people have bastardised the Myers-Briggs stuff over the years.

Social presence and bots


One of the issues with MOOCs, and with the whole mass of OER in general, is that if you have thousands of people looking at the materials, who’s going to give them the individual steer through them that many learners need? Bots are one of the things that may help with this. Bots, companion agents, AI tutors – they can be called any of these things (but NOT avatars; avatars are specifically online representations of humans, don’t get them mixed up). They are standalone programs, which can be purely text-based, but these days are usually a head-and-shoulders or even a 3D representation (in which case they are embodied companion agents). In virtual worlds, they are indistinguishable from avatars until you start to talk with them. Even then, I’ve run workshops where one or more of the attendees have had long and increasingly frustrated conversations with a bot. There is a sort of intellectual arms race between humans and bots called the Turing test. The idea is that a person tries to work out, by having a conversation, whether something is human or computer driven (a process called turing, i.e. they ture, they are turing, they have tured – actually only I call it that, but I’m trying to get it taken up by everyone else and eventually adopted by the OED). Although the programs are getting better, people are getting better at turing too, so the bar is rising faster than the programmers can match. At the moment.
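For anyone who hasn’t poked around inside one, the simplest text-based bots are little more than pattern matchers in the ELIZA tradition. Here’s a minimal, purely hypothetical sketch in Python (nothing like the actual engine used in the project, just an illustration of why a short chat can feel human while a longer one gives the game away):

```python
import re

# Illustrative only: a toy rule-based tutor bot in the ELIZA tradition.
# Each rule pairs a regular expression with a canned response template;
# \1 in a template is filled with the first captured group.
RULES = [
    (r"\bhello\b|\bhi\b", "Hello! What would you like to work on today?"),
    (r"\bI (?:don't|do not) understand (.+)", r"Which part of \1 is unclear?"),
    (r"\bhelp\b", "I can point you to the course materials. Which topic?"),
]
FALLBACK = "Interesting. Tell me more."

def reply(message: str) -> str:
    """Return the response for the first rule that matches, else a fallback."""
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            # expand() substitutes any group references (e.g. \1) in the template
            return match.expand(template)
    return FALLBACK
```

A bot like this survives a line or two of conversation and then falls apart, which is roughly the experience my frustrated workshop attendees were turing their way through.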

In the work I’ve been doing with avatars, there’s a strong link between the affinity people feel with their avatar and their perception of how effective their learning is. In the project I’ve been doing with Ravensbourne College and Elzware, I started with a similar hypothesis: if the learner feels more affinity with the bot that’s leading them through the content, will they experience their learning as more effective?


We’re not at that stage yet, but in the first phases – since the ethos of the project is one of user-centred design – we began with a series of workshops to identify which of a series of bot designs the learners would feel a greater affinity towards, and why.

The students selected a bot design that was not anthropomorphic, narrowly beating one that was. The reasons given were various, but the findings broke down as follows:

Bots that were realistic and too anthropomorphic were too creepy and too distracting.

Bots that were cartoony and too anthropomorphic weren’t creepy but were still distracting.

Bots that were realistic but not anthropomorphic were just right.

Bots that were cartoony and not anthropomorphic were unengaging.


“Realistic” in this sense is a very specific usage, meaning engaging the breadth and/or depth of the senses, and is the sense in which people like Naimark and Steuer use it. So it could be 3D rendering, a higher number of bits, more detail and so on. It also means behavioural realism, and it was this aspect – having a personality (and not necessarily a pleasant one) – that students felt made the “realistic” but non-anthropomorphic bots the best tutors for them.

We still haven’t been able to put this to the test – the actual I in the AI is still being worked on – but we have hopefully put in place a design that will make the bot something the students want to learn from.

Latest book published

Just heard on the grapevine (not from the publisher or anything helpful like that) that my latest book on virtual worlds, Experiential Learning in Virtual Worlds, has just been published. https://www.interdisciplinarypress.net/online-store/digital-humanities/experiential-learning-in-virtual-worlds

Image

Looks good, doesn’t it? It’s sort of a hybrid book, in that it’s largely a collection of chapters by a range of authors, edited by Greg Withnail and me (tempted to say Withnail and I, but that would be grammatically incorrect). I’ve got a few chapters in there though: the introduction, which cannibalises a bit more of my PhD; a chapter I wrote with Anna Peachey on the various reasons why students hate Second Life (again adapted from my PhD); and finally a chapter on the various futures of virtual worlds, including a short description of a potential view of an augmented reality classroom. If you read that description, I’ve deliberately included something that’s almost impossible in it, as a sort of test to see which bit people will pick up on.

Although the book is £25, the introduction is downloadable for free. What I’ve tried to do in the introduction is write it as a proper academic paper, covering a specific subject (in this case how notions of reality influence learning in virtual worlds), but using the chapters in the book as my literature sources. The aim was to kill two birds with one stone: both introduce the chapters and contribute something new to the debate. It was prompted by an argument between Greg and me about whether we should permit the authors to use the phrase “real world” to describe the physical world. My position was that this relegates virtual world activity to a secondary, “not real” status, when it can seem entirely real to a lot of people; Greg pointed out that this is just not how people talk, and that for most people the physical world is the real world. In the end I went along with Greg and we just let the authors do their own thing, but I wanted to raise this as an area that is problematic to some extent.

Most of the book chapters were actually relevant to this argument (the one or two that weren’t were more looking at the technology) and so it became an interesting task to pull together what the other authors had to say about how reality is perceived in a virtual world setting. I came to these conclusions:

  1. Presence and embodiment are key to effective experiential learning, but do not always occur.
  2. Immersion is fostered by the open navigable space of virtual worlds, in balance with appropriate learning design (covered in more depth in an upcoming book).
  3. To be effective for learning, not everything has to be perceived as real, but it is more effective if all participants agree on which parts are real and which are not. (Actually, that probably applies to life in general, in the physical world too.)
  4. In some cases, it is the non-real aspects that have value for learning. (In short, the people who complain that virtual worlds are not real are completely missing the point.)

Anyway, take a look and see if you feel like shelling out for the whole thing.

MOOC schmook

The discussion about MOOCs is raging again and various alarmist mutterings are occurring. I’ve even heard the phrase “paradigm shift” cropping up a few times. I’ve argued a few times that the panic they’re creating is unnecessary, because people are seeing MOOCs as the equivalent of courses, when really they’re not; they’re the equivalent of text books. There’s the worry that this could mean a transformation of HE, in that people could just attend a MOOC, then go along to be assessed and get a qualification without ever attending an actual lecture in an actual university.

I’ve got some news for you. People have been doing that for centuries.

Well, not attending a MOOC and getting a qualification, but attending an exam without attending a taught course. I sat plenty of exams as an undergraduate where I hadn’t learnt anything from the lectures, but had to make sense of it independently from a text book. I’ve had lectures that were simply regurgitating text books (though in fairness those were usually written by the lecturer). There was no tailoring to the student base, no extra explanatory stuff if you were struggling, no sense of gaining insights from having a person standing there talking. There was no teaching.

The only advantage of attending the course delivered by the university, rather than staying at home reading the text books, was that they knew exactly what was to be on the exam, so you had to be there to find out what the syllabus was. Or, in the case of my housemate at uni, to have someone to send out to the lectures (i.e. me) whose notes you could copy when they got home (or, by the second year, she had the brainwave of buying lots of carbon paper “for me”, so she didn’t even need to do that).

So really, paying the university fees to attend those lectures would have been a waste of money, since we could have just been given a syllabus and a reading list. This was when education was free, though. The people who deserved the money for me passing those exams were the authors (thank you, Richard Feynman) and, in the case of my computing assignments, the second and third years who offered me advice. Really I was only paying (or rather the state was only paying) for the university to accredit me, not to teach me. For those courses. Other ones were taught properly, I should add.

So where’s the harm in acknowledging that’s how HE has always worked, and allowing more people access to the role of providing content, and to be reimbursed for it? Just as Kindle allows more people the opportunity to write and sell books directly. Education is a mixture of content, teaching, assessment and accreditation. The last two probably have to be provided by the same institution, but the rest could be distributed. If you need to know something about, for example, quantum mechanics, join a MOOC (or watch some YouTube videos, or read a book) about it. Need some help? Sign up with someone with a good reputation for teaching it; if they’re good at it, they’ll put together a learning set on the subject. Feel you know enough? Sign up for an exam and be assessed. Accrue enough assessments, get a degree.

In reality, things probably won’t change that much. To be accredited you have to learn the right stuff to pass that particular exam, and universities will probably keep that close to their chests so you have to sign up for their course. Practical exams need equipment that only universities can afford to provide. Also, the business model for MOOCs doesn’t really support them as standalone things. The only economic rationale for them that I can see is as a loss leader: if you like the MOOC but want to know more, then sign up for some tuition, and then sign up for the degree, or (perhaps, if education does become more disaggregated) to be assessed and accredited at the end. Certainly there’s no way to make money directly from MOOCs, since they’re not only free, but the content is immediately rippable once it’s made public. Two colleagues I spoke to last week were expressing shock at a MOOC’s content being replicated in its entirety within a week or two and used to create another two MOOCs elsewhere. That seems to me to be perfectly appropriate. The learning isn’t happening when the content is being read; it’s happening elsewhere, in the communication between learners, or between learners and tutors. The content should be free because, essentially, it is the part of the process that has the least value.

Oh, and as for the idea that it’s a cheap, and therefore affordable and accessible, format for all those who don’t have access to HE: Martin Smith at Strathclyde points out that for learners who don’t normally have access to HE, self-learning is not going to be that easy. There is a set of skills that you acquire by being formally taught, which you need in order to get the most from materials. This is where Sugata Mitra’s idea of Self-Organised Learning falls down. Yes, you can go so far with self-organised learning, and some remarkable people are effectively self-taught, but it’s a difficult skill to learn for most, and no amount of other learners, or Intelligent Tutors/Agents/Bots, is going to fill that gap.

A second blog

I’ve set up a second blog to capture all the stuff I wanted to say about things other than work; prompted mainly by all the street art I saw in Brazil. I don’t want to clog up the work-related posts with other things, so thought a secondary blog would be the best route. There will also be things that fall under neither non-formal art (about which I actually know nothing, I’m just interested) nor elearning, and they’ll probably be randomly distributed across the two. It’s at markchilds2.wordpress.com

Good and bad interface design

There’s a growing tendency in user interfaces to move to a “design aesthetic” rather than something that actually works for the user. You know the sort of thing I mean. Metro, for example, which has made a pig’s ear of using my Xbox 360, and by most accounts has done the same thing for Windows. More and more functions are added, with the useful stuff buried deeper and deeper and made more and more difficult to find. Instead of functionality, the interface is filled with stuff that “looks good”, as if that’s more important than being able to use it.

I know why it happens. I went to journalism school for two years, and in that time we were taught a lot of the normative practices of journalism, a few old saws that got passed down from generation to generation. One of these was “people first, events second, ideas third”. This particular pearl of wisdom is why, whenever you see an article about some amazing scientific discovery, the article focuses on the life of the scientist making it. The reasoning is that people won’t be drawn in if you talk about the discovery, only if you talk about the person. This reasoning is why Horizon is far worse than it used to be: you have a whole swathe of bollocks to sit through before you actually learn anything. There’s also a tendency to make vague generalisations about the subject matter first. I have a rule: if a documentary hasn’t told me anything new by seven minutes in, I turn it off. Pretty consistently this seems to work. Seven minutes of waffle, then bam, some interesting fact. It’s as if they believe that shocking us with information too early on would damage our systems or something.

The thing is, there is no evidence for this rule. In fact, if you ask anyone in the audience, they would put the relative importance of people, events and ideas in the reverse order. It is just that someone once made it up, and in a profession where people are desperate for a clue about how to do the job well, people cling to it as fact. It’s also why we have the concept of “learning styles” in education and “digital natives” in elearning.

Designers seem to work from a similar set of principles that have just been pulled out of <edit>thin air</edit>. Resistance to the introduction of a newer interface, one that is “cleaner” or “more aesthetic” or “gui driven”, is just dismissed as the user not liking change. Well, to some extent, sticking with what exists is important. The whole point of interfaces is that they become transparent through frequent use, and this supports a sense of immersion. Mess with them and suddenly they become visible again, and therefore less usable. You have to be really sure something is an improvement before you mess with it.

What is tricky, too, is showing why the new one is worse, because so often the upgrade is done without any foreknowledge, so it’s not possible to make a comparison. However, the BBC iplayer has had both the new version and the old version running side-by-side for a while, so it’s possible to screen-grab both and demonstrate why the new one is so poor. So here goes.

This is the landing page for the old iplayer.

Image

You can see immediately several radio programmes to listen to, in a variety of categories. If you see one you like, you can click on it, and within a few seconds you’re listening to something. For me, the Unbelievable Truth would do it, if I hadn’t already heard it. So … click on that and done.

If you don’t see something, you can click on Favourites and see things you’ve previously tagged as things you’re interested in. It looks like this:

Image

Ah OK — heard all of those, so go deeper into the website, which you can do by scrolling down. In theory people don’t like doing this, but where is the evidence?

Image

What’s great about this is that you can see the top selections from a variety of categories, which might lead you in a direction you hadn’t otherwise considered. Nothing there takes my fancy, so I’ll head on to comedy and select that.

Image

Well, that should be enough choice. Round the Horne is pretty bona. If not, though, click on “show all comedy” and you have the entire list in alphabetical order.

Image

So there you have it. Nice, straightforward and fast.

Here’s the landing page of the new interface:

Image

You can immediately see the problem. Someone with a “design aesthetic” has been let loose. There is a lot of empty space which contains no information and seems to be there just to look good. There are no links to actual programmes. We are forced to select a search strategy to find a programme, and two of the options are meaningless. I mean, who cares what station, or what time of day, it’s on?

So after a completely pointless and confusing click on “categories” we get to this:

Image

And as you can see, STILL NO LINKS TO PROGRAMMES. It’s another superfluous click on “comedy” to get to:

Image

We can scroll down to see programmes, but they’re not in any kind of order. The only real advantage of the new interface is that it enables filtering by sub-genre. It needs one more click to get to the alphabetical list:

Image

Which also, for some reason, includes programmes that aren’t available. Is anyone actually thinking this through at all?

And yet, with all the extra clicks, this is meant to be “simpler” … the assumptions seem to be that we are children who like big clear pictures with plenty of colour and not too much information at once; that reading is too hard for us; that we know exactly what we want to search for (the opportunities for serendipitously discovering stuff are eliminated); and that we have time to randomly click on things to discover content. None of these things are true, and I resent the implication.

For the moment both are running side by side, and this is fine. Maybe some people prefer the new version. But anyone who has let the designers loose on their interface ought to give people the option. For example, the latest version of Firestorm (a virtual worlds viewer) has the option to switch between the Firestorm interface (gui-driven) and the Phoenix one (text-driven). Not all of us rate aesthetics above speed of access to information, and not everyone needs bright colours, or curved edges, or little animations in their interfaces. In fact, they’re distracting and annoying.

Why the rant? I suspect you’re wondering. It’s because I can see the online world becoming less and less usable as a result of designers being let loose on things, and either not consulting, or deliberately ignoring, the user feedback, as if we’re too uneducated in “design” to know what we want. I had a huge argument with a colleague who said that a change had been made to something he’d been working on because people prefer GUI to text. “No they don’t,” I replied. He just said “yes they do”. My response: “Maybe most people prefer it, but by saying ‘people’ you’re implying that all do, and I know that’s not true because I don’t”. The result? He completely ignored the point I was making, possibly because I wasn’t a designer and so wasn’t capable of making a proper judgment about what I liked. Unfortunately, if no-one creating interfaces listens, the online world will become less usable. I no longer access videos on my Xbox, because the user interface is messed up. I use Twitter much less because the interface is unwieldy. WordPress is another good example. WTF does that w in a circle mean, really? Could they not put “menu” there or something? I was using WordPress for months before I realised I could access my Reader or Freshly Pressed by clicking on it. Bit by bit I can see the gradual disenfranchisement of the user as control over how the online world is accessed is ceded to “designers”, and I’d quite like it to stop.

Online v offline communication

Realised it’s time to get back into blogging after my trip to Brazil, and looking for inspiration I went to the Daily Post … never fails … there’s a post on this: http://dailypost.wordpress.com/2013/05/04/daily-prompt-text-speak/ “How do you communicate differently online than in person, if at all? How do you communicate emotion and intent in a purely written medium?”

Luckily I've got something to say on this; well, I should have. It's one of the core things I do research on: how people communicate online. I've looked a lot at how people's behaviour offline translates to online, and there's no real consistency. The stereotypical transition is the quiet, shy student in class who, when given the chance to communicate in an environment where they don't feel so exposed, suddenly blossoms into a talkative and dominant contributor. These students do exist, and cyberdisinhibition is such a useful tool that any educator who doesn't provide his or her students with a mechanism to communicate online as an intrinsic part of their course is a bit of a twat really. If you choose to limit communication to only the face-to-face activity of a classroom then you are acting to censor a proportion of the student body through your own apathy or laziness.

My own ability to communicate in a face-to-face situation is often very limited. I don't think very well while someone else is talking; I need silence to collect my thoughts. So in a conversation I need a second or two of pause before I can start talking. I was recently at a meeting where that break didn't happen for about the first hour. Ideas got tossed backwards and forwards, some of which I didn't have anything to contribute to, and some of which I could have done, but didn't, because at all times the start of one person's contribution overlapped with the end of the previous person's. I spent that hour feeling more and more frustrated, and more and more withdrawn. I guess feeling the after-effects of flu slowed me down a bit more than normal too. Finally they all shut up long enough for me to make my contribution. It took about 10 minutes, and they waited until I'd finished, but I would have much preferred a dialogue to a monologue. I think that's why I prefer online communication to offline: it's just so much easier to get a word in.

Online does have disadvantages though. I think tone is sometimes difficult to read. Sure, we should get into the habit of using :-p when we don't mean something, or flagging when we're being ironic, because putting little pseudo-HTML around phrases <sarcasm> is just so hard </sarcasm>. But even when I'm reading stuff by people I know really well, I can still read them as literal when actually they're meant ironically. But then the same is true face to face, if not more so. Think of the number of arguments I've had with (now ex-) partners because I had a particular expression on my face, or a tone, which they misinterpreted because they had a much greater confidence in their ability to read body language than was warranted. There's nothing more annoying than being told what you actually feel by someone who doesn't know how to read expressions but thinks they do. Really, there's something to be said for putting a paper bag on our heads before we begin a conversation with some people. Or on theirs.

Another reason why some learners prefer online to offline is that they can turn it off when they need to get back to work. A study I did at Warwick a while ago (with the acronym BLUPs) identified this as a big incentive. Students could drop into chat if they needed some help, could stay around to socialise a bit, but then go offline when they needed to. Online was simply more manageable.

There are some students who really don't like communicating online but are fine offline. Another study I did looked at students' responses to using virtual worlds. In the discussion we had about it, the majority of the comments were negative, by about a 2 to 1 ratio, yet in the survey the students were positive by about a 3 to 1 ratio. It appeared that the 1 in 4 students who hated the online interaction were the ones dominating the face-to-face discussion, being about 3 times more active in it than those who liked working online. My interpretation of what they were saying was that they were so at ease with offline communication, had such a fluency and ability with it, that they felt its loss more than the others did. In effect they had lost their superiority and were railing against it.

As a result I'm always deeply suspicious of people who demand that all their interactions take place face-to-face. I agree there is something very worthwhile about meeting in that way; at the moment I'm taking time out to meet a lot of projects all over the UK, taking several hours to travel to do it. The issues with ensuring everyone gets to speak don't arise (since I'm chairing the meetings), and it does produce a lot more ideas, and camaraderie, and trust. All of those things. But people who refuse to interact online? My first thought is that they want to make sure they can limit what's being said. Purely offline people tend to be assholes, in my experience.

The final two ways that offline behaviour can translate to online are students who are fine in both modes (which is good) and those who don't communicate in either. Really, any of the first three are fine; the ones I do worry about are those who don't communicate in either mode. Again, in the BLUPs study, the few students who fell into this category really seemed to be at risk, and universities do very little to proactively seek them out, tending to respond only to students who flag that they're struggling. Like drowning people, the ones who are really in trouble are the ones who aren't saying anything, not the ones waving.

Oh, and I've realised that I've pretty much gone off topic. But, in short, to answer the question: use emoticons, hashtags, pseudo-HTML, different fonts. Emotion can actually be conveyed much more precisely online than offline.

Badgification of learning

A response to http://lg.dlivingstone.com/2013/04/21/badges-badges-badges/

I remember at school we had a credit system: earn a credit for your house. It was a way to exert some extrinsic pressure on us to perform, but I was enough of a nerd to want to learn the stuff anyway. I remember a maths teacher once congratulating me on solving some problem and asking if I wanted a credit. I answered that if she wanted to give me one then fine, but I wasn't really interested. I think she was quite nonplussed; it unsettled her whole notion of how to motivate students.

Today, though, although I won't go much out of my way to unlock an achievement when playing on the Xbox, I will occasionally. I play the games just for fun, to get to the end, but if I see there's an achievement for, say, stabbing people with an arrow rather than shooting them with one, I'll hit the B button occasionally rather than the Y one. That's pretty much as far as I'll go to get a badge.

It surprises me, therefore, the degree to which badgification of courses is taken seriously as a concept. Qualifications are weak enough as an indication of learning: I got through all of my A levels by simple rote learning, and only really understood the material when I had to teach it 10 years later. Attributing a badge automatically, which is really the only attribution a system can make without the intervention of an actual human to assess the learner, seems particularly pointless, on the level of an attendance certificate. The only time I ever really felt my learning was properly being assessed was during my viva. It was a nerve-wracking experience, and I felt I'd been put through the wringer, but I knew at the end that I'd proven I knew what I knew, and knew my externals knew I knew. To hand out something just because something has been completed, rather than understood, is the other end of the spectrum. We might as well put a badge in the back of a book for someone to peel off and stick to their shirt when they've got to the end of it, to prove they know what it's about.

To me, the idea of badges is another example of the sleight of hand involved in MOOCs, which replaces education with content and yet still calls it education. I think MOOCs are useful; they make materials accessible to far more people. But materials are only one aspect of education, and completing one really proves nothing.