Flow and writing

These are my own observations and a response to Grainne’s latest post http://e4innovation.com/?p=658, mainly because I’ve just spent three solid days writing and doing nothing else in order to meet a deadline, so it’s on my mind at the moment.

The subject of Grainne’s post is flow, and I’ve definitely been in the zone today. The book, on Making Sense of Space, is being written with a long-standing friend and collaborator, Iryna Kuksa – she got the publishing deal, we came up with a subject we could both write about, and then off we went. Or rather I didn’t. I did an introduction back in September, then left it until December, didn’t quite get the last chapter written in the time I’d allocated, and it’s taken me until now to finish it.

What helps? Well, deadlines help. They are the best cure for writer’s block there is. We all know the stories about Douglas Adams and deadlines, so I don’t need to repeat them here … I’m not as bad as DNA, I nearly always meet them, but this one has been particularly difficult to get started on. The reason, mainly, was that I didn’t believe I could do it. Although I did my half of the intro with no problem, that was mainly because Iryna had laid out what she wanted from me and how much, so no real thought was required there. So really there were six months (I started thinking about it in July) of panicking before I got down to it. But then I remembered something I really wanted to write about: a proposal I’d started to put together for a Marie Curie fellowship, something I’d noticed about descriptions of game spaces, ritual spaces, theatre and virtual worlds while doing the PhD, and which had emerged in conversations with colleagues and friends. That gave me something I wanted to say. I was no longer doing this just because I felt I ought to write something; this was something I cared about. So that’s lesson 1 for writing: find something you care about.

Even then, though, it was a while before I started. I was really waiting for an opportune time; I’d taken on a few projects and needed to get those written, but had most of December and early January set aside for writing the book (well, my half of it). Other writing commitments eroded that, though, so bit by bit I was reduced to only about two weeks: a few days before Christmas and about two weeks of January. This was a good time, though. I’m not a huge fan of Christmas, and luckily I had a huge back muscle cramp that meant I couldn’t walk for about two weeks anyway, so I could shut everyone out, turn off the email, turn down Facebook and focus on the book. Because really you need to think, and you need to immerse yourself totally to do that properly. Lesson 2: shut yourself away from distractions. That worked this week; three days with no Facebook, no email and no visitors, and I got it done. This morning I had the conclusion to write, and the only way to do that is to read the whole thing through, hold everything in your head at once, and look for the common themes. That needs protracted stretches of quiet. I wanted to link experiences of space, experiences of technology, willingness to bond with technology, and ultimately to look at the long-term effects on what it means to be human. A lot of disparate stuff, but I think I got there without sounding too mixed up.

The reason I wanted to bring together all those different things is that throughout the book (in fact it was a big part of the pitch to the publishers) the idea was to have a lot of contributors, but with the majority of the writing by Iryna and me. I’ve quoted friends, got them to add stuff through Facebook, interviewed them, quoted their dissertations. Of the 26k words I’ve written, I’d say that about 5k were written by others (all credited, obviously). I like having those viewpoints and voices, and I figure that it’s a platform for other people who have influenced me to also get into print. I’ve also let anyone read it who wants to, by posting it as FB notes or emailing it to them. It’s made it a lot more fun, and hopefully more readable. So lesson 3: don’t do it alone.

The other thing that helped, over Christmas particularly, when I had ten days of concentrated work spread over three weeks, was to keep a spreadsheet of how much I was doing and set a target every day. This was around 1,000 words, which doesn’t sound a lot, but some days I’d delete half that before starting anything else. The advantage of this is that you have to keep going, even when you want to stop. And also, when you get to that point, you can stop. One of the mistakes with writing is always thinking that you should do a bit more. The problem with that is, if you never stop, where’s the incentive? If you keep at it and get your 1,000 words done by 4:00, the evening is yours; aiming towards that goal gives you a point at which you can reward yourself, so you keep going. If you faff about and are still at it at 10:00, tough. The flow thing is all about feeding back how well you’re doing and thereby remaining motivated. So lesson 4: lots of small targets, stick to them, and feed back regularly.
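
Incidentally, the spreadsheet could just as easily be a few lines of code. Here’s a minimal sketch of that feedback loop, assuming you log the manuscript’s total word count at the end of each day; all the figures below are invented for illustration, not my actual log:

    # Minimal sketch of the daily-target feedback loop: log the manuscript's
    # total word count each day, and report whether the day's writing hit the target.
    DAILY_TARGET = 1000  # words per day

    # (date, total words in the manuscript at the end of that day) - invented figures
    log = [
        ("2012-12-20", 14200),
        ("2012-12-21", 15350),
        ("2012-12-22", 14900),  # a day spent mostly deleting
        ("2012-12-23", 16100),
    ]

    previous_total = 13200  # word count before the stint began (also invented)
    for date, total in log:
        written = total - previous_total
        if written >= DAILY_TARGET:
            status = "target met - stop and enjoy the evening"
        else:
            status = f"{DAILY_TARGET - written} words short"
        print(f"{date}: {written:+d} words ({status})")
        previous_total = total

The point of reporting the shortfall, rather than just a running total, is exactly the feedback-for-flow idea: you always know how far you are from being allowed to stop.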

Although I was originally a bit peeved at losing so much of the chunk of time I’d set aside to write, when I got down to it I could see that this had been an advantage, because during that time a friend had given me a book called Virtual Literacies. In it there was a chapter on the Schome project, which she’d contributed to. This ended up being the place where I started my chapter, because the discussions in the Gillen et al. chapter in the book had things to say about how learners in Schome had related to those places. I could start by recapping that chapter and then branch out to talk about the bigger picture. This then became the format for the other chapters too: start with a case study of one thing to illustrate the argument, then broaden out to the theme of the chapter. Without finding a formula like that I’d have been procrastinating for a couple of days each time, trying to get started. This applies to each individual day too. If you finish one day with some sort of idea of what to do next, or even start by editing what you’ve already done, it makes it easier to begin, because you know what you have to do. Some people I know even leave sentences half finished so they can start the next day by completing them. I wouldn’t take the chance that I’d be able to, but a few notes on what the next bit is, or a plan, really helps. Lesson 5: if you only know one thing, know how you’re going to start.

That really applies to the conclusion too. I find it helps to start those off with one specific thing … maybe something new, or maybe something said in the chapter, that can kick off the discussion. The first bit doesn’t have to be profound. It can be only vaguely connected, or occur to you because of something else entirely. Just write it down and see what follows on from it. I was stuck on the conclusion for one chapter and couldn’t see what the lessons learned were, but then had a conversation about how we always try to fix things by making the technology better rather than the pedagogy. That seemed to be a lesson that also arose from the case studies I’d been writing about, so I put that down. After writing for a while, I realised that, actually, it was true. Lesson 6: if you’ve got too many things to say, just start with one; pick it at random if you like. That’s still better than not picking one.

And finally, lesson 7: sometimes you just have to go with the flow and let yourself be distracted. The last post I wrote was when I was still trying to get down to the final chapter. I saw the Daily Post challenge and spent a couple of hours writing a short story when I should have been working. I can’t really argue that it helped me get the work done, but I really don’t think I’d have been able to focus until it was written. Same with this blog. I have 200 unread emails and about 250 unanswered ones, but I thought of this first, so got it out of the way.

Daily Post Challenge

I’ve just started following the Daily Post and read this week’s challenge to write a short story about a dystopia http://dailypost.wordpress.com/2013/02/25/writing-challenge-dystopia/. Just as I finished I noticed that the deadline was Friday … aargh … just missed it. Anyway, here is the story:

Harvest

Maybe this is a dystopian view of the future, maybe a utopian one. I don’t know; you decide.

Plenty has been written about the life of Edwin Janus Talbot: analyses, homilies, diatribes, all trying to decide if he was a saviour or a Judas, remaking the world in his own image or betraying us to alien intervention. What all can agree on is that he was an astronaut, that out there he made contact with Something, and that what he brought back changed us all. His motivations for doing so, however, have been subject to intense scrutiny.

The deaths of his wife and son, only months before his spaceflight in 2015, were obviously a huge influence. While driving along a highway they were sideswiped by a truck driven by a drunk driver with previous convictions. Edwin survived. His family didn’t. The driver was found guilty but was given only a suspended sentence and had his licence revoked for a year. That may have been a bigger motivator for his later decisions. Some said his flight should have been scrubbed, but he passed all of his psych evaluations, and the comet fly-by could not be delayed. So up he went.

Accounts vary of what happened during the flight. It is a matter of record that for five minutes during his extra-vehicular activity all ground crews lost his signal. No voice, no EEG readings, no ECG readings. Not even static. During that time it was believed that perhaps some comet debris had struck him or his craft, or the solar activity noted at the time had damaged on-board systems. Then, miraculously, he reappeared.

He was dazed, confused, and unable to communicate properly until days after his return, and speculation about what had happened during those five minutes was rife. When he was finally able to communicate coherently it did nothing to reduce the speculation. He reported that glowing forms had emerged from the chunk of ice, surrounded him and spoken to him. They wanted to know what he was, where he came from. For weeks of his subjective time they interrogated him, until finally they sent him back, and asked what one gift they could bestow upon Talbot’s planet. His answer was immediate. “No more murder.” Murder was an unknown concept to these beings. He had to explain that it was the deliberate taking of life. And then each one of those terms also needed explaining. “Life.” “Deliberate.” Around the clarity of Edwin Janus Talbot’s definitions of those two words our whole world now gravitates.

On hearing his accounts of First Contact, Talbot was returned to quarantine. He was subjected to a series of tests, and these found, replicating away in his bloodstream, small nanotechnological mites that had not shown up in the previous analyses. They appeared to have no effect on him, until they ran an MRI scan of his brain. There, in the part of the cortex that interpreted his vision, they found a lump, formed from a collection of these mites. And it was growing.

Talbot never again left quarantine, but by then it was too late. In the days he had spent in contact with the investigators, they had become infected. And their families had become infected. And so on.

Each new revelation caused a new wave of panic amongst the populace. Astronaut disappears, then reappears. Astronaut reveals First Contact. Astronaut infected by alien nanites. This last revelation, that the infection was in the wild, produced the greatest panic of all. But after weeks and months of speculation, with no evident effects of the infection, the hysteria died down. People went back to their regular lives. Thousands had tests; the nanites were found in their bloodstreams, the lump was found in their visual cortex, but they did nothing, just sat there. People adapted.

Then, in 2017, the grandmother of one of the first people to come into contact with Talbot died. As a close member of his family she’d long been known to have suffered tertiary infection, but this had been dismissed as a cause of illness. A stroke had killed her, and she had lain in her bedroom for several days. When she was found, her body was in an advanced stage of metamorphosis. Again the quarantine, again the constant surveillance. The public’s horror grew as information about the change the body was undergoing was leaked to them. Then, after weeks, the full horror was reported. A grainy black and white video, copied from security tapes without permission and leaked through social media, showed the body suddenly fragmenting into dozens of small insect-like creatures. They scurried over walls trying to find an exit, scratched their way through the plastic containment walls, then disappeared into the underground facility.

Again speculation was rife. The presumed answer to where they had come from was the nanites: after the death of the host, the nanites had formed themselves into these synthetic creatures. Their purpose, though, was unknown. Then someone thought to exhume the bodies of other family members who had died during the previous two years. All were gone. All the graves showed evidence of having been chewed away from the inside.

That was when we, as a planet, first knew the fear of the Harvesters. Although the first Harvest had not yet happened, there was still the anxiety about what these things were, what they were planning. Then Bradley Inglenook killed his girlfriend.

Bradley lived near the base where Talbot and his interrogators lived and worked, but, as far as anyone knew, had no direct contact with anyone who worked there. One night, after too many drugs and too much drink, in an argument with his partner he picked up a bat and beat her to death. He had a history of domestic abuse, and when the police officers arrived at the apartment they had a good idea that finally, awfully, he had taken this abuse too far. The concerned neighbours who had called them watched as the police officers broke down the door, and fully expected to see Bradley hauled away in handcuffs. What they actually saw were the police officers backing away, and a man running between them, pursued by a wave of what looked like small spiders. As he fell, he screamed, and the creatures passed over him in a wave. As they watched, horrified, the things dismantled him, then disappeared into the night.

It was the first Harvest witnessed. Little by little, as more occurred, more of the process was pieced together. The lump in the visual cortex received and transmitted visual information; to where was not properly known for a while. Everything an infected person saw was perceived and analysed, but, as far as could be determined, with only one purpose: to detect murder. If a murder was committed, the perpetrator was identified and the Harvesters were summoned.

If you were identified as a perpetrator there was no appeal. Bradley’s death was only the first. It was as if the metamorphosed bodies of the infected dead had suddenly reached critical mass. Within weeks there was another death, this time a child killed by a woman and her partner. Both had beaten the child, but only the one responsible for the final blow was sought out by the organic machines. The mother watched and screamed as her partner was slowly devoured by them, then watched as they disappeared.

Neither was there any escape. A drunk driver ploughed into a parked car on the highway only a little distance from where Talbot’s family had died. Fully realising what he had done, and what the punishment would be, the driver fled to his car and sped back to his home. Locking all the doors, sealing the windows, shoring up every conceivable entrance to the house, he waited. Neighbours reported hearing the ominous susurration of the Harvesters as they gathered around the building, swarming over the windows, clustering by doorways. The driver, still the worse for drink, phoned the police, begging for help; the 911 call, replayed on every news channel, recorded him saying he would do anything, just keep the damn things away. TV cameras arrived as the Harvesters found a gap under one of the doors and flooded into the hallway where he stood, broadcasting his screams over the phone as the cameras showed images of the exterior.

In his quarantine on the base, it was reported that Edwin Janus Talbot watched the live news feed with a slight smile on his face. Then closed his eyes and did not open them again.

The small town in Florida where the infection had started was merely the first place on Earth to achieve this critical mass of Harvesters. The infection had already spread almost totally worldwide. In Chicago a parent watched, horrified, as their teenage son, who unbeknown to anyone was in a street gang, was consumed by Harvesters. It was presumed he’d been responsible for a shooting earlier in the day. In London, three people beat someone to death in a street fight and, even before they left the scene, were Harvested, all caught on CCTV and broadcast around the world. In Israel a soldier who had shot and killed a stone-throwing teenager was consumed by the small alien devices. Distance was no defence. He and several comrades had fired; only one shot had hit. The Observer in his head and those in the other soldiers had relayed the information to whatever processing system made the judgment, and the execution was automatically carried out.

Eventually the final link in the chain was found. Deep underground, beneath a subway system in Delhi, a large mass of neural networks was discovered, comprised of the connected bodies of billions and billions of nanites. The need for a large critical mass was evident: until enough of the infected had died and their bodies transmuted, there was not enough mass to create one of these alien brains, and without them the sentences could not be carried out. That brain was destroyed, and funeral practices everywhere changed to require cremation rather than burial, but it was too late to stop. Enough Harvesters, and enough Judges, existed for the genie to be entirely out of the bottle.

Needless to say, murder rates fell drastically once people realised that there would be no escape from Retribution, as the act of disassembly by the Harvesters came to be called, and that the Judges made no allowances for context, or provocation, or political motivation. Indeed, the Judges’ reading of deliberate was open to interpretation. A crime of passion committed in the heat of the moment still met with Retribution. An accident may or may not meet with the sound of hundreds of crawling insects. An act of incompetence by a doctor led, on a number of occasions, to a hospital ward being flooded with the screams of the medic being torn apart soon after their patient died, and for a while this resulted in a widespread moratorium on operations. As the Judges became (it was presumed) larger and more sophisticated, the consistency and nuance of Judgments improved, and these days it is rare for an accidental death in surgery to lead to Retribution.

And as the alien lifeforms defined and redefined deliberate, so too did they redefine “life”. To the disappointment of many, killing most animals was not considered murder. Swatting a fly did not lead to death; neither, surprisingly for many, did fishing. Again people cursed Talbot for the subjectiveness of his definitions. Until people noticed that loggers in Papua New Guinea had almost entirely vanished. It appeared that the Judges considered all primates as “Life”, and so for each orang-utan that died in a deliberate fire, at least one logger who started the fire would be Harvested. It appeared that they had based their understanding of life on the template of Edwin Janus Talbot, and stage by stage, as the Judges understood sapience better, more animals became taboo. Japanese whalers would return home to be met by a wave of Harvesters that would consume them, leaving their catches unclaimed in the docks. The last few, on learning of the fate of the others, chose to live always at sea, never stepping on dry land, along with a small community of those who had murdered. It was found that the only protection against that wave of death-like insects was water, and some, though not many, chose that as a way to protect themselves against Retribution. For more of them, suicide is the only sure way to ensure a painless death.

And that is the reality we all live with now. Most of us feel liberated, no longer needing to fear the ultimate violence from other people. Occasionally an aggrieved lover, or a frustrated parent, or a political extremist may still kill, in the heat of the moment or in the belief of a calling to kill. A psychopath may still shoot an innocent bystander, or a street fight may go too far and result in death. And for some it is the most extreme statement of suicide they can imagine. And then the reports will be of another Harvest, and we will all become very conscious of the recording and transmitting device in our heads, and of that alien neural net, hidden away beneath our feet somewhere, ready to Judge us. But genocide no longer happens; once the first blow falls from a machete, there is never a second. Wars cannot take place when superiority of firepower, or distance from target, or perceived notions of right and wrong cannot defend against that wave of death crawling towards the killer, ready to dismantle him or her.

So is this dystopia, or utopia? Are we living in Talbot’s nightmare, or his dream? By now, when we have lived with this for so long, it’s all we know. And so it’s neither. It’s just the way things are.

Gaming literacy as basic competence

I was planning on writing about this, but of course Steve Wheeler beat me to it http://steve-wheeler.blogspot.co.uk/2013/03/skills-or-literacies.html so read his thing …

One of the projects I’m working on is introducing NEETs to education, and much of the discussion yesterday was about whether we should be engaging them with digital skills or digital literacies. Although we had different viewpoints (we eventually came to a mutual decision … we went for skills, a matter of not running before we could walk), what made the conversation easier is that we all had a shared understanding of the distinction. For all of us, skills are what we do in training: upload this video, compress this photo, add this page. A literacy is critically reflecting on the task (why this photo and not that? what does this video mean in this context? how do we address our different audiences?). As the word “literacy” seems to have proliferated, it’s worth pulling its usage back to refer only to this higher level of engagement.

The question also arises of the extent to which we can insist upon literacies amongst the people with whom we work, both colleagues and students. In our NEETs work we were constrained by the literacy literacy (text literacy?) of the learners. Asking people to critically reflect when we really just want them to engage in the first instance is too much. From undergraduates, however, I would expect accurate spelling and punctuation, though I would be a bit lenient on those who have English as a foreign language. That’s not even a text literacy, though; it’s really just a skill. When I taught, if an essay came in with too many errors, the student would have to redo it. Learning where an apostrophe goes takes about 30 seconds. Yet I still review academic papers that can’t get it right. But I doubt I’d get much argument from anyone about insisting on spelling and punctuation.

What about digital literacies, though? The thought cropped up again in reference to this piece: http://www.bbc.co.uk/news/technology-21631646 about the boy who ran up a GBP 1,700 bill playing Zombies vs Ninjas. The freemium model in games shouldn’t be a surprise to anyone, though quite how expensive the in-game purchases on this one are certainly was to me. What makes the page interesting is an argument between two of the commenters, one called David and one called ravenmorpheus2k, about the culpability of the parents and the worth of the article. I think the article is drawing attention to how expensive in-game purchases are, not that they exist, but the argument raises some good points. Should parents actually be digitally literate enough to know how freemium games work, and at least have a baseline knowledge of games and gaming, before handing a tablet over to their child? And not need to rely on picking this up from a BBC website, but spend the time exploring in order to achieve this level of literacy? One commenter says yes, the other says no. No prizes for guessing which is which. The discussion also brings up the “get a life” accusation that most non-gamers will throw at gamers at some point, the argument being that if you’re a proper grown-up then you won’t be wasting your time with this sort of thing.

But … that’s the question. Is a basic knowledge of games an essential digital literacy for existing in the twenty-first century? Is there an onus on parents to learn enough about them to be able to monitor, and make literate, sensible choices about, their children’s activities?

I’m not insisting that everyone becomes a serious gamer (though I think if you’re not your life is impoverished, but de gustibus non est disputandum and all that; “serious gamer” is certainly not an oxymoron). However, I do think that if you’re not taking time out to become at least partially aware of your environment, and gaming is a part of your environment whether you like it or not, then the accusation of irresponsibility and laziness is actually fair comment.

Oh, and don’t get me started on academic colleagues who don’t know how to install software, or upload images, or find files on their computer; now that would really get me ranting.

Harlem Shake

One of the things that makes social media so fascinating is the speed with which trends appear, morph and then disappear. February 2013 was the month of the Harlem Shake. It seemed to appear at the start of the month, proliferate madly, and by the end of the second week it was already becoming passé … as evidenced by the very first one I saw, which was this http://www.youtube.com/watch?v=C4ZxszoeCiU Since then we’ve seen record attempts (like the one at Warwick) https://www.youtube.com/watch?v=S6mvfhGkyNI , the Norwegian Army doing it, and some of the cast of Twin Peaks doing it. There have been TV newsrooms and Lego Avengers (probably about the funniest) http://www.youtube.com/watch?v=kwAKxED4uTs; it’s been done in World of Warcraft and in Minecraft, at the Welsh Open snooker, and now even the Simpsons have had a go. I even know someone who’s done one, or at least the people in his company have https://www.youtube.com/watch?v=R33Bvyv-dCo (though I’m pretty sure that’s him at the start with the box on his head). The syntagm is a simple one: for the first 15 seconds someone (usually masked) dances to the Harlem Shake while everyone else does routine stuff; then the bassline drops and there’s a jump cut to lots of people jumping around on the screen. The appeal is that they actually seem like a lot of fun to do; not so much to watch after a while, though. The full story is here http://knowyourmeme.com/memes/harlem-shake

What’s also great about social media is the speed with which it can clash with authorities. An early one seemed to arouse the ire of a couple of NYPD officers, then when it was attempted on a larger scale there was a bigger backlash http://www.youtube.com/watch?v=M9LH1CdbSkw However, now it seems to be becoming a mechanism for opposing oppression in Tunisia http://blogs.independent.co.uk/2013/02/27/tunisia-does-the-harlem-shake/ The first ones seem to have unintentionally wound up the authorities, but now there are hundreds of copycat activities going on.

It still seems that the powers that be (or is that the powers that were?) in a lot of countries haven’t really come to grips with the power that the Internet can provide. It’s not just about posting videos or images (or blogs); it’s how, when you bring together and connect a mass of different people, doing anything, sometimes funny, sometimes insightful, sometimes just plain stupid, can occasionally trigger a wave of activity, often without any discernible root (i.e. a stand-alone complex). It’s still surprising that this growing wave of self-expression and/or fun comes into conflict with the authorities, though. You’d have hoped by now that these regimes would have learnt their place. Yes, they have their allegiance to the status quo, but with us all connected to this extent, and able to act together and share ideas, ultimately they should probably be shaking too.

Avatars and identity

Yesterday I did my regular guest lecturing spot at Newman University College – oh, excuse me – it’s now Newman University, Birmingham, on digital identity in virtual worlds, which is a big part of the research I do (and there’s a book on it http://www.springer.com/computer/hci/book/978-0-85729-360-2). I do a brief lecture, talk about the avatars the students have designed, then they do a task that gets them thinking about how their identity has evolved. I posted something on FB about it, so my friends can see I do actually do a proper job. This is the conversation I had with one friend about it (her comments in italics). I thought it might be interesting here for anyone who’s not aware of how these things work, so I asked her if I could paste it here.

Just admitted to my class that, not only am I in bed, I’m also wearing Iron Man pyjamas. #tmi #underminingprofessionalimage  Lots of students, many very interactive; two naked, a couple of furries, a lot in hats and one a Ferrari. Also one very very fat which is* unusual.

Also one very very fat which is* unusual. So, you can be anything you want. Humanoid. Robot. Little ball of mist. Even a car. Something out of The Only Way is Essex. Dressed, undressed. (Do the naked ones improve on what nature gave them?)

Yep, each avatar can have lots of forms if they want; you can switch between them as easily as dragging and dropping files from one directory to another. Usually people have one form that they stick with for most of the time, with a small range of costume changes. They might have a freaky one for occasions. There are some stats that 94% (it might be more) of participants have avatars whose main form is human. And yes, they can be humanoid, robot, ball of mist, car; I have an eyeball, an airship, a werewolf, a minotaur, loads actually. Most people stick to the gender and ethnicity of their physical forms. But almost always younger, thinner, more muscular, taller.

Change gender. Fly. Be really fit  – in both senses of the word – or not.

In fit terms, only in the sense of muscly. You can be in a wheelchair, but none of this affects the speed of the avatar. They can all fly, but you can acquire scripted objects that change the way you move: fly better, teleport along line of sight, that type of thing.

How do people decide what their avatar will be?

Ahhh, that’s the interesting thing. That’s what my session was about. What makes them choose their appearance? There are some standard answers. Some say “I want to be me”, meaning they want to appear as they appear offline. Some will actually match body shape; most will go for skin colour and so on. Some people pick something that will shock. The naked guy in my class said he did that. Others will also say that they want to be themselves, but mean it as being their hidden true self that they can’t be IRL. They will pick something that represents something unrealised in their physical self: if they’re transgender they’ll pick another sex, and the otherkin love it because they can finally be the animal they identify with. The people who don’t really care are usually the ones who aren’t taking to SL particularly; they may be using it just as a form of communication, or they may think the whole thing is damn silly. The guy who just wanted to shock couldn’t see the point of SL. If he was upsetting people it wouldn’t matter to him so much, presumably. Whereas those for whom it does matter would want to be seen for who they really are.

You said something about people referring to their avatar as “I” when they’re more confident. So does the avatar evolve, learn to do different things, look different as the student gains confidence?

It’s not so much about confidence as about presence, the feeling that they are part of the world, that people see them and react to them, that they start noticing communities or make contacts. The experience becomes more real to them – it matters more. They also learn things like where to shop, where to get the good stuff, how to modify or build things, all of this drives them towards more personalisation and also gives them the skills to personalise. It’s very close to how we build up an image IRL … it’s called a technology of self … how we learn to represent ourselves to others through the clothes we choose, through modifying our bodies.

But it’s unusual for an avatar to be very very fat. Now, this was the thing that made my ears prick up. You could probably (?) be a Doctor Who Adipose. That would be kawaii, so it might be acceptable whereas maybe being fat isn’t 🙂

Yes … there are some users who think it’s griefing to be ugly: that because everyone can be beautiful, they should be, and that if you’re not, you’re just doing it to be confrontational. For example http://borderhouseblog.com/?p=672 The Adipose would be more acceptable, but some places don’t approve of non-humans; I’ve been banned from some places because my avatar isn’t human enough. An Adipose would be a “tiny”, and that’s a specific subset of avatars that have their own places and their own culture and are recognised as such. Being kawaii is key for them, and people are usually more accommodating of them than of other non-human avatars, precisely because they are so cute. Actually, an Adipose would make a cool avatar.

I started thinking: Is that student reflecting how they are in real life or how they think they are in real life? Do they have some kind of body dysmorphia?

There is a definite allure in things like SL for people with body dysmorphia … although I think body dichotomy is a more accurate phrase when you get down to it; since everyone has some sort of dissonance between their physical self and their idealised or “true” self, it’s not really “dys” any more. Some people feel trapped in the wrong sex, others the wrong species, or the wrong age, but for others it can just be height or weight or eye colour. All of them would probably act out that preference in SL. But for those for whom the dichotomy is greatest, for example the morbidly obese, there is something about the rejection of the physical that I think makes SL particularly pleasurable.

Has anyone built the avatar equivalent of a fat suit to explore the idea of morbid obesity?

I don’t know. I did take part in an experiment where we were all pregnant for a while. The bottom line, though, is that you’re not really physically disadvantaged by any of these things; what does happen is that you can get some idea of the social responses that someone may experience, and that in itself is interesting.

Are older people more prone to have fat avatars as theoretically they’re less prone to peer pressure?? Is peer pressure ever an issue?

I think peer pressure is a huge issue for everyone who spends a lot of time in SL and becomes immersed. Even if you’re like me and you’ve got an avatar that often gets a negative response, you’re conscious of the reactions of others and are consciously resisting peer pressure. So it’s still a factor. I think older people are perhaps more prone to peer pressure because SL tends to mean more to them. The younger ones are likely to be having too much fun IRL to really care.

Being precious and presenting

Responding to Bex’s blog post http://mavendorf.tumblr.com/post/43978411437/useful-things-what-i-have-discovered-as-a-learning – some really useful stuff in there. The comment about not being precious about sharing your materials is so true. I still don’t understand the rationale behind not sharing stuff. As I said at an ALT conference presentation on repositories*, there are only three justifiable reasons for not sharing your teaching materials – because they’re crap, they’re ripped off, or they’re not finished. Most people in the room agreed with me, but it’s surprising how often you’ll come across someone who doesn’t want anyone else to use the stuff they’ve produced. And it’s usually the people who don’t have that much stuff to share. I assume it’s because creating something is so rare for them that they want to hang on to every little bit.

One of the first projects I did in eLearning was the DIVERSE project – a TLTP-funded project which had lecture capture equipment built into rooms at various universities. A lot of the lecturers refused to have their lectures videoed; their fear was that if a lecture they gave was recorded, it could be used in future instead of them. The response of the project manager was that if someone could be replaced by a video, then they should be. The point being that if all they brought to a session was exactly the same as they’d brought the previous time, and would bring to the next, then they weren’t worth employing as a teacher anyway.

The same is true of a presentation, or any learning materials. If the essence of what you do in a presentation or a lesson can be reduced to a PowerPoint presentation, then what you do isn’t very good. There’s a book by Hubert Knoblauch http://cus.sagepub.com/content/2/1/75 about PowerPoint presentations which examines this … I remember his keynote partly because his was the only presentation in English, apart from mine, at the conference where I saw it, but mainly because it really put the final nail in the coffin of every complaint against PowerPoint. His point is that the important part of a presentation isn’t the presentation materials, it’s the presenter; there’s nothing intrinsically wrong with PowerPoint as a medium. If a presentation done with it is bad, it’s because the presenter is a bad presenter. I’m at a conference next week where PowerPoint was banned in the earlier years it ran; now text in PowerPoint is banned. I enjoy the difference in approach, but really it misses the point. It’s not text, it’s too much text; it’s not PowerPoint, it’s people who read off the screen. Remove the PowerPoint and replace it with someone reading off a bit of paper – it’s still going to be awful. And really, unless English isn’t your first language, there’s no excuse for reading out your paper. You should know your subject well enough to talk about it with only a few prompts. If you don’t, then don’t waste my time talking about it.

The other extreme is the pseudo-hip and trendy TED stuff, where the presenter is usually totally the focus and any imagery used is very flashy. Sometimes this works, but usually it just looks and sounds very cheesy. It’s academia trying to be too rock’n’roll, and it’s just a bit embarrassing really. Yes, you want to be entertained to some extent, but substance beats style hands down every time. I would still argue, though, that the majority of the substance is you, the presenter: your ideas, and the way you communicate with your audience. And no matter how many times your materials are downloaded, re-used, replicated, that’s still unique to you.

*Childs, M., Bell, V., Rothery, A., Smith, K. and Thomas, A. (2006) Digital Repositories: The next big thing or another failed learning technology?, ALT-C 2006

Mind Your Language

A new piece of research seems to indicate that how we view our future selves depends on how the language we use constructs tenses around future activities http://www.bbc.co.uk/news/business-21518574 A lot of linguists criticise this idea, saying that language doesn’t influence the way we think. Really? I am not a linguist, and may be leaning towards ultracrepidarianism here, but it doesn’t seem that far-fetched to me.

In the research I’ve done, the use of language does seem to influence how well we can express things, and so push our thoughts in a particular direction, as well as indicate how our minds are working. I did an MScEcon in media studies (20 years ago now) and the dissertation was on how science and scientists were represented in the media, focusing mainly on the twin stereotypes of Faust and Frankenstein. For the empirical bit I looked at newspaper reports on the use of genetic manipulation in food. The arguments for and against were filling the newspapers back then. At least the arguments against were. My local MP was involved in a campaign against what he called Frankenstein farming. And it caught on. The arguments against GM were so much more forthright, easily communicated, and powerful than those for, because they could be expressed more succinctly and with more resonance: you only had to use the word Frankenstein and suddenly everyone knew where you were coming from.

We have good-guy scientists in our popular culture, but on the whole they’re not mainly known for their science. A large proportion of the most well-known superheroes are scientists, if you think about it, mainly because the guy who created them was into science. But that’s not what you think about. There are hero scientists. Sagan and Feynman are two of mine, but even though I studied the fields they researched in (I did a BSc in Physics with Astrophysics), it’s actually their roles as humanists, and that expression of truthful spirituality that only atheists really get right, that I think about most when I think of them. So our language, I think, suffers from not having a catch-all signifier to stand for all the great stuff technology does for us, and probably our culture suffers as a result.

In the work I do now, I see the language students use as a very useful barometer for how well their sense of embodiment in a virtual world via their avatar is developing. The first hour or so, the avatar is referred to as “it”, then as “he” or “she”. It’s when their avatar becomes “I” that you really know that they’re in the right position to start learning in that environment. And that’s simply the one effect of language that I particularly look for. How many others are there all around us that we’re not attuned to, and miss?
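
(Going back to the avatar barometer for a moment: you could even make it crudely countable. Below is a toy sketch, assuming you had text transcripts of each session; the example lines and the pronoun buckets are invented for illustration, and a real analysis would need a human, or much cleverer code, to decide whether an “I” refers to the avatar or to the student at the keyboard.)

    # Toy sketch: bucket the pronouns a student uses across session transcripts.
    # The shift from "it" to "he"/"she" to "I" is the embodiment barometer.
    import re

    BUCKETS = {
        "it": "object", "its": "object",
        "he": "persona", "she": "persona", "him": "persona", "her": "persona",
        "i": "self", "me": "self", "my": "self", "mine": "self",
    }

    def pronoun_counts(transcript: str) -> dict:
        """Count pronoun occurrences, grouped by embodiment stage."""
        counts = {"object": 0, "persona": 0, "self": 0}
        for word in re.findall(r"[a-z]+", transcript.lower()):
            if word in BUCKETS:
                counts[BUCKETS[word]] += 1
        return counts

    # Invented lines, roughly following the progression described above.
    sessions = [
        "It keeps walking into walls, how do you make it sit down?",
        "She's got a new outfit, I sent her off to the shops.",
        "I flew over the island and met some people at the beach.",
    ]
    for n, text in enumerate(sessions, start=1):
        print(f"session {n}: {pronoun_counts(text)}")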

The liberation of online interaction

I’m finding that blogging is easier if I find another post or forum discussion that triggers a thought, and this one http://e4innovation.com/?p=638 by Grainne Conole prompted a lot of thoughts – the dangers of online interaction. I interact a lot online; I occasionally interact offline. Most of the people I interact with offline I also interact with online. Most, but not all. I find that those I only interact with offline I’m not as close to; I don’t know them as well. There is a distance between us, and I realise this is for two reasons. One is that our communication is infrequent. The other is that it’s through constant engagement online that I get closer to people. I had this conversation with someone recently – she commented that we would not have got to know each other if it wasn’t for the Internet. True. It’s also true of probably the majority of the friends I’ve made in the last 15 years. I had lunch with five people whom I know, or at least like, well enough to count as friends. Four of us do a lot of work in Second Life, and we all commented on the experience of getting to know someone in a virtual world first and then meeting them IRL. It’s a strange experience, like double vision: you know someone and yet simultaneously you don’t know them. Sometimes the physical person can be quite different from the avatar, and yet I feel I know someone so much better if I’ve seen their avatar first. You’ve seen a glimpse of how they see themselves, not just what the physical world has imposed upon them (what has been termed “the tyranny of meatspace”).

I am deeply suspicious of a viewpoint that presumes that technology that alters social or behavioural interactions is problematic, or worse, detrimental. The underlying assumption of something like Alone Together, that technology is separating us because we are spending more time online, prompted me to refer to Sherry Turkle as having “gone to the dark side”. Which she (quite rightly) picked me up on, since I had committed the mortal sin of making this judgment based on what I’d read about her book, not the book itself. What I meant, though, was that, from what I knew of the approach, it looked at the movement from offline to online as intrinsically a deficit model. Like Daniel Goleman’s underlying concept of cyberdisinhibition, which is that this makes us behave antisocially. Actually, I’ve seen the benefits of cyberdisinhibition, in that students who are too shy to interact offline blossom when given the chance to interact online. Anyone who complains about stepping away from “real life” needs first to justify what’s so great about “real life” anyway. I like the tack Caitlin Moran once took in her column: as a busy person and mother, the options aren’t online interaction or offline interaction; they are online interaction or no interaction. The time we spend communicating online shouldn’t be compared unfavourably to time spent with people. It should be compared favourably to the absence of communication offline. Online interaction overcomes isolation, it doesn’t encourage it. And the quality of online interaction is often better than offline. Compare a simultaneous one-to-one with a few people on Facebook with sitting in a pub unable to get a word in edgeways as one person monopolises the conversation and the background noise drowns out most of what’s said. Dump your preconceptions about the relative value of offline and online. Be honest. Which really is the better conversation? And sure, there are weirdos and stalkers and god knows what online, but again, offline is worse. At least online you can just unfriend them. Or ignore them.

Ultimately though, the doubts and worries about what technology is doing to us are completely pointless, since the changes are happening irrespective of what we feel about them. The whole notion that the offline world is real and the online one isn’t is flawed in itself. Both are real; we have been a mixture of biological and mechanical for a long while. Again, my view may be biased by my transhumanist viewpoint, but the technology is us. Welcoming the way it transforms us is always going to be better than rejecting it, because change is fun, exciting and challenging in and of itself. Thinking back to lunch: there were six of us; three of us had our vision augmented by glasses, one of us had dyed hair, one of us was tattooed, one of us was in a wheelchair, three of us were probably better known through our avatars than our physical selves, and one of us had a prosthetic ear on his forearm. We segued fluidly between discussing our offline bodies (hair loss, knee pain) and our avatars (whether we’d acquired genitalia). As a group we represented a range of different cyborg selves: bodies modified, identities distributed. And we weren’t exceptional by any means. As a society we are a body electric; our communication and our identities extend through the machine. Let’s just accept it and not get so hung up on it.

eLearning Today

There’s been an interesting discussion happening on the ALT forum recently about the use of the term eLearning. As a senior research fellow for elearning at Coventry Uni (actually I think I might be THE senior research fellow for elearning at the uni), I don’t actually have a problem with the term. Yes, I know people have preferences for Technology Enhanced Learning, or Technology Supported Learning, but really these are just labels. As long as we all have a general idea of what we mean by the term, quibbling over the labels is trivial.

The more interesting debate, though, is: what exactly do we mean by the term? And also, do we really need it anyway? Ask most people and it’ll be something to do with the overlap of computers and technology and education. It seems pretty arbitrary, though, which technologies are included. I’d put LMSes (or VLEs if you prefer, but I think the US label describes what they do better) in the category, and videoconferencing, but not word processing, or spreadsheets, or even photocopiers. And if you look at Vygotsky, Leont’ev et seq. and their work on mediating artefacts, they’d argue that anything, a blackboard, a book, even language, is a tool which we use. For a lot of people it begins and ends with their institution’s VLE. But I like what can be done by getting students producing video, so I would add that. But not watching video … unless it’s online and linked in to discussions or learning content … :-/ It’s a blurry line.

Looking at the distinction between which tech I do and don’t mean when I talk about elearning, I realise that it’s really no stricter a defining criterion than “things that I didn’t use or see being used in a classroom during my PGCE”. Since I finished that in 1989, that’s a lot of stuff. One of the outputs of The 52 Group (a think tank of academics pulled together by Lawrie Phipps, though with only six of us it was small enough for me to refer to it instead as a ponder pool) was the concept of postdigitalism: that digital technology is now so commonplace that we should no longer see it as distinct from anything else. That makes a lot of sense to me.

And yes, there are a lot of other interesting innovations in learning that can excite people. Activity-Led Learning is taking off in a big way in my faculty, led by my erstwhile fellow Teaching Development Fellow there, Sarah Wilson-Medhurst. Some fascinating stuff; so eLearning isn’t distinct because it’s innovative.

The discussion, then, is: do we need a separate label for what is, really, just another form of teaching? Is there anything distinctive about eLearning, or can we just dump it as a concept?

I think one thing we can agree on, though, is that the technology, ultimately, is not what eLearning is about. At least, those of us who do it can. There was an interesting debate on eLearning at the Oxford Union a few years back http://bit.ly/Ym2pAz (2009 to be precise), with Diana Laurillard, on whether eLearning can meet the needs of tomorrow. There did seem to be some confusion there about the term, with those arguing against believing that “The e-learning of today was not all things e and learning; for the majority, it was much more limited, it was e-courses for compliance and basic knowledge acquisition”. Errrm, no. That may be the view in the private sector delivering computer-based training (if you look at the magazine for the industry, called Elearning today, it’s largely appalling … lots of ads proclaiming “content is king”), but for the rest of us it’s about bringing people together, about finding new ways to get them to think and to engage with material, and new ways to express themselves. Content is cheap; most places will give it away. It’s teaching that’s important. Dave White did a study looking at the optimum ratio of online tutors to online learners. Errm, I can’t remember the optimum number, but I do remember the maximum was 30. The idea that eLearning provides a pile ’em high, sell ’em cheap solution is erroneous.

It still gets trotted out as a reason to do it, or not to do it, though. On one project I worked on, which was making academic tutoring accessible over videoconferencing, a tutor refused to take part, complaining that introducing technology was symptomatic of capitalist … blah blah blah … he referenced self-service checkouts and god knows what else. The reality, that whether you’re delivering it face-to-face or over the internet you still have a one-to-one interaction, so aren’t actually cutting anything down at all, completely failed to make a dent in his knee-jerk reactionism.

The other extreme from seeing technology as some sort of neocon bogeyman is seeing it as a solution in itself. The most difficult part of staff development in eLearning is people seeing you as someone who just shows them how to use the technology. A number of times I’ve met with a lecturer who wants to use a technology; I’ll have shown them how it works, then arranged to meet them to support them with their teaching. That meeting gets cancelled, they go ahead and use it, and when it all falls apart because they haven’t realised they also need a new set of skills to make use of it, they seem surprised, and either reject the technology or make a big deal of learning from their mistakes. Errrm, no, the bit you skipped is precisely what my role was there for. That’s the interesting bit.

And that’s why I think eLearning is a recognisable and distinct thing, and why it fascinates me: because of that step in the process. Yes, I agree that ultimately eLearning is just a form of learning, and that the pedagogy comes first, inasmuch as that is the goal. I wrote about this nearly 10 years ago now in http://www2.warwick.ac.uk/services/ldc/resource/interactions/celi/chap7/article2/childs which was originally titled “Is there an e difference?” (see what I did there?). But the skills that need to be learnt to use it effectively in learning and teaching, and how accommodating and exploiting what technology does alters our practice, that’s what’s interesting. Josie Fraser makes an interesting point about the idea of putting the pedagogy first. Yes, of course, she says, that’s ultimately the point of eLearning, but it’s not as simple as knowing what you want to do and then finding the technology to do it. Her point is that it’s a two-way street: understanding and knowing what the technology can do opens up new areas for learning.

And technology does change us, as we adapt to it as much as it adapts to us. It’s a mechanism for social, cultural and physical change more than anything else (other forms of innovation notwithstanding). It has the specific problems noted above (seen as a bogeyman by some colleagues, while the change it requires in practice is overlooked by others), and it has a specific set of selling points to colleagues too (use of ICT always looks good in an OFSTED inspection), but I think it requires that adaptable, explorative and (I’m going to say it) transhumanist perspective to exploit it fully. And, really, to be honest, the bottom line is that it’s about playing with all the shiny cool stuff.

Observation

Transcribing an interview with Ian Upton, I heard a comment I’d made: that the difference between life now and pre-internet (now 20 years ago) is that then I had to make an effort to be present, to stay in touch with people. Now the effort is to be absent. It takes a real concentrated focus to remove myself from communication with others. With everyone in constant touch with everyone else, it’s amazing any work gets done at all.