Immersion, presence and immersiveness

This post is prompted by a discussion I’ve been having on LinkedIn with many of the delegates from the Experiential Learning in Virtual Worlds conference in Lisbon earlier this month. It’s extracted from the various posts I made, but also prompted by their comments, so thanks to them for the discussion.

The question was really about the role of immersion in general, and in virtual worlds in particular; whether it’s different in different environments; and especially what immersion is and how it differs from other forms of experience.

I think the problem with much of this is that we’re trying to explain experiences that aren’t necessarily ones we’re used to, in that the technology does provide new sorts of experiences. These things are also defined differently by different authors, so we’re not always talking about the same thing.

For me, immersion is a very precise metaphorical term for that sense of feeling submerged in an experience. It’s like being immersed in water when you’re taking a bath. Singling out a certain set of technologies because they’re so-called immersive technologies is pointless as far as this is concerned, because any technology is immersive. You can lose yourself in a book; that’s becoming immersed in it. You can do the same in a play or a film. In those media it’s called the diegetic effect: the fictional world of the narrative becomes real just for the period that you’re part of it.

Is immersion the same as presence? I think it probably is. While you’re feeling immersed, you’re transported to that fictional world. There’s a paper by Sheridan in MIT’s journal Presence in which he talks about the sense of actually being there when we experience these media. There’s a sense of departure from one reality and arrival at the other. We get in the flow of the text, of the narrative or whatever, but if something intrudes, someone talking in the cinema, or a cat jumping on your lap, then that connection with that fictional space is lost.

I rant about that a bit in a post on a previous blog, in response to the BBC placing a trailer for a TV show over the top of the climax of Dr Who http://blogs.warwick.ac.uk/markchilds/entry/responses_to_nortongate/ It’s worrying not just because it ruined the experience, but also because anyone who can do that obviously doesn’t get a large part of what art and entertainment are for, which is that sense of transportation and immersion.

Is immersion necessary for learning, or for engagement? On the whole, I don’t think it is. In fact some entertainment deliberately avoids immersion; Brecht called that the Verfremdungseffekt. I’m reading Midnight’s Children at the moment; it’s a good book and I’m enjoying it. But the frequent breaking of the author into the narrative, and the jumping from one scenario to the next, precludes that sense of flow, of being caught up with the story. The reader isn’t submerged in the same way. Actually that distant, sometimes critical reflective position is often referred to as engagement, and there’s a great paper on how that works in Grand Theft Auto http://www.jorisdormans.nl/article.php?ref=theworldisyours Moving between a sense of immersion and engagement is perhaps how we get the most out of something. Experiencing both at once is supposedly possible too, a state called metaxis.

Two people can watch the same piece or experience the same technology, and one can feel immersed and the other not. Ultimately immersion happens in your head, not on the screen. Technology has something to do with it though, but the problem with the idea of immersive technology is that it implies that the technology somehow creates that sense of immersion. It doesn’t, but it can help. It’s more useful therefore to think of immersiveness as a series of technological factors that can contribute to immersion (resolution, frame rate, width of field of view, surround sound, haptics, etc., the so-called depth and breadth of senses engaged) as objective measures, without being hung up on the issue that they don’t actually cause immersion.

I think one of the clarifications that can help is the difference between perceptual immersion and psychological immersion … this is in At the Heart of it All by Lombard and Ditton http://jcmc.indiana.edu/vol3/issue2/lombard.html which, together with Biocca’s The Cyborg’s Dilemma http://jcmc.indiana.edu/vol3/issue2/biocca2.html, is probably the most seminal article on this. Immersive technologies lead to perceptual immersion, but this might not necessarily lead to psychological immersion. And psychological immersion can take place without recourse to messing with your perceptions. It depends on the individual. How it depends on the individual is one of the things I’m particularly interested in looking at. But more on that some other time.

Another thing that gets bundled into the same package as immersion is immediacy. Sometimes immersion is defined as the perception of non-mediation. I don’t think these are equivalent at all. Sure, if you’re in an environment where you don’t notice the technology it can seem real (if technology ever gets that sophisticated), but the things that help mediate information can actually help you feel more immersed. An example: minimaps in Second Life. They pop up on screen (so you’re aware of something between you and the virtual space), but once you’re accustomed to them, and incorporate them into the automatic way you interact with the world, they become extensions of your perception. They help you wayfind round the space, and therefore add to the sense of immersion.

So we have three factors that are linked, but also have differences: immersion (=presence), immediacy (=non-mediation) and immersiveness (=realness, vividness).

I’m using the word presence for “being there” and I’m deliberately avoiding the word telepresence because that’s become an ambiguous word. Originally it was coined by Minsky to mean the ability to act at a distance http://web.media.mit.edu/~minsky/papers/Telepresence.html but it has since been expanded to mean any experience in which you felt you were present at a remote location (like feeling a videoconference was actually a face-to-face meeting). Recent developments in technology have reappropriated the word to mean specifically technologies that enable you to act at a distance, not just experience being at a distance. For that I’m trying to get into using the phrase “distal presence”, since that’s not ambiguous. But I just wish people would come up with a definition for a word that’s different from their definition of a different word. And stick to it.

So if any technology can cause immersion, why get hung up on the more immersive technologies? Good question, but I’ve run out of space. Some other time.

Sheridan, T. (1992) Musings on telepresence and virtual presence. Presence: Teleoperators and Virtual Environments, 1(1), 120–126.

Mobile learning in hospitals

Yesterday I had the opportunity to visit Birmingham Children’s Hospital. The children there still receive an education, and so it’s a site of James Brindley School (which has 14 sites around Birmingham). They asked me to come in to get them started on evaluating the impact that iPads have had on their teaching there. I think there’s an amazing amount that can be done with tablets (I’m agnostic about specific devices; though I’m director of research for the iPad Academy UK, I actually own a Transformer Prime) and this was an opportunity to see some of the real advances that can be made with them.

There are three main modes they teach in. There’s a primary classroom and a secondary one, for children who are well and mobile enough to leave their wards; each ward has a separate room for teaching in too; and then a lot is done at the bedside. They have children from around Europe visiting, so language can be a barrier, but there’s Google Translate just a tap away. They do a lot of maths and art education too, so it’s less of a problem in those subjects.

The devices integrate directly with the other work they do, so in animation they draw out the storyboards on paper, then use the storyboards to create animations using an animation app. The primary children showed me a video they’d made (the iPad integrates seamlessly with the reflectors and smart boards; no annoying plugging in of data projectors, which never seem to reach and need rebooting a couple of times to get them to recognise each other).

In the heart surgery ward the teacher showed me the maths apps she uses; Meteor Maths is a popular one (you have to tap on the two numbers that make the solution before the meteors bump into each other). She had first of all to persuade the boy she was teaching that he didn’t have to be scared, because I’m not that sort of a doctor. She says she has to be careful with that one because it can raise the heart rates of the children too much.

In the cystic fibrosis ward there was a little girl who at first could only use her head. She now has the use of her arms too, but she was able to interact with the iPad using a stylus in her mouth, and the one thing she wanted when she left was one of her own. Neurology also finds them useful, since even if the children can’t hold a pen, they can trace letters with their fingers.
It’s also useful because they can record children’s progress, which can sometimes be only small increments; by keeping work, or videoing reactions when, for example, patterns are touched on their hands, these can form a record over months, all integrated into one place. Everyone was doing Easter-themed work; one girl in an isolation ward was making little chicks, and I could see the work because it had been photographed, printed out and stuck in her workbook (another example of it integrating seamlessly with the usual practice). Another advantage: it takes hours to clean up a PC enough to take it into an isolation ward, and books can never be made clean enough, but after a wipe down with an antiseptic wipe a tablet is ready to go. The downside is on the neurological ward: the in-built magnets (which are only there to hold covers in place) interfere with shunts if they’re fitted, so the tablets can’t be used there. Another girl I spoke to (I think on the nephrology ward, but it was towards the end and I was feeling slightly swamped by then) said the best thing about it was the games, but these were actually games she was learning with, according to the teacher.

This was the theme all the way through: the children took to it because it was interactive, and the apps were often so game-like they weren’t aware that they were learning. With it being tactile and visual, there was a pick-me-up-ness (which is a real phrase, I know, I just googled it) to it that generated engagement. This development in practice has only taken a few months. This has applied to the teachers too: in the staff room they are drawn to each other’s practice through the sounds and visuals of the iPads they’re playing with, and although they’ve always shared their practice, they said that this has increased with the introduction of the iPads.

The aspect of mobile devices I’m particularly interested in is the way that the use of them becomes embodied (of course I am, it’s in the title of the blog). I think the reason why mobile devices are a step-change in our relationship with technology is the greater and faster degree to which they become extensions of ourselves. They easily make the jump from tool to prosthesis because they’re tactile, they’re flexible, we carry them close to us constantly, and, well, of course because they’re mobile. The depth of this is indicated by the relative comfort we have with letting someone else use our desktop PCs (no problem), our laptops (slightly uneasy) and our tablets/phones (feels very much like an invasion of our space). The iPads were kept in amongst the toys, books and so on, in plastic crates and in bags. They were just another part of the kit, albeit the one piece that brought all the other bits together.

Anyway, at the moment I’m looking for funding to expand the degree to which the evaluation can take place. There is definitely a lot of awesome practice going on that more people need to hear about. It’s also a very moving environment to be in. The children there were going through stuff that’s worse than anything I’ve gone through, and yet all were smiling (even the boy in the heart ward, once he realised I was a PhD and not an MD). It’s very difficult to find the words without lapsing into cliché or sentimentality, but if you think you’re having a crap time, it really is the best place to force you to get a grip.

Tips on editing

Today I am mostly editing book chapters – this is the fourth book I’ve edited, and I’ve also done two sets of conference proceedings. So there are some things I now do by rote that might not be obvious. Although most are:

The obvious ones are good discipline with organising directories with the various versions in them. If I’m writing my own stuff it’s easier: the date of the file goes in the file name (in a YYYYMMDD format obviously) so the most recent one is always at the bottom. When you have loads of different authors, all using different naming conventions, and when you might have to take a break of a month or so while they do rewrites, or you go off and do things that earn you money, then it’s important to make sure files are always sorted into the right directory and properly labelled, so you know where everything is when you come back to it. And have another directory with things like authors’ email addresses and so on, so it’s to hand. And a list of what everything is and which chapter it is too.
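The reason YYYYMMDD works is that it sorts alphabetically in the same order as chronologically. A minimal sketch of the idea (the file names here are invented for illustration):

```python
# Hypothetical chapter drafts, each named with a YYYYMMDD date stamp:
drafts = [
    "chapter3_20120115.docx",
    "chapter3_20111203.docx",
    "chapter3_20120301.docx",
]

# Because YYYYMMDD sorts lexicographically in date order,
# a plain alphabetical sort puts the most recent version last.
latest = sorted(drafts)[-1]
print(latest)  # chapter3_20120301.docx
```

Any naming scheme that puts the year first, then month, then day, gets this for free; DD-MM-YYYY or "final_v2_really_final" do not.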

Create a style sheet. It’s a bit laborious but the minutes you spend doing it at the start will save you hours at the end. Create a template using that style sheet and send it out to the authors. You’ll probably need to do some tidying up at the end, but it will save you a lot of hassle.

Get the authors to submit pictures as separate files, as well as inserted into the text. Publishers want them as separate files and it saves you having to mess about at the end, but it’s good to see where they belong. Find out what minimum dpi your publisher insists on too.

Here’s the least obvious one … tables should be submitted as images, not spreadsheets. You want the author to be deciding which bits go where and how it should look, not the publisher; leaving it to that stage creates all sorts of possibilities for error, particularly if the table includes images. As few files as possible is always good practice.

Check references tie up with citations. The easiest way I’ve found to do this is to go through checking that each citation ties up with a reference at the back, and while doing so highlight each reference the first time it’s cited. When a reference is missing, it’s easy to flag. At the end you have a mass of yellow (or whatever); where there are gaps, do a word search on the name. If it doesn’t return anything, then you’ve an uncited reference. Flag that for the author. Then remove the highlights.
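For a long chapter, a script can do a rough first pass of the same cross-check. This is only a toy sketch, assuming simple “(Surname, year)” citations and reference entries that begin with the surname; the text and references here are invented, and real manuscripts are messier, so it only narrows down the manual check:

```python
import re

text = ("Immersion is often equated with presence (Sheridan, 1992), "
        "though some treat it as flow (Smith, 2001).")
references = [
    "Sheridan, T. (1992) Musings on telepresence and virtual presence.",
]

# Surnames cited in the body, captured from "(Surname, year)" patterns.
cited = set(re.findall(r"\(([A-Z][a-z]+), \d{4}\)", text))
# Surnames the reference list starts its entries with.
listed = {ref.split(",")[0] for ref in references}

print("cited but not listed:", cited - listed)   # {'Smith'}
print("listed but not cited:", listed - cited)   # set()
```

The two set differences correspond exactly to the two manual checks: missing references, and uncited ones.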

The most common mistakes academics make? Plurals. “Data is.” “Media is.” Or pluralising “medium” as “mediums”. Mixing up phenomenon and phenomena. I even saw “dices” the other day. Sure there are some tricky ones (octopus, for example, but that one doesn’t crop up often), but those are always worth checking.

It’s worth looking up some of the references too. Sometimes they’ll misquote, or misinterpret. Usually common sense will flag those. If you’re thinking “really?” it’s worth a look.

Oh yeah, the reviewing thing as a whole is worth another post. In this one I’m really thinking about the bits that are specific to editing (organising, copy editing), since that’s the stage I’m at with this book.

EDIT: Some more points occur to me.

Don’t use the bullet point button to create bullet points. Create your own style called “bullet” and mark the bulleted lists up with that. Chances are you may change your mind about how to lay them out, and then you’d have to go through changing them all manually. Or you may change the Normal style and it re-sets all your bulleted lists, or just some of them.

Leave the formatting till last. There are lots of reasons for this; one is that during the editing process your authors may add new stuff (so you have to do it again) or delete stuff (so you’ve wasted your time). It’s unlikely that they will do the formatting themselves; if they haven’t started off using your template, then they won’t. The golden rule of anything to do with editing anyway is that it’s less effort to do it yourself than to get someone else to do it.

The other reason is that there is something exhilarating (by my standards anyway) in seeing all of the disparate chapters, with various naming schemes and formatting, and with variations in spelling, all coming together into a uniform look. It’s at that point you really feel like you’ve got a book. At the moment I’m about half way through and have them all for the first time in a single directory, with all the figures etc. under a standard naming. With the chapters that aren’t formatted properly (and bless, the author of the current one I’m working on has tried, but the styles on the headings and subheadings are switched), I usually paste the new chapter into a previously created file, save it as a new file and then delete the old text. It’s the easiest way to import an established style sheet.

Oh and if possible, if it’s a book in a series, have an earlier edition in the series to check against.

A rant about bioethics

Anyone who ever read my last blog (at http://blogs.warwick.ac.uk/markchilds/) will know that I sometimes go a bit off-topic in order to let off a little steam about something. And once off-topic I run the risk of running into territory I know little about. But I read this http://www.christian.org.uk/news/bioethics-expert-warns-against-gm-babies-plan/ and the full report here http://www.christian.org.uk/wp-content/downloads/3-parents.pdf written by someone who is apparently an expert, and it wound me up so much, because it’s such a bad (or good) example of what happens when you let your need to arrive at a particular position influence your opinion on something, that I thought I’d comment.

The discussion is about mitochondrial transplantation in order to address mitochondrial diseases. They’re rare but pretty devastating. It involves taking the mitochondria out of an ovum and replacing them with donor mitochondria. The report does a good job of describing the procedure. Here’s the thing though: although mitochondria are passed from mother to child, they contribute nothing to the physical traits of the individual. That’s the job of the nuclear DNA. The report seems to blur over this. Anyway, addressing the points one by one.

Biomedical risks: Yes, it sounds like there are some, but the point about any procedure is that it carries them, and the researchers take those into account. Making this a particular case simply because it’s about genetics seems disingenuous.

Similarity to cloning: It’s similar inasmuch as you’re messing with ova, but it’s not replacing anything to do with the chromosomes … there could be some ethical problems with creating a human clone, but that’s something to be considered when that’s suggested, not at this point.

Similarity with the ‘male egg’ proposal. This just seems like an excuse to appeal to the homophobes in the audience. It’s not really similar at all.

Moral status of the embryo. Like any procedure to do with embryology, this produces spare embryos. This isn’t exceptional, so isn’t grounds for any additional concerns. Plus embryos aren’t actually defined as alive legally. Get over it.

Modifying the genetic inheritance: Again the sleight of hand with the genetic inheritance of the offspring. Mitochondrial DNA doesn’t influence appearance, behaviour, whatever; mitochondria process energy and growth in cells. That’s it. Sure they’re passed on, but it’s not like you’ll inherit any of the traits of the donor. Oh wait … that’s the next argument.

It’s eugenics: OK, eugenics does have a bad name; it’s associated with Nazis breeding supermen, and ethnic “cleansing”. But this is about removing some really debilitating diseases and isn’t about creating new communities of Übermenschen. Again, this is an appeal to some pretty nasty reactionism.

Kinship issues: This is where the report goes from some dodgy appeals to knee-jerk reactionism to some seriously unhealthy worldviews, explicitly stated: “a genuine risk exists that future children may be deeply confused and distressed in their understanding of who their parents really are. This may have serious repercussions on the manner in which they define their identity and self-understanding.” Identity is my field. Identity is a complex and individually negotiated idea that each person works with and comes to terms with in their own way. The idea that someone may feel the person who donated their mitochondria is in any way a parent is remote, but then, people feel connections with the families of organ donors, so it’s possible. But so what? Kinship varies. I know people with three parents, four, none. Adopted, fostered. Who have wards, and just very close ties. I don’t want to get into knocking religion here (because it’s important to a lot of people who are important to me) but this is often where ethics advisers from a religious viewpoint get it wrong. Humanity, society, people are far more varied than they want to admit. We’re far more able to adjust to diversity than they want to accept, and they try to impose their own limited viewpoint of what is good and bad for people on the rest of us. I don’t mean religious people in general, I mean the ones who take it on themselves to advise the rest of us. It’s petty and small-minded.

I also should point out my own political ethical standpoint here (of course) and why maybe this is relevant to a blog called The Body Electric, but as I’ve said before, I’m a transhumanist. I see something like altering people or society and mixing things up and my first thought isn’t “ugh how scary, let’s stop it”,  it’s “wild, bring it on, let’s see what happens”. Luckily, I think my side is winning.

Finally we get to the part where I felt compelled to write this post: the claim that sperm and eggs represent the whole person. The quote: “When parents procreate in a normal way they also give of themselves in love wholly and unconditionally in the sense that it is not only a portion of the person that takes part in the procreation. It is the whole person that takes part, with his or her whole body and soul.” This is quasi-mystical bullshit. We are talking about addressing real people with real problems, and this kind of comment is precisely why, if you’re making decisions about actual real things, you need to leave your fairy stories at the door. It would be very worrying if this report, and particularly this final statement, had any influence on the decisions being made, and it illustrates why something like The Christian Institute is the least competent organisation at addressing moral issues. In short, people like Dr Calum MacKellar need to grow the hell up before opening their mouths in public.

An Edutechy Wonderland

This is written in response to a post about re-entering Second Life and the changes (and lack of changes there after a two year break) written by Bex Ferriday at http://mavendorf.tumblr.com/post/45827913344/second-life-second-attempt

Firstly, the problems Bex relates about the course weren’t really due to the design of the course, or the design of SL, in my opinion. I think with anything like that there’s often a problem of commitment from the people taking it. People just over-estimate how much time they have, other things crop up, and so participation wanes. Look at the dropout rate from MOOCs, and they only use tried and tested technologies. I got a lot from it anyway.

I think the biggest advantage and disadvantage of using virtual worlds for education was that for most of the latter part of the noughties, virtual worlds were synonymous with just one platform: Second Life. The advantage was that nearly everyone you knew teaching and researching in the field was in the one place. If you wanted to visit their build, observe what they were doing, or guest lecture in their teaching, then you didn’t need to learn to use a new interface (unless Linden Lab itself decided to screw around with it); you could use your own avatar, inventory etc. If they held a social event, you could meet up with everyone you knew and worked with, and invite other people over to what you were doing. If your work involved a social dimension (like exploring digital culture, or digital identity) then you had a living, complex world to send them out into, full of tens of thousands of people. There was a real sense of a community of educators working together.

The disadvantage of course was that it was all operating under the discretion of one software company, and when they pulled the plug, it all fell apart.

Well “pull the plug” is a slight exaggeration. For anyone who doesn’t work in the field, Linden Lab, who ran Second Life, ended the educational subsidy. So most institutions could no longer afford to stay in there, and a lot of cheaper options emerged.

Last year I was trying to organise a tour for a group of students, and so went through the normal list of landmarks to show them different resources. Fewer than half were still there. The numbers of people using it are down, but apparently revenue is up. So the customer base is a smaller number of more committed people. Which I guess suits the provider. Not so helpful for us using it for education though.

The impact on education, in terms of making it more mainstream, has been negative. The fragmentation of the community means it’s more difficult to show colleagues the range of stuff virtual worlds can be used for. It’s more difficult to find good examples of practice, because you first of all have to know where to look.

Bex’s other point is that the technology hasn’t moved on at all. I’m less worried about this. As long as it’s good enough to give you a sense of immersion (and it can) and a sense of copresence (and it does), then overall tech quality isn’t a problem. A lot of people’s equipment is still not great, so keeping the graphics at the lower end gives the majority of users a chance to catch up. I’ve given up on IT departments ever doing so though. What I was hoping for, though, is for the problems to be resolved. But the lag is as bad as ever. In a session I was teaching last week it was the worst I’ve ever seen; I got booted out several times and struggled to get back in.

But there are still fascinating things to see there, which reassures me that the technology is here to stay, and is an essential part of the educator’s kit. Just the ones I’m involved with: there’s the palaeontology course at the Field Museum of Natural History in Chicago. The Science Ethics course at the University of Iowa. The digital cultures course at Newman University, the Human Behaviour course at the University of Southern Maine, the Extract / Insert performance and installation by Stelarc, Joff Chafer and Ian Upton. All fascinating. All excellent from an education (or performance) perspective, and all only really possible in a virtual world. And all (maybe coincidentally, maybe not) taking place in Second Life.

I think what will emerge is either another single platform that will replace SL, so everyone can migrate back and recreate that single community, or the technology for hypergridding (i.e. linking together the different platforms) will fill the same role. In the thread responding to Bex’s post on Facebook, Anna Peachey said she always thought of SL as the fluffer for the bigger event. In the physical world, the work of the fluffer has been made redundant by Viagra. Hopefully the field of virtual worlds will see a similar game-changing technology.

Belief and the impossible

I saw the Daily Post prompt today http://dailypost.wordpress.com/2013/03/18/daily-prompt-impossibility/ and immediately went off on an internal rant. Bah, belief and all that. Reading others’ blog posts in response, I realise that the spirit of the challenge is to list things that are seemingly impossible (feats of physical endurance, forgiveness, stuff like that) and choose to believe in them. I took a completely different tack with my thoughts, maybe taking it too literally, but this is where I went with it.

Firstly there’s the notion of philosophical scepticism, which is that nothing can be absolutely known. Even “I think, therefore I am” assumes too much (how do you really know it’s you doing the thinking?). The Universe may actually be a hologram projection of a 2D surface http://en.wikipedia.org/wiki/Holographic_principle, it may not exist, this may just be a simulation, or it may have been brought into existence randomly a millisecond ago complete with memory of the past.

The rational person is aware of all of this, and brings it to mind occasionally, but it would be pretty difficult to allow this to weigh on all of one’s decisions. When we say “is”, therefore, we’re using that as a short-hand for “is to the best of our knowledge”. To the best of our knowledge I exist, this laptop exists. The Universe was created 13.8 billion years ago in a Big Bang and arose out of the final heat death of the previous one. And so on. These aren’t a matter of belief though; to say these things are true just means that, looking at the evidence, these are the best explanations we have. That’s really what truth is.

It’s therefore true that there is no God. Or no afterlife. To the best of our knowledge there isn’t. That’s not to say that there definitely 100% isn’t one. It’s always possible that there is an omnipotent divine being who just doesn’t seem to have an impact on anything. But accepting that doesn’t make me an agnostic, any more than accepting that the universe may be a hologram has an impact on my daily life. I will act in such a way that the truth is there isn’t one. To the best of our knowledge. That for me is the essential difference between an atheist and an agnostic. An atheist has made that observation, but is prepared to change his mind (and in the case of Tim Minchin carve “fancy that” on his cock with a compass if proved wrong). An agnostic has put off making that observation. If it were a matter of choosing my dessert (the majority of my metaphors include chocolate at some point), an atheist would have ordered their dessert, but be prepared to pick another one if they saw one that’s better. An agnostic is still looking at the menu and not picking one.

So is anything impossible? No, actually it isn’t. There’s a very, very, very tiny possibility that anything can happen. Even God. Do I actually hold any beliefs about anything? No I don’t. There are decisions made by weighing evidence, there are observations. But none of these constitute beliefs. This is what rational people do (and rationality is the most human thing we can aspire to).

On creativity

This is another blog post following up on one Grainne Conole has written (at http://e4innovation.com/?p=661) which is ironic, I suppose, given the nature of the topic. I wanted to chip in on the conversation too, because I wanted to offer a slightly different perspective on what creativity is, and what constitutes a particularly creative person. I think our culture is obsessed by the lonely, creative genius who works away creating rare works of art, and I think this is both limiting and offputting for those of us who aren’t actually geniuses. So I’ll offer some examples of what else comprises creativity, using as an example the person I’d consider to be one of the most creative people, if not the most creative person, whose work I follow: Gregg Taylor.

You might not have heard of Gregg Taylor, because he’s not someone held up as one of the great creative geniuses of our time; what he does doesn’t fit in with that image. Gregg is the force behind Decoder Ring Theatre, which produces podcasts in the style of 40s’ and 50s’ radio serials. He’s been doing that for eight years, and I first came across them about seven years ago.

These podcasts come out twice a month. So 24 a year, of which he writes 18. That’s 18 a year, for 8 years. Without fail. We underestimate that as an aspect of creativity: quantity. Sure, it’s important to have the novelist spending his entire lifetime creating one world-altering novel. But to be able to sit down and come up with something new, every fortnight? That’s an incredible achievement. I think more of us should look at the amount someone produces as a mark of a creative person.

That’s not to say the quality isn’t there. Sure there are better writers. I’m reading Midnight’s Children at the moment, by Salman Rushdie, and there’s a great writer. But the content of the podcasts is entertaining; there’s character development, nearly always a plot (as much as you can get into 25 mins), some fun lines, poignancy. They have the lot. And Gregg is a better writer than most. And he comes up with that every two weeks. For eight years. Very few creative outputs have those attributes of quality and consistency.

But I think what also makes the truly creative people stand out isn’t just the ability to succeed in one area. There’s a good team of actors in these podcasts, of whom Gregg Taylor is one. He acts, directs, does post-production and markets them. He’s also written novels based on the characters and has now launched the first comic book, to great reviews. Specialism is over-rated; adaptability is a mark of a very creative person.

I think, though, the most unhelpful of the characteristics we associate with creativity is the idea of the emotionally erratic soul suffering for his art. We all know people who are jerks, and who others let get away with being a jerk, simply because they are creative and innovative. This probably happens more in the academic world than the art world. It happens a lot in movie making too. They’re perfectionists, or they’re obsessed, or any one of a number of excuses we give for their bad behaviour. But really, if it’s such an effort for them to create, then they’re not that good at being creative. Sure, everyone needs to put their work first occasionally. I get ratty if I get interrupted in the middle of thinking about something. But actually that is because if I lose my train of thought it takes me ages to get it back; sometimes I never do. So that’s a case in point: I’m actually not that creative, otherwise I could recall it whenever I wanted. In contrast, the DRT troupe engage with their audiences, through Twitter and Facebook, and there’s an approachability there that you wouldn’t get, say, with other writers, actors etc. Not sure what I’d call this as a quality, but maybe not being really up yourself … humour or humility would cover it.

While on the subject of humility, I first heard someone describe themselves as a Creative at a seminar day a couple of years back (at the University of Hull actually). Talk about lack of humility. To describe yourself as a Creative is, by default, to imply that you’re somehow different from everyone else, that you’re creative and they’re not. No. You’re just lucky enough to be in a job that supports you to be creative; that doesn’t make you special. I get to spend a big chunk of my time writing. Sometimes I get paid for that. I am therefore a jammy bastard (http://www.urbandictionary.com/define.php?term=jammy%20bastard) and I never forget that. If you ever refer to yourself as A Creative, you’re pronouncing it wrong; it’s actually pronounced “wank-er”.

Finally, and maybe the most controversial of the pre-requisites for creativity: the DRT output is free. One of the mistakes a lot of people who create things make is to think that because they are talented, the world owes them the opportunity to put those talents to use. No it doesn’t. Work, for the vast majority of people, is doing stuff they hate that they get paid for. If you like doing it, it’s not work, and essentially there’s no reason to be paid for it. People need you to stack shelves, mend roads, grow food. They don’t need your book or your music in the same way. So yes, the majority will share your music, download your movie, pass on pdfs of your book chapter. That’s tough, but it’s a fact of life and you probably need to just face up to that rather than whinging about it and trying to come up with legislation to stop it. I’ve never actually taken anything like that for free; I pay for the music I listen to, and the TV shows I watch, and I donate to DRT and soma fm and any of the free content that’s out there, but I do so because I see it as a moral obligation, not a legal necessity. And from a selfish point of view, I want to see them continue. If enough people value your work, then they will pay for it and the work will continue. If they don’t, then they won’t, and it won’t. The truly creative will therefore make their stuff open source; they’ll share it for free and then see what happens. A freemium model, whereby you find ways to make money by selling extra content, works too – monetizing the long tail, as I almost managed to say at a recent transmedia conference, then had an attack of self-respect at the last minute. (Other people ripping off your stuff and making money off it is another matter; that’s out and out theft.) It’s inappropriate to rail against creating music, or writing books or making movies, under those conditions, because you knew that’s the way things are when you got into it.
If you don’t like it, there’s always stacking shelves, mending roads, or teaching to fall back on. If you’re really creative, you’ll still feel driven to create anyway.

Oh and if you want to check out DRT, their website is at http://www.decoderringtheatre.com/

Challenges of using educational technology

Just another one (still procrastinating about those 200 unread emails), a response to this: http://digitalliteracywork.wordpress.com/2013/03/13/the-6-biggest-challenges-of-using-education-technology-edudemic/

From the point of view of someone who used to be in staff development for educational technology, I think the mistake we make is to focus on showing people how to get the technology to work, and not on showing how best to use the technology. Frankly, anyone who’s alive in the 21st century should not need to be shown how to upload files, click on the right link, install software, call up a Skype ID or any one of the numerous things I was asked to show. We should take digital literacy for granted from professionals. If you don’t have it, f..k off and get it, then come back to work. There is nothing here that can’t be picked up on one’s own with an hour or so of playing about. I mean … really, it’s ridiculous what we allow lecturers to get away with not knowing, and what we feel obligated to provide for them.

What we should be focusing on in staff development is showing what the amazing new pedagogical things we can achieve with the use of technology are, and what the skills and techniques are that make best use of them. Inspiring people, or even just giving them a few ideas, is enough to get them started.

Where we really let them down though is in the fact that the technology doesn’t work in the majority of cases. PCs in lecture rooms that take 30 mins to boot up, or don’t have the right drivers to run USB sticks, firewalls that block access to Skype or Second Life. IT suites lacking the minimum spec graphics to run even the most basic virtual world platform. Admin rights jealously guarded and with no-one on hand to install the software needed. The horror stories heard about insufficient IT support for lecturers continue to do the rounds. It’s no wonder that people are put off from implementing new forms of teaching when it’s a constant struggle to get anything to work. Once tried and failed (in front of a room full of students) it takes a lot of courage to give it a second go.

Flow and writing

These are my own observations and response to Grainne’s latest post (http://e4innovation.com/?p=658), mainly because I’ve just spent three days solid writing and doing nothing else in order to meet a deadline, so it’s on my mind at the moment.

The subject of Grainne’s post is flow, and I’ve definitely been in the zone today. The book is on Making Sense of Space and is written with a long-standing friend and collaborator, Iryna Kuksa – she got the publishing deal, we came up with a subject we could both write about, and then off we went. Or rather I didn’t. I did an introduction back in September, then left it until December, didn’t quite get the last chapter written in the time I had allocated, and it’s taken me until now to get it written.

What helps? Well, deadlines help. They are the best cure for writer’s block there is. We all know the stories about Douglas Adams and deadlines, so I don’t need to repeat them here … I’m not as bad as DNA, I nearly always meet them, but this one has been particularly difficult to get started on. The reason, mainly, was that I didn’t believe I could do it. Although I did my half of the intro with no problem, this was mainly because Iryna had laid out what she wanted from me and how much, so no real thought was required there. So really six months (I started thinking about it in July) of panicking before I got down to it. But then I remembered something I really wanted to write about, which was a proposal I’d started to put together for a Marie Curie fellowship: something I’d noted about descriptions of game spaces, ritual spaces, theatre and virtual worlds while doing the PhD, and which had emerged in conversations with colleagues and friends. That gave me something I wanted to say. I was no longer just doing this because I felt I ought to write something; this was something I cared about. So that’s lesson 1 for writing: Find something you care about. Even then, though, it was a while before I started. I was really waiting for an opportune time; I’d taken on a few projects and needed to get those written, but had most of December and early January set aside for writing the book (well, my half of it). Other writing commitments eroded that though, so bit by bit I was reduced to only about two weeks: a few days before Christmas and about two weeks of January. This was a good time though. I’m not a huge fan of Christmas, and luckily I had a huge back muscle cramp that meant I couldn’t walk for about two weeks anyway, so I could shut myself away, turn off the email, turn down Facebook and focus on the book. Because really you need to think, and you need to immerse yourself totally to do that properly. Lesson 2: Shut yourself away from distractions.
That worked this week: three days with no Facebook, no email and no visitors, and I got it done. This morning I had the conclusion to write, and the only way to do that is to read it through, hold everything in your head at once, and try to look for the common themes. That needs protracted durations of quiet. I wanted to link experiences of space, experiences of technology, willingness to bond with technology, and ultimately look at long-term effects on what it means to be human. A lot of disparate stuff, but I think I got there without sounding too mixed up.

The reason why I wanted to bring together all those different things was that a big part of the pitch to the publishers was that this would be a book with a lot of contributors, but with the majority of the writing by Iryna and me. I’ve quoted friends, got them to add stuff through Facebook, interviewed them, quoted their dissertations. Of the 26k words I’ve written, I’d say that about 5k were written by others (all credited, obviously). I like having those viewpoints and voices, and I figure that it’s a platform for other people who have influenced me to also get into print. I’ve also let anyone read it who wants to, through posting it as FB notes or emailing it to them. It’s made it a lot more fun, and hopefully more readable. So lesson 3: Don’t do it alone.

The other thing that helped too, over Christmas particularly, when I had 10 days over three weeks of concentrated work, was to keep a spreadsheet of how much I was doing and to set a target every day. This was around 1000 words, which doesn’t sound a lot, but some days I’d delete half that before starting anything else. The advantage of this is that you have to keep going, even when you want to stop. And also, when you get to that point, you can stop. One of the mistakes with writing is to always think that you should do a bit more. The problem with that, though, is that if you never stop, where’s the incentive in writing? If you keep going at it and get your 1000 words done by 4:00, the evening is yours; aiming towards that goal gives you a point at which you can reward yourself, so you keep going. If you faff about and are still at it at 10:00, tough. The flow thing is all about feeding back how well you’re doing and thereby remaining motivated. So lesson 4: Lots of small targets, stick to them, and feed back regularly.
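For anyone who prefers a script to a spreadsheet, the bookkeeping above is simple enough to sketch in a few lines of Python. This is only an illustration of the rule described in the post (log net words per day, stop once the target is hit); the function name, the dates and the figures are my own invented example, not the actual spreadsheet.

```python
# A sketch of the daily word-target log described above.
# The 1000-word target comes from the post; everything else is invented.
from datetime import date

DAILY_TARGET = 1000  # words per day

def log_day(log, day, words_written, words_deleted=0):
    """Record a day's net progress; return True if the target was hit."""
    net = words_written - words_deleted
    log[day] = net
    return net >= DAILY_TARGET

log = {}
# Wrote 1400 but deleted 500 first: net 900, target missed, keep going.
log_day(log, date(2013, 1, 7), 1400, words_deleted=500)
# Next day: 1200 net, target hit, the evening is yours.
log_day(log, date(2013, 1, 8), 1200)
print(sum(log.values()), "words so far")
```

The point of keeping the deletions in the ledger is the one made in the post: some days half the target disappears before anything new gets written, and the log should reflect net progress, not raw output.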

Although I was originally a bit peeved at the time taken off the chunk of time I’d set aside to write, when I got down to it I could see that this had been an advantage, because during that time a friend had given me a book called Virtual Literacies. In it there was a chapter on the Schome project, which she’d contributed to. This ended up being the place where I started my chapter, because the discussions in the Gillen et al chapter in the book had stuff to say about how learners in Schome had related to those places. I could start by recapping that chapter and then branch out to talk about the bigger picture. This then became the format for the other chapters too: start with a case study of one thing, to illustrate the argument, then open out into the rest of the chapter. Without finding a formula like that I’d’ve been prevaricating for a couple of days each time trying to get started. This applies to each individual day too. If you finish one day with some sort of idea of what to do next, or even start by editing what you’ve already done, it makes it easier to start, because you know what you have to do. Some people I know even leave sentences half-finished so they can start off the next day by completing them. I wouldn’t take the chance that I’d be able to, but a few notes on what the next bit is, or a plan, really helps. Lesson 5: If you only know one thing, know how you’re going to start.

That really applies to the conclusion too. I find it helps to start those off with one specific thing … maybe something new, or maybe something said in the chapter, that can kick off the discussion. The first bit doesn’t have to be profound. It can be only vaguely connected, or occur to you because of something else completely. Just write that down and see what follows on from it. I was stuck on the conclusion for one chapter and couldn’t see what the lessons learned were, but then had a conversation about how we always try to fix things by making the technology better rather than the pedagogy. That seemed to be a lesson that also arose from the case studies I’d been writing about, so I put that down. After writing for a while, I realised that actually, it was true. Lesson 6: If you’ve got too many things to say, just start with one, picked at random if you like. That’s still better than not picking one.

And finally, lesson 7: sometimes you just have to go with the flow and let yourself be distracted. The last post I wrote was when I was still trying to get down to the final chapter. I saw the Daily Post challenge and spent a couple of hours writing a short story when I should have been working. I can’t really argue that it helped me get the work done, but I really don’t think I’d have been able to focus until it was written. Same with this blog. I have 200 unread emails and about 250 unanswered ones, but I thought of this first so got it out of the way.

Daily Post Challenge

I’ve just started following the Daily Post and read this week’s challenge to write a short story about a dystopia: http://dailypost.wordpress.com/2013/02/25/writing-challenge-dystopia/. Just as I finished I noticed that the deadline was Friday … aaghh … just missed it. Anyway, here is the story:

Harvest

Maybe this is a dystopian view of the future, maybe a utopian one. I don’t know; you decide.

Plenty has been written about the life of Edwin Janus Talbot; analyses, homilies, diatribes, all trying to decide if he was a saviour or a Judas, remaking the world in his own image or betraying us to alien intervention. What all can agree on is that he was an astronaut, that out there he made contact with Something, and that what he brought back changed us all. His motivations for doing so, however, have been subject to intense scrutiny.

The deaths of his wife and son, only months before his spaceflight in 2015, were obviously a huge influence. While driving along a highway they were sideswiped by a truck, driven by a drunk driver with previous convictions. Edwin survived. His family didn’t. The driver was found guilty but was given only a suspended sentence and had his licence revoked for a year. That may have been a bigger motivator for his later decisions. Some said his flight should have been scrubbed, but he passed all of his psych evaluations, and the comet fly-by could not be delayed. So up he went.

Accounts vary of what happened during the flight. It is a matter of record that for five minutes during his extra-vehicular activity all ground crews lost his signal. No voice, no EEG readings, no ECG readings. Not even static. During that time it was believed that perhaps some comet debris had struck him or his craft, or the solar activity noted at the time had damaged on-board systems. Then, miraculously, he reappeared.

He was dazed, confused, and unable to communicate properly until days after his return, and speculation about what had happened during those five minutes was rife. When he was finally able to communicate coherently it did nothing to reduce the speculation. He reported that glowing forms had emerged from the chunk of ice, surrounded him, spoken to him. They wanted to know what he was, where he came from. For weeks of his subjective time they interrogated him, until finally they sent him back. And they asked what one gift they could bestow upon Talbot’s planet. His answer was immediate. “No more murder.” Murder was an unknown concept to these beings. He had to explain it was the deliberate taking of life. And then each one of those terms also needed explaining. “Life.” “Deliberate.” Around Edwin Janus Talbot’s clarity of definition of those two words our whole world now revolves.

On hearing of his accounts of First Contact, Talbot was returned to quarantine. He was subjected to a series of tests, and these found, replicating away in his blood stream, small nanotechnological mites that had not shown up in their previous analyses. They appeared to have no effect on him, until they ran an MRI scan of his brain. There, in the part of the cortex that interpreted his vision, they found a lump, formed from a collection of these mites. And it was growing.

Talbot never again left quarantine, but by then it was too late. In the days he had spent in contact with the investigators, they had become infected. And their families had become infected. And so on.

Each new revelation caused a new wave of panic amongst the populace. Astronaut disappears then reappears. Astronaut reveals First Contact. Astronaut infected by alien nanites. This last, that the infection was in the wild, produced the greatest panic of all. But after weeks and months of speculation, and there being no evident effects of this infection, the hysteria died down. People went back to their regular lives. Thousands had tests; the nanites were found in their bloodstreams, the lump was found in their visual cortex, but they did nothing, just sat there. People adapted.

Then, in 2017, the grandparent of one of the first people to come into contact with Talbot died. As a close member of his family she’d long been known to have a tertiary infection, but this had long been dismissed as a cause of illness. A stroke had killed her, and she had lain in her bedroom for several days. When she was found her body was in an advanced stage of metamorphosis. Again the quarantine, again the constant surveillance. The public’s horror grew as information about the change the body was undergoing was leaked to them. Then, after weeks, the full horror was reported. A grainy black and white video, copied from security tapes without permission and leaked through social media, showed the body suddenly fragmenting into dozens of small insect-like creatures. They scurried over walls trying to find an exit, scratched their way through the plastic containment walls, then disappeared through the underground facility.

Again speculation was rife. The answer of where they came from was presumed to be the nanites. After the death of the host, the nanites had formed into these synthetic creatures. Their purpose was unknown though. Then someone thought to exhume the bodies of other family members who had died during the previous two years. All were gone. All graves showed evidence of having been chewed away from the inside.

That was when we as a planet, first knew the fear of the Harvesters. Although the first Harvest had not then happened, there was still the anxiety about what these things were, what they were planning. Then Bradley Inglenook killed his girlfriend.

Bradley lived near the base on which Talbot and his interrogators lived and worked, but, as far as anyone knew, had no direct contact with anyone who worked there. One night, after too many drugs and too much drink, in an argument with his partner, he picked up a bat and beat her to death. He had a history of domestic abuse, and when the police officers arrived at the apartment they had a good idea that finally, awfully, he had taken this abuse too far. The concerned neighbours who had called them watched as the police officers broke down the door, and fully expected to see Bradley hauled away in handcuffs. What they actually saw were the police officers backing away, and a man running between them, pursued by a wave of what looked like small spiders. As he fell, he screamed, and the creatures passed over him in a wave. As they watched, horrified, the things dismantled him, then disappeared into the night.

It was the first Harvest witnessed. Little by little, as more occurred, more of the process was pieced together. The lump in the visual cortex received and transmitted visual information, though to where was not properly known for a while. Everything someone saw, if they were infected, was perceived and analysed, but as far as could be determined, with only one purpose: to detect murder. If a murder was committed, the perpetrator was identified, and the Harvesters were summoned.

If you were identified as a perpetrator there was no appeal, and no escape. Bradley’s death was only the first. It was as if the metamorphosed bodies of the infected that had died had suddenly reached critical mass. Within weeks another death, this time a child killed by a woman and her partner. Both had beaten the child, but only the one responsible for the final blow was sought out by the organic machines. The mother watched and screamed as her partner was slowly devoured by them, then watched as they disappeared.

Neither was there any escape. A drunk driver ploughed into a parked car on the highway only a little distance from where Talbot’s family had died. Fully realising what he had done, and what the punishment would be, the driver fled back to his car and sped back to his home. Locking all the doors, sealing the windows, shoring up every conceivable entrance to the house, he waited. Neighbours reported hearing the ominous susurration of the Harvesters as they gathered around the building, swarming over the windows, clustering by doorways. The driver, still the worse for drink, phoned the police, begging for help; the 911 call, replayed on every news channel, caught him saying he would do anything, just to keep the damn things away. TV cameras arrived as the Harvesters found a gap under one of the doors and flooded into the hallway where he stood, broadcasting his screams over the phone as the cameras showed images of the exterior.

In his quarantine on the base, it was reported that Edwin Janus Talbot watched the live news feed with a slight smile on his face. Then closed his eyes and did not open them again.

The small town in Florida where the infection had started was merely the first place on Earth to achieve this critical mass of Harvesters. The infection had already spread to be almost totally worldwide. In Chicago a parent watched, horrified, as their teenage son, who unbeknown to anyone was in a street gang, was consumed by Harvesters. It was presumed he’d been responsible for a shooting earlier in the day. In London, three people beat someone to death in a street fight and, even before they left the scene, were Harvested, all caught on CCTV and broadcast around the world. In Israel a soldier who had shot and killed a stone-throwing teenager was consumed by the small alien devices. Distance was no defence. He and several comrades had fired; only one shot had hit. The Observer in his head and those of the other soldiers had relayed the information to whatever processing system made the judgment, and the execution was automatically carried out.

Eventually the final link in the chain was discovered. Deep underground, beneath a subway system in Delhi, a large mass of neural networks was found, composed of the connected bodies of billions and billions of nanites. The need for a large critical mass was evident: until enough of the infected had died, and their bodies transmuted, there was not enough mass to create one of these alien brains. Without them, the sentences could not be carried out. That brain was destroyed, and funeral practices everywhere required cremation rather than burial, but it was too late to stop. Enough Harvesters, and enough Judges, existed for the genie to be entirely out of the bottle.

Needless to say, murder rates fell drastically once people realised that there would be no escape from Retribution, as the act of disassembly by the Harvesters came to be called, and that the Judges made no allowances for context, or provocation, or political motivation. Indeed, the Judges’ reading of “deliberate” was open to interpretation. A crime of passion committed in the heat of the moment still met with Retribution. An accident may or may not meet with the sound of hundreds of crawling insects. An act of incompetence by a doctor led, on a number of occasions, to a hospital ward being flooded with the screams of the medic being torn apart soon after their patient died, and for a while this resulted in a widespread moratorium on operations. As the Judges became (it was presumed) larger and more sophisticated, the consistency and nuance of Judgments improved, and these days it is rare for accidental death in surgery to lead to Retribution.

And as the alien lifeforms defined and redefined “deliberate”, so too did they redefine “life”. To the disappointment of many, it was not considered murder to kill many animals. Swatting a fly did not lead to death; neither, surprisingly for many, did fishing. Again people cursed Talbot for the subjectiveness of his definitions. Until people noticed that loggers in Papua New Guinea had almost entirely vanished. It appeared that the Judges considered all primates as “Life”, and so with each orang-utan that died in a deliberate fire, at least one logger who started the fire would be Harvested. It appeared that they had based their understanding of life on the template of Edwin Janus Talbot, and stage by stage, as the Judges understood sapience better, more animals appeared to become taboo. Japanese whalers would return home to be met by a wave of Harvesters that would consume them, leaving their catches unclaimed in the docks. The last few, on learning of the fate of the others, chose to live always at sea, never stepping on dry land, along with a small community of those who have murdered. It was found that the only protection against that wave of deathly insects was water, and some, though not many, choose that as a way to protect themselves against Retribution. For more, suicide is the only sure way to ensure a painless death.

And that is the reality we all live with now. Most of us feel liberated, no longer needing to fear the ultimate violence from other people. Occasionally an aggrieved lover, or a frustrated parent, or a political extremist may still kill, in the heat of the moment, or with their belief in a calling to kill. A psychopath may still shoot an innocent bystander, or a street fight go too far and result in death. And for some it is the most extreme statement of suicide they can imagine. And then the reports will be of another Harvest, and we will all become very conscious of the recording and transmitting device in our heads, and of that alien neural net, hidden away beneath our feet somewhere, ready to Judge us. But genocide no longer happens; once the first blow falls from a machete, there is never a second. Wars cannot take place, when superiority of firepower, or distance from target, or perceived notions of right and wrong, cannot defend against that wave of death crawling towards the killer, ready to dismantle him or her.

So is this dystopia, or utopia? Are we living in Talbot’s nightmare, or his dream? By now, when we have lived with this for so long, it’s all we know. And so it’s neither. It’s just the way things are.