On Liminality, Bullshit and Lorraine

Bullshit

Bullshit is defined in the literature as unevidenced claims (MacKenzie and Bhatt, 2020). I would like to extend this definition to cover anything that is miscategorised ontologically.

Broadly, there are four ontological categories:

  • “Proven”
  • Unproven
  • “Disproven”
  • Unprovable

So “proven” claims are those with sufficient evidence to convince the majority of people who have viewed the evidence. The scare quotes are because nothing is ever completely proven to be true; the best we can say is that the statement is the one, of all the possible statements, that best explains the observable evidence. Examples are evolution, general relativity, the standard model, climate change, and so on.

Unproven claims are those with insufficient evidence to convince the majority of people who have viewed the evidence, but for which there is some evidence, or for which there are competing explanations. Examples are string theory, …. These are contested, and often there are social, hierarchical and cultural reasons why some lead over others. For example, explanations published in English are likely to be forerunners over those published in other languages.

“Disproven” claims are those where the overwhelming evidence is that they are false: that vaccines cause autism, creationism, etc.

Unprovable claims are those for which evidence cannot be acquired: God, unicorns, the afterlife, etc. The claims are that these things exist despite there being no evidence. Absence of proof is not proof of absence, so the argument goes.

So any statement properly attributed to the correct category isn’t bullshit, but if it is misattributed, it is. For example, “I believe in God and that belief sustains me through my bad times” is not bullshit, because it makes no untrue claims. “God loves you all” is – because it claims that God actually exists, and we have no evidence for His existence.

“The Earth is flat” is bullshit, as is “vaccines cause autism”. Those both claim disproven things are proven. But so is “science is just a matter of perspective”, as it states that a “proven” thing is unproven. Yes, you could overthrow the current paradigm, and people have, but you would need a wealth of evidence to outweigh the current best “proven” explanation, and to move it to a different category by presenting that argument. To state that theories agreed across all cultural perspectives are just a male, white, western perspective, when science is being used by all countries to determine truth from fiction, is bullshit.

An addendum – I’m talking here about the positivist end of the spectrum: astrophysics, biology, etc., the things based on measuring stuff (see a previous blog post). That’s my own bias, as that’s where I go when I think about science, rather than to the more interpretivist fields like anthropology, psychology and education. With those there is a strong argument that there’s a western domination which influences the field – have a read of this: https://www.nasw.org/article/science-writers-urged-tell-stories-include-indigenous-perspectives

Within the “proven” category we also have the distinction between positivist and interpretivist perspectives. Positivist observations are more powerful, and indicate stronger causal links; there is instrumental reality to back them up (although instruments can be wrong). But interpretivist data are also useful. To state that a model needs to predict behaviour absolutely in order to have value is bullshit, because even if it’s only useful most of the time, it can still inform decisions. But to say that a measurable phenomenon is of no more value than a collection of qualitative data is also bullshit.

So yes, things move from category to category, but only over time, and only with evidence and reasoned argument.  There are blurry lines between the categories, and opinion might vary on which side some things legitimately belong. Bullshit only applies outside of these blurred lines.

These distinctions weren’t always so evident. It’s only with the Enlightenment in the 18th century, and the development of the scientific method, that humanity developed a mechanism to fully determine the difference between finding stuff out and making stuff up. And to state that science is the dominance of a western male perspective is bullshit. Anyone who wants to tell the difference between making stuff up and finding stuff out uses the scientific method. Indian scientists put stuff into space using science, not western science. Science. The only difference between a Chinese scientist investigating copper nanotubes and an American one is the abbreviation they use. It’s always been this way. Current science is an amalgam of Islamic scholars, Greek philosophers and Chinese inventors. One of the world’s first universities was in Timbuktu. Reality is a humanity-wide endeavour.

Part of the problem is that there is a perceived difference in value between stuff made up and stuff found out. The Enlightenment has led to the perception that only things that are true have value. Hence we’ve had epistemicide, where whole systems of making stuff up have disappeared. But just because something’s made up doesn’t make it useless. In order to compete with real stuff, though, everyone feels they have to claim that their worldview is real. Hence people claiming that God is real – I mean really real, in a literal sense, in the same way that I, and probably you, are.

Before the division into real and not-real, people felt comfortable mixing ideas they had created alongside stuff they saw. So we’d have theologians arguing about how many angels can dance on the head of a pin, or people dancing so the sun would come up. They didn’t really believe that those things were really real in a literal sense; the distinction didn’t matter. There are a lot of cultures around the world that still haven’t adopted this hierarchy. The idea of qi informs the design of buildings, but no-one tries investigating copper nanotubes using those principles. It’s not really real in that sense. The Dreamtime doesn’t actually literally exist in the same way the world does; it’s a signifying mythical system that exists alongside the real world.

But since the Enlightenment people who like the made-up stuff feel they have to place it on the same footing as stuff that’s found out, which means claiming that made-up things are really real too and then using made-up stuff to make decisions about real things. So they redact science books because it contradicts the made-up stuff about creationism, or they use a line in the Bible or the teachings of an Imam to decide which real people should be allowed to fuck whom.

It’s a misassignment of ontological categories.

It’s bullshit.

On liminality

The idea of liminality was developed by Victor Turner, an anthropologist who later turned to the study of ritual and performance; he took the concept from Arnold van Gennep’s work on rites of passage. The idea is that there are spaces separated from the normal space in which we live, and these spaces are separated and sustained by belief. Liminality derives from the Latin word “limen”, meaning threshold; in a theatre, the limen is the edge of the stage. So the events of a play exist within a liminal space. Within that space we suspend our disbelief – an actor becomes the character, the backdrop becomes an actual landscape or drawing room. But the same is true of a film, or a book; it’s called the diegetic effect. We can sustain that level of engagement while also knowing that it’s not real. The state of knowing something is real and not-real at the same time is known as metaxis, or double-consciousness. We know deep down it’s not real, but while we’re in the liminal space we suppress that knowledge in order to fully immerse ourselves.

While we’re engaged in the film, according to this view, the real world doesn’t intrude. We know that it’s not real, but while we watch it, that doesn’t matter. We know aliens aren’t real, but we’re still scared by them invading; we know that’s just an animated drawing of a deer, but we still cry when Bambi sees his mother die. We can take part in liminal spaces too. A game space is a liminal space. We know it’s only play money, but when we land on Mayfair with a hotel on it, we’re really pissed off. Virtual worlds are liminal spaces too. As are ritual spaces. Within them, roles are changed, identities can be changed, rules are changed. The made-up is made real. Huizinga called the boundary separating the liminal from the regular spaces the Magic Circle. Huizinga particularly talked about games, and the Magic Circle for a game, for example Monopoly, extends around the players. Within that space, the play money matters, and there are specific rules that govern behaviour. We all become capitalists.

Liminal spaces aren’t just defined by space; they are also bounded by time. A stage outside of a performance isn’t a liminal space. It’s just a normal space. It’s transformed into a liminal space by ritual elements. The surroundings help here. There’s an interesting piece by Pierpoint (Childs et al, 2014; 121-124) in which the surrounding elements of a proscenium theatre are included in this ritual. The design of the front of house, the direction to the seat, sitting down, reading the programme: all of those build up to the moment when the orchestra plays and the curtain goes up. These are all signifiers of the moment when the liminal space is created. Performances where actors drift on stage, and there is no real start, feel odd because this ritual commencement hasn’t taken place. Site-specific theatre is more challenging partly because this liminal ritual is absent, so we don’t know when or where the liminality is.

Liminal spaces can also be returned to and invoked repeatedly. By having multiple texts – a series of movies, or a TV show – a consistent repeated diegesis is created. This can also be extended outside of those texts by others: through fanfiction, for instance, or conventions like those the Sherlock Holmes Society runs, where the canon is engaged with as if it were real.

The Pedagodzilla podcasts are liminal spaces. The Godzilla roar, the music, Mike’s intro: all set up the liminality of the space. It’s important because it signifies that within those 40 minutes, making stuff up is legitimised. Mike sets out the rules: that there is a genuine piece of pedagogical theory, a description of a piece of pop culture, and then we will apply the real stuff to the made-up stuff as if it were real. We are deliberately misattributing the ontological nature of, for example, Yoda as a supply teacher, because we know it’s inappropriate, and therefore fun. We know that he doesn’t exist, and wasn’t created in order to be analysed in that way. And we know the audience knows that. And we hope the audience knows that we know that. It would spoil the lusory nature of the liminal space for someone to criticise the argument with “but he’s not real.” That’s not the point. Made-up stuff is legitimate within the liminal space.

Ditto church services. The organ music, the singing, the sermon: all of those add to the liminality. Gee would also describe the space as a semiotic social space; if you can read the signs around you – in the vestments, the props, the stained glass windows – it all adds to the experience of it as a liminal space. Within that liminal, time-bounded space, misattributing the ontological status of God is fine. You can say He’s real within that space, and share fellowship and agape and all that feelgood stuff, because the normal rules of engagement with reality are suspended. Made-up stuff is permissible.

Liminal spaces can also exist within other liminal spaces. For example, later in the same chapter as the Pierpoint reference, Ian Upton (Childs et al, 2014; 127-130) talks about ritual spaces within Second Life. We adopt one set of rules on entering the virtual world, and then within the virtual world cross another limen where rules and identities are transformed again. Ian argues that the change between the non-performance SL space and the performance SL space is a greater one than that between RL and SL.

Where it breaks down a bit

This idea that liminal spaces are separate, discrete places cut off from normal space doesn’t always hold, however. The membrane around that magic circle is permeable. Anyone who’s had to placate a child who’s got upset by landing on Mayfair, or fallen out with someone because they lifted money from the bank, will know that what happens within the Monopoly game space does have an impact on the rest of the world. More positively, the liminal space can excite us, or sustain us, in the rest of our lives, as we look forward to the next movie in a series, or build a fan community around those spaces.

It works the other way too. In novels and films, the exterior world will often intrude, to remind you it is only a book. In Vanity Fair, Thackeray interjects to remind the reader that he’s writing the novel. The sense of immersion is undermined, the diegetic effect broken.

And sometimes the membrane extends well beyond the liminal space. A football ground is a liminal space. There is the ritual of the singing, the buying of the pie, the Chinese dragon dancing between halves (I’m guessing, because I’ve only ever been to one football match in my life). The crowd shares in the made-up thing that it matters whether one set of 11 people on the pitch gets the ball in the net more often than the other 11. That’s what the game is. That’s what all games are. They’re enjoyable because we’ve invented a set of criteria that matter – not because they do intrinsically, but for the sense of camaraderie, of communitas, that occurs when the criteria are met. One woman jumps higher than the other woman; one robot flips the other robot out of the ring. We all know deep down that they don’t matter, but it’s fun to believe that they do.

But that liminality is breached when it extends into people’s entire lives. Outside of the match – that liminal space bounded by space and time – the made-up thing can come to dominate those lives. At some level there is the awareness that actually it’s a manufactured reality, but that realisation is permanently suppressed. It’s the same with religion. The statement that God is real isn’t left at the church door, but is taken out into the real world and acted upon as if it were true all the time. It’s self-evidently not true – it’s unprovable, that’s obvious – but the ontological status is misattributed.

Let’s remind ourselves where the bullshit lies. “I know I can’t prove God exists, but I choose to believe He does, because that belief gives me comfort, and ties me to my community” – not bullshit. “God exists and He says you’re going to Hell” – bullshit.

There’s an extra level of complexity with liminality, which is where it’s intricately woven into the external world. This liminality isn’t obviously tied to a space or a time, but it’s liminality nonetheless. This is where we come to the Dalai Lama, Tony Stark and Lorraine Kelly.

The Dalai Lama, Iron Man and Lorraine Kelly

What these three have in common is that they are all fictional characters; they have identical ontological status.

The Dalai Lama is the latest incarnation of Avalokitesvara, a Bodhisattva. This is obviously made up, as there is no evidence for reincarnation or the existence of Bodhisattvas. He is performed by a real person named Lhamo Dhondup. If people believe in that sort of thing, then when they meet Dhondup they might believe they have met the Dalai Lama. Where the reality and the fantasy become distinguishable is difficult to say. Maybe when he gets home Dhondup takes off the assumed identity and just becomes a normal guy. Maybe he performs that identity 24/7. Similarly with Tony Stark. The character was created more recently, and we know by whom (Stan Lee, Larry Lieber, Jack Kirby and Don Heck), whereas the name of whoever made up the Bodhisattva stuff is lost in the mists of time. In his most recent incarnation, Tony Stark is performed by Robert Downey Jr. However, that performance isn’t restricted to the liminal space of the MCU, as Downey (like Dhondup appearing as the Dalai Lama) goes to hospital wards to meet sick kids who (like Buddhists) really believe he’s Tony Stark. Downey doesn’t do that all the time – he has a life outside the liminal space – but he carries that liminality around with him, able to generate it when it’s required. Again, that liminality legitimises the madeupness. The child in the ward isn’t meeting an actor, he’s meeting a superhero. For the moment Downey is there, the fantasy is real. Ditto Dhondup.

Lorraine Kelly is slightly more complex, in that the fictional Lorraine Kelly is performed by a real person also named Lorraine Kelly. This was actually a point of law, proven by the real Kelly, because the fictional nature of Lorraine means that she’s a performer when she’s doing her presenting; she’s not being herself. When she meets fans at events, she’s also Lorraine, but where the real Kelly ends and the fictional Lorraine begins is a blurred liminality.

In the world of professional wrestling this is known as kayfabe. Although professional wrestling resembles a sport, its roots are actually in the strongman sideshows of carnivals. From the Gold Dust Trio in 1920s New York onwards, the matches have been “worked”, ie fictions created as performances. The ring is a liminal space (as are all sports spaces), but the liminality extends beyond the ring, as the worked narratives are played out in public outside of the ring, extending the narrative into mainstream space. The wrestlers abuse each other in publications, carry on feuds in public spaces, and the wrestling news describes these stories as if they were real news. As internet culture has formed, the liminality has extended to websites, but this also makes maintaining the work constantly more difficult, as fans may spot enemies together in restaurants etc.

This is still liminality, but here the wrestlers carry that liminality with them. In dressing rooms etc., if a “mark” (ie someone not part of the work) is spotted, the wrestlers will call out “kayfabe” and switch on their characters, in the same way that Kelly, Downey Jr and (presumably) Dhondup do, generating that liminality around them.

And what? It gets more complicated?

This blurring of liminality is also deliberately played with in professional wrestling, at a level of complexity rarely developed in other media. A wrestler might be really hurt, or go AWOL, or fall out with his coach. Or a “face” and a “heel” might fall in love. This is called a shoot (as with most carny traditions, there is a huge terminology describing the differences between the liminal and external spaces). A shoot is when the liminality is unintentionally dropped and reality intrudes. This could happen with the other examples too: anything could happen to cause Downey Jr, Kelly or Dhondup to slip out of their roles, with varying consequences.

Where professional wrestling is more complex, however, is that there are also worked shoots. What may seem to be a falling out, a narrative in which the liminal space has been broken, can actually turn out to be part of a larger narrative, and it’s all part of the work. Fans are constantly kept uncertain as to what’s real and what isn’t. But they work it out, or adapt in retrospect if they haven’t. Professional wrestling fans’ realities are constantly being retconned, and it’s all part of the fun. We could learn a lot from them.

So what’s got fucked up?

Believing in things is fun. Make-believe is reassuring; it brings respite from the harsh realities of life, and particularly death. We can console ourselves that there is a heaven, or whatever it’s called, and that gets us through. It’s more exciting to meet the Dalai Lama, or Tony Stark, or Lorraine, than it is to meet Dhondup, Downey Jr or Kelly. It’s tedious to constantly have to follow up a statement about God or Yoda with “I know he’s not real, but just for the sake of discussion let’s pretend he is.”

The problem is that people feel the Enlightenment has forced on us this hierarchy between finding stuff out and making stuff up. People feel that stuff has to be true in order for them to be justified in believing in it. And worse: deep down, people know claiming unprovable things are true is bullshit (once you know how to tell the difference, you can’t unlearn it), but that just means they end up defending it even more vociferously. You could argue that there are other ways of knowing, that evidence is not the only way to find things out, but then that’s bullshit about bullshit. That level of self-deception is going to wear you out.

The effect of all this bullshit (and metabullshit) is that we get people attacking soap stars because of something their character did in last week’s episode, we get climate change denial, antivaxxers, holocaust denial, homoeopathy, we get statements like “it’s Adam and Eve, not Adam and Steve”, we get people forgetting that ultimately it’s just a game, etc. etc.

And on the other hand, where many epistemologies collide with scientific rationalism, scientific rationalism wins (because it’s the only one that works) and we lose all these alternative worldviews in a global epistemicide.

The answer to this either/or state, between accepting and rejecting reality, is liminality. You can have your cake and eat it. You don’t have to pretend stuff that others have found out is made up, just so you can still have stuff you’ve made up. Have your liminal spaces, but acknowledge that they are liminal spaces. You just need to be able to see the limen. Within the delineated liminal spaces, you can call anything you like true. Go to your Mumsnet group and complain about all the chemicals in your food, have your YouTube channels about the Earth being flat, have your services where you talk about all the wonderful things your God has done for you. But see the limen.

From all the examples above, we can see how flexible liminality is: it can be delineated within specific spaces, it’s permeable, it can be spontaneously generated once it’s been established, it can follow people around. The boundaries can be played with. So feel free to apply liminality when and where you like, but when you come back out into the real world, acknowledge that it’s just football, or religion, or a movie, and use real things to make decisions about the real stuff.

Recognise that every damn thing has chemicals in it and act accordingly; don’t go up in rockets to prove the Earth is flat; acknowledge that God is no justification for stopping your son from marrying his fiancé, because God is something someone made up at some point. Acknowledge your inbuilt bullshit detector and end the self-denial. Accept reality into your lives.

Go to your liminal space. Have your fun. Have your life-affirming moments. Share your beliefs with your fellow worshippers as if they were real things. But see the limen, as you transition back out into the world you share with the rest of us.

See the limen and we’ll all get along just fine.

References

Childs, M., Chafer, J., Pierpoint, S., Stelarc, Upton, I. and Wright, G. (2014) “Moving towards the alien ‘other’”, in Kuksa, I. and Childs, M. (eds) Making Sense of Space: The Design and Experience of Virtual Spaces as a Tool for Communication. Oxford, UK: Chandos. pp. 121-138.

MacKenzie, A. and Bhatt, I. (2020) “Lies, Bullshit and Fake News: Some Epistemological Concerns”, Postdigital Science and Education, 2, 9-13. https://doi.org/10.1007/s42438-018-0025-4

Cancel culture and the limits of free speech

I’m currently boycotting Twitter in support of the antisemitism protests. If you’re not up with the Twitters, basically some grime artist called Wiley (how do these people become famous without me ever hearing of them?) had a full-on rant about Jewish people, and Twitter took way too long to take down his account. I know not tweeting for 48 hours is the armchairiest of armchair activism, but it’s something. Maybe.

But it’s been a bit of a relief not being on there. It seems like every day there’s some moral controversy about someone who’s worked with someone else when they were cancelled, or about whether cancelling itself is a good idea or not. The argument is that everyone has a right to free speech. The opposing argument is that no-one can expect to say what they like without consequences. Actually, the challenge of working through these moral quagmires is part of the reason I’m on there. It’s a constant test of where the right lies, and where I want to position myself ethically. And it’s not always as easy to spot where the line is as it was with Wiley (the grime guy, not the publisher).

But positioning myself ethically all the time is tiring, so I’ve been trying to encapsulate what I described in a tweet as a moral quagmire into a few key aphorisms because that makes it way simpler for me. I thought I’d share them.

I’d been thinking about it a bit more because in the recent Buffy episode of Pedagodzilla there was much idolising of the work of Joss Whedon. We didn’t once address the revelations about his alleged history of being emotionally abusive towards women. I was fully expecting some flak for this, but it hasn’t yet emerged.

It’s also cropped up because of the letter by JK Rowling, Salman Rushdie etc. condemning cancel culture. I also read this article https://theintercept.com/2020/07/14/cancel-culture-martina-navratilova-documentary/ which details the struggles to get a documentary about Martina Navratilova made, because of a couple of cancel culture incidents.

More personally for me, within the comics industry there’s been a kerfuffle because Dynamite Comics recently contributed to, and then publicised, a variant cover for a comic published by the leader of the Comicsgate movement. For anyone not keeping up, Comicsgate is a group of people who oppose what they see as a political agenda forced onto comics by liberal progressives: “forced diversity”, such as non-white characters and gay couples being introduced into comics when their ethnicity or sexuality isn’t relevant to the plot. Their position is that they just want good storytelling without having homosexuality forced down their throats. In isolation, the argument about not sidelining storytelling with political agendas sounds like a reasonable one. Very few people like authors using their platform as an opportunity to push politics, because they’re exploiting their relationship with their audience to fulfil their own personal ends. Where the argument falls down, of course, is that not including non-white or LGBT characters is just as political a decision. CGers just don’t see that as a political choice, in the same way that fish don’t see water – it’s the norm that they’re used to, so it seems neutral to them. Also, being predominantly white and straight, they want to see themselves, and only themselves, reflected in what they read.

Also, what CGers fail to recognise is that comics have always had a liberal progressive agenda. If you look at the characters in the MCU, for example, 90% of the characters were created by second-generation Jewish, Irish or Ukrainian immigrants. Hang on, I will check that. To be precise: 80% of the title characters (and all of the title characters if you exclude the movies that are set off-world) were created by the offspring of Jewish, Irish or Ukrainian immigrants. Superheroes are the wish-fulfilment fantasies of the oppressed and disenfranchised, who wanted something to stand against the inequities of this world. And they have been read for 80 years by geeks who felt the same.

But the CGers feel they are the oppressed now. Oppressed by the influx of non-white, non-male, non-straight people into what they see as their world, not realising it never really was theirs.

Aphorism 1: Just because you’re not getting your own way, doesn’t mean people are out to get you.

But on a larger scale this is how a lot of mainstream culture sees itself. We can no longer say what we think, is the complaint, without being cancelled, or losing our jobs. We’ve lost our freedom of speech.

And freedom of speech is a tricky one. What should be the limits on what you can say?

Well, actually, we have a pretty useful law on how freedom of speech works. You can say what you like as long as it doesn’t affect someone else’s fundamental human rights. What’s also cool is that there is no protection just because your opinion is a deeply held religious belief. For example, the legal response to someone who feels they can be homophobic because their religion says homosexuality is evil is “nope, the law’s right, your religion’s wrong. STFU.” Which is the correct response.

Freedom of speech is a tricky one. I may have said that already. I remember recently on the twitters a famous TV mathematician accusing Noam Chomsky of being antisemitic because he was defending someone’s right to publish a book denying the holocaust happened. This is a huge reach. The Chomsk’s statements are more those of a hardcore free-speecher: anything goes. I recognise the validity of the argument – if you stop people from saying stuff you don’t like, then what happens when someone stops you from saying stuff they don’t like?

Aphorism 2: Agreeing with someone’s right to say something doesn’t mean you agree with what they say.

This was a tricky one for me, because I was firmly committed to the idea of free speech. Some background: I was one of the Thatcher generation. In my first teaching job, Section 28 came in, which meant I could get fired if I promoted homosexuality as a valid lifestyle. Of course, the kids liked to get their teachers into trouble by asking them outright what they thought. I said it was as valid as straight relationships. Because it is. No-one ever fired me. We also had Mary Whitehouse and her bunch of thugs, who liked to ban things because they were fucked-up evil people. No other reason. And we had Salman Rushdie and The Satanic Verses. More fucked-up evil people. All points at which freedom of speech had to be defended at any cost.

But on the other hand. Holocaust denial. Wtf? How do you balance those two opposing principles?

My answer? Actually, I don’t agree with free speech.

Aphorism 3: You do not automatically have a right to express an opinion.

Earning the right to express an opinion takes work. You have to check your facts. You have to work out your argument. It has to make sense. Spreading misinformation is a bad thing. I disagree with Chomsky on this one (but, aphorism 2 – that doesn’t make him antisemitic). You shouldn’t publish or sell books on holocaust denial, because it’s not true. The holocaust did happen. If you want to prove it didn’t, that’s going to take a lot of work – an impossible amount of work. Similarly, you don’t have a right to say that vaccines cause autism, the Earth is flat, evolution didn’t happen, or God exists. None of those things are true. I figure the mythical stuff is OK as long as it’s presented as myth, under the “let’s pretend” category, as the reality or not of God stands outside proof or disproof (see the previous post about ontology). But either you ban all lies or you ban none. Ethics have to be consistently applied or they don’t really work as ethics.

But … what about the grey areas? Ones where people are wading in with facts and figures on both sides? Aren’t there some areas where we need to have a debate? Rowling’s fears of trans women invading women’s safe spaces seem to be genuinely felt, and shared with other women, even though there’s no evidence of trans women being a threat. Should she be banned from saying those things? Well, her fears are real, so probably not. But claiming that transsexuality isn’t real so obviously lacks even a glinner of a connection with reality that I would say you don’t have a right to express those claims. It’s not about as subjective a thing as feelings. It’s about facts.

That’s not to say you have to allow them to be said on social media or printed in newspapers. The letter about cancel culture complains about censorship. But refusing to print your books, or removing you from a newspaper column, because people don’t like what you said, isn’t suppressing free speech. You can still write a blog, or self-publish – you know, like regular people do. If someone rounded up all your self-published books and burnt them, or put you in prison for writing a blog, or for speaking the truth, then that’s censorship. And that’s going on in many, many parts of the world. All that’s happening to the Rowlings and their ilk is that they’re losing their privileged position of having a more magnified voice.

Aphorism 4: Burning books is censorship. Refusing to print them is just removing your privilege. Get a grip.

So, is it OK to cancel people? Yes. If someone is going to say stuff that’s untrue, they need to be stopped from saying it. If they’re going to say stuff that people don’t like, or that may hurt people’s feelings, those people have a right not to buy their stuff, or to encourage others not to buy their stuff, or to refuse to work with them any more. Although no-one has a right to threaten anyone for what they’ve said. That’s psychopathic.

But it’s a response that’s best used judiciously.

Going back to the ComicsGate scenario. I’ve read comics for 50 years. I’ve never read a huge amount at a time though, and my interest has waxed and waned over the years. At the moment, I read about 8 titles. 6 of those are Dynamite Comics because they are the ones that seem to best embody the pulp sf of REH and ERB. The other two are DC. And those are both by Tom King. So you can see the degree to which I admire the key players.

So when Dynamite publicised their support for the ComicsGate title it was a bit of a dilemma. In the conversations around it I found out some other gross things about other writers I admire. People were refusing to buy any more titles. I never cancelled my orders. The head of Dynamite then changed his mind; his response was that he hadn’t realised there would be such a kickback against the move.

People didn’t believe him. He must have realised that people would be outraged.

Directly after that Tom King complained that DC had hired an artist – Jae Lee – to do one of the covers to his new title because he’d been working with the CGers. Jae Lee got lots of online harassment. King then apologised because he’d talked with Lee and discovered Lee didn’t even know what CG was. He’d been hired to do some work. He’d done it. That was it. No political allegiance implied. Or even known.

I get it. I get the mistake that Tom King (like I said, a writer whose work is keeping me into comics) made, and the anti-Dynamiters. I recognise the frisson of pleasure at outwoking someone else – I felt it when I told my elder stepson that Warren Ellis was cancelled. You feel like you’re one step ahead of others, you can claw a little bit of moral highground for a while, which might stand you at a bit of an advantage the next time you fuck up. But it’s an illusion of moral superiority.

Because here’s the reality:

Aphorism 5: Keeping up with who’s a dick and who isn’t is a niche hobby. Always bear that in mind when dealing with people who don’t know or don’t care.

It’s a lot of work keeping track. Some people don’t want to put the time or effort in. Some people avoid it because it’s too much of a distraction, or too damaging to their mental health or their enjoyment of their culture. Don’t make the mistake of thinking that just because someone’s reading Rowling they’re transphobic, or that because they’re working with ComicsGate people they don’t care about online harassment of women, or that because they’re waxing lyrical about Buffy they don’t care about domestic abuse. Maybe they don’t know because they haven’t kept up. Maybe they do know and they continue to read the work because it has such a deep value to them they want to continue to connect to it. Maybe they’re working with them because they need the work, or the break, or because actually they have a personal connection to the person because there’s another side to them we don’t see. Although with most of these people it’s difficult to see there could be.

I personally would probably not start to read something by someone if I knew they were a fascist, or a racist, or an abuser, or transphobic. But if I’ve already engaged with their work, and learnt to love it before finding that out, then for me it’s too late to give it up. So I’ll probably not start on the Harry Potter stuff, or Buffy, because there are other TV shows and books I can read instead. But I’m not going to stop the Cthulhu Mythos bingeing, or listening to Magma, or re-reading Astonishing X-Men, because I only became aware of the dodginess of their creators after I got into them. I’ll certainly not be part of the guilt-by-association lobby. If Jodie Comer wants to have a relationship with a right-wing asshole (and she might not be anyway) that’s her choice. If Jae Lee does some work for a sexist abusive person, but that work itself isn’t sexist or abusive, then that’s OK too.

Aphorism 6: Judge people by what they do, not what the people they hang out with, or work with, or sleep with, do.

Finally, the element that seems most egregious in the various things I’ve read is the treatment of Navratilova, pilloried and unfairly accused of transphobia simply for questioning the position of trans women in women’s sports. Someone who’s stood up for trans rights being wrongly labelled for one statement. Social media isn’t a great platform for nuanced arguments. Even the most intelligent of people can sound like a right Dawk when they’ve cut their arguments down to 280 characters. I’d be uncomfortable discussing anything like this in fewer than 2617 words. Yet one poorly phrased sentence, or one question, and there’s a contingent of people who will let loose. And like I said, I get it, because finding someone to despise can feel good, and dumping on them is enjoyable. This is predominantly why people bully others, which seems to get missed out when discussing how to combat bullying in schools. Bullies bully because they enjoy it. The point at which something makes you feel good is the point at which you need to question your motives.

Aphorism 7: Political positions are represented by a lifetime of work. Not 280 characters.

Oh, and that last sentence should be an aphorism too.

Aphorism 8: The point at which something makes you feel good is the point at which you need to question your motives.

Sorted.

For now.


Ontology, Epistemology, Positivism, Interpretivism and Belief


Ontology – degrees of reality

Ontology is the discussion around what is real or not real – and also, if something is real, how do we classify it? So we could do the Father Ted thing of having a list on the wall of real and not real and adding to them, but there’s a seven-point scale Richard Dawkins came up with for placing ideas according to how confident we can be that they’re real. He meant it specifically for talking about God, because he seems to be particularly obsessed with that, but I think it helps to apply it to anything.

So at one end of this scale – well, I’m not sure which end is 7 and which is 1, but let’s call it 7 – we have stuff that 100% absolutely exists.

The problem is that we can’t know with 100% confidence that anything exists. I don’t know that you exist, or this table exists, or even that I exist. It could just be data that’s being pumped into my senses, and my thoughts might actually just be thoughts that make me think I’m alive, like Cat says to Rimmer in Red Dwarf 13. And at the other end we can’t know with 100% confidence that something doesn’t exist. So we don’t have any evidence for unicorns, God, the tooth fairy, or Star Wars existing. But absence of proof isn’t proof of absence. There might actually be a god, He might even be exactly as one of the various religions describes Him. Or Her. Or Star Wars could really have happened a long time ago in a galaxy far, far away.

So although we have a seven point scale, really we’re just looking at a scale that runs from 6 to 2. Like a grading system, it’s out of 100 but in reality we only give marks between 20 and 90.

So when we say something is real, we’re really looking at stuff around the 6 mark. “True” is just a shorthand for “this is the explanation that best fits our observations for the time being”. Everything that we say is “true” is really just an operating assumption. So you, me, the Big Bang, dark matter, the standard model, they’re all around the 6 mark, some maybe slightly higher, some maybe slightly lower. But we can’t get through the day constantly bearing in mind things might not exist. I’m going to assume you exist and get on with things, although occasionally it’s worth remembering what we’re experiencing is only a 6 not a 7. Same at the other end. We don’t have to worry all the time about what god might think, or try and use the force to open doors. Chances are those things aren’t real, so it’d be wrong to rely on them.

Right in the middle we have the things that ontologically we’re totally unsure about. It’s completely 50/50. Then just above that, we have the stuff that’s around 5. So maybe we’re leaning towards it being true, but there’s still some doubt. So, superstring theory for example. Multiple universes. Then on the other side there are all the things at 3, so unlikely but the jury’s still out. Like, I don’t know, the Illuminati or something.
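If it helps, you can think of the working scale as a simple lookup table. The placements below are just my own illustrative guesses, sketched in Python:

```python
# The 7-point scale as a lookup. The placements are my own guesses,
# purely for illustration; the point is that everything lands
# strictly between 1 ("definitely not real") and 7 ("definitely real").
ontological_score = {
    "you and me": 6,
    "the standard model": 6,
    "superstring theory": 5,
    "multiple universes": 5,
    "the Illuminati": 3,
    "the tooth fairy": 2,
    "Star Wars really happened": 2,
}

# Nothing ever earns a 7 or drops all the way to a 1
assert all(2 <= score <= 6 for score in ontological_score.values())
```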

Ontology – categorising reality

If we’re looking for an example of an ontological argument about how to categorise reality, a familiar one would be taxonomies of living things. When people first started categorising living things they went by what they looked like, so feathers make you one type of thing, scales make you another. It’s a system based on morphology. As scientists have mapped more and more genomes, though, they can see how closely related things are to other things, and work out at what point in evolution they diverged. An ancestral organism together with everything descended from it is called a clade. If you look at cladistics rather than morphology, birds and crocodiles are more closely related to each other than crocodiles are to lizards, so grouping the crocodiles and lizards together but excluding birds makes no sense. It’s paraphyletic. So now birds are classified as a type of reptile. It’s also why there’s no such thing as a fish. You can’t group them all together sensibly in a way that includes all “fish” but excludes all “non-fish”. Cladistically. Obviously if you’re adopting the old system of looking at what they look like, then you can.
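The paraphyly point can be sketched with a toy parent map. The taxa and groupings here are heavily simplified, just enough to show the idea:

```python
# A toy cladogram: each taxon points to its parent clade.
# (Heavily simplified; real phylogenies are far messier.)
parent = {
    "lizard": "lepidosaur",
    "crocodile": "archosaur",
    "bird": "archosaur",
    "lepidosaur": "reptile",
    "archosaur": "reptile",
    "reptile": None,
}

def ancestors(taxon):
    """Walk up the tree from a taxon to the root."""
    chain = []
    while taxon is not None:
        chain.append(taxon)
        taxon = parent[taxon]
    return chain

def last_common_ancestor(a, b):
    """The smallest clade containing both taxa."""
    seen = set(ancestors(a))
    for taxon in ancestors(b):
        if taxon in seen:
            return taxon

# Birds and crocodiles share a more recent ancestor (the archosaurs)
# than crocodiles share with lizards (only "reptile" as a whole), so
# "reptiles minus birds" cuts across the tree: it's paraphyletic.
print(last_common_ancestor("bird", "crocodile"))   # archosaur
print(last_common_ancestor("crocodile", "lizard")) # reptile
```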

Ontological questions about how to organise things run throughout our perception of reality; they can actually alter how we view reality. “This is part of this, but not part of that” can sometimes be absolutely crucial. Linnaeus may have been really keen on labelling plants and opisthokonts (i.e. fungi and animals), and that might have helped us understand the natural world, but he was well shite when it came to categorising humans, for example. He also obliterated indigenous people’s names for things when he did so, which may have changed how we perceive Western academia’s relationship to them.

But perception is more the domain of the next bit.

Epistemology – positivism

What gets you closer to the truth (or not) is a question of epistemology. So ontology is what’s real or not; epistemology is the approach by which we determine what’s real or not. There are basically three types of epistemology: finding things out by measuring things, finding things out by interpreting things, and making things up. That’s positivism, interpretivism and belief.

So first off, positivism. The positivist approach is to look only at things you can measure with instruments. The idea is that this is objectively getting at the truth by looking at numbers on dials, or scans, or whatever – what’s sometimes called instrumental reality. Positivism is the cornerstone of the scientific method, which works like this:

  1. You have theories about how the world works.
  2. You test them with your experiments.
  3. The results match your theory, so you think you’ve got to the truth.
  4. Then you carry on doing experiments until one of them doesn’t match the theory, so you need a better theory.
  5. When you’ve come up with a few theories you then do more experiments to confirm which one is the best. That becomes the new truth.
  6. And then you start the whole cycle again.
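As a toy sketch of that cycle – treating a “theory” as nothing more than a predicate over observations, which is a gross simplification, but it shows the shape of the loop:

```python
# A toy version of the cycle: a "theory" is just a predicate,
# and the "truth" is whichever candidate fits the most observations.
def score(theory, observations):
    return sum(1 for obs in observations if theory(obs))

def best_theory(theories, observations):
    # Step 5: run the contest between candidate theories
    return max(theories, key=lambda name: score(theories[name], observations))

theories = {
    "all results are even": lambda x: x % 2 == 0,
    "all results are big": lambda x: x > 5,
}
observations = [2, 4, 6, 8]        # steps 2-3: experimental results so far
current_truth = best_theory(theories, observations)  # "all results are even"

observations.append(7)             # step 4: a result the best theory can't explain
# "all results are even" now fails on 7, so the hunt for a better
# theory starts, and the whole cycle runs again (step 6).
```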

People are pretty bullish about positivism because it’s been really effective at working out what’s actually going on.

There are problems with the approach though. One is that people sometimes forget nothing scores above a 6. They mistake their current best guess for what’s actually happening. It’s the best way to get closest to the truth, true. But you never quite get there. Like Zeno’s arrow.

The other problem is that sometimes the experiments give the wrong results. So for instance you fire neutrinos through the Earth and find out they’re travelling faster than light, but then later figure out that there’s a loose cable that has thrown off your timing. Or maybe it’s your analysis that’s wrong, like the dead fish experiment in neuroscience. If you do a brain scan you can see effects that look like there’s a causal relationship between showing someone pictures and the reaction in the brain, but you also get a reaction if you plug in a dead salmon at the other end. You need to account for random fluctuations.

Then there’s a lot of cultural bias. So for example, if you’re testing a theory, the one that gets the most funding is the one propounded by the most eminent of scientists, and they’re often old white guys. If there are other theories, they can get held back for a while – usually until all of that generation of old white guys are dead. You can see the social effects on the progress of science.

The thing is though, that the process is self-correcting for social bias. If a theory doesn’t work, you’ll have lots of people doing experiments in all parts of the world, and coming up with theories and eventually one will look better than the rest to most people, and that’s the one that generally gets adopted. You get a consensus irrespective of culture. At the boundaries there’s contention, but in the main body of science there isn’t – the main body is more or less everything that happens after the first 10⁻³⁵ seconds after the big bang up to now, everything bigger than a quark, anything smaller than the observable universe. This main core of science is the same for everyone, no matter where they are and has been contributed to and tested by cultures on every continent on the planet. The cultural bias doesn’t change the overall direction, it just slows it down.

Epistemology – interpretivism

The other approach is interpretivism. Interpretivism is more subjective, in that it’s interpreting what’s going on. You might not have anything you can actually measure with an instrument, so you need to ask a lot of people a lot of questions. This is a bit more systematic than a bunch of anecdotes, in that the idea is that you ask a large representative sample of people, and aren’t selective about which responses you look at. The criticism is that it’s still just a collection of opinions and it’s not reliable enough. As Roosta would say, you can’t scratch a window with it. Interpretivists would argue that positivism is so culturally biased that everything is interpretivist, which is just fashionable nonsense. Obviously if thousands of people from all over the world do an experiment and get the same result, which confirms the generally accepted theory, that’s not open to interpretation. To claim it is just seems like an inferiority complex on behalf of the interpretivists. The real strength of interpretivism is that it produces something like a version of the truth where positivism couldn’t get you anything. Anything to do with how people behave socially has to be interpretivist, because people are way, way more complicated than cosmology. You can’t put them in a laboratory and see how they perform in the real world, because once they’re in a lab they’re not in the real world any more. So all you can get is a mass of opinions to interpret. But that’s OK, because it’s better than the alternative. Which is nothing.

And there’s a huge number of interpretivist approaches: feminist, postcolonialist, Marxist, basically anything with an -ist on the end. They’re all a valid way of approaching the world to some extent, as long as they can accommodate all the data observed and are precise about what their limits are. The mistake is calling them theories. That’s a positivist word. There’s nothing predictive about interpretivist approaches. You can’t say “in this and this situation with people, this will happen”. It’s too complex. And vague. What you’ve actually got with interpretivist approaches are different narratives, or lenses, through which to describe what’s going on. As Jitse said in a previous episode of Pedagodzilla, all models are wrong, some models are useful. The important thing is not “can we prove it?”, but “is it reproducible enough, and generalisable enough, and does it explain enough of the observations to be useful?”

Epistemology – belief

Finally, we have making things up as an approach. There are a lot of in-built elements to the way minds work that mean we tend to look for patterns that aren’t there – which is called apophenia. We recognise simple messages rather than complex ones. When we make connections in our heads that make particular sense to us we get a dopamine hit. That leads to aberrant salience: things get connected that shouldn’t get connected. So for example, there’s a lot of intricate stuff about crystal healing and resonance, which makes no sense physically, but sounds good as a story. There’s no scientific rationale behind it at all, but it works as a placebo because it sounds plausible to people who skipped physics in school.

One thing positivism and interpretivism are bad at is creating the sort of stories that have emotional truth for people. You can’t all get together and have a good time based on the standard model, or the general theory of relativity. The myths that we create hold communities together. They bring people comfort. So if you’ve moved to a new place and you’re wondering what church to join, for example, someone coming along and saying well you have no evidence for your faith so why bother? is completely the wrong epistemology. We talked about Buffy as if the show was real in a previous episode. It would be completely out of place to continually remind everyone it’s not real while we’re doing that. I’ve used the phrase “science needs to stay up its own end” before, which I don’t think people would get unless they grew up on a working-class housing estate in the 60s. Basically, those spaces could be very territorial. You learnt where your patch was, and if you strayed into someone else’s you got told to stay up your own end. Too many epistemologies try and muscle in on someone else’s patch. Lots of epistemologies are dying out because of competition from other worldviews because of just this sort of intrusion – it’s called epistemicide. That seems like a bad idea because we’re losing other ways of perceiving the world. Colonialists need to stay up their own end.

But … the problem also works the other way when you start using your beliefs to make decisions about real things. So if you’re looking for a response to covid-19 you need to use a positivist approach and do clinical trials to find out what will work, and what won’t; you don’t just tell people you’re protected by the blood of Jesus. That’s a category error. Or say you’re deciding whether gay people should be able to adopt. You can’t use a positivist epistemology (because there’s no instrument that can measure that) or a belief-based one (because it’s way too important to base on something someone made up). You need to look in between, at interpretivist approaches, and gather data about people’s experiences of the children of gay parents. And as it turns out, there’s no major difference. To insist on something being your way because you read it in a book somewhere is simply bizarre. I don’t need to do a routine on that because Patton Oswalt has already done one.

Critical realism and ontological hygiene

So what’s the proper epistemological approach? Well, one of the things I learnt from physics is that where you’ve got a binary choice, the answer is nearly always that both are right. So is light a wave or a particle? It’s both. Same’s true here. I’m really suspicious of people who say “I’m a positivist” or “I’m an interpretivist”. Neither is appropriate all the time. There’s an epistemological approach called pragmatism, or realism, sometimes critical realism. It’s about adopting the correct epistemology for the domain that you’re looking at. So if you have a physical science, or chemistry, or medicine, you have to take a positivist approach: you measure things and look at the numbers, and that gives you something ontologically that scores a 6 or maybe a 5 (or is disproved down to a 2). Or you’re looking at how people think or behave. You need interpretivism, because there are no laws that predict how people behave, and that’s only going to be a 5 at best. That’s not as good as a 6, but it doesn’t have to be to be useful. Just let it go. At the other end you have all the stuff that has no evidence for it at all. But that’s OK too; science can stay up its own end. And as anything you can think of is ontologically a 2 and never a 1, that gives you a lot of wriggle room. “You know, it’s possible God, or Severus Snape, or the Dalai Lama does exist, and believing that makes me feel happy, so I’m going to believe it.” The problem is when you start misapplying the made-up stuff to make decisions about real things. Even then, I guess as long as your actions don’t harm someone else, feel free. But if someone else is going to be affected, you need enough evidence to score a 5 or a 6 on the ontological scale, or you’re being a complete dick.

It’s all about being aware of where things are on the ontological spectrum and using them appropriately – what’s called ontological hygiene. Maintaining that ontological hygiene, and being able to switch between the different epistemologies, is where liminality comes in, but that’s another episode.


Predicting virtual worlds #5

Augmented reality

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was I got one of them half-right. The rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Augmented reality. One function of many mobile devices is that they can combine the camera images with an overlay of additional information. In the same way that a global position and orientation can be used to calculate the position of stars as seen from a particular viewpoint, these can also be used to determine at which geographical location the tablet is being pointed. These data can then be combined with a database of information to create an overlay of text to explain, for example, the historical background of a building, or the direction and distance of the nearest Underground station or Irish pub. Locations can be digitally tagged, either with additional information (such as in a learning exercise with students adding their own content to locations), artwork, or even graffiti[i]. As with the astronomy apps described above, this provides learning in situ, and provides a kinaesthetic element to the activity.

The potential of combining geotagged images onto the physical world is indicated by augmented reality games such as Paranormal Activity: Sanctuary[ii]. In this, images of ghosts are located at particular physical world co-ordinates, and can be seen with a dedicated iPhone app that overlays these images onto a camera image. Players can create sanctuaries, or cast spells, at locations which then influence the experience of other players. The game therefore becomes a massively multiplayer roleplaying game played in a blending of the physical and a virtual world.

Greater precision than that enabled by global positioning can be provided through Radio Frequency Identification (RFID) tags, the technology for recognising which will soon be available on mobile technology[iii]. By placing an RFID tag in clothing, or furniture, or on a person, information about that object or person (i.e. metadata) is then always available, whenever a device is pointed at them. For example, products could be linked directly to their user manual; simply hold your tablet PC over your oven and pop-up boxes appear over the knobs decoding the icons, or attend a conference and each person there could have information linked to them, such as name, institution and research interests, which is revealed by holding up your phone and tapping their image on the screen. Several museums and exhibitions already have augmented reality exhibits; when a room is looked at through an AR viewer, the physical objects in the room are overlain with animations or animated characters, bringing the static displays to life[iv]. A further enhancement of augmented reality is achieved by enabling the animated characters to address the attendee directly, with their gaze following the attendee around the room, as they are tracked through the use of an RFID bracelet[v]. The characters can address many attendees simultaneously since, from the perspective of each, the character is looking at them individually, a transformed social interaction known as non-zero sum mutual gaze[vi]. These interactions can be made more seamless by plans to create AR projections within glasses[vii]. Rather than clicking on a screen, input can be through the detection of hand movements[viii] or, for the mobility-impaired, deliberate blinking[ix].

If this is possible with pre-recorded characters, then it is only a short leap to enabling this to take place with avatars or bots in realtime, by layering the virtual world image onto the physical as it is created. This activity resembles the mixed reality performances created by Joff Chafer and Ian Upton; originally these performances used images from a virtual world projected onto a gauze, so that they could share the stage with physical world actors, and more recently Chafer and Upton have used 3D imaging to bring the virtual world images out from the screen and into a physical space[x]. Capturing the images of avatars in the virtual world, and geotagging them, would enable people with the appropriate AR viewer to see avatars moving and communicating all around them. As the sophistication of bots develop, then the use of them as companion agents, guiding learners through virtual learning scenarios, could be brought into the physical world as guides and mentors seen only by the learner through their AR viewer. With ways of imaging the avatars through something as immersive as AR glasses, physical world participants and avatars could interact on an equal footing.

For learning and teaching, the advantages of blending the functionality and flexibility of the virtual and the real are enormous. For the learners who see virtual learning as inauthentic, relating the virtual world learning directly to the physical may overcome many of their objections. The integration of an object and its metadata as well as data providing context for that object (called paradata) is easily done in a virtual world; AR in combination with RFID tagging enables this feature to be deployed in the physical world too, since information, ideas and artefacts can be intrinsically and easily linked. User generated content, which again is simply created and shared in the virtual, can also be introduced to the physical. Participation at a distance, on an equivalent footing with participation face-to-face, could be achieved by the appearance of avatars in the physical environment and RFID tagging the physically-present participants and objects.

[i] New Scientist, ‘Augmented reality offers a new layer of intrigue’, 25th May, 2012. http://www.newscientist.com/article/mg21428652.600-augmented-reality-offers-a-new-layer-of-intrigue.html

[ii] ‘Ogmento Reality Reinvented, Paranormal Activity: Sanctuary’, 22nd May 2012. http://www.ogmento.com/games/paranormal-activity-sanctuary

[iii] Marketing Vox, ‘Married to RFID, What Can AR Do for Marketers?’, 4th March, 2010. http://www.marketingvox.com/married-to-rfid-what-can-ar-do-for-marketers-046365/

[iv] Canterbury Museum, ‘Augmented reality technology brings artefacts to life’, 28th September, 2009. http://www.canterburymuseum.com/news/13/augmented-reality-technology-brings-artefacts-to-life

[v] A. Smith, ‘In South Korea, Kinect and RFID power an augmented reality theme park’, Springwise, 20th February, 2012. http://www.springwise.com/entertainment/south-korea-kinect-rfid-power-augmented-reality-theme-park/

[vi] J. Bailenson, A. Beall and M. Turk, ‘Transformed Social Interaction’, p. 432

[vii] S. Reardon, ‘Google hints at new AR glasses in video’, New Scientist, 4th April, 2012. http://www.newscientist.com/blogs/onepercent/2012/04/google-hints-at-new-ar-glasses.html

[viii] C. de Lange, ‘What life in augmented reality could look like’, New Scientist, 24th May, 2012. http://www.newscientist.com/blogs/nstv/2012/05/what-life-in-augmented-reality-will-be-like.html

[ix] E. Iáñez, A. Úbeda, J. Azorín and C. Pérez, ‘Assistive robot application based on a RFID control architecture and a wireless EOG interface’, ScienceDirect, available online 21st May, 2012. http://www.sciencedirect.com/science/article/pii/S0921889012000620

[x] Joff Chafer and Ian Upton, Insert / Extract: Mixed Reality Research Workshop, November 2011. http://vimeo.com/32502129

Prescience Factor: 0/10. Despite AR apps becoming more popular since 2013, AR is still not really a thing, in that it’s not an embedded part of what we do. Linking AR and virtual worlds in the way I’ve described here isn’t any further along (as far as normal practice goes) than it was when I wrote the above.

Predicting virtual worlds #4

Gone to mobiles every one

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was I got one of them half-right. The rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Gone to mobiles every one. As noted above, the rate of take-up of virtual worlds anticipated by Gartner in 2007 has not been realised. Some predictions also state that the rate of development of the high end graphics technology required for virtual worlds will be slowed by the adoption of mobile technology. Essid[i] notes that the tablet PCs owned by students cannot run the viewers required for Second Life, and these are now the predominant technology with which students access online learning. In addition, many apps provide innovative and offline education, such as the use of Google Sky, Zenith or Sky Safari for learning astronomy. In these apps, the learner holds up their tablet PC and through global positioning and inbuilt sensors that detect orientation, the tablet displays the position of stars, planets and Messier objects as they appear in the sky in the direction in which the tablet is pointed. This provides learning that is interactive, kinaesthetic, and in situ. Essid’s prediction is that the predominant use of mobile technology as the new wave of learning will stall the uptake of virtual worlds. As Essid states in his blog post on the subject:

One does not wish to be on the wrong side of history, and I think SL evangelists are clearly on the wrong side, unless they are early in their careers and have a Plan B for research and teaching.

[i] J. Essid, ‘Mobile: Shiny? Yes. Hyped? Yes. Fad? No’, 3rd May, 2012, http://iggyo.blogspot.co.uk/2012/05/mobile-shiny-yes-hyped-yes-fad-no.html
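The sky-app behaviour described above comes down to a standard coordinate conversion: the device’s GPS gives the observer’s latitude, its orientation sensors give the direction it is pointing, and the app maps each star’s equatorial coordinates (declination and hour angle) into the local sky (altitude and azimuth). A minimal sketch of that conversion, using the textbook spherical-astronomy formulas; this is illustrative only, not the apps’ actual code:

```python
import math

def alt_az(dec_deg, hour_angle_deg, lat_deg):
    """Convert equatorial coordinates (declination, hour angle) to
    horizontal coordinates (altitude, azimuth) for an observer at
    the given latitude, using the standard spherical-astronomy formulas."""
    dec = math.radians(dec_deg)
    ha = math.radians(hour_angle_deg)
    lat = math.radians(lat_deg)
    # Altitude: how high above the horizon the object appears.
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(sin_alt)
    # Azimuth: compass bearing, measured from due north.
    cos_az = (math.sin(dec) - sin_alt * math.sin(lat)) / (math.cos(alt) * math.cos(lat))
    az = math.acos(max(-1.0, min(1.0, cos_az)))  # clamp against rounding error
    if math.sin(ha) > 0:  # object west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)

# Polaris (dec ~ 89.26 degrees) seen from latitude 52 N:
# altitude comes out close to the latitude, azimuth close to due north.
alt, az = alt_az(89.26, 0.0, 52.0)
```

Pointed at Polaris from 52° N, for instance, the computed altitude comes out close to the observer’s latitude and the azimuth close to due north, which is why the pole star works as a latitude gauge.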

Prescience factor: 8/10. To be fair, not my prediction really, but Joe Essid’s. The increasing usage of mobile devices has meant that learning can take place anywhere, but it has caused the development of some technologies to slow down because, as a platform, mobiles are more limited: in processing power compared to PCs, but also in speed of input (two thumbs are never as fast as ten fingers) and the readability of the screen. It’s not 10 out of 10, because I think both Joe and I underestimated the capacity and functionality that smartphones would attain by 2018. Moore’s Law is severely difficult to anticipate because it describes a geometric increase, and this example shows why it’s so hard to get your head around geometric increases.
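To see why a geometric increase wrong-foots five-year predictions, it helps to put numbers on it. A minimal sketch, assuming the classic doubling period of roughly two years (the exact period is itself contested):

```python
def projected_capacity(base_units, years, doubling_period_years=2.0):
    """Project capacity under Moore's-Law-style exponential growth:
    capacity doubles once every doubling_period_years."""
    return base_units * 2 ** (years / doubling_period_years)

growth_5yr = projected_capacity(1.0, 5)    # roughly 5.7x over a five-year window
growth_10yr = projected_capacity(1.0, 10)  # 32x over ten years
growth_20yr = projected_capacity(1.0, 20)  # 1024x over twenty years
```

The same rule that gives a manageable-sounding ~5.7× over five years gives 32× over ten and 1,024× over twenty, which is the sense in which geometric increases defeat intuition.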

Predicting virtual worlds #3

Moves to games consoles


Move to games consoles. A move in the other direction, to more sophisticated technologies, is the repositioning of virtual worlds to run on games consoles such as the PlayStation 3 or the Xbox 360. Games consoles have very sophisticated graphics processors, and the quality of the rendering of games is much higher than is available on most PCs. Many massively multiplayer online games are already available on games consoles, and shared virtual worlds such as Minecraft, previously running on PCs, have made the transition to this technology. In the Minecraft case this has proved immensely popular[i]. The advantages of running virtual worlds on games consoles are due not just to the more sophisticated graphics available, but also to the control devices. Many people find games controllers a more intuitive mechanism for controlling the movement of an avatar than keys on a keyboard. However, text chat and drag-and-drop functionality are less well integrated.

The next generation of games controllers offers even more interactivity, as these devices can detect physical actions by the users through the use of cameras and motion detectors. Devices such as the Xbox 360 Kinect controller have already been used to animate avatars. There are two ways in which this can be done: either avatars can be animated inworld through physical actions triggering pre-set animations (for example, the act of raising your hand triggers a hand-raising animation) or, as in the work of Fumi Iseki and a team at Tokyo University[ii], motion capture is used to animate avatars in realtime, but in a local viewer only. Because avatars are animated inworld using preloaded animation files, there is no way with current technology to map motion capture to inworld movements of avatars in realtime.

This opens up the potential for a new, closer relationship between user and avatar. As Jelena Guga notes[iii], this will be the next step change in the developing degrees of immersion that have been enabled by changes in technology. Although the sense of immersion may be increased, requiring the user to be physically active may also, simultaneously, make the user more aware of their physical body while interacting inworld, so their sense of embodiment may actually be reduced. The individual experience of virtual worlds varies enormously, and a likely discovery is that whether physically operating an avatar increases or reduces the sense of engagement inworld differs from person to person. Another consideration is that a one-to-one correspondence between physical action and the resulting motion of the avatar is, as Stelarc points out[iv], possibly the least interesting way in which to use motion recognition to animate avatars. In his performances, Stelarc uses his body to create inworld performances, but his gestures cause his avatar to fly, float, operate cyborg attachments and so on.

From a learning point of view, a move to games consoles could have advantages and disadvantages. The move would overcome some of the objections to virtual worlds with regard to low-resolution graphics, and technical issues such as slow rendering times and lag. However, it could marginalise activity even further, since few computer suites in universities have games consoles, and it cannot be guaranteed that all users will have access to them. Developing motion-controlled interfaces would address an objection some users raise: that operating within a virtual world is too sedentary an experience. Offering the opportunity to operate avatars through physical motion may appeal to these users, though indications are that these users actually find the virtual nature of these experiences intrinsically problematic, equating the virtual with the inauthentic. However, the use of a motion recognition system will offer interesting opportunities for performance.

[i] M. Hawkins, ‘Minecraft on Xbox Live a smash success’, MSNBC, May 12th, 2012, http://www.ingame.msnbc.msn.com/technology/ingame/minecraftxboxlivesmashsuccess-766955

[ii] Second Lie, ‘Kinect Hack Brings Real Time Animation To Second Life’, November 2011, http://second-lie.blogspot.co.uk/2011/11/kinect-hack-brings-real-time-animation.html

[iii] J. Guga, ‘Redefining Embodiment through Hyperterminality’, Virtual Futures 2.0, University of Warwick, 18th – 19th June, 2011.

[iv] Stelarc, Keynote, From Black Box to Second Life: Theatre and Performance in Virtual Worlds, University of Hull, Scarborough, May 20th, 2011

Prescience Factor 4/10. The only thing I nailed here was that consoles would become more of a platform for interacting in a social-world way. Lots of RPGs now allow users to build spaces in a shared virtual environment, and not necessarily in service of the game directly, but just to settle in a permanent online 3D space. The flexibility of the spaces and avatar interactions in games like, for example, Conan Exiles or Fortnite Creative is more limited than in a full social virtual world, but you could potentially create a home and then invite someone round for a chat.

Predicting Virtual Worlds #2

A virtual world in your browser


A virtual world in your browser. There are numerous legitimate reasons for using standard web browsers to access virtual worlds. The first of these is that the processing power, particularly of a graphics card, required to run a virtual world viewer is beyond the capacity of the technology available to many people, and particularly to institutions. Secondly, the bureaucratic hurdles many practitioners face when additional software needs to be downloaded and installed preclude the use of virtual worlds in many institutions, suffering as they do from the obstructive policies of their IT departments. Finally, enabling virtual worlds to be viewed from within a web browser means that accessing them can be easily integrated into the majority of people’s normal internet usage, potentially widening the demographic of users. The initial effort required to begin using them in an educational situation would consequently be reduced.

It would be reasonable to anticipate that these factors would lead to the usage of virtual worlds becoming much more widespread, and that making virtual worlds viewable through the web should have been very successful. In practice, though, Google’s Lively lasted only for the second half of 2008. Newer virtual worlds, such as Kitely, although trying to widen the demographic of potential users by offering access through other platforms such as Facebook and Twitter, have returned to viewer-based technology rather than being browser-based.

The reasons for the failure of Lively are still being discussed. The direct experience of those contributing to this chapter, however, is that reducing the functionality of the virtual world in order to enable it to work within a browser removed the elements that made a virtual world worth pursuing. The sense of immersion was reduced, the opportunities to create and interact with virtual artefacts within the world were lessened, and consequently the rapid adoption by the marketplace, needed for the survival of any social medium, did not materialise. Lively disappeared before many people realised it had been launched, and new web-based viewers have not emerged to take its place.

Prescience Factor: 0/10. A total underestimation of the versatility and processing power that browsers would go on to attain.

Predicting virtual worlds #1

The Metaverse Lives


The metaverse lives. Of the chapters in the book, four use Second Life, one uses OpenSim, one World of Warcraft, one a 2D multimedia website, and one began with Second Life and then, due to the price increases imposed by Linden Lab, moved to OpenSim. From this (admittedly small) sample, it appears that Second Life is still the strongest contender as a platform for hosting virtual world activity, but that educators are becoming more likely to consider alternative, though similar, platforms, with OpenSim leading the way.

Educators’ dissatisfaction with, and the expense of, Second Life is beginning to cause fragmentation of the virtual world community. Whereas before it was almost guaranteed that educators would share a single grid, increasingly they are becoming spread across a range of different platforms. One saving grace of this diaspora is that many of the most popular of these virtual worlds use the same viewer. Whether one uses the Second Life viewer, Imprudence, Phoenix, Firestorm or any of a number of others, once a user has learned to interact with the world using that particular interface, it is of little difficulty to switch to another one. This is particularly important with virtual worlds as a technology (more so than with, for example, a word-processing package or an online forum), since what is required for an effective learning opportunity is immediacy of experience rather than hypermediacy: any change in the interface is extremely disruptive, because it makes the technology more visible and reduces the transparency of the interaction.

However, although they are operated in the same manner, the grids remain separate. The step that will reintegrate this fragmented community, and enable educators once again to easily share and visit each other’s educational resources, will be the successful deployment of hypergridding. Hypergridding is the connecting of these separate virtual worlds to create a collection of linked worlds, an example of Stephenson’s metaverse. Once it becomes possible to move not only avatars, but also their inventories, from world to world, these separate grids will perform as a single platform; so, for example, objects purchased within Second Life (which has a thriving creators’ market) could be employed within OpenSim (which gives institutions greater control over privacy and ownership of the space). This would greatly expand the choices and flexibility of using virtual worlds for educators, and to a large extent enable far more effective collaboration. Simple and effective hypergridding is close to deployment but, as of writing in 2012, has not been realised.
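The mechanism described above can be sketched as a simple data model: separate grids, each with its own residents, plus a transfer operation that moves an avatar together with its inventory. Worth stressing that the classes and function names below are invented for illustration, and are emphatically not OpenSim’s actual hypergrid protocol:

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    """A user's inworld presence, carrying their inventory with them."""
    name: str
    inventory: list = field(default_factory=list)

@dataclass
class Grid:
    """One self-contained virtual world (e.g. a Second Life or OpenSim grid)."""
    name: str
    residents: dict = field(default_factory=dict)

    def admit(self, avatar):
        self.residents[avatar.name] = avatar

def hypergrid_teleport(avatar, origin, destination):
    """Move an avatar, inventory and all, from one grid to another,
    so the linked grids behave as a single platform."""
    origin.residents.pop(avatar.name, None)
    destination.admit(avatar)

# An educator buys content in one world and carries it into another.
second_life = Grid("Second Life")
opensim = Grid("OpenSim campus grid")
ava = Avatar("Teacher1", inventory=["purchased classroom build"])
second_life.admit(ava)
hypergrid_teleport(ava, second_life, opensim)
```

The hard part in practice is not the bookkeeping shown here but trust: whether a destination grid will honour the permissions on purchased content once it leaves the creators’ market where it was bought.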

Prescience factor 0/10. Hypergridding is not a thing.

Sex with robots: the case against the case against Part two

Taking apart the interview, and the logic behind the argument, we get to these statements.

“Sex dolls and sex robots in the form of women and girls do something else. In the mind of someone buying and using them – they ARE women and girls. They are designed deliberately to resemble women and girls because they want the man buying and using the dolls to believe it is a woman or girl. These are markedly different things. Sex dolls and mechanical dolls in the form of women and girls play on the idea that women are orifices to be penetrated.

Imagery that dehumanises others in order to justify rule over them serves a political purpose. These sex dolls of women and girls are serving a political purpose to reinforce the idea that women and girls are sub-humans/orifices.”

“In the mind of someone buying and using them – they ARE women and girls.”

This doesn’t follow at all, and it needs some evidence to back it up. The only thing we can say for sure is that someone having sex with a robot wants sex with a robot. Maybe it plays on the idea that the robot stands in for a real woman, but it’s just as likely that it’s simply play. There are a huge number of presumptions here, none of which are supported by research.

“Imagery that dehumanises others in order to justify rule over them serves a political purpose.” True. This is what makes the argument such a problematic one. Dropping in valid political statements that everyone can agree with, but then attaching a consequence that doesn’t follow, is a standard bait-and-switch ploy. You agree with statement A, and (the claim is) A causes B, therefore you have to agree with B. Everyone can agree there is systemic oppression of women in a patriarchal society, and that it is maintained by men with power in society. That sex dolls contribute to this is not at all evident, though. The power of this as a series of statements is that if you oppose B (because the “therefore” is not proven) then somehow you are against A. It’s a specious and underhand way of carrying your argument.

What makes this “therefore” unlikely is that although men with power rule, men with sex dolls are rarely men with power. One of the areas I looked at with avatars is the role of zeta males in many of the activities in virtual worlds. It is the men who have little or no power who compensate for this lack in their own lives by playing at being powerful in their fantasies. Their actions have no impact on wider society because nothing they do has impact.

OK, a generalisation there, which I admit. See how that works as a way to obfuscate relationships between concepts, though? Zeta males have no power; only zeta males have sex with dolls; therefore having sex with dolls has no impact on society.

There may be a link. There may not. Acting on suspicions though is not really very ethical.

I suppose the bottom line for any ethical debate is: do you deny a group of (some would call creepy) males the expression of their sexuality, out of caution that their actions may exacerbate the oppression of all females, or not? It’s a classic deontological vs consequentialist dilemma. Do you take the chance of conducting a possibly (or even probably) unnecessary act of oppression on a minority group just to be on the safe side? Or do you take the route of preserving all people’s rights, unless they are demonstrated to be dangerous?

While you’re considering that, I’ll remind you of another analogy. When the pigs finally get to run things in Animal Farm, they end up being just as bad as the people they replaced. Power is intoxicating, you get to control things so that you can make them the way you want them to be. When you’re in power you don’t have to worry about the consequences for disenfranchised people if you’re never likely to be one of them. Prof Richardson has a platform, the agalmatophiles do not; it is evident where the power lies in this debate.

“Four legs good. Two legs better.” should haunt anyone acquiring power; before you act check you’re not simply replicating the iniquities of those who’ve had the power before you.

A professor of ethics should know that.

Sex with robots: the case against the case against Part one

One of the sites I often read to get a good line on an ethical issue is Conatus News. It’s sort of generally progressively liberal, and usually well-argued. It offers a range of opinions, and doesn’t contest them, which is open-minded of them. Some of them, though, make my skin crawl. This article https://conatusnews.com/kathleen-richardson-sex-robots/ was one of them.

It’s an interview with Kathleen Richardson, Professor of Ethics and Culture of Robots and AI at the Centre for Computing and Social Responsibility (CCSR) at De Montfort University and spearhead of The Campaign Against Sex Robots. The rationale is that they exacerbate the objectification of women. I get the impression from the argument made that that’s not what’s going on.

The first alarm bells in the argument are some unsupported (and from what I know, plain wrong) statements. Here’s one:

“In the last twenty years, with the age of the ‘cyborg’ informed by anti-humanism and non-human distinctiveness, there has been this prevailing sense that humans and machines are equivalent. This implies that the only difference between a machine and a human is the ‘man who is creating it’ rather than some empirical and radical difference between a human and an artefact.”

In actual fact, if anything, the more people have looked at recreating consciousness, the more they’ve realised how essentially different the two are. While soft AI is being achieved, hard AI looks like an ever more distant, if not impossible, goal. In The Emperor’s New Mind (26 years old now), Roger Penrose made some telling arguments about the differences: that no systematic, machine-like process can replicate the organic creation of thought. The Turing test is being failed more often than it used to be, because even though bots are being programmed better, the people judging are getting better at telling the difference. If anything, from the bits of research I’ve done, the increase is in false positives rather than false negatives. That is, rather than people mistaking bots for humans, people are mistaking humans for bots. Our standards for what makes something human-like are getting higher. Robots are falling behind.

Next one: “It has led to robotic scientists arguing that machines could be ‘social’ ”

This is not what social robotics is. Social robotics is looking at the elements that enable robots to fit into society, not at considering them to actually “have” society. This is a deliberate misrepresentation.

Now we come to the quite disturbing part of the argument.

“If a person felt like they were in a relationship with a machine, then they were. In this way, two seemingly different ways of understanding the world came together to support arguments for human relationships with machines. The first was the breakdown in distinction between humans and machines. The second was the egocentric, individualistic, patriarchal model (‘I think therefore I am’) – what I am thinking, feeling, and experience is the only thing that counts. I am an egocentric individual.”

One of the fascinating things about having worked in virtual worlds is that you come across a whole range of people. A lot of them are finding self-expression in ways that they couldn’t in the physical world. A lot of them are finding ways to connect with parts of their identity that weren’t possible in the physical world. Sometimes it’s because society won’t permit it; sometimes it’s identity tourism. Quite a few were exploring their paraphilias.

Agalmatophilia is sexual attraction towards inanimate objects: dolls, mannequins … robots. It’s a thing, and real for the people who experience it. One of the major social movements of the last fifty years is the development of a more permissive outlook on sexuality. It has complemented feminism, gay rights and, more recently, transgender rights. Even before gay rights legislation made discrimination on grounds of sexuality illegal, you’d hear homophobes say things like “well I don’t like it, but if they do it behind closed doors, then I don’t have a problem with it”. Not the best attitude, but it underlines that an essential element of permissiveness is that if it’s between consenting adults, free and able to give their consent, then it’s not for us to get involved. Or to judge. If even some homophobes get that, we should be able to do even better.

“If a person feels like they are in a relationship with a machine, then they are.” “what I am thinking, feeling, and experience is the only thing that counts.” Those are positions Prof Richardson is critical of. If we are to respect all sexual expression (between consenting adults, free and able to give their consent), and we are, then we have to accept their own definition of identity, sexuality, gender, etc. That’s not patriarchal (in fact, the attitude has stood against the patriarchy in the past), it’s not egocentric (any more than respecting someone’s identity in terms of sexuality, gender, religion etc is). It’s respect.

It’s respect for people who think and feel and experience pleasure and sex differently, in ways we might feel uncomfortable recognising. Which, I guess, is what makes it hard for the neopuritans, of whom Prof Richardson appears to be one. I assume she is; otherwise, why dismiss something that doesn’t fit her conception of legitimate human experience?

It must be tricky times for the neopuritans. Wanting to monitor and dictate what happens in private, between consenting adults (free and able to give their consent), but finding that homosexuality and transsexuality are no longer legitimate targets. Who else is next? Let’s identify a remaining marginalised form of experience. Let’s go for the agalmatophiles. As Prof R. says later in her interview, “I think, most people would agree they’re a bit creepy”. Yep, like most people agreed gay people were a bit creepy a few decades ago? But if we target those who enjoy that sort of thing, and dress up our distaste for what we’ve deemed corrupt and perverse with words like “patriarchy”, that’ll make it look more liberal.

And if you’re thinking that wanting a relationship with a doll is a bit weird, so why stand up for agalmatophiles, there’s a poem by Martin Niemöller you need to re-read.

So yes, “two seemingly different ways of understanding the world” have come together in Prof Richardson’s argument, but those two things are luddism and neopuritanism, basically fear of technology and fear of other forms of sexuality.

There are some more unethical opinions stated in the second part of the interview. I’ll leave them for the next post.