On Ludicity, Bullshit and Lorraine

This is a re-post of an earlier blog entry (https://markchilds.org/2020/10/03/on-liminality-bullshit-and-lorraine/). I’ve just tried updating it, but can’t save it – at least now you’ve got the original for comparison, I guess?

Bullshit

Bullshit is defined in the literature as unevidenced claims (MacKenzie and Bhatt, 2020). I would like to extend this definition to describe anything miscategorised ontologically.

Broadly there are four ontological categories:

  • “Proven”
  • Unproven
  • “Disproven”
  • Unprovable

So “proven” claims are those with sufficient evidence to convince the majority of people who have viewed the evidence. The scare quotes are because nothing is ever completely proven to be true; the best we can say is that the statement is the one, of all the possible statements, that best explains the observable evidence. Examples are evolution, general relativity, the standard model, climate change, and so on.

Unproven are those claims which have insufficient evidence to convince the majority of people who have viewed the evidence, but for which there is some, or where there are competing explanations. Examples are string theory, …. These are contested, and often there are social, hierarchical and cultural reasons why some explanations lead over others. For example, those published in English are likely to take precedence over those published in other languages.

“Disproven” are those where the overwhelming evidence is that the claims are false. Vaccines cause autism, creationism, etc.

Unprovable are those categories of statements for which evidence cannot be acquired. God, unicorns, afterlife, etc. The claims are that these things exist despite there being no evidence. Absence of proof is not proof of absence, is the argument.

So I’d argue any statement properly attributed to the correct category isn’t bullshit, but if it is misattributed it is. So for example, “I believe in God and that belief sustains me through my bad times”, is not bullshit because it makes no untrue claims. “God loves you all”, is – because it’s claiming that God actually exists, and we have no evidence for His existence.

“The Earth is flat” is bullshit, as is “vaccines cause autism”. Those are both claiming disproven things are proven. But so is “science is just a matter of perspective”, as it’s stating a “proven” thing is unproven. Yes, you could overthrow the current paradigm, and people have, but you would need a wealth of evidence to outweigh the current best “proven” explanation, and move it to a different category through presenting that argument. To state that theories agreed across all cultural perspectives are just a male, western white perspective, when science is being used by all countries to determine truth from fiction, is bullshit.

An addendum – I’m talking here about the positivist end of the spectrum – astrophysics, biology, etc, the things based on measuring stuff (see a previous blog post). My own bias, as I go there when I think about science rather than the more interpretivist stuff like anthropology, psychology, education. With those there is a strong argument that there’s a western domination which influences the field – have a read of this https://www.nasw.org/article/science-writers-urged-tell-stories-include-indigenous-perspectives

Within the “proven” category we also have the distinction between positivist and interpretivist perspectives. Positivist observations are more powerful, and indicate stronger causal links. There is instrumental reality to back them up (although instruments can be wrong). But interpretivist data is also useful. To state that a model needs to predict behaviour absolutely in order to have value is bullshit, because even if a model is only right most of the time, it can still inform decisions. But to say that a measurable phenomenon is of no more value than a collection of qualitative data is also bullshit.

So yes, things move from category to category, but only over time, and only with evidence and reasoned argument.  There are blurry lines between the categories, and opinion might vary on which side some things legitimately belong. Bullshit only applies outside of these blurred lines.

These distinctions weren’t always so evident. It’s only with the Enlightenment in the 18th century and the development of the scientific method that humanity developed a mechanism to fully determine the difference between finding stuff out and making stuff up. And to state that science is the dominance of a western male perspective is bullshit. Anyone who wants to tell the difference between making stuff up and finding stuff out uses the scientific method. Indian scientists put stuff into space using science, not Western science. Science. The only difference between a Chinese scientist investigating copper nanotubes and an American one is the abbreviation they use. It’s always been this way. Current science is an amalgam of Islamic scholars, Greek philosophers, Chinese inventors. The first university was in Timbuktu. Reality is a humanity-wide endeavour.

Part of the problem is that there is a perceived difference in value between stuff made up and stuff found out. The Enlightenment has led to the perception that only things that are true have value. Hence, we’ve had epistemicide, where whole systems of making stuff up have disappeared. But just because something’s made up doesn’t make it useless. However, in order to compete with real stuff, everyone feels they have to claim that their worldview is real. Hence people claiming that God is real, I mean really real in a literal sense in the same way that I, and probably you, are.

Before the division into real and not-real, people felt comfortable mixing ideas they would create alongside stuff they saw. So we’d have theologians arguing about how many angels can dance on the head of a pin, or people dancing so the sun would come up. They didn’t really believe that those things were really real in a literal sense. The distinction didn’t matter. There are a lot of cultures around the world that still haven’t adopted this hierarchy. The idea of qi informs the design of buildings, but no-one tries investigating copper nanotubes using those principles. It’s not really real in that sense. The Dreamtime doesn’t actually literally exist in the same way the world does; it’s a signifying mythical system that exists alongside the real world.

But since the Enlightenment people who like the made-up stuff feel they have to place it on the same footing as stuff that’s found out, which means claiming that made-up things are really real too and then using made-up stuff to make decisions about real things. So they redact science books because it contradicts the made-up stuff about creationism, or they use a line in the Bible or the teachings of an Imam to decide which real people should be allowed to fuck whom.

It’s a misassignment of ontological categories.

It’s bullshit.

On the update

<I’ve gone through updating the original post because that version conflated the ideas of liminal and ludic spaces. I was aware initially that the ideas were different, but was convinced through conversations around 2015 and 2016 that the ideas had become conflated. Recently (2022) I’ve had a few more chats with people and have realised that actually, many people make a distinction. I’ve changed “liminality” to “ludicity” where that’s what I actually meant, and any extra text I’ve added (19/9/22) is in <> parentheses.>

On liminality and ludicity

The idea of liminality started with Victor Turner, an anthropologist whose work drew heavily on drama. Liminality derives from the word “limen” – the threshold, or the edge of the stage. <Turner’s idea was that in the cross-over between off-stage and on-stage, there is a moment where there are no rules, no roles, everything is held in abeyance until we enter the roles assigned for us on the stage. And this also applies to any other transition; the commute between home and work is a liminal space. I guess there’s the highway code to follow, but apart from that we are set adrift from all the other pressures of social interaction. I wrote this while driving back from a conference down the M6. Ideas would flow, I held onto them until I could get to the next service station, I’d write them down, and once that set of ideas was transcribed I set off again. This was only really possible because of the liminality of the space I was in. With that extended period to just think, it was possible to organise everything in my mind.

However, once the play starts, this is no longer liminal. There are rules to follow, parts to play, but they are different from those of regular life. So the events of a play exist within a ludic space, alternatively called a magic circle by Huizinga, or a membrane by Castronova, or a fourth place (though that was just me in one book and it didn’t catch on).> Within that space we suspend our disbelief – an actor becomes the character, the backdrop becomes an actual landscape or drawing room. But the same is true of a film, or a book; it’s called the diegetic effect. We can sustain that level of engagement, while also knowing that it’s not real. The state of knowing something is real and not-real at the same time is known as metaxis, or double-consciousness. We know deep down it’s not real, but while we’re in the ludic space we suppress that knowledge in order to fully immerse ourselves.

While we’re engaged in the film the real world doesn’t intrude, according to this view. We know that it’s not real, but while we watch it, that doesn’t matter. We know aliens aren’t real, but we’re still scared by them invading; we know that’s just an animated drawing of a deer, but we still cry when Bambi sees his mother die. We can take part in ludic spaces too. A game space is a ludic space. We know it’s only play money, but when we land on Mayfair with a hotel on it, we’re really pissed off. Virtual worlds are ludic spaces too. As are ritual spaces. Within them, roles are changed; identities can be changed; rules are changed. The made-up is made real. Monopoly, for example, extends a ludic space around the players. Within that space, the play money matters, and there are specific rules that govern behaviour. We all become capitalists.

Ludic spaces aren’t just defined by space, they are also bounded by time. A stage outside of a performance isn’t a ludic space. It’s just a normal space. It’s transformed into a ludic space by ritual elements, <passing through a liminal moment during the transition>. The surroundings help here. There’s an interesting paper by Pierpoint (Childs et al, 2014; 121-124) in which the surrounding elements of a proscenium theatre are described as part of this ritual. There is the design of the front of house, the direction to the seat, the sitting down, the reading of the programme; all those build up to the moment where the orchestra plays, and the curtain goes up. All these liminal experiences are signifiers of the moment when the ludic space is created. Performances where actors drift on stage, and there is no real start, feel odd because this ritual commencement hasn’t taken place. Site-specific theatre is more challenging partly because this ritual is absent, <we miss the liminal moment,> so we don’t know when or where the ludicity extends.

Ludic spaces can also be returned to and invoked repeatedly. By having multiple texts – a series of movies, or a TV show – a consistent repeated diegesis is created. This can also be extended outside of those texts by others, through fanfiction for instance, or conferences like those the Sherlock Holmes Society runs, where the canon is engaged with as if it were real.

The Pedagodzilla podcasts are ludic spaces. The Godzilla roar, the music, Mike’s intro, all set up the ludicity of the space. It’s important because it signifies that within those 40 minutes, making stuff up is legitimised. Mike sets out the rules: that there is a genuine piece of pedagogical theory, a description of a piece of pop culture, and then we will apply the real stuff to the made-up stuff as if it were real. We are deliberately misattributing the ontological nature of, for example, Yoda as a supply teacher, because we know it’s inappropriate, and therefore fun. We know that he doesn’t exist, and wasn’t created in order to be analysed in that way. And we know the audience knows that. And we hope the audience knows that we know that. It would spoil the lusory nature of the ludic space for someone to criticise the argument with “but he’s not real.” That’s not the point. Made-up stuff is legitimate within the ludic space.

Ditto church services. The organ music, the singing, the sermon. All of those add to the ludicity. Gee would also describe the space as a semiotic social space; if you can read the signs around you, in the vestments, the props, the stained glass windows, it all adds to the experience of it as a ludic space. Within that ludic, time-bounded space, misattributing the ontological status of God is fine. You can say He’s real within that space, and share fellowship and agape and all that feelgood stuff, because the normal rules of engagement with reality are suspended. Made-up stuff is permissible.

And ludic spaces can also exist within other ludic spaces. For example, later in the same chapter as the Pierpoint reference, Ian Upton (Childs et al, 2014; 127-130) talks about ritual spaces within Second Life. We adopt one set of rules on entering the virtual world, and then within the virtual world cross another magic circle where rules and identities are transformed again. Ian argues that the change between the non-performance SL space and the performance SL space is a greater one than that between RL and SL.

Where it breaks down a bit

This idea that ludic spaces are separate, discrete places cut off from normal space doesn’t always hold, however. The membrane around that magic circle is permeable. Anyone who’s had to placate a child who’s got upset by landing on Mayfair, or fallen out with someone because they lifted money from the bank, will know that what happens within the Monopoly game space does have an impact on the rest of the world. More positively, the ludic space can excite us, or sustain us, in the rest of our lives, through our looking forward to the next movie in a series, or building a fan community around those spaces, or having faith in a divine being.

It works the other way too. In novels and films, often the exterior world will intrude, to remind you it is only a book. In Vanity Fair, Thackeray interjects to remind the reader that he’s writing the novel. The sense of immersion is undermined, the diegetic effect broken.

And sometimes the membrane extends well beyond the ludic space. A football ground is a ludic space. There is the ritual of the singing, the buying of the pie, the Chinese dragon dancing between halves (I’m guessing, because I’ve only ever been to one football match in my life, which was West Brom vs China). The crowd shares in the made-up thing that it matters whether one set of 11 people on the pitch gets the ball in the net more than the other 11. That’s what the game is. That’s what all games are. They’re enjoyable because we’ve invented a set of criteria that matter, not because they do intrinsically, but for the sense of camaraderie, of communitas, that occurs when the criteria are met. One woman jumps higher than the other woman, one robot flips the other robot out of the ring. We all know deep down that they don’t matter, but it’s fun to believe that they do and share that with other people.

But that ludicity is breached when it extends to people’s entire lives. Outside of the match, that ludic space, bounded by space and time, can come to dominate those lives. At some level there is the awareness that actually, it’s a manufactured reality, but that realisation is permanently suppressed. Your team loses and you will be depressed all week. It’s the same with religion. The statement that God is real isn’t left at the church door, but is taken out into the real world and is acted upon as if it were true all the time. It’s self-evidently not true, but the ontological status is misattributed.

Let’s remind ourselves where the bullshit lies. “I know I can’t prove God exists, but I choose to believe he does, because that belief gives me comfort, and ties me to my community” – not bullshit. “God exists and He says you’re going to Hell” – bullshit.

There’s an extra level of complexity with ludicity, and that is where it’s intricately woven with the external world. This ludicity isn’t obviously tied to a space or a time, but it’s ludicity nonetheless. This is where we come to the Dalai Lama, Tony Stark and Lorraine Kelly.

The Dalai Lama, Iron Man and Lorraine Kelly

What these three have in common is that they are all fictional characters; they have identical ontological status.

The Dalai Lama is the latest incarnation of Avalokitesvara, a Bodhisattva. This obviously is made up, as there is no evidence for reincarnation or the existence of Bodhisattvas. He is performed by a real person named Lhamo Dhondup. If people believe in that sort of thing, then when they meet Dhondup they might believe they have met the Dalai Lama. Where the reality and the fantasy become distinguishable is difficult to say. Maybe when he gets home Dhondup takes off the assumed identity and just becomes a normal guy. Maybe he performs that identity 24/7. Similarly with Tony Stark. The character was created more recently, and we know by whom (Stan Lee, Larry Lieber, Jack Kirby and Don Heck), whereas the name of whoever made up the Bodhisattva stuff is lost in the mists of time, but ontologically they are just as real or unreal as each other. In his most recent incarnation Tony Stark is performed by Robert Downey Jr. However, that performance isn’t restricted to the ludic space of the MCU, as Downey Jr. (like Dhondup appearing as the Dalai Lama) goes to hospital wards to meet sick kids who (like Buddhists) really believe he’s Tony Stark. Downey Jr. doesn’t do that all the time, he has an out-of-ludic-space life, but he carries that ludicity around with him, able to generate it when it’s required. And that’s OK, because that ludicity legitimises the made-up-ness. The child in the ward isn’t meeting an actor, he’s meeting a superhero. For the moment Downey Jr. is there, the fantasy is real. Ditto Dhondup.

Lorraine Kelly is slightly more complex, in that the fictional Lorraine Kelly is performed by a real person also named Lorraine Kelly. This was actually established as a point of law: the real Kelly won a tax case on the basis that the fictional nature of “Lorraine” means she’s a performer when she’s doing her presenting; she’s not being herself. When she meets fans at events, she’s also Lorraine, but where the real Kelly ends and the fictional Lorraine begins is a blurred edge to the ludicity.

In the world of professional wrestling this is known as kayfabe. Although professional wrestling resembles a sport, its roots are actually in the strongman sideshows of carnivals. From the 1920s in New York, initiated by the promoters known as the Gold Dust Trio, the matches have been “worked”, ie fictions created as performances. The ring is a ludic space (as are all sports spaces) but the ludicity extends beyond the ring, as the worked narratives are played out in public outside of the ring, extending the narrative into mainstream space. The wrestlers abuse each other in publications, carry on feuds in public spaces, and the wrestling news describes these stories as if they were real news. As internet culture has formed, the ludicity has extended to websites, but this also makes constantly maintaining the work more difficult, as fans may spot enemies together in restaurants etc.

This is still ludicity, but again the wrestlers carry that ludic space with them. In dressing rooms etc if a “mark” (ie someone not part of the work) is spotted, the wrestlers will call out “kayfabe” and switch on their characters in the same way that Kelly, Downey Jr, and (presumably) Dhondup do, generating that ludicity around them.

And what? It gets more complicated?

This blurring of ludicity is also deliberately played with in professional wrestling, at a level of complexity rarely developed in other media. A wrestler might be really hurt, or go AWOL, or fall out with his coach. Or a “face” and a “heel” might fall in love, etc. This is called a shoot (as with most carny traditions, there is a huge terminology describing the differences between the ludic and external spaces). A shoot is when the ludicity is unintentionally dropped and reality intrudes. This could happen with the other examples too. Anything could happen to cause Downey Jr., Kelly or Dhondup to slip out of their roles, with varying consequences.

Where professional wrestling is more complex, however, is that there are also worked shoots. What may seem to be a falling out, a narrative in which the ludic space has been broken, can actually turn out to be part of a larger narrative, and it’s all part of the work. Fans are constantly kept uncertain as to what’s real and what isn’t. But they work it out, or adapt in retrospect if they haven’t. Professional wrestling fans’ realities are constantly being retconned and it’s all part of the fun. We could learn a lot from them.

So what’s got fucked up?

Believing in things is fun. Make-believe is reassuring; it brings respite from the harsh realities of life, and particularly death. We can console ourselves that there is a heaven, or whatever it’s called, and that gets us through. It’s more exciting to meet the Dalai Lama, or Tony Stark, or Lorraine, than it is to meet Dhondup, Downey Jr., or Kelly. It’s tedious to constantly have to follow up a statement about God or Yoda with “I know He’s not real, but just for the sake of discussion let’s pretend He exists.”

The problem is that people feel the Enlightenment has forced on us this hierarchy between finding stuff out and making stuff up. People feel that stuff has to be true in order to be justified in believing in it. And worse. Deep down people know claiming unprovable things are true is bullshit (once you know how to tell the difference, you can’t unlearn it), but that just means they end up defending it even more vociferously. You could argue that there are other ways of knowing, that evidence is not the only way to find things out, but then that’s bullshit about bullshit. That level of self-deception is going to wear you out.

The effect of all this bullshit (and metabullshit) is that we get people attacking soap stars because of something their character did in last week’s episode, we get climate change denial, antivaxxers, holocaust denial, homoeopathy, we get statements like “it’s Adam and Eve, not Adam and Steve”, we get people forgetting that ultimately it’s just a game, etc. etc.

And on the other hand, where many epistemologies collide with scientific rationalism, scientific rationalism wins (because it’s the only one that works) and we lose all these alternative worldviews in a global epistemicide.

The answer to this either/or state, between accepting or rejecting reality, is ludicity. You can have your cake and eat it. You don’t have to pretend stuff that’s made up is real in order to feel it’s legitimate to carry on believing it. Have your ludic spaces, but acknowledge that they are ludic spaces. You just need to be able to see the crossover point – the limen. Within the delineated liminal spaces, you can call anything you like true. Go to your Mumsnet group and complain about your food having chemicals in it, have your YouTube channels about the Earth being flat, have your services where you talk about all the wonderful things your God has done for you. But see the limen.

From all the examples above, we can see how flexible ludicity is: it can be delineated within specific spaces, it’s permeable, it can be spontaneously generated once it’s been established, it can follow people around. The boundaries can be played with. So feel free to apply ludicity when and where you like, to gain your emotional sustenance from it, but when you come back out into the real world, acknowledge that it’s just football, or religion, or a movie, and use real things for making decisions about the real stuff.

Recognise that every damn thing has chemicals in it and act accordingly, don’t go down conspiracy-theory rabbit-holes to prove the Earth is flat, acknowledge that God is no justification for stopping your son from marrying his fiancé, because God is something someone made up at some point. Acknowledge your inbuilt bullshit detector and end the self-denial. Accept reality into your lives.

Go to your ludic space. Have your fun. Have your life-affirming moments. Share your beliefs with your fellow worshippers as if they were real things. But see the limen, as you transition back out into the world you share with the rest of us.

See the limen and we’ll all get along just fine.

References

Childs, M., Chafer, J., Pierpoint, S., Stelarc, Upton, I. and Wright, G. (2014) “Moving towards the alien ‘other’”, in Kuksa, I. and Childs, M., Making Sense of Space: The Design and Experience of Virtual Spaces as a Tool for Communication. Oxford, UK: Chandos, pp. 121-138.

MacKenzie, A. and Bhatt, I. (2020) “Lies, Bullshit and Fake News: Some Epistemological Concerns”, Postdigital Science and Education 2, 9–13. https://doi.org/10.1007/s42438-018-0025-4

Failing to get irony isn’t the flex you think it is

In The Republic, Plato has his old teacher, Socrates, engage in a series of conversations about how to create a utopian society. The people he’s conversing with (I hesitate to call them friends because tbh he comes across as _really_ annoying) offer ways to construct this society – for example, having officials elected from amongst Olympic athletes, as they’d have commitment, and sport is an objective measure of who is better at something, e.g. the fastest gets to the finish line first.

Ah, says Socrates, so you’re saying that only the fittest and healthiest should make decisions about ruling. To which they answer yes, as they would have sounder minds. Ah, says Socrates, so you are also then saying that the infirm have nothing to offer. To which they make another response, and so on, each one leading them step by step to a more untenable position by using the logical consequences of their positions against them.

This, then, is Socratic irony: showing people the egregious nature of their positions, even though those positions might not appear egregious at first, while seeming to accept them.

It’s the basis of a lot of humour from the past few thousand years.

Though, not great humour, as it’s pretty annoying.

And as a recent example we have Jimmy Carr. The set-up is that when we look at the Holocaust, we decry (quite rightly) the death of six million Jewish people. We don’t decry the death of a million Roma and Sinti people. Ah, says Jimmy, that’s because we’re OK with that. The audience laughs.

The laugh – the “joke” – is the shock of recognition that, by not including those deaths in our teaching of the Holocaust, the implication of what we’re saying is that those deaths are OK. Of course, it’s not. The response isn’t one of enjoyment; it’s not really meant to be funny. It’s that instead of the usual expected declaration of how wrong those deaths are, someone is espousing the logical consequence of a prevailing opinion (that the Holocaust was the death of six million, not seven, or 14) which actually runs counter to that outrage. We’re being caught out in a double standard. It’s being suddenly faced with the recognition that something is wrong here.

That’s how Socratic irony works. The ironist says “you haven’t thought this through, your position is untenable” by stating the untenable.

There are some valid arguments that this still isn’t a great way to convey an antiracist message, though.

One is that there’s the danger it could be taken literally, and that could end up being counter-productive. Never underestimate the range of things you think untenable that other people find all too tenable. It’s not really conceivable that a comedian and a TV channel would condone that level of racism, but the level of endemic anti-Roma sentiment around is horrendously high, and people are understandably unnerved by it. It’s also possible for people simply not to understand Socratic irony. I’m sure some of the people taking those comments literally genuinely believe that because someone says something, that’s what they mean. There are language issues, literacy issues, the potential to take things out of context. All of which could lead someone to seriously think a racist message is actually being conveyed.

And secondly, the Holocaust. I mean, even if you can tell Socratic irony when you hear it, that’s still too horrendous a subject to include in a routine. I follow the Auschwitz Memorial Twitter feed and sometimes that’s overwhelming, seeing that inhumanity on a daily basis. Hourly. I don’t think I’d laugh for the rest of the evening for thinking about it if it got mentioned – even though I get the point that Carr is making.

And also, I don’t really want to go to a comedy gig to have society’s double standards about racism addressed. I kind of like stuff about people’s own lives, and their own perspectives. I already get the fact that the Roma, and their suffering in the camps, are overlooked. It’s personal observations on life I get a kick from hearing about; I don’t need to be woken up about people’s inhumanity to each other when I go out for the evening.

So – suitable subject matter – not really. Racist? Taken literally, obviously – but I do suspect the motivations of the people who are taking it literally. What is going on there?

Old left and new right

This post is prompted by a few Twitter discussions I’ve had with people over the idea that the left has abandoned the working classes in favour of liberal progressive agendas, which is why the tories are winning elections, and that the solution is for Labour to do more to appease working class tory voters. It’s difficult to conduct an argument when you’re limited to 280 characters, so here’s a more considered attempt.

First off – this isn’t really based on historical or political analyses, as those aren’t really my field. Instead it’s based on looking for cognate themes in the discussions people are having and trying to identify patterns in them. Which is my field.

People looking back at the origins of left wing movements usually talk about a pretty clear-cut dichotomy. On one side there are the propertied classes, who own the means of production; on the other, the working classes, who do the producing. The owners exploit the workers. The workers band together to accrue enough power to confront them. The left is labour, the right is conservative. There’s one issue – the sharing of wealth. If you want to share it, you’re on the left; if you don’t, you’re on the right.

Like this

But in the intervening years, a whole slew of additional issues has come to face us: women’s rights, immigration, Black Lives Matter, gay rights, preventing climate change, animal rights, and most recently (but not least) trans rights. Our lives have got bigger. Our world has got bigger, and so we’ve each had to encompass these ideas and choose where we stand on them.

If you’re for any of those, you’re more leftward leaning; if you’re against them, you’re on the right. Which means if you’re historically for sharing wealth, but not for sharing power or defending the rights of those who are gay, non-white, transgender, or non-Christian, or you don’t believe in climate change (or do believe in it but don’t give a fuck), then without changing your original position on the whole wealth-sharing thing, you need to face the fact that you are now, predominantly, right wing. Like this:

What’s frustrating is that people who agree with so much of the new right-wing stuff are letting their approval of that overwhelm their awareness of the traditional divide. If you have no money the tories will not give a fuck about you. That’s worth repeating: THEY WILL NOT GIVE A FUCK ABOUT YOU.

After the murder of David Amess, obituaries were full of the great things he’d done. It seems to be a British phenomenon to only speak good of the dead. But this is piss-poor journalism. The guy was not a good person. He repeatedly voted against gay rights, against immigration, and against benefits increases. He was pro animal rights. So he hated foreigners, gays and the poor, but loved animals. Let’s give him 25% then. That’s not a pass. Obviously his murder was appalling (I’m not arguing against that) but let’s not whitewash his lack of ethics. And if you voted for him because you’re against immigration or gay marriage too, then you’re not a good person either. And if you voted for him and you’re on benefits, you’re a fucking idiot as well.

The idea that a party that focuses on all those things in the bottom left-hand corner of that chart has somehow lost the confidence of all the working classes is bullshit. Because there are plenty of working class people who care about those things too. Part of the problem is that the tories have successfully convinced people that the reason they are poor is because of immigration, or because all the left cares about is a liberal progressive agenda. That you’ve lost your job because of a foreigner or because of quotas. But that is so evidently straight out of the fascist playbook that to fall for it there’s got to be plenty of underlying prejudice to build on. I think there should be more emphasis by the left on equality for all classes, just to remind people that this is still a major concern of the left – of course it is. But if you’re after equal treatment for yourself, but not equal treatment for everyone, you’re a hypocrite. And if you actually want unequal treatment for others, then you’re a loathsome arse.

And this seems to be the dilemma people say is facing the Labour party. At the moment it seems like Starmer is trying to appeal to the right wing working classes, because the working classes are the traditional demographic for Labour. So something like this:

That’s creating such a mixed message that no-one is falling for it. Which means a choice between the top row, or the left-hand column. And he doesn’t seem either clued up or brave enough to make that choice.

What someone with integrity would do, though, is appeal directly to those calling for equality – equality for all. A minimum wage, a united Europe, immigration, an NHS, social welfare, greenhouse gas reduction, pro-women, pro-transgender, the whole bit.

And if that means abandoning to the tories those sections of the working classes who are against those things, then that seems like a reasonable loss.

Because they are assholes, and the tory party is the party for assholes.

It’s a Sin

Just a warning: spoilers for It’s a Sin. It’s weeks since it was on, but (for some bizarre reason) I see from Twitter that some people have been watching it one episode per week. We binged 4½ episodes in one day. It would have been 5 but the wifi packed in. It probably thought it could only handle so much of an emotional wringer in one go.

It’s highly recommended. The characters are only one year older than I am, and I lived in London from 1985 to 1988, but the AIDS epidemic only really impinged on my life when the “Don’t die of ignorance” billboards started springing up. Everyone pretty soon caught up with the bare facts, and when celebs you’d heard of started dying, then it made some sort of connection, but no-one I know (as far as I know) has died from AIDS, so I’ve been spared any direct experience of how absolutely devastating it was. Seeing it on the screen was hugely powerful.

So – obviously part of that emotional impact is because of how dreadful it is as a disease. But also, people have recognised the power of RTD’s writing. Which, of course, is immense. The man’s a genius. The power of the acting too. Everyone on screen is spot on. This blog, though, is about the direction, which few people are raving about.

That’s because direction, if it’s done perfectly, isn’t noticeable. It’s not meant to be.

When we’re looking at technology there are two phrases that describe our relationship with what we’re seeing – immediacy and hypermediacy. Immediacy is when we’re not conscious of the means by which the content is conveyed to us. We’re there, in the action, and not making any deliberate effort to make sense of things. The difficulty with teaching anyone to read is that until the reading is effortless, the fun of reading is diminished. You’re so caught up with trying to make sense of each individual word that the fun of having a story emerge in your head without thinking about it isn’t there. So the fun is never there. Once you’re literate you just get the book out, read the words, they automatically make sounds in your head, the process is transparent. The tech is immediate. You, the text, the story are one. It’s called the diegetic effect. Occasionally there’ll be a word you don’t know, and it’ll blip you out of the diegetic effect. But just highlight the word, click define, and the internet will provide the info, and you carry on reading. Diegetic effect restored.

Virtual worlds on screen are the same. At first you’re struggling to move around, find the animation for the gesture, work out how to move the camera. You’re not focused on the interaction that’s happening because you’re having to focus too much on how to interact. But you get used to it, and the tech becomes transparent. The experience is immediate.

The counter to that is hypermediacy. The point of hypermediacy is to make the tech apparent. There are loads of reasons why this might happen. Obviously one of them is accident – a spelling or punctuation mistake, for example. You’re reading a book and that whole autonomous words-to-sounds-in-the-head-to-images-to-story process is working in harmony, and there’ll be a blip, like a speed bump while you’re driving, and you’ll think: that was weird, what happened there? Going back you’ll see someone’s written “it’s” instead of “its”, or whatever, and it was just enough to interrupt the flow. You just set off reading again. But I find if it happens too often it just stops the process being enjoyable and I have to give up. This is why punctuation matters.

Or it can be done deliberately. I was reading James Acaster’s Classic Scrapes. Really funny, well written. The narrative flows, you get drawn in. But at one point there was the blip – where I’m half aware something went wrong with that autonomous reading thing. Scanning back I saw him writing about getting his “just desserts” 😊 Now that’s funny, the implication being that James likes his puddings so much he puts an extra “s” in there. And you’d have to really stick to your guns to get something like that past a technical edit, so he must have really wanted it. I know; I’ve tried. And failed. But then I don’t have the same weight as James Acaster. There was a “free reign” pun as well, which I didn’t get, but I’m guessing it was some play on words that went over my head.

My point is … hypermediacy is valid too. It pushes you out of the diegetic effect, but if you’ve got a reason – like making a point about the process of mediation, or making a pun – then no problem.

It’s a Sin is directed by Peter Hoar. Incidentally, he also directed my favourite episode of Doctor Who, “A Good Man Goes to War” (the Seven Samurai homage one – image below, I think). You won’t notice the direction though. That’s because throughout he follows the basic rules. Film-making has been around for well over a century and pretty quickly worked out how best to tell stories visually. If you look at movies from the 1920s, whether they’re German, Russian, Japanese, whatever, they follow certain rules.

https://images.app.goo.gl/D7NqZ7ngRP2FHe137

One of these is to take you on a structured walk through a scene. There’s an establishing shot, there are close-ups, the camera picks up on specific bits of action, then pulls out again. It draws on theatrical traditions of the tableau, but there’s a limit to where you can put the camera – not because of technical limitations, but because of the way the brain decodes the scene. In film studies it’s called spatial verisimilitude, but it’s how the hippocampus works on imagery. There’s a line which you can’t cross without disrupting the ability to work out where everything is. I mean a physical line across the space being filmed. Those earlyish movies did one thing that’s a bit different, which is that when they were cutting between two people, both would look out of the screen at the audience. Within 10 years though, these two-shots were sorted out, so that one person would be shot over the right shoulder of the other, and the reverse shot over the left shoulder of the first. The camera moves more now, but it never crosses over that line. On the screen, it means that one person will always be looking towards the right-hand side, the other to the left. There’s a continuity that makes sense. If you want to reposition the scene, you need to do this carefully, tracking from one place to another, or moving in 30-degree jumps, then continuing from that new position. Fail to do this and the viewer loses the sense of where everything is. To do this requires meticulous storyboarding and blocking of the scene.

https://images.app.goo.gl/ZgtXUKiYh5LTXczN6

Follow these rules and the audience does not consciously have to decode what’s going on. They can be immersed. Break them and the audience is putting their effort into making sense. You’re in the realm of hypermediacy.

Watching It’s a Sin, you are never thrown out of this sense of immersion. The emotional power of the acting and the writing comes through constantly. You’re caught up in the drama. And because it’s done right, you never notice it.

There are exceptions (here comes the spoiler). When the Keeley Hawes character finds out that her son has AIDS and is gay, at the same time, she takes off like a rocket, walking from ward to ward to try and get something sorted. She’s trying to overcome her sense of powerlessness (not only then, but retrospectively how powerless she’s been as a mother) by doing *something*. The camera moves backwards the whole time, following her as she strides towards it. It has power because it’s so visually different from anything before – AND IT’S ALL ONE TAKE. There’s no cutting, which would undermine the intensity. You’re not consciously aware of this, but it works on the way you read that scene.

Contrast that with another Keeley Hawes TV show, Finding Alice. I managed about 10 minutes. There’s a scene in a kitchen where her husband’s death is being investigated. The camera cuts quickly and aimlessly (it seems) through the space. It crosses the line constantly; a new character walks in and there’s no establishing shot. We never have a continuous sense of where everyone is in relation to each other. There is no emotional engagement, because there’s no immersion. We’re watching a series of discrete, disconnected images. There’s no flow. We’re struggling so much with the form that we’re not engaging with the emotion of the scene.

What’s weird is no-one has really commented on this in the reviews. I think one called it “stylish”.

I mean, fine, if you want style, then creating something where you’re not immersed works. I’ll watch a Godard movie, where from one shot to the next there is no continuity. It deliberately defies engagement – a move called distantiation. Distantiation makes you aware of the artifice of what’s going on. Which is great for cineastes or film studies students. It’s not really fun though. People want immersion. It’s a Sin got 6.5 million viewers – the most-streamed TV show ever on Channel 4. I don’t know anyone who managed more of Finding Alice than I did.

Which makes you wonder why someone would deliberately make something that’s not immersive. Unless they just don’t actually know how to direct?

And just to reiterate, this isn’t just a cultural thing – what people are used to. It’s a neurological thing. I watched Violence Voyager, which is made in Gekimation – the same sort of cut-out cardboard animation that we got with Noggin the Nog in the 70s. It’s a long way from Smallfilms though – it’s possibly the most disturbing film I’ve seen. But despite the imagery, and the form, it still followed the rules: not crossing the line, that careful establishing shot, two-shot, close-up, two-shot dance that makes sense of visual spaces. And that’s one where I’d have appreciated some distantiation.

My hope is that the impact It’s a Sin has had will perhaps persuade other film-makers that establishing immersion requires these basic rules, and that if you follow them you will get bigger audiences. There seem to be more and more TV shows that screw them up, where you get blipped out of that diegetic effect because the camera has crossed the line and suddenly everyone’s facing in a different direction from where they were a moment before and it just looks weird, or there’s no eye-line match because they’re shooting from random directions and your brain isn’t tying up the space in a coherent way. And it’s just sad – because the writing and the acting are excellent, but the director is trying to be stylish, or is incompetent, and I want to love the show, but they’ve made it impossible to get into. It’s a sin.

Getting creative online

This is written mainly in response to this blog post by a colleague, Emily Dowdeswell, so maybe have a read of that first: https://www.cambridgeartsnetwork.com/news/finding-ways-to-be-creative-1605259163

I also had a conversation with colleagues at Durham about the teaching of dance online, in which the online dance elements were alternated with in-person elements. This resonated with me, as one of the first workshops I supported as a learning designer was a dance workshop held by an academic in Canada with performing arts students at Warwick.

The creative arts must be some of the most difficult to translate to an online environment. So much of art is done in solitude, and yet it’s tactile and can be collaborative. Knowing people who are doing arts degrees in the time of lockdown has brought that to mind a lot recently too.

So what are the issues these things raise?

Online sessions can become dull and repetitive. Something more creative and expressive is needed for everyone. Being more tactile and allowing movement can make a difference. I would resist the notion that there are “kinaesthetic learners” specifically, but I have noticed that some people are more locked into their physical bodies than others. Working with virtual worlds, I’ve seen that the people who are most upset by relating virtually are those with a specialism in, for example, sports or sculpture. “I don’t want to be sitting in front of a screen, I want to be out on my bike”; “I can’t make something with my hands, I don’t get it” were some quotes (I’m paraphrasing) that came out in my doctoral research. This is why they struggled particularly with my sessions in Second Life, because full engagement in a virtual space requires the ability to be disconnected from the physical body and fully embodied in the extended body on the screen. A recent post from Dave White (http://daveowhite.com/spatial/) describes how this body can be something as minor as an image, as long as there is a spatial component to where it is located on the screen. I’ve seen learner interactions transformed online by the simple act of sharing a space with their selves represented by just a cursor. As long as they can gesture within that space and affect the space, and have those changes perceived by others, they have an online body and then communicate more effectively. But roughly only 25% become fully immersed in this way; for 50% it’s a mixture of feeling located in their extended body and in their physical body; and 25% never experience that online embodiment. There may be some neurological evidence to back this up.

But despite this, there are no specific kinaesthetic learners; we are all kinaesthetic learners at some points, in the same way that we all constantly move between learning most effectively visually, textually or aurally (ie why learning styles are complete arse). Sometimes holding something, drawing it, or so-called visceral writing (ie writing with a pen) can help us learn from a different perspective. Unlock something creative.

Also, movement helps everyone. In IET’s 2020 Innovating Pedagogy report there’s a chapter on Spaced Learning: people learn better if they break up their studies with some exercise every 20 minutes. If I’ve got deeply engrossed in something (it happens sometimes), when I start moving again afterwards everything has seized up. That’s not healthy. We need to stop trapping people in front of a laptop for hours at a time.

So the idea of mixing the offline and the online, breaking up the Zoom (or whatever) session, is vital.

However, the Zoom session itself is vital too. I like the idea of Zoom as a campfire. The metaphor is that we have these rooms and bodies and practice outside of the space, but the virtual space is where we really connect. That seems to run against the statement “Zoom is just a place where we happen to meet.” Where we meet is everything. But I guess what the statement means is that there is nothing special about Zoom. In fact, there are some severely limiting aspects to it, like the lack of spatiality that Dave White mentions; perhaps something that gives us a sense of spatiality would be better. Zoom has a spatiality setting, where it puts each face on a seat in a forest setting. It looks awful. Each person is still just a disembodied head, only this time much smaller on the screen. The virtual backgrounds add a bit of fun to the process and they help, but I think there’s more to be done there.

What Emily mentions in the blog post is a whole set of excellent techniques to make the connection work better: whiteboards, chat, gestural elements like hand-raising. We need more though. In a Rumpus meeting last week we talked about how sound is underused. The sound of applause instead of just a clap emoji would help. A little signature tune to announce when someone logs on would be great. We need to experiment with more ways to be present. Maybe take some hints from how we are present when we are together in person, but there may be some completely new ways to be thought of. Or dump Zoom altogether and find something more expressive.

What is particularly difficult, though, is making the transition between offline and online. If we’re moving people out of the online space to do something movement-based or tactile and creative, then this sense of sharedness – vital for communication – will be constantly undermined. We need ways to transition between the two. The campfire metaphor is excellent here. Campfires are liminal spaces in many ways: there is the shared heat, the passing of objects to add to the fire. We don’t necessarily have formal rituals of entering and leaving, as we do with ritual and performance spaces, but maybe those could help. I was once asked why we wave at the end of a videoconference session, whereas we don’t at the end of a meeting, as if that was weird. But it’s not; virtual spaces are liminal. In order for them to work we have to cross over from the physical into the virtual. And we have to cross back out at the end. That epistemic shift is more effective if facilitated through an act – even one as minimal as waving. We need to identify more things like that to make this transition in and out more rapid, if it is to take place many times in the same session. So we need the reconnecting with our physical bodies that Emily suggests, but also effective disconnecting from those and reconnecting with our virtual bodies. We probably also need additional ways to bring in the physical artefacts we’ve made, if those are also intended to be part of the shared experience. Maybe tech is a way to do this: visualisers for drawings, camera peripherals to capture movement. Haptics, body motion tracking? But a lot of it will need developing practice too. And the passing of artefacts?? Absolutely key to the shared campfire experience, but how to replicate that online? In a Zoom meeting, where is the toasted crumpet?

So – in summary – which is tricky because this is an unstructured brain dump in many ways. I don’t think we’ve got many answers in the short term, but at least we have a different name for the problem.

2020 vision

I had one of those Covid stress dreams last night. We’ve all had them. Somehow they’re more vivid than regular dreams; there’s a vague feeling of dread or sadness that overhangs them, which sticks around for hours after you wake up.

Although I have way fewer reasons to be stressed than most. Career-wise the pandemic has probably been beneficial for me. Online learning has suddenly been forced upon people, and those who are doing it well find that there are benefits, and students appreciate it. Not only that but people are working out what in-person teaching is good for, and what it’s not necessary for. After 23 years of researching how online learning works, and teaching people how to teach online for 15, for the first time everyone is listening.

So 2020 hasn’t been the worst year of my life by a long chalk. I started a new job, and then four months later I got another one. I finished (and passed) my MA. VR got a boost through the introduction of the Oculus Quest and so I found a renewed interest in my research specialism. I managed to get my hands on a pair of limited edition Scooby-Doo Converse.

The lockdown didn’t affect me greatly. My colleagues are all experienced in remote team-working, so I didn’t feel any disconnection from them on starting a new job. I helped run a successful conference, and chaired a great panel session with colleagues from the UK and Canada. There were low points: not being able to see my wife for several months because we were locked down separately; not being able to see my mother when she was admitted to hospital with a stroke (she’s fine now); missing Christmas with the kids just now because we were all put into Tier 4 just days before.

The lack of impact on my job is partly by design (one of the reasons I chose to specialise in online learning was so I could work at home), chiefly by luck (I got to exercise that choice because of financial security; if I hadn’t owned my own house I’d long ago have been forced out of HE and into something with more job security). There was an aphorism that did the rounds a few months back: “there was no lockdown, the middle classes stayed at home while the working classes brought them things”. That’s bullshit, obviously. True, it’s 10 times worse for someone who’s also experiencing financial stress, but we’ve all experienced lockdown to some extent – gigs cancelled, families out of bounds, holidays at home. And for someone who actually likes being stuck inside with just a pile of books, a TV and an Xbox, it’s not the same hardship as it is for someone who doesn’t have the space at home, and needs to do social stuff for their mental wellbeing.

I guess the fears that we all have are about the longer-term impacts: economic ruin, food insecurity, civil unrest. Those continue, and it’s those that have obviously been preying on my subconscious. I’m assuming this because this is what it foisted on me this morning.

The pandemic is over, but before it ran its course, it drove civilisation into collapse. I am now part of a nomadic people travelling through a post-urban landscape. There is an attack; someone fires on us, one of the loners who preys on such groups. He kills one of our group but the attacker is killed too. I scavenge the belongings of both of the dead. I lay the weapons to one side, but then the camera (I dream like a movie: establishing shots, two-shots, reversals, jump cuts) zooms in on the backpacks. The attacker’s contains clothes, neatly folded, and (the music swells into something poignant – yes, I get a soundtrack too, wanna make something of it?) amongst them is a copy of Newsweek, bagged and boarded, with the attacker’s image on the cover. The implication is clear; this scavenger was once someone powerful, wealthy, and is clinging to his prepandemic identity – a reminder of his life before. I return the magazine to the backpack, dig two shallow graves and lay the corpses in them, placing the backpack as a pillow, burying his past with his body.

I’m not sure what to make of this. Probably it’s just random shit my brain is pumping out, but the pathos inherent in trying to cling to something that’s gone is quite a strong image. I mean, we could be going through this for years to come – and even once the pandemic is over, there’s the fallout from it (and Brexit) and possibly more pandemics to come. Should we be so worried about what we don’t have any more? Are we wrong to keep clinging onto it – is it holding us back, and should we move on? Or is it something that we retain as being important to our identity, even though it’s gone? Our glory days as a society, bagged and boarded, weighing us down past the point where those things are at all useful.

Well, OK, that turned out to be a bit more depressing than I expected.

Maybe next time I’ll just post some pictures of my cats.

On Liminality, Bullshit and Lorraine

Bullshit

Bullshit is defined in the literature as unevidenced claims (MacKenzie and Bhatt, 2020). I would like to extend this definition to describe anything miscategorised ontologically.

Broadly there are four ontological categories:

  • “Proven”
  • Unproven
  • “Disproven”
  • Unprovable

So “proven” claims are those with sufficient evidence to convince the majority of people who have viewed the evidence. The scare quotes are because nothing is ever completely proven to be true; the best we can say is that the statement is the one, of all the possible statements, that best explains the observable evidence. Examples are evolution, general relativity, the standard model, climate change, and so on.

Unproven are those claims which have insufficient evidence to convince the majority of people who have viewed the evidence, but for which there is some, or where there are competing explanations. Examples are string theory, …. These are contested, and often there are social, hierarchical, cultural reasons why some lead over others. For example, those published in English are likely to be forerunners over those published in other languages.

“Disproven” are those where the overwhelming evidence is that the claims are false. Vaccines cause autism, creationism, etc.

Unprovable are those categories of statements for which evidence cannot be acquired. God, unicorns, afterlife, etc. The claims are that these things exist despite there being no evidence. Absence of proof is not proof of absence, is the argument.

So any statement properly attributed to the correct category isn’t bullshit, but if it is misattributed it is. So for example, “I believe in God and that belief sustains me through my bad times”, is not bullshit because it makes no untrue claims. “God loves you all”, is – because it’s claiming that God actually exists, and we have no evidence for His existence.

“The Earth is flat” is bullshit, as is “vaccines cause autism”. Those are both claiming disproven things are proven. But so is “science is just a matter of perspective”, as it’s stating a “proven” thing is unproven. Yes, you could overthrow the current paradigm, and people have, but you would need a wealth of evidence to outweigh the current best “proven” explanation, and move it to a different category through presenting that argument. To state that theories agreed across all cultural perspectives are just a male, western white perspective, when science is being used by all countries to determine truth from fiction, is bullshit.

An addendum – I’m talking here about the positivist end of the spectrum – astrophysics, biology, etc, the things based on measuring stuff (see a previous blog post). My own bias, as I go there when I think about science rather than the more interpretivist stuff like anthropology, psychology, education. With those there is a strong argument that there’s a western domination which influences the field – have a read of this https://www.nasw.org/article/science-writers-urged-tell-stories-include-indigenous-perspectives

Within the “proven” category we also have the distinction between positivist and interpretivist perspectives. Positivist observations are more powerful, and indicate stronger causal links. There is instrumental reality to back them up (although instruments can be wrong). But interpretivist data is also useful. To state that a model needs to predict behaviour absolutely in order to have value is bullshit, because a model that’s only right most of the time can still inform decisions. But to say that a measurable phenomenon is of no more value than a collection of qualitative data is also bullshit.

So yes, things move from category to category, but only over time, and only with evidence and reasoned argument. There are blurry lines between the categories, and opinion might vary on which side some things legitimately belong. Bullshit only applies outside of these blurred lines.

These distinctions weren’t always so evident. It’s only with the Enlightenment in the 18th century and the development of the scientific method that humanity developed a mechanism to fully determine the difference between finding stuff out and making stuff up. And to state that science is the dominance of a western male perspective is bullshit. Anyone who wants to tell the difference between making stuff up and finding stuff out uses the scientific method. Indian scientists put stuff into space using science, not Western science. Science. The only difference between a Chinese scientist investigating copper nanotubes and an American one is the abbreviation they use. It’s always been this way. Current science is an amalgam of Islamic scholars, Greek philosophers, Chinese inventors. One of the first universities was in Timbuktu. Reality is a humanity-wide endeavour.

Part of the problem is that there is a perceived difference in value between stuff made up and stuff found out. The Enlightenment has led to the perception that only things that are true have value. Hence we’ve had epistemicide, where whole systems of making stuff up have disappeared. But just because something’s made up doesn’t make it useless. In order to compete with real stuff, though, everyone feels they have to claim that their worldview is real. Hence people claiming that God is real – I mean really real, in a literal sense, in the same way that I, and probably you, are.

Before the division into real and not-real, people felt comfortable mixing ideas they would create alongside stuff they saw. So we’d have theologians arguing about how many angels can dance on the head of a pin, or people dancing so the sun came up. They didn’t really believe that those things were really real in a literal sense. The distinction didn’t matter. There are a lot of cultures around the world that still haven’t adopted this hierarchy. The idea of qi informs the design of buildings, but no-one tries investigating copper nanotubes using those principles. It’s not really real in that sense. The Dreamtime doesn’t actually literally exist in the same way the world does; it’s a signifying mythical system that exists alongside the real world.

But since the Enlightenment, people who like the made-up stuff feel they have to place it on the same footing as stuff that’s found out, which means claiming that made-up things are really real too, and then using made-up stuff to make decisions about real things. So they redact science books because they contradict the made-up stuff about creationism, or they use a line in the Bible or the teachings of an Imam to decide which real people should be allowed to fuck whom.

It’s a misassignment of ontological categories.

It’s bullshit.

On liminality

The idea of liminality started with Arnold van Gennep and was developed by Victor Turner, an anthropologist with a strong interest in ritual and performance. The idea is that there are spaces set apart from the normal space in which we live, and these spaces are separated and sustained by belief. Liminality derives from the Latin word “limen”, meaning threshold – in the theatre, the edge of the stage. So the events of a play exist within a liminal space. Within that space we suspend our disbelief – an actor becomes the character, the backdrop becomes an actual landscape or drawing room. But the same is true of a film, or a book; it’s called the diegetic effect. We can sustain that level of engagement, while also knowing that it’s not real. The state of knowing something is real and not-real at the same time is known as metaxis, or double-consciousness. We know deep down it’s not real, but while we’re in the liminal space we suppress that knowledge in order to fully immerse ourselves.

While we’re engaged in the film, the real world doesn’t intrude, according to this view. We know that it’s not real, but while we watch it, that doesn’t matter. We know aliens aren’t real, but we’re still scared by them invading; we know that’s just an animated drawing of a deer, but we still cry when Bambi sees his mother die. We can take part in liminal spaces too. A game space is a liminal space. We know it’s only play money, but when we land on Mayfair with a hotel on it, we’re really pissed off. Virtual worlds are liminal spaces too. As are ritual spaces. Within them, roles are changed, identities can be changed, rules are changed. The made-up is made real. Huizinga called the boundary separating the liminal from the regular spaces the Magic Circle. He particularly talked about games, and the Magic Circle for a game, for example Monopoly, extends around the players. Within that space the play money matters, and there are specific rules that govern behaviour. We all become capitalists.

Liminal spaces aren’t just defined by space; they are also bounded by time. A stage outside of a performance isn’t a liminal space. It’s just a normal space. It’s transformed into a liminal space by ritual elements. The surroundings help here. There’s an interesting piece by Pierpoint (Childs et al., 2014: 121-124) in which the surrounding elements of a proscenium theatre are included in this ritual. There is the design of the front of house, the direction to the seat, sitting down, reading the programme – all of those build up to the moment where the orchestra plays, and the curtain goes up. These are all signifiers of the moment when the liminal space is created. Performances where actors drift on stage, and there is no real start, feel odd because this ritual commencement hasn’t taken place. Site-specific theatre is more challenging partly because this liminal ritual is absent, so we don’t know when or where the liminality is.

Liminal spaces can also be returned to and invoked repeatedly. By having multiple texts, a series of movies, or a TV show, a consistent repeated diegesis is created. This can also be extended outside of those texts by others – for instance fanfiction, or conferences like the ones the Sherlock Holmes Society runs, where the canon is engaged with as if it were real.

The Pedagodzilla podcasts are liminal spaces. The Godzilla roar, the music, Mike’s intro – all set up the liminality of the space. It’s important because it signifies that within that 40 minutes, making stuff up is legitimised. Mike sets out the rules: that there is a genuine piece of pedagogical theory, a description of a piece of pop culture, and then we will apply the real stuff to the made-up stuff as if it were real. We are deliberately misattributing the ontological nature of, for example, Yoda as a supply teacher, because we know it’s inappropriate, and therefore fun. We know that he doesn’t exist, and wasn’t created in order to be analysed in that way. And we know the audience knows that. And we hope the audience knows that we know that. It would spoil the lusory nature of the liminal space for someone to criticise the argument with “but he’s not real.” That’s not the point. Made-up stuff is legitimate within the liminal space.

Ditto church services. The organ music, the singing, the sermon. All of those add to the liminality. Gee would also describe the space as a semiotic social space: if you can read the signs around you, in the vestments, the props, the stained glass windows, it all adds to the experience of it as a liminal space. Within that liminal, time-bounded space, misattributing the ontological status of God is fine. You can say He’s real within that space, and share fellowship and agape and all that feelgood stuff, because the normal rules of engagement with reality are suspended. Made-up stuff is permissible.

And liminal spaces can also exist within other liminal spaces. So for example, later in the same chapter as the Pierpoint reference, Ian Upton (Childs et al., 2014: 127-130) talks about ritual spaces within Second Life. We adopt one set of rules on entering the virtual world, and then within the virtual world cross another limen where rules and identities are transformed again. Ian argues that the change between the non-performance SL space and the performance SL space is a greater one than that between RL and SL.

Where it breaks down a bit

This idea that liminal spaces are separate, discrete places cut off from normal space doesn’t always hold, however. The membrane around that magic circle is permeable. Anyone who’s had to placate a child who’s got upset by landing on Mayfair, or fallen out with someone because they lifted money from the bank, will know that what happens within the Monopoly game space does have an impact on the rest of the world. More positively, the liminal space can excite us, or sustain us, in the rest of our lives, by us looking forward to the next movie in a series, or building a fan community around those spaces.

It works the other way too. In novels and films, the exterior world will often intrude, to remind you it is only a story. In Vanity Fair, Thackeray interjects to remind the reader that he’s writing the novel. The sense of immersion is undermined, the diegetic effect broken.

And sometimes the membrane extends way beyond the liminal space. A football ground is a liminal space. There is the ritual of the singing, the buying of the pie, the Chinese dragon dancing between halves (I’m guessing, because I’ve only ever been to one football match in my life). The crowd shares in the made-up thing that it matters whether one set of 11 people on the pitch get the ball in the net more often than the other 11. That’s what the game is. That’s what all games are. They’re enjoyable because we’ve invented a set of criteria that matter – not because they do intrinsically, but for the sense of camaraderie, of communitas, that occurs when the criteria are met. One woman jumps higher than the other woman, one robot flips the other robot out of the ring. We all know deep down that they don’t matter, but it’s fun to believe that they do.

But that liminality is breached when it extends to people’s entire lives. Outside of the match, that liminal space, bounded by space and time, can come to dominate those lives. At some level there is the awareness that actually it’s a manufactured reality, but that realisation is permanently suppressed. It’s the same with religion. The statement that God is real isn’t left at the church door, but is taken out into the real world and acted upon as if it were true all the time. It’s self-evidently not – it’s unprovable, that’s obvious – but the ontological status is misattributed.

Let’s recall where the bullshit lies. “I know I can’t prove God exists, but I choose to believe he does, because that belief gives me comfort, and ties me to my community” – not bullshit. “God exists and He says you’re going to Hell” – bullshit.

There’s an extra level of complexity with liminality, and that is where it’s intricately woven with the external world. This liminality isn’t obviously tied to a space or a time, but it’s liminality nonetheless. This is where we come to the Dalai Lama, Tony Stark and Lorraine Kelly.

The Dalai Lama, Iron Man and Lorraine Kelly

What these three have in common is that they are all fictional characters; they have identical ontological status.

The Dalai Lama is the latest incarnation of Avalokitesvara, a Bodhisattva. This obviously is made up, as there is no evidence for reincarnation or the existence of Bodhisattvas. He is performed by a real person named Lhamo Dhondup. If people believe in that sort of thing, then when they meet Dhondup they might believe they have met the Dalai Lama. Where the reality ends and the fantasy begins is difficult to say. Maybe when he gets home Dhondup takes off the assumed identity and just becomes a normal guy. Maybe he performs that identity 24/7. Similarly with Tony Stark. The character was created more recently, and we know by whom (Stan Lee, Larry Lieber, Jack Kirby and Don Heck), whereas the name of whoever made up the Bodhisattva stuff is lost in the mists of time. In his most recent incarnation Tony Stark is performed by Robert Downey Jr. However, that performance isn’t restricted to the liminal space of the MCU, as Downey (like Dhondup appearing as the Dalai Lama) goes to hospital wards to meet sick kids who (like Buddhists) really believe he’s Tony Stark. Downey doesn’t do that all the time – he has a life outside the liminal space – but he carries that liminality around with him, able to generate it when it’s required. Again, that liminality legitimises the madeupness. The child in the ward isn’t meeting an actor, he’s meeting a superhero. For the moment Downey is there, the fantasy is real. Ditto Dhondup.

Lorraine Kelly is slightly more complex, in that the fictional Lorraine Kelly is performed by a real person also named Lorraine Kelly. This was actually established as a point of law: the real Kelly successfully argued that the fictional nature of “Lorraine” means she is a performer when she’s doing her presenting, not being herself. When she meets fans at events, she’s also Lorraine; but where the real Kelly ends and the fictional Lorraine begins is a blurred liminality.

In the world of professional wrestling this is known as kayfabe. Although professional wrestling resembles a sport, its roots are actually in the strongman sideshows of carnivals. Initiated by the Gold Dust Trio in 1920s New York, the matches are actually “worked”, i.e. fictions created as performances. The ring is a liminal space (as are all sports spaces), but the liminality extends beyond the ring, as the worked narratives are played out in public outside of the ring, extending the narrative into mainstream space. The wrestlers abuse each other in publications, carry on feuds in public spaces, and the wrestling news describes these stories as if they were real news. As internet culture has formed, the liminality has extended to websites, but this also makes maintaining the work constantly more difficult, as fans may spot on-screen enemies together in restaurants, etc.

This is still liminality, but here the wrestlers carry that liminality with them. In dressing rooms and the like, if a “mark” (i.e. someone not part of the work) is spotted, the wrestlers will call out “kayfabe” and switch on their characters in the same way that Kelly, Downey Jr, and (presumably) Dhondup do, generating that liminality around them.

And what? It gets more complicated?

This blurring of liminality is also deliberately played with in professional wrestling, to a level of complexity rarely developed in other media. A wrestler might be really hurt, or go AWOL, or fall out with his coach. Or a “face” and a “heel” might fall in love, etc. This is called a shoot (as with most carny traditions, there is a huge terminology describing the differences between the liminal and external spaces). A shoot is when the liminality is unintentionally dropped and reality intrudes. This could happen with the other examples: anything could happen to cause Downey Jr, Kelly or Dhondup to slip out of their roles, with varying consequences.

Where professional wrestling is more complex, however, is that there are also worked shoots. What may seem to be a falling out, and a narrative in which the liminal space has been broken, can actually turn out to be part of a larger narrative, and it’s all part of the work. Fans are constantly kept uncertain as to what’s real and what isn’t. But they work it out, or adapt in retrospect if they haven’t. Professional wrestling fans’ realities are constantly being retconned and it’s all part of the fun. We could learn a lot from them.

So what’s got fucked up?

Believing in things is fun. Make-believe is reassuring; it brings respite from the harsh realities of life, and particularly death. We can console ourselves there is a heaven, or whatever it’s called, and that gets us through. It’s more exciting to meet the Dalai Lama, or Tony Stark, or Lorraine, than it is to meet Dhondup, Downey Jr, or Kelly. It’s tedious to constantly have to follow up a statement about God or Yoda with “I know he’s not real, but just for the sake of discussion let’s pretend he is.”

The problem is that people feel the Enlightenment has forced on us this hierarchy between finding stuff out and making stuff up. People feel that stuff has to be true in order to be justified in believing in it. And worse. Deep down people know claiming unprovable things are true is bullshit (once you know how to tell the difference, you can’t unlearn it), but that just means they end up defending it even more vociferously. You could argue that there are other ways of knowing, that evidence is not the only way to find things out, but then that’s bullshit about bullshit. That level of self-deception is going to wear you out.

The effect of all this bullshit (and metabullshit) is that we get people attacking soap stars because of something their character did in last week’s episode, we get climate change denial, antivaxxers, holocaust denial, homoeopathy, we get statements like “it’s Adam and Eve, not Adam and Steve”, we get people forgetting that ultimately it’s just a game, etc. etc.

And on the other hand, where many epistemologies collide with scientific rationalism, scientific rationalism wins (because it’s the only one that works) and we lose all these alternative worldviews in a global epistemicide.

The answer to this either/or choice between accepting and rejecting reality is liminality. You can have your cake and eat it. You don’t have to pretend stuff that others have found out is made-up, just so you can still have stuff you’ve made up. Have your liminal spaces, but acknowledge that they are liminal spaces. You just need to be able to see the limen. Within the delineated liminal spaces, you can call anything you like true. Go to your Mumsnet group and complain about all the chemicals in your food, have your YouTube channels about the earth being flat, have your services where you talk about all the wonderful things your God has done for you. But see the limen.

From all the examples above, we can see how flexible liminality is: it can be delineated within specific spaces, it’s permeable, it can be spontaneously generated once it’s been established, it can follow people around. The boundaries can be played with. So feel free to apply liminality when and where you like, but when you come back out into the real world, acknowledge that it’s just football, or religion, or a movie, and use real things for making decisions about the real stuff.

Recognise that every damn thing has chemicals in it and act accordingly, don’t go up in rockets to prove the Earth is flat, acknowledge that God is no justification for stopping your son from marrying his fiancé, because God is something someone made up at some point. Acknowledge your inbuilt bullshit detector and end the self-denial. Accept reality into your lives.

Go to your liminal space. Have your fun. Have your life-affirming moments. Share your beliefs with your fellow worshippers as if they were real things. But see the limen, as you transition back out into the world you share with the rest of us.

See the limen and we’ll all get along just fine.

References

Childs, M., Chafer, J., Pierpoint, S., Stelarc, Upton, I. and Wright, G. (2014) “Moving towards the alien ‘other’”, in Kuksa, I. and Childs, M. (eds) Making Sense of Space: The Design and Experience of Virtual Spaces as a Tool for Communication. Oxford, UK: Chandos, pp. 121-138.

MacKenzie, A. and Bhatt, I. (2020) “Lies, Bullshit and Fake News: Some Epistemological Concerns”, Postdigital Science and Education, 2, pp. 9-13. https://doi.org/10.1007/s42438-018-0025-4

Cancel culture and the limits of free speech

I’m currently boycotting Twitter in support of the antisemitism protests. If you’re not up with the Twitters: basically some grime artist called Wiley (how do these people become famous without me ever hearing of them?) had a full-on rant about Jewish people, and Twitter took way too long to take down his account. I know not tweeting for 48 hours is the armchairiest of armchair activism, but it’s something. Maybe.

But it’s been a bit of a relief not being on there. It seems like every day there’s some moral controversy about someone who’s worked with someone else when they were cancelled, or about whether cancelling itself is a good idea or not. The argument is that everyone has a right to free speech. The opposing argument is that no-one can expect to say what they like without consequences. Actually, the challenge of working through these moral quagmires is part of the reason I’m on there. It’s a constant test of where the right course lies, and where I want to position myself ethically. And it’s not always as easy to spot where the line is as it was with Wiley (the grime guy, not the publisher).

But positioning myself ethically all the time is tiring, so I’ve been trying to encapsulate what I described in a tweet as a moral quagmire into a few key aphorisms because that makes it way simpler for me. I thought I’d share them.

I’d been thinking about it a bit more because in the recent Buffy episode of Pedagodzilla there was much idolising of the work of Joss Whedon. We didn’t once address the revelations about his alleged history of being emotionally abusive towards women. I was fully expecting some flak for this, but it hasn’t yet emerged.

It’s also cropped up because of the letter by JK Rowling, Salman Rushdie etc condemning cancel culture. I also read this article https://theintercept.com/2020/07/14/cancel-culture-martina-navratilova-documentary/ which details the struggles to get a documentary about Martina Navratilova made because of a couple of cancel culture incidents.

More personally for me, within the comics industry there’s been a kerfuffle because Dynamite Comics recently contributed to and then publicised a variant cover for a comic published by the leader of the Comicsgate movement. For anyone not keeping up, Comicsgate is a group of people who oppose what they see as a political agenda forced onto comics by liberal progressives – “forced diversity”, such as non-white characters or gay couples being introduced within comics when their ethnicity or sexuality isn’t relevant to the plot. Their position is that they just want good storytelling without having homosexuality forced down their throat. In isolation, the argument about not sidelining storytelling with political agendas sounds like a reasonable one. Very few people like authors using their platform as an opportunity to push politics, because they’re exploiting their relationship with their audience to fulfil their own personal ends. Where the argument falls down, of course, is that not including non-white or LGBT characters is just as political a decision. CGers just don’t see that as a political choice, in the same way that fish don’t see water – it’s the norm that they’re used to, so it seems neutral to them. Also, being predominantly white and straight, they want to see themselves, and only themselves, reflected in what they read.

Also, what CGers fail to recognise is that comics have always had a liberal progressive agenda. If you look at the characters in the MCU, for example, 90% of the characters were created by second-generation Jewish, Irish or Ukrainian immigrants. Hang on, I will check that. To be precise: 80% of the title characters (and all of the title characters if you exclude the movies that are set off-world) were created by offspring of Jewish, Irish or Ukrainian immigrants. Superheroes are the wish-fulfilment fantasies of the oppressed and disenfranchised who wanted something to stand against the inequities of this world. And they have been read for 80 years by geeks who felt the same.

But the CGers feel they are the oppressed now. Oppressed by the influx of non-white, non-male, non-straight people into what they see as their world, not realising it never really was.

Aphorism 1: Just because you’re not getting your own way, doesn’t mean people are out to get you.

But on a larger scale this is how a lot of mainstream culture sees itself. We can no longer say what we think, is the complaint, without being cancelled, or losing our jobs. We’ve lost our freedom of speech.

And freedom of speech is a tricky one. What should be the limits on what you can say?

Well, actually, we have a pretty useful law on how freedom of speech works. You can say what you like as long as it doesn’t affect someone else’s fundamental human rights. What’s also cool is that there is no special protection just because your opinion is a deeply held religious belief. For example, the legal response to someone who feels they can be homophobic because their religion says it’s evil is “nope, the law’s right, your religion’s wrong. STFU.” Which is the correct response.

Freedom of speech is a tricky one. I may have said that already. I remember recently on the twitters a famous TV mathematician accusing Noam Chomsky of being antisemitic because he was defending someone’s right to publish a book denying the holocaust happened. This is a huge reach; The Chomsk’s statements are more those of a hardcore free-speecher. Anything goes. I recognise the validity of the argument – if you stop people from saying stuff you don’t like, then what happens when someone stops you from saying stuff they don’t like?

Aphorism 2: Agreeing with someone’s right to say something doesn’t mean you agree with what they say.

This was a tricky one for me, because I was firmly committed to the idea of free speech. Some background: I was one of the Thatcher generation – in my first teaching job Section 28 came in, which meant I could get fired if I promoted homosexuality as a valid lifestyle. Of course, the kids liked to get their teachers into trouble by asking them outright what they thought. I said it was as valid as straight relationships. Because it is. No-one ever fired me. We also had Mary Whitehouse and her bunch of thugs who liked to ban things because they were fucked up evil people. No other reason. And we had the fatwa against Salman Rushdie over The Satanic Verses. More fucked up evil people. All points at which freedom of speech had to be defended at any cost.

But on the other hand. Holocaust denial. Wtf? How do you balance those two opposing principles?

My answer, actually: I don’t agree with free speech.

Aphorism 3: You do not automatically have a right to express an opinion.

Earning the right to express an opinion takes work. You have to check your facts. You have to work out your argument. It has to make sense. Spreading misinformation is a bad thing. I disagree with Chomsky on this one (but, aphorism 2, that doesn’t make him antisemitic). You shouldn’t publish or sell books on holocaust denial because it’s not true. The holocaust did happen. If you want to prove it didn’t, that’s going to take a lot of work – an impossible amount of work. Similarly, you don’t have a right to say that vaccines cause autism, the Earth is flat, evolution didn’t happen, God exists. None of those things are true. I figure the mythical stuff is ok as long as it’s presented as myth, under the “let’s pretend” category, as the reality or not of God stands outside proof or disproof (see the previous post about ontology). But either you ban all lies or you ban none. Ethics have to be consistently applied or they don’t really work as ethics.

But … what about the grey areas? Ones where people are wading in with facts and figures on both sides? Aren’t there some areas where we need to have a debate? Rowling’s fears of trans women invading women’s safe spaces seem to be genuinely felt, and shared with other women, even though there’s no evidence for them being a threat. Should she be banned from saying those things? Well, her fears are real, so probably not. But claiming that transsexuality isn’t real so obviously lacks even a glinner of a connection with reality that I would say you don’t have a right to express those claims. It’s not about as subjective a thing as feelings. It’s about facts.

That’s not to say you have to allow them to be said on social media or printed in newspapers. The letter about cancel culture complains about censorship. But refusing to print your books, or removing you from a newspaper column, because people don’t like what you said isn’t suppressing free speech. You can still write a blog, or self-publish, you know, like regular people do. If someone rounded up all your self-published books and burnt them, or put you in prison for writing a blog or speaking the truth, then that’s censorship. And that’s going on in many, many parts of the world. All that’s happening to the Rowlings and their ilk is that they’re losing their privileged position of having a more magnified voice.

Aphorism 4: Burning books is censorship. Refusing to print them is just removing your privilege. Get a grip.

So, is it OK to cancel people? Yes. If someone is going to say stuff that’s untrue, they need to be stopped from saying it. If they’re going to say stuff that people don’t like, or that may harm people’s feelings, those people have a right not to buy their stuff, or encourage others not to buy their stuff, or refuse to work with them any more. Although no-one has a right to threaten anyone for what they’ve said. That’s psychopathic.

But it’s a response that’s best used judiciously.

Going back to the ComicsGate scenario. I’ve read comics for 50 years. I’ve never read a huge amount at a time though, and my interest has waxed and waned over the years. At the moment, I read about 8 titles. 6 of those are Dynamite Comics because they are the ones that seem to best embody the pulp sf of REH and ERB. The other two are DC. And those are both by Tom King. So you can see the degree to which I admire the key players.

So when Dynamite publicised their support for the Comicsgate title it was a bit of a dilemma. In the conversations around it I found out some other gross things about other writers I admire. People were refusing to buy any more titles. I never cancelled my orders. The head of Dynamite then changed his mind, and his response was that he hadn’t realised there would be such a backlash against the move.

People didn’t believe him. He must have realised that people would be outraged.

Directly after that, Tom King complained that DC had hired an artist – Jae Lee – to do one of the covers for his new title, because Lee had been working with the CGers. Jae Lee got lots of online harassment. King then apologised, because he’d talked with Lee and discovered Lee didn’t even know what CG was. He’d been hired to do some work. He’d done it. That was it. No political allegiance implied. Or even known.

I get it. I get the mistake that Tom King (like I said, a writer whose work is keeping me into comics) made, and that the anti-Dynamiters made. I recognise the frisson of pleasure at outwoking someone else – I felt it when I told my elder stepson that Warren Ellis was cancelled. You feel like you’re one step ahead of others; you can claw a little bit of moral high ground for a while, which might give you a bit of an advantage the next time you fuck up. But it’s an illusion of moral superiority.

Because here’s the reality:

Aphorism 5: Keeping up with who’s a dick and who isn’t is a niche hobby. Always bear that in mind when dealing with people who don’t know or don’t care.

It’s a lot of work keeping track. Some people don’t want to put the time or effort in. Some people avoid it because it’s too much of a distraction or too damaging to their mental health, or their enjoyment of their culture. Don’t make the mistake of thinking that just because someone’s reading Rowling they’re transphobic, or working with ComicsGate people they don’t care about online harassment of women, or waxing lyrical about Buffy that they don’t care about domestic abuse. Maybe they don’t know because they haven’t kept up. Maybe they do know and they continue to read the work because it has such a deep value to them they want to continue to connect to it. Maybe they’re working with them because they need the work, or the break, or because actually they have a personal connection to the person because there’s another side to them we don’t see. Although with most of these people it’s difficult to see there could be.

I personally would probably not start to read something by someone if I knew they were a fascist, or a racist, or an abuser, or transphobic. But if I’ve already engaged with their work, and learnt to love it before finding that out, then for me it’s too late to give it up. So I’ll probably not start on the Harry Potter stuff, or Buffy, because there are other TV shows and books I can read instead. But I’m not going to stop the Cthulhu Mythos bingeing, or listening to Magma, or re-reading Astonishing X-Men, because I only became aware of the dodginess of their creators after I got into them. I’ll certainly not be part of the guilt-by-association lobby. If Jodie Comer wants to have a relationship with a right-wing asshole (and she might not be anyway) that’s her choice. If Jae Lee does some work for a sexist, abusive person, but that work itself isn’t sexist or abusive, then that’s ok too.

Aphorism 6: Judge people by what they do, not what the people they hang out with, or work with, or sleep with, do.

Finally, the element that seems most egregious in the various things I’ve read is the treatment of Navratilova, pilloried and unfairly accused of transphobia simply for questioning the position of trans women in women’s sports. Someone who’s stood up for trans rights being wrongly labelled for one statement. Social media isn’t a great platform for nuanced arguments. Even the most intelligent of people can sound like a right Dawk when they’ve cut their arguments down to 280 characters. I’d be uncomfortable discussing anything like this with fewer than 2617 words. Yet one poorly phrased sentence, or one question, and there’s a contingent of people who will let loose. And like I said, I get it, because finding someone to despise can feel good, and dumping on them is enjoyable. This is predominantly why people bully others, which seems to get missed out when discussing how to combat bullying in schools. Bullies bully because they enjoy it. The point at which something makes you feel good is the point at which you need to question your motives.

Aphorism 7: Political positions are represented by a lifetime of work. Not 280 characters.

Oh, and that previous sentence should be an aphorism too.

Aphorism 8: The point at which something makes you feel good is the point at which you need to question your motives.

Sorted.

For now.

Ontology, Epistemology, Positivism, Interpretivism and Belief

Ontology – degrees of reality

Ontology is the discussion around what is real or not real – and also, if something is real, how do we classify it? So we could do the Father Ted thing of having a list on the wall of real and not real and adding to it, but there’s a seven-point scale Richard Dawkins came up with for where to put ideas about how real things are. He meant it specifically for talking about god, because he seems to be particularly obsessed with that, but I think it helps to apply it to anything.

So on this scale we have at 1 stuff that 100% absolutely exists.

The problem is that we can’t know with 100% confidence that anything exists. I don’t know that you exist, or this table exists, or even that I exist. It could just be data that’s being pumped into my senses, and my thoughts might actually just be thoughts that make me think I’m alive, like Cat says to Rimmer in Red Dwarf 13. And at the other end, we can’t know with 100% certainty that something doesn’t exist. So we don’t have any evidence for unicorns, god, the tooth fairy, Star Wars existing. But absence of proof isn’t proof of absence. There might actually be a god; He might even be exactly as one of the various religions describes Him. Or Her. Or Star Wars could really have happened a long time ago in a galaxy far, far away.

So although we have a seven point scale, really we’re just looking at a scale that runs from 6 to 2. Like a grading system, it’s out of 100 but in reality we only give marks between 20 and 90.

So when we say something is real, we’re really looking at stuff around the 2 mark. “True” is just a shorthand for “this is the explanation that best fits our observations for the time being”. Everything that we say is “true” is really just an operating assumption. So you, me, the Big Bang, dark matter, the standard model, they’re all around the 2 mark, some maybe slightly higher, some maybe slightly lower. But we can’t get through the day constantly bearing in mind things might not exist. I’m going to assume you exist and get on with things, although occasionally it’s worth remembering what we’re experiencing is only a 2 not a 1. Same at the other end. We don’t have to worry all the time about what god might think, or try and use the force to open doors. Chances are those things aren’t real, so it’d be wrong to rely on them.

Right in the middle at 4 we have the things that ontologically we’re totally unsure about. It’s completely 50/50. Then just above that, we have the stuff that’s around 3. So maybe we’re leaning towards it being true, but there’s still some doubt. So, superstring theory for example. Multiple universes. Then on the other side there are all the things at 5, so unlikely but the jury’s still out. Like, I don’t know, the Illuminati or something.
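
For the programmers among you, here’s the whole scale compressed into a quick Python sketch – the labels are my paraphrases of how I’m using it, not Dawkins’s exact wording:

```python
# The seven-point scale as I'm using it here (labels are my paraphrase,
# not Dawkins's wording). 1 and 7 are unreachable book-ends.
ONTOLOGICAL_SCALE = {
    1: "100% definitely exists (unattainable)",
    2: "almost certainly exists - our working 'true' (you, me, the big bang)",
    3: "leaning towards existing, some doubt (superstring theory, multiverses)",
    4: "completely 50/50",
    5: "unlikely, but the jury's still out (the Illuminati, or something)",
    6: "almost certainly doesn't exist (unicorns, the tooth fairy)",
    7: "100% definitely doesn't exist (also unattainable)",
}

def assignable(score: int) -> bool:
    """We can never prove a 1 or a 7, so in practice only 2-6 ever get used."""
    return 2 <= score <= 6
```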

Ontology – categorising reality

If we’re looking for an example of an ontological argument about how to categorise reality a familiar example would be taxonomies of living things. When people first started categorising living things they went by what they looked like, so feathers make you one type of thing, scales make you another. It’s a system based on morphology. As scientists have mapped more and more genomes though, they can see how closely related things are to other things, they can work out at what point in evolution they diverged. Everything that’s descended from a particular organism is called a clade. If you look at cladistics rather than morphology, birds and crocodiles are more closely related to each other than crocodiles are to lizards, so grouping the crocodiles and lizards together, but excluding birds makes no sense. It’s paraphyletic. So now birds are classified as a type of reptile. It’s also why there’s no such thing as a fish. You can’t group them all together sensibly in a way that includes all “fish” but excludes all “non-fish”. Cladistically. Obviously if you’re adopting the old system of looking at what they look like, then you can.
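
Here’s a toy Python sketch of the difference (the tree fragment is hugely simplified and partly invented – real cladistics works from genomic data, not five-entry dictionaries):

```python
# A hugely simplified, made-up fragment of the tree of life. A clade is a
# node plus ALL of its descendants - nothing left out.
TREE = {
    "amniotes": ["mammals", "sauropsids"],
    "sauropsids": ["lizards", "archosaurs"],
    "archosaurs": ["crocodiles", "birds"],
}

def clade(root: str) -> set:
    """Everything descended from root, including root itself."""
    members = {root}
    for child in TREE.get(root, []):
        members |= clade(child)
    return members

# The smallest clade containing both crocodiles and lizards is "sauropsids",
# and it necessarily contains birds too:
print(clade("sauropsids"))
# So "crocodiles plus lizards, but not birds" is paraphyletic - not a real group.
```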

Ontological questions about how to organise things run throughout our perception of reality; they can actually alter how we view reality. “This is part of this, but not part of that” can sometimes be absolutely crucial. Linnaeus may have been really keen on labelling plants and opisthokonts (i.e. fungi and animals), and that might have helped us understand the natural world, but he was well shite when it came to categorising humans, for example. He also obliterated indigenous people’s names for things when he did so, which may have changed how we perceive Western academia’s relationship to them.

But perception is more the domain of the next bit.

Epistemology – positivism

What gets you closer to the truth (or not) is a question of epistemology. So ontology is what’s real or not; epistemology is the approach by which we determine what’s real or not. There are basically three types of epistemology: finding things out by measuring things, finding things out by interpreting things, and making things up. So that’s positivism, interpretivism and belief.

So first off, positivism. The positivist approach is to just look at things you can measure with instruments. The idea is that this is objectively getting at the truth by looking at numbers on dials, or scans, or whatever – what’s sometimes called instrumental reality. Positivism is the cornerstone of the scientific method, which works like this (there’s a toy code sketch after the list):

  1. You have these theories about how the world works.
  2. You test them with your experiments.
  3. The results match your theory so you think you’ve got to the truth.
  4. Then you carry on doing experiments until one of them doesn’t match the theory, so you need a better theory.
  5. When you’ve come up with a few theories you then do more experiments to confirm which one is the best. That becomes the new truth.
  6. And then you start the whole cycle again.
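
Here’s that toy sketch – a caricature of the cycle in Python, with a made-up observe() standing in for a real experiment:

```python
import random

def observe() -> float:
    """A stand-in for a real experiment: nature's answer plus instrument noise."""
    return 9.8 + random.gauss(0, 0.1)

# Step 1: some competing theories about how the world works.
theories = {"g is about 9.8": 9.8, "g is about 10": 10.0, "g is about 12": 12.0}

# Steps 2-5: run lots of experiments and keep whichever theory best fits the data.
data = [observe() for _ in range(100)]
mean = sum(data) / len(data)
best = min(theories, key=lambda name: abs(theories[name] - mean))

# Step 6: "best" is only the current operating assumption (a 2, never a 1) -
# the next round of experiments might dethrone it, and the cycle starts again.
print(best)  # almost always "g is about 9.8"
```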

People are pretty bullish about positivism because it’s been really effective at working out what’s actually going on.

There are problems with the approach though. One is that people sometimes forget that nothing ever scores better than a 2. They mistake their current best guess for what’s actually happening. It’s the best way to get closest to the truth, true. But you never quite get there – like Achilles chasing the tortoise in Zeno’s paradox.

The other problem is that sometimes the experiments give the wrong results. So for instance you fire neutrinos through the Earth and find out they’re travelling faster than light, but then later figure out that there’s a loose cable that has thrown off your timing. Or maybe it’s your analysis that’s wrong, like the dead fish experiment in neuroscience. If you do a brain scan you can see effects that look like there’s a causal relationship between showing someone pictures and the reaction in the brain, but you also get a reaction if you plug in a dead salmon at the other end. You need to account for random fluctuations.
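
If you want to see why the dead salmon “lights up”, here’s a toy simulation in Python (the voxel count is a round illustrative number, nothing to do with the actual study):

```python
import random

# Test a whole brain's worth of voxels against pure noise at the usual
# uncorrected threshold, and thousands "respond" - even in a dead fish.
voxels = 100_000
alpha = 0.05

# Under pure noise, each voxel's p-value is uniform on [0, 1].
false_positives = sum(random.random() < alpha for _ in range(voxels))
print(false_positives)  # around 5,000 spuriously "significant" voxels

# Correcting for multiple comparisons (crude Bonferroni here) fixes it:
corrected = sum(random.random() < alpha / voxels for _ in range(voxels))
print(corrected)  # almost always 0
```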

Then there’s a lot of cultural bias. So for example, if you’re testing a theory, the one that gets the most funding is the one propounded by the most eminent of scientists, and they’re often old white guys. If there are other theories, they can get held back for a while – usually until all of that generation of old white guys are dead. You can see the social effects on the progress of science.

The thing is, though, that the process is self-correcting for social bias. If a theory doesn’t work, you’ll have lots of people doing experiments in all parts of the world, and coming up with theories, and eventually one will look better than the rest to most people, and that’s the one that generally gets adopted. You get a consensus irrespective of culture. At the boundaries there’s contention, but in the main body of science there isn’t – the main body is more or less everything that happens from the first 10⁻³⁵ seconds after the big bang up to now, everything bigger than a quark, anything smaller than the observable universe. This main core of science is the same for everyone, no matter where they are, and has been contributed to and tested by cultures on every continent on the planet. The cultural bias doesn’t change the overall direction, it just slows it down.

Epistemology – interpretivism

The other approach is interpretivism. Interpretivism is more subjective, in that it’s interpreting what’s going on. You might not have anything you can actually measure with an instrument, so you need to ask a lot of people a lot of questions. This is a bit more systematic than a bunch of anecdotes, in that the idea is that you ask a large, representative sample of people, and aren’t selective about which responses you look at. The criticism is that it’s still just a collection of opinions and it’s not reliable enough. As Roosta would say, you can’t scratch a window with it. Interpretivists would argue that positivism is so culturally biased that everything is interpretivist, which is just fashionable nonsense. Obviously if thousands of people from all over the world do an experiment and get the same result, which confirms the generally accepted theory, that’s not open to interpretation. To claim it is just seems like an inferiority complex on behalf of the interpretivists.

The real strength of interpretivism is that it produces something like a version of the truth in areas where positivism couldn’t get you anything. Anything to do with how people behave socially has to be interpretivist, because people are way, way more complicated than cosmology. You can’t put them in a laboratory and see how they perform in the real world, because once they’re in a lab they’re not in the real world any more. So all you can get is a mass of opinions to interpret. But that’s OK, because it’s better than the alternative. Which is nothing.

And there’s a huge number of interpretivist approaches: feminist, postcolonialist, Marxist, basically anything with an -ist on the end. They’re all a valid way of approaching the world to some extent, as long as they can accommodate all the data observed and are precise about what their limits are. The mistake is calling them theories. That’s a positivist word. There’s nothing predictive about interpretivist approaches. You can’t say “in this and this situation with people, this will happen”. It’s too complex. And vague. What you’ve actually got with interpretivist approaches are different narratives, or lenses, through which to describe what’s going on. As Jitse said in a previous episode of Pedagodzilla, all models are wrong, some models are useful. The important thing is not “can we prove it?”, but “is it reproducible enough, and generalisable enough, and does it explain enough of the observations, to be useful?”

Epistemology – belief

Finally, we have making things up as an approach. There are a lot of in-built elements to the way minds work that mean we tend to look for patterns that aren’t there – which is called apophenia. We recognise simple messages more readily than complex ones. When we make connections in our heads that make particular sense to us we get a dopamine hit. That leads to aberrant salience: things get connected that shouldn’t get connected. So for example, there’s a lot of intricate stuff about crystal healing and resonance, which makes no sense physically, but sounds good as a story. There’s no scientific rationale behind it at all, but it works as a placebo because it sounds plausible to people who skipped physics in school.

One thing positivism and interpretivism are bad at is creating the sort of stories that have emotional truth for people. You can’t all get together and have a good time based on the standard model, or the general theory of relativity. The myths that we create hold communities together. They bring people comfort. So if you’ve moved to a new place and you’re wondering which church to join, for example, someone coming along and saying “well, you have no evidence for your faith, so why bother?” is using completely the wrong epistemology. We talked about Buffy as if the show was real in a previous episode. It would be completely out of place to continually remind everyone it’s not real while we’re doing that. I’ve used the phrase “science needs to stay up its own end” before, which I don’t think people would get unless they grew up on a working-class housing estate in the 60s. Basically, those spaces could be very territorial. You learnt where your patch was, and if you strayed into someone else’s you got told to stay up your own end. Too many epistemologies try and muscle in on someone else’s patch. Lots of epistemologies are dying out because of competition from other worldviews, because of just this sort of intrusion – it’s called epistemicide. That seems like a bad idea, because we’re losing other ways of perceiving the world. Colonialists need to stay up their own end.

But … the problem also works the other way, when you start using your beliefs to make decisions about real things. So if you’re looking for a response to covid-19 you need to use a positivist approach and do clinical trials to find out what will work and what won’t; you don’t just tell people you’re protected by the blood of Jesus. That’s a category error. Or say you’re deciding whether gay people should be able to adopt. You can’t use a positivist epistemology (because there’s no instrument that can measure that) or a belief-based one (because it’s way too important to base it on something someone made up). You need to look in between, at interpretivist approaches, and gather data about people’s experiences of the children of gay parents. And as it turns out, there’s no major difference. To insist on something being your way because you read it in a book somewhere is simply bizarre. I don’t need to do a routine on that because Patton Oswalt has already done one.

Critical realism and ontological hygiene

So what’s the proper epistemological approach? Well, one of the things I learnt from physics is that where you’ve got a binary choice, the answer is nearly always that both are right. So is light a wave or a particle? It’s both. Same’s true here. I’m really suspicious of people who say “I’m a positivist” or “I’m an interpretivist”. Neither is appropriate all the time. There’s an epistemological approach called pragmatism, or realism, sometimes critical realism. It’s about adopting the correct epistemology for the domain that you’re looking at. So if you’re doing a physical science, or chemistry, or medicine, you have to take a positivist approach: you measure things and look at the numbers, and that gives you something ontologically that scores a 2 or maybe a 3 (or is disproved down to a 6). Or you’re looking at how people think or behave. You need interpretivism, because there are no laws that predict how people behave, and that’s only going to be a 3 at best. That’s not as good as a 2, but it doesn’t have to be to be useful. Just let it go. At the other end you have all the stuff that has no evidence for it at all. But that’s ok too; science can stay up its own end. And as anything you can make up is ontologically a 6 and never a 7, that gives you a lot of wriggle room. “You know, it’s possible God, or Severus Snape, or the Dalai Lama does exist, and believing that makes me feel happy, so I’m going to believe it.” The problem is when you start misapplying the made-up stuff to make decisions about real things. Even then, I guess as long as your actions don’t harm someone else, feel free. But if someone else is going to be affected, you need enough evidence to score a 2 or a 3 on the ontological scale, or you’re being a complete dick.

It’s all about being aware of where things are on the ontological spectrum and using them appropriately – what’s called ontological hygiene. Maintaining that ontological hygiene, and being able to switch between the different epistemologies, is where liminality comes in, but that’s another episode.

Edited 16.12.21: when writing the companion piece in the Pedagodzilla book I realised I’d switched Dawkins’s scale round.

Predicting virtual worlds #5

Augmented reality

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was I got one of them half-right. The rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not exactly the text from the book; I’ve tweaked it only so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Augmented reality. One function of many mobile devices is that they can combine the camera images with an overlay of additional information. In the same way that a global position and orientation can be used to calculate the position of stars as seen from a particular viewpoint, these can also be used to determine at which geographical location the tablet is being pointed. These data can then be combined with a database of information to create an overlay of text to explain, for example, the historical background of a building, or the direction and distance of the nearest Underground station or Irish pub. Locations can be digitally tagged, either with additional information (such as in a learning exercise with students adding their own content to locations), artwork, or even graffiti[i]. As with the astronomy apps described above, this provides learning in situ, and provides a kinaesthetic element to the activity.
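
(For the technically inclined: the geometry behind “which way is the device pointing” is simple enough to sketch in Python. The coordinates and field of view below are invented for illustration.)

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def in_view(device_heading, target_bearing, fov=60):
    """Is the target within the camera's horizontal field of view?"""
    return abs((target_bearing - device_heading + 180) % 360 - 180) < fov / 2

# A device in central London, compass reading 45 degrees (north-east),
# checking a made-up point of interest a few streets away:
b = bearing(51.5074, -0.1278, 51.5115, -0.1200)
print(in_view(45, b))  # True -> draw the overlay label for that location
```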

The potential of combining geotagged images onto the physical world is indicated by augmented reality games such as Paranormal Activity: Sanctuary[ii]. In this, images of ghosts are located at particular physical world co-ordinates, which can be seen with a dedicated iPhone app that overlays these images onto a camera image. Players can create sanctuaries, or cast spells, at locations which then influence the experience of other players. The game therefore becomes a massively multiplayer roleplaying game played in a blending of the physical and a virtual world.

Greater precision than that provided by global positioning can be achieved with Radio Frequency Identification (RFID) tags, readers for which will soon be available in mobile devices[iii]. By placing an RFID tag in clothing, or furniture, or on a person, information about that object or person (i.e. metadata) is then always available whenever a device is pointed at them. For example, products could be linked directly to their user manual: simply hold your tablet PC over your oven and pop-up boxes appear over the knobs, decoding the icons. Or attend a conference, and each person there could have information linked to them, such as name, institution and research interests, revealed by holding up your phone and tapping their image on the screen. Several museums and exhibitions already have augmented reality exhibits; when a room is looked at through an AR viewer, the physical objects in the room are overlain with animations or animated characters, bringing the static displays to life[iv]. A further enhancement is achieved by enabling the animated characters to address the attendee directly, their gaze following the attendee around the room as they are tracked through an RFID bracelet[v]. The characters can address many attendees simultaneously since, from the perspective of each, the character is looking at them individually – a transformed social interaction known as non-zero-sum mutual gaze[vi]. These interactions could be made more seamless by plans to create AR projections within glasses[vii]. Rather than clicking on a screen, input can be through the detection of hand movements[viii] or, for the mobility-impaired, deliberate blinking[ix].
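To make the RFID idea concrete: once a reader has pulled a tag’s ID off an object or a conference badge, attaching the metadata is just a key–value lookup. The sketch below is a toy, not a real RFID API; the tag IDs, the registry and the annotate function are all invented for illustration.

```python
# Hypothetical registry mapping RFID tag IDs to overlay metadata.
# In practice this would live on a server that the AR viewer queries.
TAG_REGISTRY = {
    "04:A2:19:3F": {
        "object": "oven",
        "overlay": "knob icons decoded; link to user manual",
    },
    "04:7B:C0:11": {
        "name": "A. Delegate",
        "institution": "Example University",
        "interests": ["virtual worlds", "augmented reality"],
    },
}

def annotate(tag_id: str) -> dict:
    """Return the overlay metadata for a scanned tag, or a placeholder."""
    return TAG_REGISTRY.get(tag_id, {"overlay": "no metadata registered"})

# Point the device at a tagged oven, then at a tagged conference badge:
print(annotate("04:A2:19:3F"))
print(annotate("04:7B:C0:11"))
```

The design point is that the tag carries only an identifier; everything else (metadata, paradata, user-generated additions) hangs off that identifier and can be updated without touching the physical object.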

If this is possible with pre-recorded characters, then it is only a short leap to doing it with avatars or bots in realtime, by layering the virtual world image onto the physical as it is created. This activity resembles the mixed reality performances created by Joff Chafer and Ian Upton; originally these performances used images from a virtual world projected onto a gauze, so that avatars could share the stage with physical world actors, and more recently Chafer and Upton have used 3D imaging to bring the virtual world images out from the screen and into a physical space[x]. Capturing the images of avatars in the virtual world, and geotagging them, would enable people with the appropriate AR viewer to see avatars moving and communicating all around them. As the sophistication of bots develops, their use as companion agents, guiding learners through virtual learning scenarios, could be extended into the physical world, as guides and mentors seen only by the learner through their AR viewer. With ways of imaging the avatars through something as immersive as AR glasses, physical world participants and avatars could interact on an equal footing.

For learning and teaching, the advantages of blending the functionality and flexibility of the virtual with the real are enormous. For learners who see virtual learning as inauthentic, relating virtual world learning directly to the physical may overcome many of their objections. The integration of an object with its metadata, as well as with data providing context for that object (called paradata), is easily done in a virtual world; AR in combination with RFID tagging enables this feature to be deployed in the physical world too, since information, ideas and artefacts can be intrinsically and easily linked. User-generated content, which again is easily created and shared in the virtual, can also be introduced to the physical. Participation at a distance, on an equivalent footing with participation face-to-face, could be achieved by the appearance of avatars in the physical environment and by RFID tagging the physically-present participants and objects.

[i] ‘Augmented reality offers a new layer of intrigue’, New Scientist, 25th May, 2012. http://www.newscientist.com/article/mg21428652.600-augmented-reality-offers-a-new-layer-of-intrigue.html

[ii] Ogmento, ‘Paranormal Activity: Sanctuary’, 22nd May, 2012. http://www.ogmento.com/games/paranormal-activity-sanctuary

[iii] Marketing Vox, ‘Married to RFID, What Can AR Do for Marketers?’, 4th March, 2010. http://www.marketingvox.com/married-to-rfid-what-can-ar-do-for-marketers-046365/

[iv] Canterbury Museum, ‘Augmented reality technology brings artefacts to life’, 28th September, 2009. http://www.canterburymuseum.com/news/13/augmented-reality-technology-brings-artefacts-to-life

[v] A. Smith, ‘In South Korea, Kinect and RFID power an augmented reality theme park’, Springwise, 20th February, 2012. http://www.springwise.com/entertainment/south-korea-kinect-rfid-power-augmented-reality-theme-park/

[vi] J. Bailenson, A. Beall and M. Turk, ‘Transformed Social Interaction’, p. 432.

[vii] S. Reardon, ‘Google hints at new AR glasses in video’, New Scientist, 4th April, 2012. http://www.newscientist.com/blogs/onepercent/2012/04/google-hints-at-new-ar-glasses.html

[viii] C. de Lange, ‘What life in augmented reality could look like’, New Scientist, 24th May, 2012. http://www.newscientist.com/blogs/nstv/2012/05/what-life-in-augmented-reality-will-be-like.html

[ix] E. Iáñez, A. Úbeda, J. Azorín and C. Pérez, ‘Assistive robot application based on a RFID control architecture and a wireless EOG interface’, ScienceDirect, available online 21st May, 2012. http://www.sciencedirect.com/science/article/pii/S0921889012000620

[x] J. Chafer and I. Upton, ‘Insert / Extract: Mixed Reality Research Workshop’, November 2011. http://vimeo.com/32502129

Prescience Factor: 0/10. Despite AR apps becoming more popular since 2013, AR is still not really a thing, in the sense that it isn’t an embedded part of what we do. Linking AR and virtual worlds in the way I’ve described here is no further along, as far as normal practice goes, than it was when I wrote the above.