Ontology, Epistemology, Positivism, Interpretivism and Belief


Ontology – degrees of reality

Ontology is the discussion around what is real or not real – and also, if something is real, how do we classify it? So we could do the Father Ted thing of having a list on the wall of real things and not real things and adding to it, but Richard Dawkins came up with a seven-point scale for placing ideas according to how real they are. He meant it specifically for talking about god, because he seems to be particularly obsessed with that, but I think it applies to anything.

So on this scale, at 7 – well, I'm not sure which end is 7 and which is 1, but let's call it 7 – we have stuff that 100% absolutely exists.

The problem is that we can't know with 100% confidence that anything exists. I don't know that you exist, or this table exists, or even that I exist. It could just be data that's being pumped into my senses, and my thoughts might actually just be thoughts that make me think I'm alive, like Cat says to Rimmer in Red Dwarf 13. And at the other end, we can't know with 100% confidence that something doesn't exist. So we don't have any evidence for unicorns, god, the tooth fairy or Star Wars existing. But absence of proof isn't proof of absence. There might actually be a god, even though we have no proof that there is one. Or Star Wars could really have happened a long time ago in a galaxy far, far away.

So although we have a seven-point scale, really we're just looking at a scale that runs from 6 to 2. Like a grading system that's out of 100, but where in reality we only give marks between 20 and 90.

So when we say something is real, we're really looking at stuff around the 6 mark. "True" is just a shorthand for "this is the explanation that best fits our observations for the time being" – everything is just an operating assumption. So you, me, the Big Bang, dark matter, the standard model: they're all around the 6 mark, some maybe slightly higher, some maybe slightly lower. But we can't get through the day constantly bearing in mind things might not exist. I'm going to assume you do and get on with things, although occasionally it's worth remembering that what we're experiencing is only a 6, not a 7. Same at the other end. We don't have to worry all the time about what god might think, or try and use the force to open doors. Chances are those things aren't real, so it'd be wrong to rely on them. Right in the middle, at 4, we have the things that ontologically we're totally unsure about. It's completely 50/50.

Then just above that, we have the stuff that's around 5 – maybe we're leaning towards it being true, but there's still some doubt: string theory, for example, or multiple universes. Then on the other side there are all the things at 3 – unlikely, but the jury's still out. Maybe some of the more plausible conspiracy theories go there.
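To make the arithmetic of the scale concrete, here's a toy sketch. The claims and the scores attached to them are purely illustrative (they're my stand-ins, not anything Dawkins wrote down); the only real logic is the clamp that keeps everything inside the usable 2-to-6 band.

```python
def effective_score(score):
    """Clamp a 1-7 ontological score into the 2-6 band we can actually use,
    since nothing can be known to exist (7) or not exist (1) with certainty."""
    return max(2, min(6, score))

# Illustrative claims with naive scores (not measurements of anything):
claims = {
    "this table exists": 7,        # feels certain, but is only ever a 6
    "string theory": 5,
    "plausible conspiracy theory": 3,
    "the Force": 1,                # feels impossible, but is only ever a 2
}

banded = {claim: effective_score(s) for claim, s in claims.items()}
```

However confident or dismissive the starting score, everything ends up between 2 and 6 – which is the whole point of the section above.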

Ontology – categorising reality

An example of an ontological argument about how to categorise reality is the taxonomy of living things. Birds, for example, have been reclassified as a type of reptile. The reason is that when people first started categorising living things they went by what they looked like, so feathers make you one type of thing, scales make you another. It's a system based on morphology. As scientists have mapped more and more genomes, though, they can see how closely related things are to other things, and can work out at what point in evolution they diverged. Everything that's descended from a particular organism is called a clade. If you look at cladistics rather than morphology, birds and crocodiles are more closely related to each other than crocodiles are to lizards, so grouping the crocodiles and lizards together but excluding birds makes no sense. It's paraphyletic. It's also why there's no such thing as a fish. You can't group them all together sensibly in a way that includes all "fish" but excludes all "non-fish". Cladistically, that is. Obviously if you're adopting the old system of looking at what they look like, then you can.
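The clade test can be made precise with a toy tree. The mini-tree below (parent pointers, hypothetical names, heavily simplified topology) is just enough to show why crocodiles-plus-birds is a clade while crocodiles-plus-lizards is paraphyletic:

```python
# Simplified, illustrative tree: child -> parent. Not a real phylogeny.
TREE = {
    "lizards": "lepidosaurs",
    "lepidosaurs": "reptiles",
    "crocodiles": "archosaurs",
    "birds": "archosaurs",
    "archosaurs": "reptiles",
    "reptiles": None,
}

def leaves_under(node):
    """All leaf taxa descended from (or equal to) a node."""
    children = {c for c, p in TREE.items() if p == node}
    if not children:
        return {node}
    return set().union(*(leaves_under(c) for c in children))

def mrca(taxa):
    """Most recent common ancestor of a set of leaf taxa."""
    def ancestors(n):
        path = [n]
        while TREE[n] is not None:
            n = TREE[n]
            path.append(n)
        return path
    paths = [ancestors(t) for t in taxa]
    for node in paths[0]:
        if all(node in p for p in paths):
            return node

def is_clade(taxa):
    """A grouping is a clade iff it contains every leaf under its MRCA."""
    return leaves_under(mrca(taxa)) == set(taxa)
```

Here `is_clade({"crocodiles", "birds"})` holds, but `is_clade({"crocodiles", "lizards"})` fails, because their common ancestor ("reptiles" in this toy tree) also contains birds – which is exactly the "no such thing as a fish" argument.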

Ontological questions about how to organise things run throughout our perception of reality; they can actually alter how we view it. "This is part of this, but not part of that" can sometimes be absolutely crucial. Linnaeus may have been really keen on labelling plants and opisthokonts (i.e. fungi and animals), and that might have helped us understand the natural world, but he was well shite when it came to categorising humans, for example.

Epistemology – positivism

What gets you closer to the truth (or not) is a question of epistemology. So ontology is what's real or not; epistemology is the approach by which we determine what's real or not. There are basically three types of epistemology: finding things out by measuring things, finding things out by interpreting things, and making things up. That's positivism, interpretivism and belief.

So first off, positivism. The positivist approach is to look only at things you can measure with instruments. The idea is that this is objectively getting at the truth by looking at numbers on dials, or scans, or whatever – what's sometimes called instrumental reality. Positivism is the cornerstone of the scientific method, which works like this:

  1. You have theories about how the world works.
  2. You test them with your experiments.
  3. The results match your theory, so you think you've got to the truth.
  4. Then you carry on doing experiments until one of them doesn't match the theory, so you need a better theory.
  5. When you've come up with a few, you do more experiments to confirm which one is best.
  6. And then you start the whole cycle again.
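The six steps above can be sketched as a loop. Everything here is a stand-in: a "theory" is whatever your current best explanation is, an "experiment" is just a check that can contradict it, and the helper functions are hypothetical placeholders for the messy real process.

```python
def do_science(theory, experiments, propose_theories, pick_best):
    """Keep the current theory until an experiment contradicts it, then
    propose alternatives, pick the best, and start the cycle again."""
    while True:
        for experiment in experiments:
            if not experiment(theory):            # step 4: a result doesn't match
                candidates = propose_theories(theory)
                theory = pick_best(candidates)    # step 5: more tests decide
                break
        else:
            return theory  # nothing contradicted it (yet): our current best guess
```

With toy inputs – a "theory" as a number and experiments as predicates – the loop settles on the first candidate that survives every test, which is all "true for the time being" means here.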

People are pretty bullish about positivism because it’s been really effective at working out what’s actually going on.

There are problems with the approach though. One is that people sometimes forget that nothing scores above a 6. They mistake their current best guess for what's actually happening. It's the best way to get closest to the truth, true. But you never quite get there. Like Zeno's arrow.

The other problem is that sometimes the experiments give the wrong results. So for instance you fire neutrinos through the Earth and find they're travelling faster than light, but later figure out that a loose cable has thrown off your timing. Or maybe it's your analysis that's wrong, like the dead fish experiment in neuroscience. If you do a brain scan you can see effects that look like a causal relationship between showing someone pictures and the reaction in the brain, but you also get a reaction if you plug a dead salmon in at the other end. You need to account for random fluctuations. I think when that paper came out it reported that something like 25 to 40% of fMRI papers weren't correcting for multiple comparisons.
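The salmon problem is multiple comparisons: test enough voxels and some will pass a 5% significance threshold by pure chance. A quick simulation (the numbers are purely illustrative) shows the effect, with a Bonferroni correction as the simplest fix:

```python
import random

random.seed(0)  # make the illustration repeatable

n_voxels = 10_000   # independent tests where nothing is really happening
alpha = 0.05

# Under the null hypothesis, p-values are uniform on [0, 1):
p_values = [random.random() for _ in range(n_voxels)]

# Naive thresholding 'finds' hundreds of effects that aren't there (~500):
false_hits_uncorrected = sum(p < alpha for p in p_values)

# Bonferroni: divide the threshold by the number of tests; hits vanish:
false_hits_corrected = sum(p < alpha / n_voxels for p in p_values)
```

Roughly 5% of the null tests come up "significant" uncorrected – the dead salmon's brain activity – while the corrected count drops to essentially zero.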

Then there’s a lot of cultural bias. So for example, if you’re testing a theory, the one that gets the most funding is the one propounded by the most eminent of scientists, and they’re often old white guys. If there’s other theories, they can get held back for a while. Usually until all of that generation of old white guys are dead. You can see the social effects on the progress of science.

The thing is though, that the process is self-correcting for social bias. If a theory doesn't work, you'll have lots of people doing experiments and coming up with theories, and eventually one will look better than the rest to most people, and that's the one that generally gets adopted. You get a consensus irrespective of culture. At the boundaries there's contention, but in the main body of science there isn't – the main body is more or less everything that happens from the first 10⁻³⁵ seconds after the big bang up to now, everything bigger than a quark, everything smaller than the observable universe. The cultural bias doesn't change the overall direction, it just slows it down.

Epistemology – interpretivism

The other approach is interpretivism. Interpretivism is more subjective, in that it's interpreting what's going on. You might not have anything you can actually measure with an instrument, so you need to ask a lot of people a lot of questions. This is more systematic than a bunch of anecdotes, in that the idea is that you ask a large representative sample of people, and aren't selective about which responses you look at. The criticism is that it's still just a collection of opinions and it's not reliable enough. As Roosta would say, you can't scratch a window with it. Interpretivists would argue that positivism is so culturally biased that everything is interpretivist, which is just fashionable nonsense. Obviously if thousands of people from all over the world do an experiment and get the same result, which confirms the generally accepted theory, that's not open to interpretation. To claim it is just seems like an inferiority complex on behalf of the interpretivists. The real strength of interpretivism is that it produces something like a version of the truth where positivism couldn't get you anything. Anything to do with how people behave socially has to be interpretivist, because people are way, way more complicated than cosmology. You can't put them in a laboratory and see how they perform in the real world, because once they're in a lab they're not in the real world any more. So all you can get is a mass of opinions to interpret. But that's OK, because it's better than the alternative. Which is nothing.

And there’s a huge number of interpretivist approaches, feminism, postcolonialism, Marxism, basically anything with an ism on the end. They’re all a valid way of approaching the world to some extent, as long as they can accommodate all the data observed and are precise about what their limits are. The mistake is calling them theories. That’s a positivist word. There’s nothing predictive about interpretivist approaches. You can’t say “in this and this situation with people this will happen”. It’s too complex. And vague. What you’ve actually gotwith interpretivist approaches are different narratives, or lenses, through which to describe what’s going on. As Jitse said in a previous episode of Pedagodzilla, all models are wrong, some models are useful. The important thing is not can we prove it, but is it reproducible enough, and generalisable enough, and explain enough of the observations to be useful?

Epistemology – belief

Finally, we have making things up as an approach. There are a lot of in-built elements to the way minds work that mean we tend to look for patterns that aren't there – apophenia for one. We recognise simple messages rather than complex ones. When we make connections in our heads that make particular sense to us we get a dopamine hit. That leads to aberrant salience: things get connected that shouldn't get connected. So for example, there's a lot of intricate stuff about crystal healing and resonance, which makes no sense physically, but sounds good as a story. There's no scientific rationale behind it at all, but it works as a placebo because it sounds plausible to the people that believe it.

One thing positivism and interpretivism are bad at is creating the sort of stories that have emotional truth for people. You can't all get together and have a good time based on the standard model, or the general theory of relativity. The myths that are created by making things up hold communities together. They bring people comfort. So if you've moved to a new place and you're wondering what church to join, for example, someone coming along and saying "well, you have no evidence for your faith, so why bother?" is completely the wrong epistemology. We talked about Buffy as if the show was real in a previous episode; it would be completely out of place to continually remind everyone it's not real. I've used the phrase "science needs to stay up its own end" before, which I don't think people would get unless they grew up on a working-class housing estate in the 60s. Basically, those spaces could be very territorial. You learnt where your patch was, and if you strayed into someone else's you got told to stay up your own end. Too many epistemologies try and muscle in on someone else's patch. Lots of epistemologies are dying out from competition with other worldviews through just this sort of intrusion – it's called epistemicide. That seems like a bad idea, because we're losing other ways of perceiving the world.

But … the problem also works the other way, when you start using your beliefs to make decisions about real things. So if you're looking for a response to covid-19, you need to use a positivist approach and do clinical trials to find out what will work; you don't just tell people they're protected by the blood of Jesus. That's a category error. Or say you're deciding whether gay people should be able to adopt. You can't use a positivist epistemology (because there's no instrument that can measure that) or a belief-based one (because it's way too important to base on something someone made up). You need to look in between, at interpretivist approaches, and gather data about people's experiences of the children of gay parents. And as it turns out, there's no major difference. To insist on something being your way because you read it in a book somewhere is simply bizarre. I don't need to do a routine on that, because Patton Oswalt has already done one.

Critical realism and ontological hygiene

So what's the proper epistemological approach? Well, one of the things I learnt from physics is that where you've got a binary choice, the answer is nearly always that both are right. So is light a wave or a particle? It's both. Same's true here. I'm really suspicious of people who say "I'm a positivist" or "I'm an interpretivist". Neither is appropriate all the time. There's an epistemological approach called pragmatism, or realism, sometimes critical realism. It's about adopting the correct epistemology for what you're looking at. So if you're doing physical science or chemistry or medicine, you take a positivist approach: you measure things and look at the numbers, and that gives you something that ontologically scores a 6 or maybe a 5 (or is disproved down to a 2). Or you're looking at how people think or behave. You need interpretivism, because there are no laws that govern how people behave, and that's only going to be a 5 at best. It's not as good as a 6, but it doesn't have to be to be useful. Just let it go. At the other end you have all the stuff that has no evidence for it at all. But that's OK too; science can stay up its own end. And as anything you can think of is ontologically a 2 and never a 1, that gives you a lot of wriggle room. "You know, maybe God does exist, and believing that makes me feel happy, so I'm going to believe it." The problem is when you start misapplying the made-up stuff to make decisions about real things. Even then, I guess as long as it doesn't harm someone else, feel free. But if someone else is going to be affected, you need enough evidence to score a 5 or a 6 on the ontological scale, or you're being a complete dick.

It's all about being aware of where things are on the ontological spectrum and using them appropriately – what's called ontological hygiene. Maintaining that ontological hygiene, and being able to switch between the different epistemologies, is where liminality comes in, but that's another episode.




Predicting virtual worlds #5

Augmented reality

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was I got one of them half-right. The rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Augmented reality. One function of many mobile devices is that they can combine the camera image with an overlay of additional information. In the same way that global position and orientation can be used to calculate the position of stars as seen from a particular viewpoint, they can also be used to determine which geographical location the tablet is being pointed at. These data can then be combined with a database of information to create an overlay of text to explain, for example, the historical background of a building, or the direction and distance of the nearest Underground station or Irish pub. Locations can be digitally tagged, either with additional information (such as in a learning exercise with students adding their own content to locations), artwork, or even graffiti[i]. As with the astronomy apps described above, this provides learning in situ, and adds a kinaesthetic element to the activity.
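The geometry underneath this kind of overlay is just spherical trigonometry: from the device's GPS fix and a tagged location, compute how far away the tag is and which compass direction to point. A minimal sketch, using the standard haversine and initial-bearing formulas (in a real app the coordinates would come from the device's sensors):

```python
from math import radians, degrees, sin, cos, atan2, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Haversine distance (metres) and initial compass bearing (degrees)
    from point 1 to point 2, both given as decimal lat/lon degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)

    # Haversine formula for great-circle distance:
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * atan2(sqrt(a), sqrt(1 - a))

    # Initial bearing, normalised to 0-360 degrees:
    brg = degrees(atan2(sin(dlmb) * cos(phi2),
                        cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlmb)))
    return dist, brg % 360
```

An overlay renderer would then compare the bearing against the device's compass heading to decide where on screen the "nearest Underground station" label should sit.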

The potential of combining geotagged images with the physical world is indicated by augmented reality games such as Paranormal Activity: Sanctuary[ii]. In this, images of ghosts are located at particular physical world co-ordinates, and can be seen with a dedicated iPhone app that overlays them onto the camera image. Players can create sanctuaries, or cast spells, at locations, which then influence the experience of other players. The game therefore becomes a massively multiplayer roleplay game played in a blending of the physical and a virtual world.

Greater precision than that enabled by global positioning can be provided through Radio Frequency Identification (RFID) tags, readers for which will soon be available on mobile devices[iii]. By placing an RFID tag in clothing, or furniture, or on a person, information about that object or person (i.e. metadata) is then always available whenever a device is pointed at them. For example, products could be linked directly to their user manual; simply hold your tablet PC over your oven and pop-up boxes appear over the knobs decoding the icons, or attend a conference and each person there could have information linked to them, such as name, institution and research interests, revealed by holding up your phone and tapping their image on the screen. Several museums and exhibitions already have augmented reality exhibits; when a room is looked at through an AR viewer, the physical objects in the room are overlain with animations or animated characters, bringing the static displays to life[iv]. A further enhancement of augmented reality is achieved by enabling the animated characters to address the attendee directly, with their gaze following the attendee around the room, as they are tracked through the use of an RFID bracelet[v]. The characters can address many attendees simultaneously since, from the perspective of each, the character is looking at them individually, a transformed social interaction known as non-zero sum mutual gaze[vi]. These interactions can be made more seamless by plans to create AR projections within glasses[vii]. Rather than clicking on a screen, input can be through the detection of hand movements[viii] or, for the mobility-impaired, deliberate blinking[ix].

If this is possible with pre-recorded characters, then it is only a short leap to enabling this to take place with avatars or bots in realtime, by layering the virtual world image onto the physical as it is created. This activity resembles the mixed reality performances created by Joff Chafer and Ian Upton; originally these performances used images from a virtual world projected onto a gauze, so that they could share the stage with physical world actors, and more recently Chafer and Upton have used 3D imaging to bring the virtual world images out from the screen and into a physical space[x]. Capturing the images of avatars in the virtual world, and geotagging them, would enable people with the appropriate AR viewer to see avatars moving and communicating all around them. As the sophistication of bots develops, they could be used as companion agents guiding learners through virtual learning scenarios, and brought into the physical world as guides and mentors seen only by the learner through their AR viewer. With ways of imaging the avatars through something as immersive as AR glasses, physical world participants and avatars could interact on an equal footing.

For learning and teaching, the advantages of blending the functionality and flexibility of the virtual and the real are enormous. For the learners who see virtual learning as inauthentic, relating the virtual world learning directly to the physical may overcome many of their objections. The integration of an object and its metadata as well as data providing context for that object (called paradata) is easily done in a virtual world; AR in combination with RFID tagging enables this feature to be deployed in the physical world too, since information, ideas and artefacts can be intrinsically and easily linked. User generated content, which again is simply created and shared in the virtual, can also be introduced to the physical. Participation at a distance, on an equivalent footing with participation face-to-face, could be achieved by the appearance of avatars in the physical environment and RFID tagging the physically-present participants and objects.

[i] ‘Augmented reality offers a new layer of intrigue’, New Scientist, 25th May, 2012. http://www.newscientist.com/article/mg21428652.600-augmented-reality-offers-a-new-layer-of-intrigue.html

[ii] ‘Ogmento Reality Reinvented, Paranormal Activity: Sanctuary’, 22nd May 2012. http://www.ogmento.com/games/paranormal-activity-sanctuary

[iii] Marketing Vox, ‘Married to RFID, What Can AR Do for Marketers?’, 4th March, 2010. http://www.marketingvox.com/married-to-rfid-what-can-ar-do-for-marketers-046365/

[iv] Canterbury Museum, ‘Augmented reality technology brings artefacts to life’, 28th September, 2009. http://www.canterburymuseum.com/news/13/augmented-reality-technology-brings-artefacts-to-life

[v] A. Smith, ‘In South Korea, Kinect and RFID power an augmented reality theme park’,  Springwise,  20th February, 2012. http://www.springwise.com/entertainment/south-korea-kinect-rfid-power-augmented-reality-theme-park/

[vi] J. Bailenson, A. Beall and M. Turk, ‘Transformed Social Interaction’, p. 432

[vii] S. Reardon, ‘Google hints at new AR glasses in video’, New Scientist, 4th April, 2012. http://www.newscientist.com/blogs/onepercent/2012/04/google-hints-at-new-ar-glasses.html

[viii]C. de Lange, ‘What life in augmented reality could look like’,  New Scientist, 24th May, 2012. http://www.newscientist.com/blogs/nstv/2012/05/what-life-in-augmented-reality-will-be-like.html

[ix] E. Iáñez, A. Úbeda, J. Azorín and C. Pérez, ‘Assistive robot application based on a RFID control architecture and a wireless EOG interface’, ScienceDirect, available online 21st May, 2012. http://www.sciencedirect.com/science/article/pii/S0921889012000620

[x] Joff Chafer and Ian Upton, Insert / Extract: Mixed Reality Research Workshop, November 2011. http://vimeo.com/32502129

Prescience Factor: 0/10. Despite AR apps becoming more popular since 2013, AR is still not really a thing in that it’s not an embedded part of what we do. Linking AR and virtual worlds in the way I’ve described here isn’t any further along (as far as normal practice) than it was when I wrote the above.

Predicting virtual worlds #4

Gone to mobiles every one


Gone to mobiles every one. As noted above, the rate of take-up of virtual worlds anticipated by Gartner in 2007 has not been realised. Some predictions also state that the rate of development of the high-end graphics technology required for virtual worlds will be slowed by the adoption of mobile technology. Essid[i] notes that the tablet PCs owned by students cannot run the viewers required for Second Life, and these are now the predominant technology with which students access online learning. In addition, many apps provide innovative, offline education, such as the use of Google Sky, Zenith or Sky Safari for learning astronomy. In these apps, the learner holds up their tablet PC and, through global positioning and inbuilt sensors that detect orientation, the tablet displays the position of stars, planets and Messier objects as they appear in the sky in the direction in which the tablet is pointed. This provides learning that is interactive, kinaesthetic, and in situ. Essid's prediction is that the predominant use of mobile technology as the new wave of learning will stall the uptake of virtual worlds. As Essid states in his blog post on the subject:

One does not wish to be on the wrong side of history, and I think SL evangelists are clearly on the wrong side, unless they are early in their careers and have a Plan B for research and teaching.

[i] J. Essid, ‘Mobile: Shiny? Yes. Hyped? Yes. Fad? No’, 3rd May, 2012. http://iggyo.blogspot.co.uk/2012/05/mobile-shiny-yes-hyped-yes-fad-no.html

Prescience factor: 8/10. To be fair, not my prediction really, but Joe Essid's. The increasing usage of mobile devices has meant that learning can take place anywhere, but it has caused the development of some technologies to slow down, because as a platform mobiles are more limited: in processing power compared to PCs, but also in speed of input (two thumbs are never as fast as ten fingers) and the readability of the screen. It's not 10 out of 10, because I think both Joe and I underestimated the capacity and functionality that smartphones would attain by 2018. Moore's law is hard to anticipate because it's a geometric increase, and geometric increases are almost impossible to get your head around.
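The point about geometric increase is easy to state and hard to feel. Assuming, as the popular version of Moore's law has it, a doubling every two years (a stylised assumption for illustration, not real chip data), the multiplier over a span of years is:

```python
def moores_multiplier(years, doubling_period=2):
    """Growth multiplier after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

growth_5yr = moores_multiplier(5)    # the book's five-year prediction window: ~5.7x
growth_10yr = moores_multiplier(10)  # a decade: 32x
```

Intuition tends to extrapolate linearly (five years of "a bit faster"), while the formula says the 2018 device has several times the capacity of the 2013 one – which is roughly the gap between the prediction and what happened.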

Predicting virtual worlds #3

Moves to games consoles


Move to games consoles. A move in the other direction, to more sophisticated technologies, is the repositioning of virtual worlds to run on games consoles such as the PlayStation 3 or the Xbox 360. Games consoles have very sophisticated graphics processors, and the quality of the rendering of games is much higher than is available on most PCs. Many massively multiplayer online games are already available on games consoles, and shared virtual worlds such as Minecraft, previously running on PCs, have made the transition to this technology. In the Minecraft case this has proved immensely popular[i]. The advantages of running virtual worlds on games consoles are due not just to the more sophisticated graphics available, but also to the control devices. Many people find games controllers a more intuitive mechanism for controlling the movement of an avatar than keys on a keyboard. However, text chat and drag-and-drop functionality are less well integrated.

The next generation of games controllers offers even more interactivity, as they can detect physical actions by the users through the use of cameras and motion detectors. Devices such as the Xbox 360 Kinect controller have already been used to animate avatars. There are two ways in which this can be done: either avatars can be animated inworld through physical actions triggering pre-set animations (for example, the act of raising your hand triggers a hand-raising animation) or, as in the work of Fumi Iseki and a team at Tokyo University[ii], the movements are used to animate avatars in realtime, but in a local viewer only. Because avatars are animated inworld using preloaded animation files, there is no way with current technology to map motion capture to inworld movements of avatars in realtime.

This opens up the potential for a new, closer relationship between user and avatar. As Jelena Guga notes[iii], this will be the next step change in the developing degrees of immersion that have been enabled by changes in technology. Although the sense of immersion may be increased, requiring the user to be physically active may also, simultaneously, make the user more aware of their physical body while interacting inworld, so their sense of embodiment may actually be reduced. The individual experience of virtual worlds varies enormously, and whether physically operating an avatar increases or reduces the sense of engagement inworld will probably turn out to differ from person to person. Another consideration is that a one-to-one correspondence between physical action and resulting motion of the avatar is, as Stelarc points out[iv], possibly the least interesting way in which to use motion recognition to animate avatars. In his performances, Stelarc uses his body to create inworld performances, but his gestures cause his avatar to fly, float, operate cyborg attachments and so on.

From a learning point of view, a move to games consoles could have advantages and disadvantages. It would overcome some of the objections to virtual worlds with regard to low-resolution graphics, and technical issues such as slow rendering times and lag; however, it could marginalise activity even further, since few computer suites in universities have games consoles, and it cannot be guaranteed that all users will have access to them. Developing motion-controlled interfaces would address an issue some users raise: that operating within a virtual world is too sedentary an experience. Offering the opportunity to operate avatars through physical motion may appeal to these users, though indications are that such users actually find the virtual nature of these experiences intrinsically problematic, equating the virtual with the inauthentic. However, the use of a motion recognition system will offer interesting opportunities for performance.

[i] M. Hawkins, ‘Minecraft on Xbox Live a smash success’, MSNBC, 12th May, 2012. http://www.ingame.msnbc.msn.com/technology/ingame/minecraftxboxlivesmashsuccess-766955

[ii] Second Lie, ‘Kinect Hack Brings Real Time Animation To Second Life’, November 2011, http://second-lie.blogspot.co.uk/2011/11/kinect-hack-brings-real-time-animation.html

[iii] J. Guga, ‘Redefining Embodiment through Hyperterminality’, Virtual Futures 2.0, University of Warwick, 18th – 19th June, 2011.

[iv] Stelarc, Keynote, From Black Box to Second Life: Theatre and Performance in Virtual Worlds, University of Hull, Scarborough, May 20th, 2011.

Prescience Factor 4/10. The only thing I nailed here was that consoles would become more of a platform for social interaction in a shared world. Lots of RPGs now allow users to build spaces in a shared virtual environment, not necessarily in service of the game directly, but just to establish a permanent online 3D space. The flexibility of the spaces and avatar interactions in games like, for example, Conan Exiles or Fortnite Creative is more limited than in a full social virtual world, but you could potentially create a home and then invite someone round for a chat.

Predicting Virtual Worlds #2

A virtual world in your browser

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was to get one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

A virtual world in your browser. There are numerous legitimate reasons for using standard web browsers to access virtual worlds. The first is that the processing power, particularly of the graphics card, required to run a virtual world viewer is beyond the capacity of the technology available to many people, and particularly to institutions. Secondly, the bureaucratic hurdles many practitioners face when additional software needs to be downloaded and installed preclude the use of virtual worlds in many institutions, suffering as they do from the obstructive policies of their IT departments. Finally, making virtual worlds viewable from within a web browser means that accessing them can easily be integrated into most people’s normal internet usage, potentially widening the demographic of users. The initial effort required to begin using them in an educational situation would consequently be reduced.

It would be reasonable to anticipate that these factors would lead to the usage of virtual worlds becoming much more widespread. Making virtual worlds viewable through the web should have been very successful; in practice, though, Google’s browser-based world Lively only lasted for the second half of 2008. Newer virtual worlds, such as Kitely, although trying to widen the demographic of potential users by offering access through other platforms such as Facebook and Twitter, have returned to viewer-based technology rather than being browser-based.

The reasons for the failure of Lively are still being discussed. The direct experience of those contributing to this chapter, however, is that reducing the functionality of the virtual world to enable it to work within a browser removed the elements that made a virtual world worth pursuing. The sense of immersion was reduced, the opportunities to create and interact with virtual artefacts within the world were lessened, and consequently the rapid adoption by the marketplace, needed for the survival of any social medium, did not materialise. Lively disappeared before many people realised it had been launched, and no new web-based viewers have emerged to take its place.

Prescience Factor: 0/10. A total overestimation of the versatility and processing power that browsers would come to have.

Predicting virtual worlds #1

The Metaverse Lives

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was to get one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

The metaverse lives. Of the chapters in the book, four use Second Life, one uses OpenSim, one World of Warcraft, one a 2D multimedia website, and one began with Second Life and then, due to the price increases imposed by Linden Lab, moved to OpenSim. From this (admittedly small) sample, it appears that Second Life is still the strongest contender as a platform to host virtual world activity, but that educators are becoming more likely to consider alternative, though similar, platforms, with OpenSim leading the way.

Educators’ dissatisfaction with, and the expense of, Second Life is beginning to cause fragmentation of the virtual world community. Whereas before it was almost guaranteed that educators would share a single grid, increasingly they are becoming spread across a range of different platforms. One saving grace of this diaspora is that many of the most popular of these virtual worlds use the same viewer. Whether one uses the Second Life viewer, Imprudence, Phoenix, Firestorm or any of a number of others, once a user has learned to interact with the world through that particular interface, it is of little difficulty to switch to another world. This is particularly important with virtual worlds as a technology (more so than with, for example, a word-processing package or an online forum), since what is required for an effective learning opportunity is immediacy of experience rather than hypermediacy; any change in the interface is extremely disruptive, because it makes the technology more visible and reduces the transparency of the interaction.

However, although they are operated in the same manner, the grids remain separate. The step that will reintegrate this fragmented community, and enable educators to once again easily share and visit each other’s educational resources, will be the successful deployment of hypergridding. Hypergridding is the connecting of these separate virtual worlds to create a collection of linked worlds, an example of Stephenson’s metaverse. Once it becomes possible to move not only avatars, but also their inventories, from world to world, these separate grids will perform as a single platform; so, for example, objects purchased within Second Life (which has a thriving creators’ market) could be used within OpenSim (which gives institutions greater control over privacy and ownership of the space). This would greatly expand educators’ choices and flexibility in using virtual worlds, and to a large extent enable far more effective collaboration. Simple and effective hypergridding is close to deployment but, as of writing in 2012, has not been realised.

Prescience factor 0/10. Hypergridding is not a thing.

Sex with robots: the case against the case against Part two

Taking apart the interview, and the logic behind the argument, we get to these statements.

“Sex dolls and sex robots in the form of women and girls do something else. In the mind of someone buying and using them – they ARE women and girls. They are designed deliberately to resemble women and girls because they want the man buying and using the dolls to believe it is a woman or girl. These are markedly different things. Sex dolls and mechanical dolls in the form of women and girls play on the idea that women are orifices to be penetrated.

Imagery that dehumanises others in order to justify rule over them serves a political purpose. These sex dolls of women and girls are serving a political purpose to reinforce the idea that women and girls are sub-humans/orifices.”

“In the mind of someone buying and using them – they ARE women and girls.”

This doesn’t follow at all; it needs evidence to back it up. The only thing we can say for sure is that someone having sex with a robot wants sex with a robot. Maybe it plays on the idea that robots stand in for real women, but it’s just as likely that this is simply play. There are a huge number of presumptions here, none of which is supported by research.

“Imagery that dehumanises others in order to justify rule over them serves a political purpose.” True. This is what makes the argument such a problematic one. Dropping in valid political statements that everyone can agree with, but then asserting a consequence that does not follow, is a standard bait-and-switch ploy. You agree with statement A and (the claim goes) A causes B, therefore you have to agree with B. Everyone can agree there is systemic oppression of women in a patriarchal society, and that it is maintained by men with power in society. That sex dolls contribute to this is not at all evident, though. The power of this series of statements is that if you oppose B (because the “therefore” is not proven) then somehow you are against A. It’s a specious and underhand way of carrying your argument.

What makes this “therefore” unlikely is that although men with power rule, men with sex dolls are rarely men with power. One of the areas I looked at with avatars is the role of zeta males in many of the activities in virtual worlds. It is the men who have little or no power who compensate for this lack in their own lives by playing at being powerful in their fantasies. Their actions have no impact on wider society because nothing they do has impact.

OK generalisation there, which I admit. See how that works as a way to obfuscate relationships between concepts though? Zeta males have no power, only zeta males have sex with dolls, having sex with dolls therefore has no impact on society.

There may be a link. There may not. Acting on suspicions though is not really very ethical.

I suppose the bottom line for any ethical debate is: do you deny a group of (some would call creepy) males the expression of their sexuality out of caution that their actions may exacerbate the oppression of all females, or not? It’s a classic deontological vs consequentialist dilemma. Do you take the chance of conducting a possibly (or even probably) unnecessary act of oppression on a minority group just to be on the safe side? Or do you take the route of preserving all people’s rights, unless they are demonstrated to be dangerous?

While you’re considering that, I’ll remind you of another analogy. When the pigs finally get to run things in Animal Farm, they end up being just as bad as the people they replaced. Power is intoxicating, you get to control things so that you can make them the way you want them to be. When you’re in power you don’t have to worry about the consequences for disenfranchised people if you’re never likely to be one of them. Prof Richardson has a platform, the agalmatophiles do not; it is evident where the power lies in this debate.

“Four legs good. Two legs better.” should haunt anyone acquiring power; before you act check you’re not simply replicating the iniquities of those who’ve had the power before you.

A professor of ethics should know that.

Sex with robots; the case against the case against part one.

One of the sites I often read to get a good line on an ethical issue is Conatus News. It’s sort of generally progressively liberal, and usually well-argued. It offers a range of opinions, and doesn’t contest them, which is open-minded of them. Some of them, though, make my skin crawl. This article https://conatusnews.com/kathleen-richardson-sex-robots/ was one of them.

It’s an interview with Kathleen Richardson, Professor of Ethics and Culture of Robots and AI at the Centre for Computing and Social Responsibility (CCSR) at De Montfort University, and spearhead of The Campaign Against Sex Robots. The rationale is that sex robots exacerbate the objectification of women. I get the impression from the argument made that that’s not what’s really going on.

The first alarm bells in the argument are some unsupported (and from what I know, plain wrong) statements. Here’s one:

“In the last twenty years, with the age of the ‘cyborg’ informed by anti-humanism and non-human distinctiveness, there has been this prevailing sense that humans and machines are equivalent. This implies that the only difference between a machine and a human is the ‘man who is creating it’ rather than some empirical and radical difference between a human and an artefact.”

In actual fact, if anything, the more people have looked at recreating consciousness, the more they’ve realised how essentially different the two are. While soft AI is being achieved, hard AI looks like an ever more distant, if not impossible, goal. In The Emperor’s New Mind (26 years old now), Roger Penrose made some telling arguments about the differences: that no systematic, machine-like process can replicate the organic creation of thought. The Turing test is being failed more often than it used to be, because even though bots are being programmed better, the people judging are getting better at telling the difference. If anything, from the bits of research I’ve done, the increase is in false positives rather than false negatives: that is, rather than people mistaking bots for humans, people are mistaking humans for bots. Our standards for what makes something human-like are getting higher. Robots are falling behind.
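To make the false-positive/false-negative distinction concrete, here's a toy sketch with invented judging data (purely illustrative, with “positive” meaning “judged to be a bot”):

```python
# Toy Turing-test judgments (invented data): each entry is (actual, judged).
# Treating "bot" as the positive class:
#   false positive = a human mistaken for a bot
#   false negative = a bot mistaken for a human
judgments = [
    ("human", "bot"),    # false positive
    ("human", "human"),  # correct
    ("human", "bot"),    # false positive
    ("bot", "bot"),      # correct
    ("bot", "human"),    # false negative
    ("bot", "bot"),      # correct
]

false_positives = sum(1 for actual, judged in judgments
                      if actual == "human" and judged == "bot")
false_negatives = sum(1 for actual, judged in judgments
                      if actual == "bot" and judged == "human")

print(false_positives, false_negatives)  # prints: 2 1
```

On this invented data the false positives outnumber the false negatives, which is the pattern described above: judges are erring towards calling humans bots, not the other way around.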

Next one: “It has led to robotic scientists arguing that machines could be ‘social’ ”

This is not what social robotics is. Social robotics is looking at the elements that enable robots to fit into society, not at considering them to actually “have” society. This is a deliberate misrepresentation.

Now we come to the quite disturbing part of the argument.

“If a person felt like they were in a relationship with a machine, then they were. In this way, two seemingly different ways of understanding the world came together to support arguments for human relationships with machines. The first was the breakdown in distinction between humans and machines. The second was the egocentric, individualistic, patriarchal model (‘I think therefore I am’) – what I am thinking, feeling, and experience is the only thing that counts. I am an egocentric individual.”

One of the fascinating things about having worked in virtual worlds is that you come across a whole range of people. A lot of them are finding self-expression in ways that they couldn’t do in the physical world. A lot of them are finding ways to connect with parts of their identity that weren’t possible in the physical world. Sometimes it’s society, or it can be identity tourism. Quite a few were exploring their paraphilias.

Agalmatophilia is sexual attraction to inanimate objects: dolls, mannequins … robots. It’s a thing, and real for the people who experience it. One of the major social movements of the last fifty years has been the development of a more permissive outlook on sexuality. It has complemented feminism, gay rights and, more recently, transgender rights. Even before gay rights legislation made discrimination on grounds of sexuality illegal, you’d hear homophobes say things like “well, I don’t like it, but if they do it behind closed doors, then I don’t have a problem with it”. Not the best attitude, but it underlines that an essential element of permissiveness is that if it’s between consenting adults, free and able to give their consent, then it’s not for us to get involved. Or to judge. If even some homophobes get that, we should be able to do even better.

“If a person feels like they are in a relationship with a machine, then they are.” “what I am thinking, feeling, and experience is the only thing that counts.” Those are positions Prof Richardson is critical of. If we are to respect all sexual expression (between consenting adults, free and able to give their consent), and we are, then we have to accept their own definition of identity, sexuality, gender, etc. That’s not patriarchal (in fact, the attitude has stood against the patriarchy in the past), it’s not egocentric (any more than respecting someone’s identity in terms of sexuality, gender, religion etc is). It’s respect.

It’s respect for people who think and feel and experience pleasure and sex differently. In ways we might feel uncomfortable recognising. Which, I guess, is what makes it hard for the neopuritans, of whom Prof Richardson appears to be one. I assume she is; otherwise, why dismiss something that doesn’t meet her recognition of legitimate human experience?

It must be tricky times for the neopuritans. They want to monitor and dictate what happens in private, between consenting adults (free and able to give their consent), but find that homosexuality and transsexuality are now no longer legitimate targets. Who’s next? Let’s identify a remaining marginalised form of experience. Let’s go for the agalmatophiles. As Prof R. says later in her interview, “I think, most people would agree they’re a bit creepy”. Yep, like most people agreed gay people were a bit creepy a few decades ago? But if we target those who enjoy that sort of thing, and dress up our distaste for what we’ve deemed corrupt and perverse with words like patriarchy, that’ll make it look more liberal.

And if you’re thinking that wanting a relationship with a doll is a bit weird, so why stand up for agalmatophiles, there’s a poem by Martin Niemöller you need to re-read.

So yes, “two seemingly different ways of understanding the world” have come together in Prof Richardson’s argument, but those two things are luddism and neopuritanism, basically fear of technology and fear of other forms of sexuality.

There are some more unethical opinions stated in the second part of the interview. I’ll leave them for the next post.

A Failure of Balance


The article, if you want to take a look at it, is about Lawson talking on the radio, lying about climate change. There’s of course been an uproar, quite rightly. And some moron at the BBC has said this:

“The BBC’s role is to hear different views so listeners are informed about all sides of debate and we are required to ensure controversial subjects are treated with due impartiality.”

What the absolute fuck? How on earth can a sane, rational and, hopefully, educated person come out with that sort of shit? And look at him/herself in the mirror afterwards? It shows not only a basic lack of understanding about journalism, it indicates a complete failure to understand how reality works.

This is not balance:


OK, that’s not entirely accurate, because there is nothing on the other side. You can only have two sides of a debate when there are two sides. When there is only one side, then to present both as equivalent is not impartial; it is highly biased towards the side which has no argument. If you want a balanced debate about climate change, have two scientists (SCIENTISTS, not has-been politicians) arguing about whether the increase is 1 degree or 2 degrees, not nobodies banging on about the latest stuff they’ve made up.

It is highly irresponsible to pass on fiction as fact. Not including lies is not being partisan; it is being a responsible member of society. Not giving them a platform is not censorship (I’m not contradicting my previous post), because no-one’s rights are infringed by it. There is no right to pass on misinformation as truth.

And people have explained this to BBC journalists before. How do they still not get this very basic simple fact? That’s not a rhetorical question. I really don’t understand how they cannot learn how to do their jobs properly.


The Master’s Tools

I’ve been struggling to get my ideas together for this post for a while, but today I read this quote from Audre Lorde: “The master’s tools will never dismantle the master’s house”. For an analysis of the context of the quote (and I’m going to argue that context is crucial for anything discussed), there’s an interpretation here: https://www.micahmwhite.com/on-the-masters-tools/

For context for what I’m about to say, here’s where I’m coming from.

I am not particularly well-versed in political debate; I’ve not pursued it as an academic discipline, or read a lot around it. My entire philosophy or values aren’t really any more nuanced than “Be excellent to each other”. I grew up with messages that equality is important in and of itself, and that diversity is stronger than monoculture (“The glory of creation is in its infinite diversity and the ways our differences combine to create meaning and beauty”, to be precise). When reading The Selfish Gene I was enormously relieved to discover that altruism has a pro-survival underpinning, and is justifiable using game theory. So it’s not just a belief system. Though it makes a good one.

As far as standing up for people’s rights though, I’m pretty much simply an armchair activist, partly through being non-confrontational, partly through laziness. The most extreme stance on anything I’ve ever taken is unfriending schoolfriends for being racist (yes unfriending people was a thing back in the 70s, but it entailed not walking home with them rather than disconnecting on social media) or taking a liberal stance in conversations (not that hard when you’re not hanging out with illiberal people). I think the only occasion where I’ve actually put anything on the line was being asked about homosexuality while a teacher during the Section 28 days. I said it was OK, for which I could have lost my job back then. In theory. I don’t think anyone ever did and it was a short term contract anyway, so the risks were minimal.

So, tbh my credentials on this are a bit thin. But I’m going ahead anyway.

I was bullied a lot at school, so have a knee-jerk response to anything that looks like bullying (and tend to be very partisan on those issues) and also, growing up in the seventies when culture was under the thumb of the National Viewers’ and Listeners’ Association, have a knee-jerk response to censorship. When I say “Censorship is fascism” I’m not using hyperbole, I genuinely see any attempt to control what artists or creators make as part of a movement of oppression. There is nothing in art or culture that is so bad that it is worse than the act of suppressing it. If there was one lesson I would want the next generation to learn from my experiences, it would be to have that same knee-jerk revulsion at the idea of censorship that I have.

Side story. I used to teach media studies. One series of lessons was on media effects. I showed A Clockwork Orange (on a dodgy pirated VHS, as it was still banned then) and an interview with the head of the NVALA, Mary Whitehouse. After the movie they all sat around and discussed the issue of banning it dispassionately. Five minutes of listening to Whitehouse and her festering ideology, and they were kicking chairs around the classroom. There is a lesson in there somewhere.

Caveat here: I’m talking about art solely. Freedom of expression in being creative is important; being creative about facts isn’t on. Incitement to violence isn’t permissible. If you’re calling for final solutions and that sort of thing, that’s not OK either.

I’ve not been a part of any cults, but did hang out with the Cardiff Marxist-Leninists for a while (not out of any political conviction, but that’s another story). So I’ve seen up close how movements reinforce and isolate dialogue so it becomes bounded and simply reflective, and how ugly and scary virtue signalling can be when you’re part of a group that enacts it. I didn’t last long there.

Quick explanation of virtue signalling, though you can read a fuller one here: https://en.wikipedia.org/wiki/Signalling_theory. Signalling theory is the idea that animals have codes to indicate that they are members of the same herd or tribe. It’s a safety mechanism to ensure that they can easily spot an intruder, so, as with any pro-survival characteristic, it’s probably hardwired into our genes. When it’s applied to whether you share the same values as others, it’s called virtue signalling. Used individually, it’s a fast-track way of identifying whether someone is aligned with your way of thinking, and so whether they are going to be a threat to your ideology. Used by a group, it can sometimes look like a blood frenzy.

So for 40 years now I’ve been supporting diversity and equality in my own small ineffectual way. It’s a relief to see that (pre-2016 anyway) there’s been a gradual improvement on those grounds. Obviously, (I hope it’s obvious) there’s still a long way to go.

What has been leading up to this particular post, though, is seeing a subversion of this gradual increase in liberalism by groups of people within, mainly, social media. And it’s relevant on a blog about technology because I can see how social media have contributed to it. This came up recently in a conversation between me, a niece and a stepson. We got onto the term “Social Justice Warrior”. Look up the definition on Wikipedia or Urban Dictionary (both useful sources because they’re crowdsourced, so they represent the general understanding of a phrase). I’ll wait.

When coined, the term referred solely to people who used a liberal discourse as a means to attack people, mainly through social media, by identifying some way in which their target fell short of what they perceived as a progressive liberal stance. So, for example, a rocket engineer wears a shirt with female anime characters on it (made by a female friend who liked his penchant for flashy shirts) and gets accused of demeaning women; Stephen Fry calls his friend a bag lady and gets a similar backlash; Ricky Gervais mentions Caitlyn Jenner in a comedy routine and is accused of transphobia. None of those accusations stacks up on examination. All of them led to people being badgered online. One of those three was mentally tough enough to shrug off the abuse. Two weren’t. The implication of the term SJW was that the superficiality, misguidedness and/or vitriol of the attack indicated the attackers were doing it to boost their own self-importance rather than out of a genuine concern for social justice.

However, the term SJW has now been thoroughly debased by extreme conservatives who don’t like any change; who see the media as predominantly for white males and don’t like that changing. Most ridiculous, I think, has been the backlash against the next iteration of Star Trek because it has non-white female actors in the lead and second lead roles. Some fans are accusing the show of selling out to SJWs, not realising that the show has always had a social justice agenda: it’s always been about diversity and inclusivity (even when it failed, it was trying). So now the term “social justice warrior” has been conflated with people who are genuinely concerned about social justice, and it’s essentially counter-productive to use it.

Unfortunately (for someone who likes precision), any other term could go through the same debasement, so making up a new one doesn’t help. I might as well refer to people as Type A liberals and Type B liberals, and everyone else as Type C (for Conservatives). And yes, this will be a generalisation, so I will attempt to interject the word “most” whenever I remember.

Partly, this suppressing, bullying effect is amplified by social media. Any one person can object to something, or raise genuine concerns about it, in a tweet or a blog. And that’s OK. But when social media enable that to be echoed and retweeted, and grouped using a hashtag, then suddenly, rather than being a single voice, it becomes a torrent. The fear of being on the end of that torrent can make people highly self-censoring, and even more prone to virtue signalling to deflect any likelihood of being on the receiving end of it. It didn’t begin with social media; the same effect happened around witch hunts (both literal and figurative). If you’re attending the House Un-American Activities Committee, or in a courtroom in Salem, or in a room above a pub in Cardiff, you soon learn to denounce the incorrect statements with the absolutely correct condemnation, otherwise you’ll be next. I think, though, that social media have made that activity widespread and quotidian.

Social media have also enabled people to find each other, and reinforce their opinions. This has been a positive thing in some aspects; look at the Labour resurgence at the last general election. People found that there were other like-minded people, who were fed up with the politics of avarice and exploitation, and wanted a change. On your own, you’re likely to give up. When you find lots of people who think similarly it gives you the confidence to continue.

There’s a downside too, though. For a long time, people who have been marginalised by the system have not had a voice, and have been oppressed by others. If you’re in a society run by tall white straight affluent able-bodied southern men, you’re more likely to succeed if you’re a tall white straight affluent able-bodied southern man, and the more of those boxes you can tick, the better you will do. There can be endless debates about which of those factors benefits you the most. They’re never productive.

Social media have now given people a share in that power, to some extent. To the Type B liberals, who want social justice, who want to see a more pluralist society, who want more diversity, they have presented the opportunity to push for change, and to be visible enough for that change to be brought about. To the Type A liberals, they’ve also presented the opportunity to get in on the oppression and turn it around.

One of the latest campaigns has been to try to prevent the creators of Game of Thrones from making a TV show about the South winning the Civil War, organised around the hashtag #NoConfederate. Irrespective of whether you think the idea is offensive or not, to call for it not to be made is censorship. It is saying there are some things that cannot be made, or said. People who were oppressed are trying to employ the very tool that has been the means by which they were oppressed for millennia. WTF?

Recently, there was an apology from the Guelph Central Student Association for including “Walk on the Wild Side” in a playlist because it could be perceived as transphobic. Instead of saying “fuck off” to the accusations, they apologised. That’s a fear response if ever there was one.

Fear. Censorship. That’s not what a liberal progressive agenda can include if it wants to continue to be liberal and progressive.

“The master’s tools will never dismantle the master’s house.”

Obviously (I hope it’s obvious), the oppression from the right is greater still than the oppression from the left. They have more power; there is far more representation of tall white straight affluent able-bodied southern men than of anyone else in our media, and a tendency to remove other representations. But when you see tactics from your own side being used to bully, censor, intimidate and shut down others, it’s even more distressing, because it requires us Type B liberals to actively distance ourselves from the Type As, when really there aren’t enough of us to go round as it is. And the middle ground is not a difficult one to find; there is a nice wide path between whitewashing on one side and whitehousing on the other.

My niece asked me a very good question: “how do you tell the people who genuinely want social justice from those who are just using the discourse to boost their own egos?” I had had too much vodka to answer the question coherently at the time, but I’ve been thinking about it since. And these are the ways.

  • Is the post/blog/tweet more likely to increase the level of fear in society (by intimidating the person who made the original statement) or to reduce it (by acting to protect the rights of the oppressed)?
  • Are you calling for a viewpoint to be censored or just challenging it (or supporting the rights of all views to be heard)?
  • Are you just reacting to a particular term or expression someone has used or taking time to understand the context?
  • Are you hectoring an individual over your interpretation of their views or giving them the benefit of the doubt? (If someone is generally a reasonable person and they slip up, they deserve a break. Obviously, if they’re an ass most of the time, go for it.)

The thing all of these have in common is, I think, compassion. Be excellent to each other. If you’re responding to a key phrase you object to, or reading something objectionable into what’s being said, rather than taking the time to work out what was actually meant, then you’re not acting with compassion. To claim that you’re supporting social justice while acting unjustly indicates you don’t really mean it. You’re just doing it to boost your own status. Ask yourself the question, “who has the power in this dynamic?” If it’s you, then exercise some caution in how you apply it. Give the rocket engineer a break.

I’m not going to blame social media for encouraging this. Social media are just tools for communication. The use of them is still in its early days – they’ve been in common use for only a decade – so part of the problem is we’re still learning how to use them. Entire systems of thought have been associated with single hashtags, and rebuttals of arguments reduced to the same, each associated with type A or type C politics. So we get hashtags like #blacklivesmatter and #alllivesmatter bounced around as if they were polar opposites, and as if each represents a particular ideology. Taken literally, both of those statements are true, and they complement each other. They’re not mutually exclusive, which is what the automatic gainsaying of the other hashtag would imply. Black people are three times more likely to be shot by the police than white people. Twice as many white people are shot by the police as black people. Those two truths should both be the concern of a liberal ideology. Instead of finding common ground, the type Cs lump As and Bs together, the type As lump Bs and Cs together, and in amongst the gainsaying there is little room in the middle for reasoned debate.

Twitter itself does not help. The most erudite, humane and reasoned authors can end up sounding like complete dawks when reduced to 140 characters. The alternative is to split your argument across ten different tweets in a rowling series. Both of those misspellings are sic, btw. We can’t avoid using Twitter for communication, but perhaps we can avoid piling on the acrimony by not simply copying and pasting the latest trending hashtag, or trying to catch people out through the terminology they use. If it’s been said once, we don’t need to jump in to prove that we’re just as right-on. And maybe we should be more fearless about calling people out for behaving in a type A way, rather than being afraid of them labelling us as type C. Perhaps if we do that, then everyone on the left can focus our attention against the ideologies that genuinely do oppress us.

Because there really are enough of those out there still.