Predicting virtual worlds #4

Gone to mobiles every one

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was get one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not exactly the text from the book; I’ve tweaked it only so that it works on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Gone to mobiles every one. The rate of take-up of virtual worlds anticipated by Gartner in 2007 has not been realised. Some predictions also state that the rate of development of the high-end graphics technology required for virtual worlds will be slowed by the adoption of mobile technology. Essid[i] notes that the tablet PCs owned by students cannot run the viewers required for Second Life, and these are now the predominant technology with which students access online learning. In addition, many apps provide innovative, offline education, such as the use of Google Sky, Zenith or Sky Safari for learning astronomy. In these apps, the learner holds up their tablet PC and, through global positioning and inbuilt sensors that detect orientation, the tablet displays the position of stars, planets and Messier objects as they appear in the sky in the direction in which the tablet is pointed (a rough sketch of the coordinate transform involved follows the footnote below). This provides learning that is interactive, kinaesthetic and in situ. Essid’s prediction is that the predominant use of mobile technology as the new wave of learning will stall the uptake of virtual worlds. As Essid states in his blog post on the subject:

One does not wish to be on the wrong side of history, and I think SL evangelists are clearly on the wrong side, unless they are early in their careers and have a Plan B for research and teaching.

[i] J. Essid, ‘Mobile: Shiny? Yes. Hyped? Yes. Fad? No’, 3rd May, 2012, http://iggyo.blogspot.co.uk/2012/05/mobile-shiny-yes-hyped-yes-fad-no.html
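
By way of illustration, the core of what those astronomy apps do is a coordinate transform: the orientation sensors give a pointing direction as altitude and azimuth, GPS gives the observer’s location, and the clock gives the time, from which the app works out which part of the sky is in view. Here’s a minimal sketch in Python using the astropy library (my choice for the illustration; it’s not what any of those apps actually use), with a made-up location and pointing direction:

```python
# A minimal sketch of the sky lookup a point-at-the-sky app performs.
# The location, height and pointing direction below are made up for the example.
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

# Observer position from GPS (roughly the West Midlands) and the current time.
location = EarthLocation.from_geodetic(lon=-1.5 * u.deg, lat=52.4 * u.deg, height=80 * u.m)
now = Time.now()

# Pointing direction from the orientation sensors: 45 degrees up, due south.
pointing = SkyCoord(alt=45 * u.deg, az=180 * u.deg,
                    frame=AltAz(obstime=now, location=location))

# Convert to equatorial coordinates (RA/Dec); a real app would then look these
# up in a catalogue of stars, planets and Messier objects and draw whatever
# falls within the field of view.
radec = pointing.transform_to("icrs")
print(f"Pointing at RA {radec.ra.deg:.2f} deg, Dec {radec.dec.deg:.2f} deg")
```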

Prescience factor: 8/10. To be fair, not my prediction really, but Joe Essid’s. The increasing usage of mobile devices has meant that learning can take place anywhere, but it has caused the development of some technologies to slow down because, as a platform, mobile devices are more limited: partly in terms of processing power when compared to PCs, but also in the speed of input (two thumbs are never as fast as ten fingers) and the readability of the screen. It’s not 10 out of 10, because I think both Joe and I underestimated the capacity and functionality that smartphones would attain by 2018. Moore’s Law is notoriously difficult to anticipate because it describes a geometric increase, and this example shows how hard it is to get your head around geometric increases.
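
To put a rough number on that: if you assume the textbook doubling-every-two-years version of Moore’s Law (which is itself only an approximation), the five-year window of these predictions compounds like this:

```python
# Back-of-the-envelope illustration of geometric growth, assuming a doubling
# every two years (the textbook Moore's Law figure; only an approximation).
doubling_period_years = 2
for years in (5, 10):
    growth = 2 ** (years / doubling_period_years)
    print(f"After {years} years: roughly {growth:.0f}x")
# After 5 years: roughly 6x; after 10 years: roughly 32x. A linear gut feeling
# of "a bit faster each year" badly undershoots that, which is the trap.
```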


Predicting virtual worlds #3

Moves to games consoles

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was get one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not exactly the text from the book; I’ve tweaked it only so that it works on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Move to games consoles. A move in the other direction, to more sophisticated technologies, is the repositioning of virtual worlds to run on games consoles such as the Playstation 3 or the Xbox 360. Games consoles have very sophisticated graphics processors, and the quality of the rendering of games is much higher than is available using most PCs. Many massively multiplayer online games are already available on games consoles, and shared virtual worlds such as Minecraft, previously running on PCs, have made the transition to this technology. In the Minecraft case this has proved immensely popular[i]. The advantages of running virtual worlds on games consoles lie not just in the more sophisticated graphics available, but also in the control devices. Many people find games controllers a more intuitive mechanism for controlling the movement of an avatar than keys on a keyboard. However, text chat and drag-and-drop functionality are less well integrated.

The next generation of games controllers offers even more interactivity, detecting physical interaction by the user through cameras and motion detectors. Devices such as the Xbox 360 Kinect controller have already been used to animate avatars. There are two ways in which this can be done: either avatars can be animated inworld through physical actions triggering pre-set animations (for example, the act of raising your hand triggers a hand-raising animation), or, as in the work of Fumi Iseki and a team at Tokyo University[ii], the motion data are used to animate avatars in realtime, but in a local viewer only. Because avatars are animated inworld using preloaded animation files, there is no way with current technology to map motion capture to the inworld movements of avatars in realtime.
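
To make the distinction concrete, here’s a rough sketch of the first approach, a physical gesture triggering a pre-set animation. It’s a conceptual illustration only: the joint structure and the trigger_animation() call are hypothetical stand-ins, not any real Kinect or Second Life API.

```python
# Conceptual sketch: a gesture detected from skeleton data triggers a
# preloaded inworld animation. All names here are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float
    y: float  # height above the floor in this toy coordinate frame
    z: float

def trigger_animation(avatar_id: str, animation_name: str) -> None:
    # Stand-in for the virtual world's animation API: in practice this would
    # play an animation file already uploaded to the world on the avatar.
    print(f"Playing preloaded animation '{animation_name}' on {avatar_id}")

def classify_gesture(head: Joint, right_hand: Joint) -> str | None:
    """Return the name of a preloaded animation, or None if nothing matched."""
    if right_hand.y > head.y:  # hand raised above the head
        return "hand_raise"
    return None

def on_sensor_frame(head: Joint, right_hand: Joint, avatar_id: str) -> None:
    gesture = classify_gesture(head, right_hand)
    if gesture:
        trigger_animation(avatar_id, gesture)

# Toy demo: the hand is above the head, so the hand-raise animation plays.
on_sensor_frame(Joint(0.0, 1.7, 0.0), Joint(0.3, 1.9, 0.0), avatar_id="avatar-42")
```

The second approach, animating the avatar directly from the motion data, bypasses the preloaded files entirely, which is why (as described above) it currently only works in a local viewer.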

This opens up the potential for a new, closer relationship between user and avatar. As Jelena Guga notes[iii], this will be the next step change in the developing degrees of immersion that have been enabled by changes in technology. Although the sense of immersion may be increased, requiring the user to be physically active may also, simultaneously, make the user more aware of their physical body while interacting inworld, so their sense of embodiment may actually be reduced. The individual experience of virtual worlds varies enormously, and whether physically operating an avatar increases or reduces the sense of engagement inworld is likely to differ from person to person. Another consideration is that a one-to-one correspondence between physical action and the resulting motion of the avatar is, as Stelarc points out,[iv] possibly the least interesting way in which to use motion recognition to animate avatars. In his performances, Stelarc uses his body to create inworld performances, but his gestures cause his avatar to fly, float, operate cyborg attachments and so on.

From a learning point of view, a move to games consoles could have advantages and disadvantages. It would overcome some of the objections to virtual worlds with regard to low-resolution graphics, and technical issues such as slow rendering times and lag. However, it could marginalise activity even further, since few computer suites in universities have games consoles, and it cannot be guaranteed that all users will have access to them. Developing motion-controlled interfaces would address an issue some users have: that operating within a virtual world is too sedentary an experience. Offering the opportunity to operate avatars through physical motion may appeal to these users, though indications are that they actually find the virtual nature of these experiences intrinsically problematic, equating the virtual with the inauthentic. The use of motion recognition will, however, offer interesting opportunities for performance.

[i] M. Hawkins, ‘Minecraft on Xbox Live a smash success’, MSNBC, May 12th, 2012, http://www.ingame.msnbc.msn.com/technology/ingame/minecraftxboxlivesmashsuccess-766955

[ii] Second Lie, ‘Kinect Hack Brings Real Time Animation To Second Life’, November 2011, http://second-lie.blogspot.co.uk/2011/11/kinect-hack-brings-real-time-animation.html

[iii] J. Guga, ‘Redefining Embodiment through Hyperterminality’, Virtual Futures 2.0, University of Warwick, 18th – 19th June, 2011.

[iv] Stelarc, Keynote, From Black Box to Second Life: Theatre and Performance in Virtual Worlds, University of Hull, Scarborough, May 20th, 2011

Prescience Factor: 4/10. The only thing I nailed here was that consoles would become more of a platform for social, virtual-world-style interaction. Lots of RPGs now allow users to build spaces in a shared virtual environment, not necessarily in service of the game directly, but just to settle a permanent online 3D space. The flexibility of the spaces and avatar interactions in games like, for example, Conan Exiles or Fortnite Creative is more limited than in a full social virtual world, but you could potentially create a home and then invite someone round for a chat.

Predicting Virtual Worlds #2

A virtual world in your browser

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was get one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not exactly the text from the book; I’ve tweaked it only so that it works on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

A virtual world in your browser. There are numerous legitimate reasons for using standard web browsers to access virtual worlds. The first of these is that the processing power, particularly the graphics processing power, required to run a virtual world viewer is beyond the capacity of the technology available to many people, and particularly to institutions. Secondly, the bureaucratic hurdles many practitioners face when they require additional software to be downloaded and installed preclude the use of virtual worlds in many institutions, suffering as they do from the obstructive policies of their IT departments. Finally, enabling virtual worlds to be viewed from within a web browser means that accessing them can be easily integrated into the majority of people’s normal internet usage, potentially widening the demographic of users. The initial effort required to begin using them in an educational situation would consequently be reduced.

It would be reasonable to anticipate that these factors would lead to the usage of virtual worlds becoming much more widespread, and making virtual worlds viewable through the web should have been very successful. In practice, though, Google’s browser-based world Lively only lasted for the second half of 2008. Newer virtual worlds, such as Kitely, although trying to widen the demographic of potential users by offering access through other platforms such as Facebook and Twitter, have returned to viewer-based technology rather than being browser-based.

The reasons for the failure of Lively are still being discussed. The direct experience of those contributing to this chapter, however, is that reducing the functionality of the virtual world in order to enable it to work within a browser removed the elements that made a virtual world worth pursuing. The sense of immersion was reduced, the opportunities to create and interact with virtual artefacts within the world were lessened, and consequently the rapid adoption by the marketplace, needed for the survival of any social medium, did not materialise. Lively disappeared before many people realised it had been launched, and new web-based viewers have not emerged to take its place.

Prescience Factor: 0/10. A total overestimation of the versatility and processing power of browsers now.

Predicting virtual worlds #1

The Metaverse Lives

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail). I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. The best I did was get one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not exactly the text from the book; I’ve tweaked it only so that it works on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

The metaverse lives. Of the chapters in the book, four use Second Life, one uses OpenSim, one World of Warcraft, one a 2D multimedia website, and one began with Second Life and then, due to the price increases imposed by Linden Lab, moved to OpenSim. From this (admittedly small) sample, it appears that Second Life is still the strongest contender as a platform to host virtual world activity, but that educators are becoming more likely to consider alternative, though similar, platforms, with OpenSim leading the way.

Educators’ dissatisfaction with, and the expense of, Second Life is beginning to cause fragmentation of the virtual world community. Whereas before it was almost guaranteed that educators would share a single grid, increasingly they are becoming spread across a range of different platforms. One saving grace of this diaspora is that many of the most popular of these virtual worlds use the same viewer. Whether one uses the Second Life viewer, Imprudence, Phoenix, Firestorm or any of a number of others, once a user has learned to interact with the world using that particular interface, it is of little difficulty to switch to another one. This is particularly important with virtual worlds as a technology (more so than, for example, with a word-processing package or an online forum), since what is required for an effective learning opportunity is immediacy of experience rather than hypermediacy; any change in the interface is extremely disruptive, since it makes the technology more visible and reduces the transparent nature of the interaction.

However, although they are operated in the same manner, the grids remain separate. The step that will reintegrate this fragmented community, and enable educators once again to easily share and visit each other’s educational resources, will be the successful employment of hypergridding. Hypergridding is the connecting of these separate virtual worlds to create a collection of linked worlds, an example of Stephenson’s metaverse. Once it becomes possible to move not only avatars, but also their inventories, from world to world, these separate grids will perform as a single platform; so, for example, objects purchased within Second Life (which has a thriving creators’ market) could be employed within OpenSim (which gives institutions greater control over privacy and ownership of the space). This would greatly expand the choices, and the flexibility, of using virtual worlds for educators, and to a large extent enable far more effective collaboration. Simple and effective hypergridding is close to deployment but, as of writing in 2012, has not been realised.
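
To be clear about what hypergridding would have to carry between grids, here’s a conceptual sketch. The names and structures are purely illustrative; this is not OpenSim’s actual Hypergrid protocol, just the shape of the problem: an avatar’s identity has to be vouched for by its home grid, and its inventory has to point back to assets hosted wherever they were created.

```python
# Conceptual sketch of a hypergrid-style teleport between independently run
# grids. Every name, field and URL here is illustrative, not a real protocol.
from dataclasses import dataclass, field

@dataclass
class HypergridIdentity:
    display_name: str
    home_grid: str   # the grid that can authenticate this avatar
    auth_token: str  # issued by the home grid, checked by the destination

@dataclass
class InventoryItem:
    name: str
    asset_url: str   # the asset stays hosted by the grid it was created on

@dataclass
class TeleportRequest:
    identity: HypergridIdentity
    destination_grid: str
    destination_region: str
    inventory: list[InventoryItem] = field(default_factory=list)

def accept_teleport(request: TeleportRequest, trusted_grids: set[str]) -> bool:
    """Destination-side check: only admit avatars from grids we federate with."""
    return request.identity.home_grid in trusted_grids

# Example: an educator hops from a university-run grid to a shared one,
# bringing a purchased object whose asset is still served by its original grid.
request = TeleportRequest(
    identity=HypergridIdentity("Prof Avatar", "grid.university-a.example", "token-123"),
    destination_grid="shared-campus.example",
    destination_region="Lecture Island",
    inventory=[InventoryItem("Whiteboard", "https://assets.marketplace.example/whiteboard")],
)
print(accept_teleport(request, trusted_grids={"grid.university-a.example"}))  # True
```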

Prescience factor: 0/10. Hypergridding is not a thing.

Sex with robots: the case against the case against Part two

Taking apart the interview, and the logic behind the argument, we get to these statements.

“Sex dolls and sex robots in the form of women and girls do something else. In the mind of someone buying and using them – they ARE women and girls. They are designed deliberately to resemble women and girls because they want the man buying and using the dolls to believe it is a woman or girl. These are markedly different things. Sex dolls and mechanical dolls in the form of women and girls play on the idea that women are orifices to be penetrated.

Imagery that dehumanises others in order to justify rule over them serves a political purpose. These sex dolls of women and girls are serving a political purpose to reinforce the idea that women and girls are sub-humans/orifices.”

“In the mind of someone buying and using them – they ARE women and girls.”

This doesn’t follow at all; it needs some evidence to back it up. The only thing we can say for sure is that someone having sex with a robot wants sex with a robot. Maybe it plays on the idea that they stand in for real women, but also it’s likely that that’s just play. There are a huge number of presumptions here, none of which are supported by research.

“Imagery that dehumanises others in order to justify rule over them serves a political purpose.” True. This is what makes the argument such a problematic one. Dropping in valid political statements that everyone can agree with, but then indicating a consequence that doesn’t actually follow, is a standard bait-and-switch ploy. You agree with statement A and (you claim) A causes B, therefore you have to agree with B. Everyone can agree there is systemic oppression of women in patriarchal society, and that it is formed by men with power in society. That sex dolls are contributing to this is not at all evident, though. The power of this as a series of statements is that if you oppose B (because the “therefore” is not proven) then somehow you are against A. It’s a specious and underhand way of carrying your argument.

What makes this “therefore” unlikely is that although men with power rule, men with sex dolls are rarely men with power. One of the areas I looked at with avatars is the role of zeta males in many of the activities in virtual worlds. It is the men who have little or no power who compensate for this lack of power in their own lives by playing at being powerful in their fantasies. Their actions have no impact on wider society because nothing they do has impact.

OK generalisation there, which I admit. See how that works as a way to obfuscate relationships between concepts though? Zeta males have no power, only zeta males have sex with dolls, having sex with dolls therefore has no impact on society.

There may be a link. There may not. Acting on suspicions though is not really very ethical.

I suppose the bottom line for any ethical debate is this: do you deny a group of (some would call them creepy) males the expression of their sexuality, out of caution that their actions may exacerbate the oppression of all females, or not? It’s a classic deontological vs consequentialist dilemma. Do you take the chance of conducting a possibly (or even probably) unnecessary act of oppression on a minority group just to be on the safe side? Or do you take the route of preserving all people’s rights, unless they are demonstrated to be dangerous?

While you’re considering that, I’ll remind you of another analogy. When the pigs finally get to run things in Animal Farm, they end up being just as bad as the people they replaced. Power is intoxicating, you get to control things so that you can make them the way you want them to be. When you’re in power you don’t have to worry about the consequences for disenfranchised people if you’re never likely to be one of them. Prof Richardson has a platform, the agalmatophiles do not; it is evident where the power lies in this debate.

“Four legs good. Two legs better.” should haunt anyone acquiring power; before you act check you’re not simply replicating the iniquities of those who’ve had the power before you.

A professor of ethics should know that.

Sex with robots: the case against the case against Part one

One of the sites I often read to get a good line on an ethical issue is Conatus News. It’s sort of generally progressively liberal, and usually well-argued. It offers a range of opinions, and doesn’t contest them, which is open-minded of them. Some of them, though, make my skin crawl. This article https://conatusnews.com/kathleen-richardson-sex-robots/ was one of them.

It’s an interview with Kathleen Richardson, Professor of Ethics and Culture of Robots and AI at the Centre for Computing and Social Responsibility (CCSR) at De Montfort University and spearhead of The Campaign Against Sex Robots. The rationale is that they exacerbate the objectification of women. I get the impression from the argument made that that’s not what’s going on.

The first alarm bells in the argument are some unsupported (and from what I know, plain wrong) statements. Here’s one:

“In the last twenty years, with the age of the ‘cyborg’ informed by anti-humanism and non-human distinctiveness, there has been this prevailing sense that humans and machines are equivalent. This implies that the only difference between a machine and a human is the ‘man who is creating it’ rather than some empirical and radical difference between a human and an artefact.”

In actual fact, if anything, the more people have looked at recreating consciousness, the more they’ve realised how essentially different the two are. While soft AI is being achieved, hard AI looks like an ever more distant, if not impossible, goal. In The Emperor’s New Mind (26 years old now), Roger Penrose made some telling arguments about the differences; that no systematic machine-like process can replicate the organic creation of thought. The Turing test is being failed more often than it used to be, because even though bots are being programmed better, the people judging are getting better at telling the difference. If anything, from the bits of research I’ve done, the increase is in false positives rather than false negatives. That is, rather than people mistaking bots for humans, people are mistaking humans for bots. Our standards for what makes something human-like are getting higher. Robots are falling behind.

Next one: “It has led to robotic scientists arguing that machines could be ‘social’ ”

This is not what social robotics is. Social robotics is looking at the elements that enable robots to fit into society, not at considering them to actually “have” society. This is a deliberate misrepresentation.

Now we come to the quite disturbing part of the argument.

“If a person felt like they were in a relationship with a machine, then they were. In this way, two seemingly different ways of understanding the world came together to support arguments for human relationships with machines. The first was the breakdown in distinction between humans and machines. The second was the egocentric, individualistic, patriarchal model (‘I think therefore I am’) – what I am thinking, feeling, and experience is the only thing that counts. I am an egocentric individual.”

One of the fascinating things about having worked in virtual worlds is that you come across a whole range of people. A lot of them are finding self-expression in ways that they couldn’t in the physical world. A lot of them are finding ways to connect with parts of their identity that weren’t possible in the physical world. Sometimes it’s society that prevents that in the physical world; sometimes it’s identity tourism. Quite a few were exploring their paraphilias.

Agalmatophilia is sexual attraction towards inanimate objects, dolls, mannequins … robots. It’s a thing. And real for the people who experience it. One of the major social movements of the last fifty years is the development of a more permissive outlook on sexuality. It’s complemented feminism, gay rights and, more recently, transgender rights. Even before gay rights legislation made discrimination on grounds of sexuality illegal, you’d hear homophobes say things like “well I don’t like it, but if they do it behind closed doors, then I don’t have a problem with it”. Not the best attitude, but it underlines that an essential element of permissiveness is that if it’s between consenting adults, free and able to give their consent, then it’s not for us to get involved. Or to judge. If even some homophobes get that, we should be able to do even better.

“If a person feels like they are in a relationship with a machine, then they are.” “what I am thinking, feeling, and experience is the only thing that counts.” Those are positions Prof Richardson is critical of. If we are to respect all sexual expression (between consenting adults, free and able to give their consent), and we are, then we have to accept their own definition of identity, sexuality, gender, etc. That’s not patriarchal (in fact, the attitude has stood against the patriarchy in the past), it’s not egocentric (any more than respecting someone’s identity in terms of sexuality, gender, religion etc is). It’s respect.

It’s respect for people who think and feel and experience pleasure and sex differently; respect for their right to think and feel differently, in ways we might feel uncomfortable recognising. Which, I guess, is what makes it hard for the neopuritans, of whom Prof Richardson appears to be one. I assume she is, otherwise why dismiss something that doesn’t meet with her recognition of legitimate human experience?

It must be tricky times for the neopuritans. They want to monitor and dictate what happens in private, between consenting adults (free and able to give their consent), but find that homosexuality and transsexuality are no longer legitimate targets. Who else is next? Let’s identify a remaining marginalised form of experience. Let’s go for the agalmatophiles. As Prof R. says later in her interview, “I think, most people would agree they’re a bit creepy”. Yep, like most people agreed gay people were a bit creepy a few decades ago? But if we target those who enjoy that sort of thing, and dress up our distaste for what we’ve deemed corrupt and perverse with words like patriarchy, that’ll make it look more liberal.

And if you’re thinking that wanting a relationship with a doll is a bit weird, so why stand up for agalmatophiles, there’s a poem by Martin Niemöller you need to re-read.

So yes, “two seemingly different ways of understanding the world” have come together in Prof Richardson’s argument, but those two things are luddism and neopuritanism, basically fear of technology and fear of other forms of sexuality.

There are some more unethical opinions stated in the second part of the interview. I’ll leave them for the next post.

A Failure of Balance

http://www.bbc.co.uk/news/science-environment-40899188

The article, if you want to take a look at it, is about Lawson talking on the radio, lying about climate change. There’s of course been an uproar, quite rightly. And some moron at the BBC has said this:

“The BBC’s role is to hear different views so listeners are informed about all sides of debate and we are required to ensure controversial subjects are treated with due impartiality.”

What the absolute fuck? How on earth can a sane, rational and, hopefully, educated person come out with that sort of shit? And look at him/herself in the mirror afterwards. It shows not only a basic lack of understanding of journalism, it indicates a complete failure to understand how reality works.

This is not balance:

[image: balance]

OK that’s not entirely accurate, because there is nothing on the other side. You can only have two sides of a debate when there are two sides. When there is only one side, then to present both as equivalent is not impartial, it is highly biased towards the side which has no argument. If you want a balanced debate about climate change, have two scientists, SCIENTISTS, not has-been politicians, arguing about whether the increase is 1 degree or 2 degrees, not nobodies banging on about the latest stuff they’ve made up.

It is highly irresponsible to pass on fiction as fact. Not including lies is not being partisan, it is being a responsible member of society. Not giving them a platform is not censorship, (I’m not contradicting my previous post), because no-one’s rights are infringed by it. There is no right to pass on misinformation as truth.

And people have explained this to BBC journalists before. How do they still not get this very basic simple fact? That’s not a rhetorical question. I really don’t understand how they cannot learn how to do their jobs properly.

 

The Master’s Tools

I’ve been struggling to get my ideas together for this post for a while, but today I read this quote from Audre Lorde: “The master’s tools will never dismantle the master’s house”. For an analysis of the context for the quote (and I’m going to argue that context is crucial for anything discussed), there’s an interpretation here: https://www.micahmwhite.com/on-the-masters-tools/

For context for what I’m about to say, here’s where I’m coming from.

I am not particularly well-versed in political debate, I’ve not pursued it as an academic discipline, or read a lot around it. My entire philosophy or values aren’t really any more nuanced than “Be excellent to each other”. I grew up with messages that equality is important in and of itself, and diversity is stronger than monocultures (“The glory of creation is in its infinite diversity and the ways our differences combine to create meaning and beauty” to be precise.) When reading “The Selfish Gene” I was enormously relieved to discover altruism has a pro-survival underpinning, and is justifiable using game theory. So it’s not just a belief system. Though it makes a good one.

As far as standing up for people’s rights though, I’m pretty much simply an armchair activist, partly through being non-confrontational, partly through laziness. The most extreme stance on anything I’ve ever taken is unfriending schoolfriends for being racist (yes unfriending people was a thing back in the 70s, but it entailed not walking home with them rather than disconnecting on social media) or taking a liberal stance in conversations (not that hard when you’re not hanging out with illiberal people). I think the only occasion where I’ve actually put anything on the line was being asked about homosexuality while a teacher during the Section 28 days. I said it was OK, for which I could have lost my job back then. In theory. I don’t think anyone ever did and it was a short term contract anyway, so the risks were minimal.

So, tbh my credentials on this are a bit thin. But I’m going ahead anyway.

I was bullied a lot at school, so have a knee-jerk response to anything that looks like bullying (and tend to be very partisan on those issues) and also, growing up in the seventies when culture was under the thumb of the National Viewers’ and Listeners’ Association, have a knee-jerk response to censorship. When I say “Censorship is fascism” I’m not using hyperbole, I genuinely see any attempt to control what artists or creators make as part of a movement of oppression. There is nothing in art or culture that is so bad that it is worse than the act of suppressing it. If there was one lesson I would want the next generation to learn from my experiences, it would be to have that same knee-jerk revulsion at the idea of censorship that I have.

Side story. I used to teach media studies. One series of lessons was on media effects. I showed A Clockwork Orange (on a dodgy pirated VHS as it was still banned then) and an interview with the head of the NVALA, Mary Whitehouse. After the movie they all sat around and discussed the issue of banning it dispassionately. Five minutes of listening to Whitehouse and her festering ideology and they were kicking chairs around the classroom. There is a lesson in there somewhere.

Caveat here – I’m talking about art solely. Freedom of expression in being creative is important, being creative about facts isn’t on. Incitement to violence isn’t permissible. If you’re calling for final solutions and that sort of thing, that’s not OK either.

I’ve not been a part of any cults, but did hang out with the Cardiff Marxist-Leninists for a while (not out of any political conviction, but that’s another story). So I’ve seen up close how movements reinforce and isolate dialogue so it becomes bounded and simply reflective, and how ugly and scary virtue signalling can be when you’re part of a group that enacts it. I didn’t last long there.

Quick explanation of virtue signalling, though you can read the full one here: https://en.wikipedia.org/wiki/Signalling_theory  Signalling theory is the idea that animals have codes to indicate that they are members of the same herd or tribe. It’s a safety mechanism to ensure that they can easily spot an intruder, so, as with any pro-survival characteristic, it’s probably hardwired into our genes. When it’s applied to whether you share the same values as others it’s called virtue signalling. When used individually it’s a fast-track way of identifying whether someone is aligned with your way of thinking, and so whether they are going to be a threat to your ideology. When used by a group, it can sometimes look like a blood frenzy.

So for 40 years now I’ve been supporting diversity and equality in my own small ineffectual way. It’s a relief to see that (pre-2016 anyway) there’s been a gradual improvement on those grounds. Obviously, (I hope it’s obvious) there’s still a long way to go.

What has been leading up to this particular post, though, is seeing a subversion of this gradual increase in liberalism by groups of people within, mainly, social media. And it’s relevant on a blog about technology because I can see how social media have contributed to it. This came up recently in a conversation between me, a niece and a stepson. We got onto the term "Social Justice Warrior". Go and look up the definition on Wikipedia or Urban Dictionary (both useful sources because they’re crowdsourced, so they represent the general understanding of a phrase); I’ll wait. When coined, the term actually referred solely to people who used a liberal discourse as a means to attack people, mainly through social media, by identifying some way in which they were falling short of what they perceived as a progressive liberal stance. So, for example, a rocket engineer wears a shirt with female anime characters on it (made by a female friend who liked his penchant for flashy shirts) and gets accused of demeaning women; Stephen Fry calls his friend a bag lady and gets a similar backlash; Ricky Gervais mentions Caitlyn Jenner in a comedy routine and is accused of transphobia. None of those accusations stack up on examination. All of them led to people being badgered online. One of those three was tough enough mentally to shrug off the abuse. Two weren’t. The implication of the term SJW was that the superficiality, misguidedness and/or vitriol of the attack indicated the attackers were doing it to boost their own self-importance rather than out of a genuine concern for social justice.

However, the term SJW has now been thoroughly debased by extreme conservatives who don’t like any change; who see the media as predominantly for white males and don’t like that changing. Most ridiculous, I think, has been the backlash against the next iteration of Star Trek because it has non-white female actors in the lead and second lead roles. Some fans are accusing the show of selling out to SJWs, not realising that the show has always had a social justice agenda – it’s always been about diversity and inclusivity (even when it failed, it was trying). So now the term “social justice warrior” has been conflated with people who are genuinely concerned about social justice, and it’s essentially counter-productive to use it.

Unfortunately (for someone who likes precision), any other term could go through the same attenuation, so making up another one doesn’t help. I might as well refer to people as Type A liberals and Type B liberals, and everyone else as Type C (for Conservatives). And yes this will be a generalisation, so I will attempt to interject the word “most” whenever I remember.

Partly this suppressing, bullying effect is amplified by social media. Any one person can object to something, or raise genuine concerns about something, in a tweet or a blog. And that’s OK. But when social media enable that to be echoed and retweeted, and grouped using a hashtag, then suddenly rather than being a single voice it becomes a torrent. The fear of being on the end of that torrent can make people highly self-censoring, and even more prone to virtue signalling to deflect any likelihood of being on the receiving end of it. It didn’t begin with social media; the same effect happened around witch hunts (both literal and figurative). If you’re appearing before the House Un-American Activities Committee, or in a courtroom in Salem, or in a room above a pub in Cardiff, you soon learn to denounce the incorrect statements with the absolutely correct condemnation, otherwise you’ll be next. I think, though, that social media have made that activity widespread and quotidian.

Social media have also enabled people to find each other, and reinforce their opinions. This has been a positive thing in some aspects; look at the Labour resurgence at the last general election. People found that there were other like-minded people, who were fed up with the politics of avarice and exploitation, and wanted a change. On your own, you’re likely to give up. When you find lots of people who think similarly it gives you the confidence to continue.

There’s a downside too, though. For a long time, people who have been marginalised by the system have not had a voice, and have been oppressed by others. If you’re in a society run by tall white straight affluent able-bodied southern men, you’re more likely to succeed if you’re a tall white straight affluent able-bodied southern man, and the more of those boxes you can tick, the better you will do. There can be endless debate about which of those factors will benefit you the most; such debates are never productive.

Social media have now given people a share in that power to some extent. For the Type B liberals, who want social justice, who want to see a more pluralist society, who want more diversity, they have presented the opportunity to push for change, and to be visible enough for that change to be brought about. For the Type A liberals, they have also presented the opportunity to get in on the oppression and turn it around.

One of the latest campaigns, around the hashtag #NoConfederate, has been to try to prevent the creators of Game of Thrones from making a TV show about the South winning the Civil War. Irrespective of whether you think the idea is offensive or not, to call for it not to be made is censorship. It is saying there are some things that cannot be made, or said. People who were oppressed are trying to employ the tool that has been the means by which they have been oppressed for millennia. WTF?

Recently there was an apology from Guelph Central Student Association for including “Walk on the Wild Side” in a playlist because it could be perceived as transphobic. Instead of saying “fuck off” at the accusations, they apologised. That’s a fear response if ever there was one.

Fear. Censorship. That’s not what a liberal progressive agenda can include if it wants to continue to be liberal and progressive.

“The master’s tools will never dismantle the master’s house.”

Obviously (I hope it’s obvious), the oppression from the right is greater still than the oppression from the left. They have more power; there is far more representation of tall white straight affluent able-bodied southern men than of anyone else in our media, and a tendency to remove other representations. But when you see tactics from your own side being used to bully, censor, intimidate and shut down others it’s even more distressing, because it requires us Type B liberals to actively distance ourselves from the Type As, when really there aren’t enough of us to go round as it is. And the middle ground is not a difficult one to find; there is a nice wide path to follow between whitewashing on one side and whitehousing on the other.

My niece asked me a very good question which is “how do you tell the people who genuinely want social justice from those who are just using the discourse to boost their own egos?” I had had too much vodka to answer the question coherently at the time, but I’ve been thinking about it since. And these are the ways.

  • Is the post/blog/tweet more likely to increase the level of fear in society (by intimidating the person who made it) or reduce it (by acting to protect the rights of the oppressed)?
  • Are you calling for a viewpoint to be censored or just challenging it (or supporting the rights of all views to be heard)?
  • Are you just reacting to a particular term or expression someone has used or taking time to understand the context?
  • Are you hectoring an individual for your interpretation of their views or giving them the benefit of the doubt? (If someone generally is a reasonable person and they slip up, they deserve a break. Obviously, if they’re an ass most of the time, go for it).

The thing in common with all of these is, I think, compassion. Be excellent to each other. If you’re responding to a key phrase you object to, or are reading something objectionable into what’s being said, rather than taking the time to work out what was actually meant, then you’re not acting with compassion. To claim that you’re supporting social justice, while acting unjustly, indicates you don’t really mean it. You’re just doing it to boost your own status. Ask yourself the question, “who has the power here in this dynamic?” If it’s you, then exercise some caution in how you apply it. Give the rocket engineer a break.

I’m not going to blame social media for encouraging this. Social media are just tools for communication. The use of them is still in its early days – they’ve been in common use for only a decade, so part of the problem is that we’re still learning how to use them. Entire systems of thought have been associated with single hashtags, and rebuttals of arguments reduced to the same, each associated with Type A or Type C politics. So we get hashtags like #blacklivesmatter and #alllivesmatter bounced around as if they are polar opposites, and as if each represents a particular ideology. If we were to take both of those statements literally, they are both true, and they complement each other. They’re not mutually exclusive, which is what the automatic gainsaying of the other hashtag would imply. Black people are roughly three times more likely to be shot by the police than white people; roughly twice as many white people are shot by the police as black people. Those two truths should both be the concern of a liberal ideology. Instead of finding common ground, the Type Cs lump As and Bs together, the Type As lump Bs and Cs together, and amongst the gainsaying there is little room in the middle for reasoned debate.
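
To see why those two statements are not in conflict, here’s the back-of-the-envelope arithmetic, using approximate US population shares (roughly 60% non-Hispanic white, roughly 13% Black); the exact multiplier depends on which dataset and years you use, so treat this as illustrative only:

```python
# Rough reconciliation of the two statistics: absolute counts versus
# per-capita rates, using approximate US population shares.
white_share, black_share = 0.60, 0.13   # approximate population proportions

# "Twice as many white people are shot by the police as black people"
white_shot, black_shot = 2.0, 1.0       # relative absolute counts

# Per-capita rates (shootings per unit of population share).
white_rate = white_shot / white_share
black_rate = black_shot / black_share

print(f"Black per-capita rate is about {black_rate / white_rate:.1f}x the white rate")
# About 2.3x with these shares, in the region of the "three times" figure, so
# the absolute-count claim and the per-capita claim can both be true at once.
```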

Twitter itself does not help. The most erudite, humane and reasoned authors can end up sounding like complete dawks when reduced to 140 characters. The alternative is to split your argument across 10 different ones in a rowling series of tweets. Both of those misspellings are sic btw. We can’t avoid using twitter for communication, but perhaps we can avoid piling on the acrimony by simply copying and pasting the latest trending hashtag, and trying to catch people out through the terminology they use. If it’s been said once, we don’t need to jump in to prove that we’re just as right on. And maybe we should be more fearless about calling people out for behaving in a type A way, rather than being afraid of them labelling us as a Type C. Perhaps if we do that, then everyone on the left can all focus our attention against the ideologies that genuinely do oppress us.

Because there really are enough of those out there still.

The Morality of Faith Schools

About this podcast http://www.bbc.co.uk/programmes/b08y1bzf

Interesting that BBC Radio 4 is running a debate on The Morality of Faith Schools. In it they’ll be raising a series of moral dilemmas – it is an episode of The Moral Maze after all, I suppose. The questions they’ll be raising, though, all seem to have such obvious answers that I can’t see how it will last 43 minutes. I can speed it up for them if they like, by giving the answers in bold.

A long-running legal battle between Ofsted and the Al-Hijrah Islamic state school in Birmingham has reached the Court of Appeal. The principle at stake is whether segregating boys and girls – for all classes, breaks and trips – amounts to unlawful sex discrimination in a mixed-sex setting. Ofsted’s lawyers argue that it is “a kind of apartheid”, leaving girls “unprepared for life in modern Britain”. The school maintains that gender segregation is one of its defining characteristics and that the policy is clear – parents can make an informed choice. The case is based on the Equality Act, which means the implications of the ruling will be far-reaching and will apply to all schools, not just state schools. Should gender segregation be allowed in co-educational faith schools? No. If it is as abhorrent as segregating children according to their race, why is the great British tradition of single-sex education not the subject of similar scrutiny? Cultural inertia. The case also raises wider moral concerns about what we as a society will allow to go on in faith schools, whether they are publicly-funded or not. Is the promotion of one dominant world view – taught as “truth” – desirable? No. Are faith schools a vital component of multiculturalism or a threat to it? Threat. Should a truly integrated society be judged on the diversity within its schools, lest they become cultural or religious ghettos? Yes. To do away with faith-based education entirely would be to do away with some of the best and most over-subscribed schools in the country. Would that be a price worth paying for a more cohesive society, or a monstrous display of religious intolerance? A price worth paying. The morality of faith schools.

 

“Faith schools” is itself an oxymoron. You can’t claim to educate children while simultaneously lying to them, and teaching them that thinking should be subservient to belief. As I’ve said somewhere before, to teach children about faith is commendable, to tell them that you have faith is acceptable, and that’s the extent of the role religion should play in education. To tell them that God exists (when he very clearly doesn’t) is an abuse of authority, and banning people from doing that is not religious intolerance, it’s child protection. Every educator should be working as much as they can to oppose the continuance of this anachronism, or get out of the profession.

 

Higher Education from the sidelines

This week I sent off my 13th job application since finishing my last HE post. I’m still working as a consultant developing online course content, and as an external examiner, but for three months now I’ve been able to look at the role of TEL from an outside perspective – and 13 applications means 13 institutions analysed in some depth.

Several things have struck me. Obviously even a sample base of 13 isn’t enough to form any generalisations about the sector, but I’m going to anyway.

The first is the general contempt HR departments display towards people. Not one of the 13 institutions contacted me to let me know that my application had been rejected. The one I got to interview with informed me I had an interview, and then didn’t contact me again afterwards. Considering that an application takes at least half a day to complete, sometimes two if it’s a lengthy online form with a huge person spec, it’s incumbent on the institution to at least spend the twenty seconds it takes to cut and paste an email address into a bulk email. There simply is no excuse; it seems to be a practice solely in place to demean applicants and re-assert the power difference between employer and potential employee.

The second is how the terminology for job roles seems to be becoming more specific. At one time the role of TEL adviser could fall under any name, but now lecturer in TEL, learning developer or academic adviser seems to cover it all. The most recent post I applied for asked for someone experienced in working with content developers. The idea that having a specialism in how people learn and having a specialism in putting materials together in a user-friendly and aesthetic way are two different skillsets now seems to be generally accepted. I had noticed this a couple of years ago, the last time I was in the market for a new contract; it’s reassuring to see it’s embedded.

Having said that, a worrying trend seems to be that institutions are increasingly expecting people in TEL to be able to teach about the technology itself. I’ve seen person specs which want the TEL adviser to also be able to teach programming, or teach how to use the software (I mean, just look on youtube, or RTFM), or even have a background in research into software design. Of all the things changing in the field, I think this is the most worrying. If you look at the leaders in the discipline they come from all over: fine arts, biology, English literature, psychology, you name it. TEL is a means to an end, but that end can be any form of teaching. It’s not that computer scientists can’t use TEL well in their teaching (I worked with some at Coventry), but if you’re going to advise other people and support them in using technology in their teaching, then the biggest barrier you’ll face is the idea that it’s for technical people. If you’re seen as the tech guy (male specifically) then you’ve immediately set yourself up to make that barrier even more difficult to break down. Ideally (and this is my excuse) you should be seen as someone who struggles with technology too, and is sceptical about it, but still gains something from using it in their teaching. The logic of having to understand how it works in order to be able to show people how to use it in teaching escapes me; it’s like expecting anyone teaching about lecturing to be a geologist because they’re using chalk.

Another drift I’ve seen in the field since 2015 is what (if I were a software engineer) I’d call feature creep. The qualifications required to get the jobs keep expanding. The most obvious is Fellowship of the HEA. Well, I anticipated this, and am now an SFHEA. Sorted, I thought, but now added to the list for some jobs is QTS. I qualified as a teacher in 1989. I didn’t go for QTS status, even though I’d have qualified for it as I taught for five years, because it wasn’t necessary for the job back then. I’ve been teaching in HE since 2005 – roughly. My line manager at Warwick extended my role about half way through my time there (before that I was only researching) because she could see the way things were going, but I’m not sure of the precise date. So I’ve been doing that for 12 years. QTS has never cropped up before. So why now? I’m guessing it’s because it’s a means to trim applications – just pick something random that most people aren’t likely to have and you speed up the sifting process. That doesn’t stop it being frustrating, however. If I’ve been doing the job for 12 years without the qualification, and doing it well, it seems it’s not really necessary in order to do it.

A final observation is how poor the search criteria are on all the job databases. I probably get about 100 job notices a week from various databases. Of those, one is relevant about every other week. What boosts the average to one a week is the ALT mailbase. Sifting through them isn’t that time-consuming, it’s just bewildering. I cannot imagine how my criteria apply to some of the notifications I’m receiving, though it’s occasionally good to get the completely off-the-wall ones, I suppose.