Predicting virtual worlds #5

Augmented reality

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail), in which I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. At best, I got one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Augmented reality. One function of many mobile devices is that they can combine camera images with an overlay of additional information. In the same way that a global position and orientation can be used to calculate the position of stars as seen from a particular viewpoint, the same data can be used to determine at which geographical location the tablet is being pointed. These data can then be combined with a database of information to create an overlay of text to explain, for example, the historical background of a building, or the direction and distance of the nearest Underground station or Irish pub. Locations can be digitally tagged, either with additional information (such as in a learning exercise with students adding their own content to locations), artwork, or even graffiti[i]. As with astronomy apps such as Google Sky or Sky Safari (discussed in prediction #4, below), this provides learning in situ and adds a kinaesthetic element to the activity.
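To make that mechanism concrete, here is a minimal sketch in Python of the geometry such an overlay relies on, assuming only a GPS fix and a compass heading are available; the coordinates, field of view and function names are illustrative rather than any particular app’s API.

```python
import math

def bearing_to_poi(lat, lon, poi_lat, poi_lon):
    """Initial great-circle bearing (degrees clockwise from north) from the
    device's position to a geotagged point of interest."""
    phi1, phi2 = math.radians(lat), math.radians(poi_lat)
    d_lon = math.radians(poi_lon - lon)
    x = math.sin(d_lon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(d_lon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def poi_in_view(device_heading, poi_bearing, fov_degrees=60):
    """True if the point of interest falls within the camera's horizontal
    field of view, given the compass heading the device is pointed along."""
    offset = (poi_bearing - device_heading + 180) % 360 - 180  # signed angle, -180..180
    return abs(offset) <= fov_degrees / 2

# Illustrative only: a device near Trafalgar Square, pointed north-east (heading 45 degrees)
b = bearing_to_poi(51.508, -0.128, 51.5115, -0.120)
print(round(b), poi_in_view(45, b))  # the overlay would draw this location's label
```

Everything after the in-view test — the label text, the distance, the historical notes — is then a straightforward lookup against the database of tagged locations.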

The potential of combining geotagged images onto the physical world is indicated by augmented reality games such as Paranormal Activity: Sanctuary[ii]. In this, images of ghosts are located at particular physical world co-ordinates, and can be seen with a dedicated iPhone app that overlays them onto the camera image. Players can create sanctuaries, or cast spells, at locations which then influence the experience of other players. The game therefore becomes a massively multiplayer roleplaying game played in a blending of the physical and a virtual world.

Greater precision than that enabled by global positioning can be provided through Radio Frequency Identification (RFID) tags, recognition of which will soon be possible on mobile devices[iii]. By placing an RFID tag in clothing, or furniture, or on a person, information about that object or person (i.e. metadata) is then always available whenever a device is pointed at them. For example, products could be linked directly to their user manual: simply hold your tablet PC over your oven and pop-up boxes appear over the knobs decoding the icons; or attend a conference and each person there could have information linked to them, such as name, institution and research interests, revealed by holding up your phone and tapping their image on the screen. Several museums and exhibitions already have augmented reality exhibits; when a room is looked at through an AR viewer, the physical objects in the room are overlain with animations or animated characters, bringing the static displays to life[iv]. A further enhancement of augmented reality is achieved by enabling the animated characters to address the attendee directly, with their gaze following the attendee around the room as they are tracked through the use of an RFID bracelet[v]. The characters can address many attendees simultaneously since, from the perspective of each, the character is looking at them individually, a transformed social interaction known as non-zero-sum mutual gaze[vi]. These interactions could be made more seamless by plans to create AR projections within glasses[vii]. Rather than clicking on a screen, input can be through the detection of hand movements[viii] or, for the mobility-impaired, deliberate blinking[ix].
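Once a tag has been recognised, the lookup step itself is simple; the sketch below assumes a hypothetical tag-reading step has already produced an ID, and the IDs and metadata fields are invented for illustration.

```python
# Hypothetical tag IDs and metadata records; a real system would populate
# this store from an institutional database rather than a literal dict.
TAG_DB = {
    "04:A2:9F:1B": {"type": "person", "name": "A. Delegate",
                    "institution": "Example University",
                    "interests": ["virtual worlds", "augmented reality"]},
    "04:77:C3:E8": {"type": "object", "name": "Oven, model X",
                    "manual_url": "http://example.com/manuals/oven-x"},
}

def overlay_for(tag_id):
    """Return the metadata to draw over the camera image, or None if unknown."""
    return TAG_DB.get(tag_id)

print(overlay_for("04:A2:9F:1B"))
```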

If this is possible with pre-recorded characters, then it is only a short leap to enabling this to take place with avatars or bots in realtime, by layering the virtual world image onto the physical as it is created. This activity resembles the mixed reality performances created by Joff Chafer and Ian Upton; originally these performances used images from a virtual world projected onto a gauze, so that avatars could share the stage with physical world actors, and more recently Chafer and Upton have used 3D imaging to bring the virtual world images out from the screen and into a physical space[x]. Capturing the images of avatars in the virtual world, and geotagging them, would enable people with the appropriate AR viewer to see avatars moving and communicating all around them. As the sophistication of bots develops, their use as companion agents guiding learners through virtual learning scenarios could be brought into the physical world, as guides and mentors seen only by the learner through their AR viewer. With ways of imaging the avatars through something as immersive as AR glasses, physical world participants and avatars could interact on an equal footing.

For learning and teaching, the advantages of blending the functionality and flexibility of the virtual and the real are enormous. For learners who see virtual learning as inauthentic, relating virtual world learning directly to the physical may overcome many of their objections. The integration of an object with its metadata, as well as data providing context for that object (called paradata), is easily done in a virtual world; AR in combination with RFID tagging enables this feature to be deployed in the physical world too, since information, ideas and artefacts can be intrinsically and easily linked. User-generated content, which again is simply created and shared in the virtual, can also be introduced to the physical. Participation at a distance, on an equivalent footing with participation face-to-face, could be achieved by the appearance of avatars in the physical environment and by RFID tagging the physically-present participants and objects.
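As a rough sketch of the metadata/paradata distinction, an RFID-tagged artefact might carry both layers of information like this; the field names are invented for illustration, not drawn from any real museum system.

```python
from dataclasses import dataclass, field

@dataclass
class TaggedArtefact:
    """An artefact linked both to its metadata (what it is) and to its
    paradata (the context of its use: annotations, viewings, discussion)."""
    tag_id: str                                     # e.g. an RFID tag identifier
    metadata: dict = field(default_factory=dict)    # descriptive facts
    paradata: dict = field(default_factory=dict)    # user-generated context

exhibit = TaggedArtefact(
    tag_id="04:1D:BE:72",
    metadata={"title": "Roman amphora", "date": "1st century CE"},
    paradata={"student_notes": ["compare with the Greek example in room 3"]},
)
print(exhibit.metadata["title"], "-", exhibit.paradata["student_notes"][0])
```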

[i] New Scientist, ‘Augmented reality offers a new layer of intrigue’, 25th May, 2012. http://www.newscientist.com/article/mg21428652.600-augmented-reality-offers-a-new-layer-of-intrigue.html

[ii] Ogmento, ‘Paranormal Activity: Sanctuary’, 22nd May, 2012. http://www.ogmento.com/games/paranormal-activity-sanctuary

[iii] Marketing Vox, ‘Married to RFID, What Can AR Do for Marketers?’, 4th March, 2010. http://www.marketingvox.com/married-to-rfid-what-can-ar-do-for-marketers-046365/

[iv] Canterbury Museum, ‘Augmented reality technology brings artefacts to life’, 28th September, 2009. http://www.canterburymuseum.com/news/13/augmented-reality-technology-brings-artefacts-to-life

[v] A. Smith, ‘In South Korea, Kinect and RFID power an augmented reality theme park’, Springwise, 20th February, 2012. http://www.springwise.com/entertainment/south-korea-kinect-rfid-power-augmented-reality-theme-park/

[vi] J. Bailenson, A. Beall and M. Turk, ‘Transformed Social Interaction’, p. 432.

[vii] S. Reardon, ‘Google hints at new AR glasses in video’, New Scientist, 4th April, 2012. http://www.newscientist.com/blogs/onepercent/2012/04/google-hints-at-new-ar-glasses.html

[viii] C. de Lange, ‘What life in augmented reality could look like’, New Scientist, 24th May, 2012. http://www.newscientist.com/blogs/nstv/2012/05/what-life-in-augmented-reality-will-be-like.html

[ix] E. Iáñez, A. Úbeda, J. Azorín and C. Pérez, ‘Assistive robot application based on a RFID control architecture and a wireless EOG interface’, ScienceDirect, available online 21st May, 2012. http://www.sciencedirect.com/science/article/pii/S0921889012000620

[x] J. Chafer and I. Upton, Insert / Extract: Mixed Reality Research Workshop, November 2011. http://vimeo.com/32502129

Prescience Factor: 0/10. Despite AR apps becoming more popular since 2013, AR is still not really a thing, in the sense that it’s not an embedded part of what we do. Linking AR and virtual worlds in the way I’ve described here is no further along (as far as normal practice goes) than it was when I wrote the above.


Predicting virtual worlds #4

Gone to mobiles every one

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail), in which I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. At best, I got one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Gone to mobiles every one. The rate of take-up of virtual worlds anticipated by Gartner in 2007 has not been realised. Some predictions also state that the rate of development of the high-end graphics technology required for virtual worlds will be slowed by the adoption of mobile technology. Essid[i] notes that the tablet PCs owned by students cannot run the viewers required for Second Life, and these are now the predominant technology with which students access online learning. In addition, many apps provide innovative education, even offline, such as the use of Google Sky, Zenith or Sky Safari for learning astronomy. In these apps, the learner holds up their tablet PC and, through global positioning and inbuilt sensors that detect orientation, the tablet displays the position of stars, planets and Messier objects as they appear in the sky in the direction in which the tablet is pointed. This provides learning that is interactive, kinaesthetic, and in situ. Essid’s prediction is that the predominant use of mobile technology as the new wave of learning will stall the uptake of virtual worlds. As Essid states in his blog post on the subject:

One does not wish to be on the wrong side of history, and I think SL evangelists are clearly on the wrong side, unless they are early in their careers and have a Plan B for research and teaching.

[i] J. Essid, ‘Mobile: Shiny? Yes. Hyped? Yes. Fad? No’, 3rd May, 2012. http://iggyo.blogspot.co.uk/2012/05/mobile-shiny-yes-hyped-yes-fad-no.html

Prescience Factor: 8/10. To be fair, not my prediction really, but Joe Essid’s. The increasing usage of mobile devices has meant that learning can take place anywhere, but it has caused the development of some technologies to slow down, because as a platform mobiles are more limited: in processing power when compared to PCs, but also in speed of input (two thumbs are never as fast as ten fingers) and the readability of the screen. It’s not 10 out of 10 because I think both Joe and I underestimated the capacity and functionality that smartphones would attain by 2018. Moore’s Law is severely difficult to anticipate because it describes a geometrical increase, and this example shows why it’s almost impossible to get your head around geometrical increases.

Predicting virtual worlds #3

Moves to games consoles

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail), in which I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. At best, I got one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

Move to games consoles. A move in the other direction, to more sophisticated technologies, is the repositioning of virtual worlds to run on games consoles such as the PlayStation 3 or the Xbox 360. Games consoles have very sophisticated graphics processors, and the quality of the rendering of games is much higher than is available using most PCs. Many massively multiplayer online games are already available on games consoles, and shared virtual worlds such as Minecraft, previously running on PCs, have made the transition to this technology; in the Minecraft case this has proved immensely popular[i]. The advantages of running virtual worlds on games consoles are due not just to the more sophisticated graphics available, but also to the control devices. Many people find games controllers a more intuitive mechanism for controlling the movement of an avatar than keys on a keyboard. However, text chat and drag-and-drop functionality are less well integrated.

The next generation of games controllers offers even more interactivity, as they can detect physical actions by the users through the use of cameras and motion detectors. Devices such as the Xbox 360 Kinect controller have already been used to animate avatars. There are two ways in which this can be done: either avatars can be animated inworld through physical actions triggering pre-set animations (for example, the act of raising your hand triggers a hand-raising animation) or, as in the work of Fumi Iseki and a team at Tokyo University[ii], the motion capture is used to animate avatars in realtime, but in a local viewer only. Because avatars are animated inworld using preloaded animation files, there is no way with current technology to map motion capture to inworld movements of avatars in realtime.
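To make the first of those two approaches concrete, here is a minimal sketch of the trigger step, assuming skeleton data has already been read from a Kinect-style sensor; the joint names, the gesture rule and the animation IDs are all hypothetical.

```python
def detect_gesture(joints):
    """Map one skeleton frame (joint name -> (x, y, z), y pointing up)
    to a named gesture, or None if nothing is recognised."""
    if joints["right_hand"][1] > joints["head"][1]:
        return "raise_hand"
    return None

def animation_for(gesture):
    """Each recognised gesture triggers a pre-set inworld animation file."""
    return {"raise_hand": "anim_hand_raise"}.get(gesture)

frame = {"head": (0.0, 1.7, 0.0), "right_hand": (0.3, 1.9, 0.1)}
gesture = detect_gesture(frame)
if gesture:
    print("trigger animation:", animation_for(gesture))  # anim_hand_raise
```

The realtime approach skips the gesture lookup entirely and streams the joint positions straight into the avatar’s skeleton, which is why, with current technology, it can only be seen in a local viewer.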

This opens up the potential for a new, closer relationship between user and avatar. As Jelena Guga notes[iii], this will be the next step change in the developing degrees of immersion that have been enabled by changes in technology. Although the sense of immersion may be increased, requiring the user to be physically active may also, simultaneously, make the user more aware of their physical body while interacting inworld, so their sense of embodiment may actually be reduced. The individual experience of virtual worlds varies enormously, and whether the physical operation of an avatar increases or reduces the sense of engagement inworld is likely to differ from person to person. Another consideration is that a one-to-one correspondence between physical action and the resulting motion of the avatar is, as Stelarc points out[iv], possibly the least interesting way in which to use motion recognition to animate avatars. In his performances, Stelarc uses his body to create inworld performances, but his gestures cause his avatar to fly, float, operate cyborg attachments and so on.

From a learning point of view, a move to games consoles could have advantages and disadvantages. It would overcome some of the objections to virtual worlds with regard to low-resolution graphics, and technical issues such as slow rendering times and lag; however, it could marginalise the activity even further, since few computer suites in universities have games consoles, and it cannot be guaranteed that all users will have access to them. Developing motion-controlled interfaces would address an issue that some users report: that operating within a virtual world is too sedentary an experience. Offering the opportunity to operate avatars through physical motion may appeal to these users, though indications are that these users actually find the virtual nature of these experiences intrinsically problematic, equating the virtual with the inauthentic. However, the use of a motion recognition system will offer interesting opportunities for performance.

[i] M. Hawkins, ‘Minecraft on Xbox Live a smash success’, MSNBC, 12th May, 2012. http://www.ingame.msnbc.msn.com/technology/ingame/minecraftxboxlivesmashsuccess-766955

[ii] Second Lie, ‘Kinect Hack Brings Real Time Animation To Second Life’, November 2011. http://second-lie.blogspot.co.uk/2011/11/kinect-hack-brings-real-time-animation.html

[iii] J. Guga, ‘Redefining Embodiment through Hyperterminality’, Virtual Futures 2.0, University of Warwick, 18th – 19th June, 2011.

[iv] Stelarc, keynote, From Black Box to Second Life: Theatre and Performance in Virtual Worlds, University of Hull, Scarborough, 20th May, 2011.

Prescience Factor: 4/10. The only thing I nailed here was that consoles would become more of a platform for social-world-style interaction. Lots of RPGs now allow users to build spaces in a shared virtual environment, not necessarily in service of the game directly, but just to settle into a permanent online 3D space. The flexibility of the spaces and avatar interactions in games like, for example, Conan Exiles is more limited than in a full social virtual world, but you could potentially create a home and then invite someone round for a chat.

Predicting Virtual Worlds #2

A virtual world in your browser

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail), in which I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. At best, I got one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

A virtual world in your browser. There are numerous legitimate reasons for using standard web browsers for access to virtual worlds. The first is that the processing power, particularly of the graphics card, required to run a virtual world viewer is beyond the capacity of the technology available to many people, and particularly to institutions. Secondly, the bureaucratic hurdles many practitioners face when additional software needs to be downloaded and installed preclude the use of virtual worlds in many institutions, suffering as they do from the obstructive policies of their IT departments. Finally, enabling virtual worlds to be viewable from within a web browser means that accessing them can be easily integrated into the majority of people’s normal internet usage, potentially widening the demographic of users. The initial effort required to begin using them in an educational situation would consequently be reduced.

It would be reasonable to anticipate that these factors would lead to the usage of virtual worlds becoming much more widespread. Making virtual worlds viewable through the web should, then, have been very successful; in practice, though, Google’s browser-based world Lively only lasted for the second half of 2008. Newer virtual worlds, such as Kitely, although trying to widen the demographic of potential users by offering access through other platforms such as Facebook and Twitter, have returned to viewer-based technology rather than being browser-based.

The reasons for the failure of Lively are still being discussed. The direct experience of those contributing to this chapter, however, is that reducing the functionality of the virtual world in order to enable it to work within a browser removed the elements that made a virtual world worth pursuing. The sense of immersion was reduced, the opportunities to create and interact with virtual artefacts within the world were lessened, and consequently the rapid adoption by the marketplace, needed for the survival of any social medium, did not materialise. Lively disappeared before many people realised it had been launched, and new web-based viewers have not emerged to take its place.

Prescience Factor: 0/10. A total overestimation of the versatility and processing power that browsers would have by now.

Predicting virtual worlds #1

The Metaverse Lives

In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail), in which I predicted what would happen in the development of virtual worlds over the following five years. I made six different predictions. At best, I got one of them half-right; the rest were almost entirely wrong.

This year, I’m developing a course on Educational Futures in which I’m looking at what makes an effective, or a poor, prediction. Rather than make someone else look like an idiot, I’m looking at the predictions I made. The idea is for students to look at the text and work out how I got it so badly wrong in most of the cases.

The following is not entirely the text from the book, but I’ve only tweaked it so it will work on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.

The metaverse lives. Of the chapters in the book, four use Second Life, one uses OpenSim, one World of Warcraft, one uses a 2D multimedia website, and one began with Second Life and then, due to the price increases imposed by Linden Lab, moved to OpenSim. From this (admittedly small) sample, it appears that Second Life is still the strongest contender as a platform to host virtual world activity, but that educators are becoming more likely to consider alternative, though similar, platforms, with OpenSim leading the way.

Educators’ dissatisfaction with, and the expense of, Second Life is beginning to cause fragmentation of the virtual world community. Whereas before it was almost guaranteed that educators would share a single grid, increasingly they are becoming spread across a range of different platforms. One saving grace of this diaspora is that many of the most popular of these virtual worlds use the same viewer. Whether one uses the Second Life viewer, Imprudence, Phoenix, Firestorm or any of a number of others, once a user has learned to interact with the world using that particular interface, it is of little difficulty to switch to another world. This is particularly important with virtual worlds as a technology (more so than with, for example, a word-processing package or an online forum), since what is required for an effective learning opportunity is immediacy of experience rather than hypermediacy; any change in the interface is extremely disruptive, because it makes the technology more visible and reduces the transparent nature of the interaction.

However, although they are operated in the same manner, the grids remain separate. The step that will reintegrate this fragmented community, and enable educators once again to easily share and visit each other’s educational resources, will be the successful employment of hypergridding. Hypergridding is the connecting of these separate virtual worlds to create a collection of linked worlds: an example of Stephenson’s metaverse. Once it becomes possible to move not only avatars, but also their inventories, from world to world, these separate grids will perform as a single platform; so, for example, objects purchased within Second Life (which has a thriving creators’ market) could be employed within OpenSim (which gives institutions greater control over privacy and ownership of the space). This would greatly expand the choices and flexibility of virtual worlds for educators, and enable far more effective collaboration. Simple and effective hypergridding is close to deployment but, as of writing in 2012, has not been realised.
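For a sense of what the linking involves at its simplest level, OpenSim-style hypergrid destinations are addressed by host, port and region name; the sketch below just parses that address format and is purely illustrative (a working hypergrid teleport also involves identity, asset and inventory services, which is the hard part).

```python
def parse_hypergrid_address(address):
    """Split an address like 'hg.osgrid.org:80:Lbsa Plaza' into its parts."""
    host, port, region = address.split(":", 2)
    return {"host": host, "port": int(port), "region": region}

print(parse_hypergrid_address("hg.osgrid.org:80:Lbsa Plaza"))
# {'host': 'hg.osgrid.org', 'port': 80, 'region': 'Lbsa Plaza'}
```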

Prescience Factor: 0/10. Hypergridding is not a thing.

Sex with robots: the case against the case against Part two

Taking apart the interview with Kathleen Richardson (covered in Part one, below), and the logic behind the argument, we get to these statements.

“Sex dolls and sex robots in the form of women and girls do something else. In the mind of someone buying and using them – they ARE women and girls. They are designed deliberately to resemble women and girls because they want the man buying and using the dolls to believe it is a woman or girl. These are markedly different things. Sex dolls and mechanical dolls in the form of women and girls play on the idea that women are orifices to be penetrated.

Imagery that dehumanises others in order to justify rule over them serves a political purpose. These sex dolls of women and girls are serving a political purpose to reinforce the idea that women and girls are sub-humans/orifices.”

“In the mind of someone buying and using them – they ARE women and girls.”

This doesn’t follow at all; it needs some evidence to back it up. It’s just as likely that someone having sex with a robot just wants sex with a robot. Maybe it plays on the idea, but it’s equally likely that that’s just play. There are a huge number of presumptions here, none of which are supported by research.

“Imagery that dehumanises others in order to justify rule over them serves a political purpose.” True. This is what makes the argument such a problematic one. Dropping in valid political statements that everyone can agree with, but then indicating a consequence that is no consequence, is a standard bait-and-switch ploy: you agree with statement A, and A causes B, therefore you have to agree with B. Everyone can agree that there is systemic oppression of women in a patriarchal society, and that this is formed by men with power in society. That sex dolls contribute to this is not at all evident, though. The power of this as a series of statements is that if you oppose B (because the “therefore” is not proven) then somehow you are against A. It’s a specious and underhand way of carrying your argument.

What makes this “therefore” unlikely is that although men with power rule over women, men with sex dolls are rarely men with power. One of the areas I looked at in my work with avatars is the role of zeta males in many of the activities in virtual worlds. It is the men who have little or no power that compensate for this lack of power in their own lives by playing at being powerful in their fantasies. Their actions have no impact on wider society because nothing they do has impact.

OK generalisation there, which I admit. See how that works as a way to obfuscate relationships between concepts though? Zeta males have no power, only zeta males have sex with dolls, having sex with dolls therefore has no impact on society.

There may be a link. There may not. Acting on suspicions though is not really very ethical.

I suppose the bottom line for any ethical debate is: do you deny a group of (some would call creepy) males the expression of their sexuality, out of caution that their actions may exacerbate the oppression of all females, or not? It’s a classic deontological vs consequentialist dilemma. Do you take the chance of conducting a possibly unnecessary act of oppression on a minority group, as a member of a group owning more power than those you oppress? If you’re a woman looking at the Prof Richardson v. agalmatophiles debate, whose side do you take: that of the powerful majority, or the powerless minority?

While you’re considering that, I’ll remind you of another analogy. When the pigs finally get to run things in Animal Farm, they end up being just as bad as the people they replaced. Power is intoxicating: you get to control things so that you can make them the way you want them to be. The consequences for a few disenfranchised people needn’t worry you if you’re not one of them. Prof Richardson has a platform, the agalmatophiles do not; it is evident where the power lies in this debate.

“Four legs good. Two legs better.” should haunt anyone acquiring power; adopt the ethical stance of checking you’re not simply replicating the iniquities of those who’ve had the power before you.

Sex with robots: the case against the case against Part one

One of the sites I often read to get a good line on an ethical issue is Conatus News. It’s generally progressive and liberal, and usually well-argued. It offers a range of opinions without contesting them, which is open-minded. Some of those opinions, though, make my skin crawl. This article https://conatusnews.com/kathleen-richardson-sex-robots/ was one of them.

It’s an interview with Kathleen Richardson, Professor of Ethics and Culture of Robots and AI at the Centre for Computing and Social Responsibility (CCSR) at De Montfort University and spearhead of The Campaign Against Sex Robots. The campaign’s rationale is that sex robots exacerbate the objectification of women. I get the impression from the argument made that that’s not what’s really going on.

The first alarm bells in the argument are some unsupported (and from what I know, plain wrong) statements. Here’s one:

“In the last twenty years, with the age of the ‘cyborg’ informed by anti-humanism and non-human distinctiveness, there has been this prevailing sense that humans and machines are equivalent. This implies that the only difference between a machine and a human is the ‘man who is creating it’ rather than some empirical and radical difference between a human and an artefact.”

In fact, if anything, the more people have looked at recreating consciousness, the more they’ve realised how essentially different the two are. While soft AI is being achieved, hard AI looks like an ever more distant, if not impossible, goal. In The Emperor’s New Mind (26 years old now), Roger Penrose made some telling arguments about the differences: that no systematic, machine-like process can replicate the organic creation of thought. The Turing test is being failed more often than it used to be, because even though bots are being programmed better, the people judging are getting better at telling the difference. If anything, from the bits of research I’ve done, the increase is in false positives rather than false negatives; that is, rather than people mistaking bots for humans, people are mistaking humans for bots. Our standards for what makes something human-like are getting higher. Robots are falling behind.

Next one: “It has led to robotic scientists arguing that machines could be ‘social’”.

This is not what social robotics is. Social robotics is looking at the elements that enable robots to fit into society, not at considering them to actually “have” society. This is a deliberate misrepresentation.

Now we come to the quite disturbing part of the argument.

“If a person felt like they were in a relationship with a machine, then they were. In this way, two seemingly different ways of understanding the world came together to support arguments for human relationships with machines. The first was the breakdown in distinction between humans and machines. The second was the egocentric, individualistic, patriarchal model (‘I think therefore I am’) – what I am thinking, feeling, and experience is the only thing that counts. I am an egocentric individual.”

One of the fascinating things about having worked in virtual worlds is that you come across a whole range of people. A lot of them are finding self-expression in ways that they couldn’t do in the physical world. A lot of them are finding ways to connect with parts of their identity that weren’t possible in the physical world. Sometimes it’s society, or it can be identity tourism. Quite a few were exploring their paraphilias.

Agalmatophilia is sexual attraction towards inanimate objects: dolls, mannequins … robots. It’s a thing, and real for the people who experience it. One of the major social movements of the last fifty years is the development of a more permissive outlook on sexuality. It’s complemented feminism, gay rights and, more recently, transgender rights. Even before gay rights legislation made discrimination on grounds of sexuality illegal, you’d hear homophobes say things like “well I don’t like it, but if they do it behind closed doors, then I don’t have a problem with it”. Not the best attitude, but it underlines that an essential element of permissiveness is that if it’s between consenting adults, free and able to give their consent, then it’s not for us to get involved. Or to judge. If even some homophobes get that, we should be able to do even better.

“If a person feels like they are in a relationship with a machine, then they are.” “What I am thinking, feeling, and experience is the only thing that counts.” Those are positions Prof Richardson is critical of. But if we are to respect all sexual expression (between consenting adults, free and able to give their consent), and we are, then we have to accept people’s own definitions of identity, sexuality, gender, and so on. That’s not patriarchal (in fact, the attitude has stood against the patriarchy in the past); it’s not egocentric (any more than respecting someone’s identity in terms of sexuality, gender, religion etc. is). It’s respect.

It’s respect for people who think and feel and experience pleasure and sex differently, in ways we might feel uncomfortable recognising. Which, I guess, is what makes it hard for the neopuritans, of whom Prof Richardson appears to be one. I assume she is; otherwise, why dismiss something that doesn’t meet with her recognition of legitimate human experience?

It must be tricky times for the neopuritans: wanting to monitor and dictate what happens in private, between consenting adults (free and able to give their consent), but finding that homosexuality and transsexuality are no longer legitimate targets. Who else is next? Let’s identify a remaining marginalised form of experience. Let’s go for the agalmatophiles. As Prof Richardson says later in her interview, “I think, most people would agree they’re a bit creepy”. Yep, like most people agreed gay people were a bit creepy a few decades ago? But if we target those who enjoy that sort of thing, and dress up our distaste for what we’ve deemed corrupt and perverse with words like “patriarchy”, that’ll make it look more liberal.

And if you’re thinking that wanting a relationship with a doll is a bit weird, so why stand up for agalmatophiles, there’s a poem by Martin Niemöller you need to re-read.

So yes, “two seemingly different ways of understanding the world” have come together in Prof Richardson’s argument, but those two things are luddism and neopuritanism, basically fear of technology and fear of other forms of sexuality.

There are some more unethical opinions stated during the second part of the interview. I’ll leave them for the next post.