Moves to games consoles
In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (which I edited with Greg Withnail). In it I made six predictions about how virtual worlds would develop over the following five years. At best, I got one of them half-right. The rest were almost entirely wrong.
This year, I’m developing a course on Educational Futures in which I’m looking at what makes a prediction effective, or poor. Rather than make someone else look like an idiot, I’m using the predictions I made myself. The idea is for students to look at the text and work out how I got it so badly wrong in most cases.
What follows is not exactly the text from the book; I’ve tweaked it only so that it works on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end, to sum up how well I did.
Move to games consoles. A move in the other direction, towards more sophisticated technologies, is the repositioning of virtual worlds to run on games consoles such as the PlayStation 3 or the Xbox 360. Games consoles have very sophisticated graphics processors, and the quality of rendering in games is much higher than is available on most PCs. Many massively multiplayer online games are already available on games consoles, and shared virtual worlds such as Minecraft, previously running on PCs, have made the transition to this technology. In Minecraft’s case this has proved immensely popular[i]. The advantages of running virtual worlds on games consoles are due not just to the more sophisticated graphics available, but also to the control devices. Many people find games controllers a more intuitive mechanism for controlling the movement of an avatar than keys on a keyboard. However, text chat and drag-and-drop functionality are less well integrated.
The next generation of games controllers offers even more interactivity, as they can detect users’ physical movements through cameras and motion detectors. Devices such as the Xbox 360 Kinect controller have already been used to animate avatars. There are two ways in which this can be done: either avatars are animated inworld by physical actions triggering pre-set animations (for example, the act of raising your hand triggers a hand-raising animation), or, as in the work of Fumi Iseki and a team at Tokyo University[ii], the motion data is used to animate avatars in realtime, but in a local viewer only. Because avatars are animated inworld using preloaded animation files, there is no way with current technology to map motion capture to the inworld movements of avatars in realtime.
This opens up the potential for a new, closer relationship between user and avatar. As Jelena Guga notes[iii], this will be the next step change in the developing degrees of immersion enabled by changes in technology. Although the sense of immersion may be increased, requiring the user to be physically active may also, simultaneously, make them more aware of their physical body while interacting inworld, so their sense of embodiment may actually be reduced. The individual experience of virtual worlds varies enormously, and we will probably discover that whether physically operating an avatar increases or reduces the sense of engagement inworld differs from person to person. Another consideration is that a one-to-one correspondence between physical action and the resulting motion of the avatar is, as Stelarc points out[iv], possibly the least interesting way to use motion recognition to animate avatars. In his performances, Stelarc uses his body to create inworld performances, but his gestures cause his avatar to fly, float, operate cyborg attachments and so on.
From a learning point of view, a move to games consoles could have advantages and disadvantages. It would overcome some of the objections to virtual worlds regarding low-resolution graphics and technical issues such as slow rendering times and lag. However, it could marginalise activity even further, since few computer suites in universities have games consoles, and it cannot be guaranteed that all users will have access to them. Developing motion-controlled interfaces would address an issue some users raise: that operating within a virtual world is too sedentary an experience. Offering the opportunity to operate avatars through physical motion may appeal to these users, though indications are that they actually find the virtual nature of these experiences intrinsically problematic, equating the virtual with the inauthentic. Nevertheless, the use of a motion recognition system will offer interesting opportunities for performance.
[i] M. Hawkins, ‘“Minecraft” on Xbox Live a smash success’, MSNBC, May 12th, 2012, http://www.ingame.msnbc.msn.com/technology/ingame/minecraft-xbox-live-smash-success-766955
[ii] Second Lie, ‘Kinect Hack Brings Real Time Animation To Second Life’, November 2011, http://second-lie.blogspot.co.uk/2011/11/kinect-hack-brings-real-time-animation.html
[iii] J. Guga, ‘Redefining Embodiment through Hyperterminality’, Virtual Futures 2.0, University of Warwick, 18th – 19th June, 2011.
[iv] Stelarc, Keynote, From Black Box to Second Life: Theatre and Performance in Virtual Worlds, University of Hull, Scarborough, May 20th, 2011.
Prescience Factor: 4/10. The only thing I nailed here was that consoles would become more of a platform for social-world-style interaction. Lots of RPGs now allow users to build spaces in a shared virtual environment, not necessarily in service of the game directly, but simply to settle into a permanent online 3D space. The flexibility of the spaces and avatar interactions in games such as Conan Exiles or Fortnite Creative is more limited than in a full social virtual world, but you could conceivably create a home and then invite someone round for a chat.