Augmented reality
In 2013 I wrote the concluding chapter for Experiential Learning in Virtual Worlds (edited by me and Greg Withnail), in which I made six predictions about how virtual worlds would develop over the following five years. At best, I got one of them half-right; the rest were almost entirely wrong.
This year, I’m developing a course on Educational Futures in which I look at what makes a prediction effective, or poor. Rather than make someone else look like an idiot, I’m using the predictions I made myself. The idea is for students to examine the text and work out how I got it so badly wrong in most cases.
The following is not exactly the text from the book; I’ve tweaked it only so that it works on its own rather than as part of a concluding chapter. I’ve also added a prescience factor at the end to sum up how well I did.
Augmented reality. One function of many mobile devices is that they can combine the camera image with an overlay of additional information. In the same way that global position and orientation can be used to calculate the position of stars as seen from a particular viewpoint, they can also be used to determine the geographical location at which the tablet is being pointed. These data can then be combined with a database of information to create an overlay of text explaining, for example, the historical background of a building, or the direction and distance of the nearest Underground station or Irish pub. Locations can be digitally tagged, either with additional information (such as in a learning exercise with students adding their own content to locations), artwork, or even graffiti[i]. As with astronomy apps, this provides learning in situ and adds a kinaesthetic element to the activity.
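The position-and-orientation computation described above boils down to working out how far away a point of interest is and in which compass direction it lies, so the app knows where on the camera image to draw a label. A minimal sketch, assuming decimal-degree GPS coordinates; the function name and Earth-radius constant are my own, not taken from any particular AR toolkit:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial compass bearing (degrees)
    from the device at (lat1, lon1) to a point of interest at (lat2, lon2),
    all coordinates in decimal degrees."""
    r = 6371000.0  # mean Earth radius in metres (assumed spherical Earth)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for the distance
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * r * math.asin(math.sqrt(a))
    # Initial bearing, normalised to 0-360 degrees (0 = north, 90 = east)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing
```

Comparing the returned bearing with the device's compass heading then tells the app whether the point of interest is currently in the camera's field of view.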
The potential of overlaying geotagged images onto the physical world is indicated by augmented reality games such as Paranormal Activity: Sanctuary[ii]. In this, images of ghosts are located at particular physical-world co-ordinates and can be seen with a dedicated iPhone app that overlays them onto the camera image. Players can create sanctuaries, or cast spells, at locations which then influence the experience of other players. The game therefore becomes a massively multiplayer role-playing game played in a blend of the physical and a virtual world.
Greater precision than that enabled by global positioning can be provided through Radio Frequency Identification (RFID) tags, readers for which will soon be available in mobile devices[iii]. By placing an RFID tag in clothing or furniture, or on a person, information about that object or person (i.e. metadata) is then always available whenever a device is pointed at them. For example, products could be linked directly to their user manual: simply hold your tablet PC over your oven and pop-up boxes appear over the knobs decoding the icons; or attend a conference at which each person has information linked to them, such as name, institution and research interests, revealed by holding up your phone and tapping their image on the screen. Several museums and exhibitions already have augmented reality exhibits; when a room is viewed through an AR viewer, the physical objects in it are overlain with animations or animated characters, bringing the static displays to life[iv]. A further enhancement is achieved by enabling the animated characters to address attendees directly, their gaze following each attendee around the room as they are tracked through an RFID bracelet[v]. The characters can address many attendees simultaneously since, from the perspective of each, the character is looking at them individually, a transformed social interaction known as non-zero-sum mutual gaze[vi]. These interactions could be made more seamless by plans to create AR projections within glasses[vii]. Rather than clicking on a screen, input can be through the detection of hand movements[viii] or, for the mobility-impaired, deliberate blinking[ix].
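The tag-to-metadata link described above is, at its simplest, a lookup from a scanned tag ID to stored information about the tagged object or person. A minimal sketch, with invented tag IDs and an in-memory dictionary standing in for a real RFID reader and database:

```python
from dataclasses import dataclass, field

@dataclass
class TaggedObject:
    """A physical object or person carrying an RFID tag."""
    name: str
    metadata: dict = field(default_factory=dict)

# Hypothetical registry mapping RFID tag IDs to overlay metadata;
# in practice this would be a networked database keyed by real tag UIDs.
registry = {
    "04:A2:19:F7": TaggedObject("oven", {"manual": "https://example.com/oven.pdf"}),
    "04:B3:22:01": TaggedObject("delegate", {"name": "A. Researcher",
                                             "institution": "Example University"}),
}

def lookup(tag_id):
    """Return the metadata to overlay when the camera is pointed at a tag,
    or an empty dict for an unrecognised tag."""
    obj = registry.get(tag_id)
    return obj.metadata if obj else {}
```

The AR viewer would call something like `lookup()` each time the reader detects a tag, then render the returned fields as the pop-up boxes described above.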
If this is possible with pre-recorded characters, then it is only a short leap to doing the same with avatars or bots in real time, by layering the virtual world image onto the physical as it is created. This activity resembles the mixed reality performances created by Joff Chafer and Ian Upton; originally these used images from a virtual world projected onto a gauze, so that avatars could share the stage with physical world actors, and more recently Chafer and Upton have used 3D imaging to bring the virtual world images out from the screen and into a physical space[x]. Capturing the images of avatars in the virtual world, and geotagging them, would enable people with the appropriate AR viewer to see avatars moving and communicating all around them. As the sophistication of bots develops, their use as companion agents, guiding learners through virtual learning scenarios, could be brought into the physical world as guides and mentors seen only by the learner through their AR viewer. With ways of imaging avatars through something as immersive as AR glasses, physical world participants and avatars could interact on an equal footing.
For learning and teaching, the advantages of blending the functionality and flexibility of the virtual and the real are enormous. For learners who see virtual learning as inauthentic, relating virtual world learning directly to the physical may overcome many of their objections. The integration of an object with its metadata, and with data providing context for that object (called paradata), is easily done in a virtual world; AR in combination with RFID tagging enables this feature to be deployed in the physical world too, since information, ideas and artefacts can be intrinsically and easily linked. User-generated content, which again is simply created and shared in the virtual world, can also be introduced to the physical one. Participation at a distance, on an equivalent footing with participation face-to-face, could be achieved by the appearance of avatars in the physical environment and by RFID tagging the physically-present participants and objects.
[i] New Scientist, ‘Augmented reality offers a new layer of intrigue’, 25th May 2012. http://www.newscientist.com/article/mg21428652.600-augmented-reality-offers-a-new-layer-of-intrigue.html
[ii] Ogmento, ‘Paranormal Activity: Sanctuary’, 22nd May 2012. http://www.ogmento.com/games/paranormal-activity-sanctuary
[iii] Marketing Vox, ‘Married to RFID, What Can AR Do for Marketers?’, 4th March 2010. http://www.marketingvox.com/married-to-rfid-what-can-ar-do-for-marketers-046365/
[iv] Canterbury Museum, ‘Augmented reality technology brings artefacts to life’, 28th September 2009. http://www.canterburymuseum.com/news/13/augmented-reality-technology-brings-artefacts-to-life
[v] A. Smith, ‘In South Korea, Kinect and RFID power an augmented reality theme park’, Springwise, 20th February 2012. http://www.springwise.com/entertainment/south-korea-kinect-rfid-power-augmented-reality-theme-park/
[vi] J. Bailenson, A. Beall and M. Turk, ‘Transformed Social Interaction’, p. 432
[vii] S. Reardon, ‘Google hints at new AR glasses in video’, New Scientist, 4th April 2012. http://www.newscientist.com/blogs/onepercent/2012/04/google-hints-at-new-ar-glasses.html
[viii] C. de Lange, ‘What life in augmented reality could look like’, New Scientist, 24th May 2012. http://www.newscientist.com/blogs/nstv/2012/05/what-life-in-augmented-reality-will-be-like.html
[ix] Eduardo Iáñez, Andrés Úbeda, José Azorín and Carlos Pérez, ‘Assistive robot application based on a RFID control architecture and a wireless EOG interface’, ScienceDirect, available online 21st May 2012. http://www.sciencedirect.com/science/article/pii/S0921889012000620
[x] Joff Chafer and Ian Upton, ‘Insert / Extract: Mixed Reality Research Workshop’, November 2011. http://vimeo.com/32502129
Prescience Factor: 0/10. Despite AR apps becoming more popular since 2013, AR is still not really a thing, in that it’s not an embedded part of what we do. Linking AR and virtual worlds in the way I described here is no further along (as far as normal practice goes) than it was when I wrote the above.