When I spent a lot of time in virtual worlds, it would sometimes surprise me that I’d be standing with a group of other avatars (i.e. figures operated by people) and there’d be an agent or bot there (i.e. a figure operated by a software program), and the avatars would carry on chatting to the bot, completely oblivious to the fact that it wasn’t a person. Recently I needed to contact eBay about shipping costs from the US, and the whole time I was chatting to their helpdesk I was wondering whether I was talking to a human or a machine. I couldn’t really tell, which is either an indication of how good the programming is, or of how bad the guy I was talking to was at sounding human. I was talking to my brother about this a week or so ago and he said it never occurred to him to think about it. And if it did, would it matter? I guess it doesn’t. I got the answer I wanted, but it still felt a bit unnerving to not actually know.
One of the things I ended up looking for when studying user experiences in virtual worlds was this tendency, and ability, to apply the Turing test (which gets explained below). I called the process turing, so by back-formation (pretending it’s a verb to start with) you get people who tend to ture, and people who don’t. And people who are bad at it.
I started to write a paper about this, but never got far with it. One thing that I’d like to do, and haven’t had the opportunity to, is actually run some experiments on this. So instead of struggling on with the paper, I’ve posted it here.
I like big bots and I cannot lie
Being able to tell the difference between agents and avatars may not be important for people. For example, Nowak and Biocca found that, when participants were asked about their perceptions of copresence with avatars or agents, the degrees of copresence were equally high for both.
“Given that the means in all conditions were well above the middle of the scale (representing relatively high levels of presence), it seems that users felt they had access to another mind and that the mind was attending to them and that they felt present in the virtual environment regardless of whether they interacted with an agent or avatar” (2003; 490).
This presumes that, for people’s interactions with agents to be highly effective, they must be similar to those with avatars. Evidence supporting this assumption can be seen in experiments such as that of Kaptein et al (2011; 270), where it was found that social praise initiated by an agent contributed to the user liking the agent, but did not increase feelings of copresence, since this praise was occasionally mistimed.
Draude specifically identifies trust as promoting a bond between agents and users (2011; 322), which, when one considers the central role that trust plays in teamworking and collaboration (Ring and Van de Ven, 1994; 93), may be a better indicator of the effectiveness of human-agent relationships. In fact, trust in, or at least self-confidence in the presence of, agents may be stronger than that in avatars. In a study reported by Blascovich and Bailenson, users were asked to perform easy and hard tasks in front of no audience, in front of an audience of agents, and in front of an audience of avatars. The participants performed equally well at the easy tasks in all three circumstances, but on hard tasks they performed significantly worse when performing in front of avatars. The conclusion was that the agents were not seen to be judging the performance, whereas the avatars were, and this inhibited the participants’ performance (Blascovich and Bailenson, 2011; 92).
These factors influencing the relationship between humans and agents are not simply a function of the behaviour of the agent, however, as individual differences between participants also have a bearing on this interaction. Bayne (2008; 204) notes the differing degrees to which uncanniness affects students, and no matter where the design of the agent falls on the Uncanny Valley curve, some participants will always report feelings of unease at the thought of communicating with an artificial intelligence (Gemma Tombs, personal correspondence). For others, the relationship they have with the agent may not differ substantially from the one they have with humans; Morgan and Morgan (2007; 334) report the statements of Reeves and Nass that “suggest that participants respond to computers socially, or in ways that are similar to their responses to other humans” and of Kiesler that people “keep promises to computer in the same way that they do to real life human beings”, and, as stated above, some will actually feel more confident in the presence of agents than in front of humans.
As a final addition to the complexity of human-agent, compared to human-avatar, relationships, participants also differ in their tendency, or even their ability, to determine whether a character’s agency is human or artificial in origin; characteristics which have been referred to as a turing tendency and a turing ability (Childs, 2010; 72). The Turing test was first proposed by Alan Turing in 1950 (Donath, 2000; 300) as a means to determine whether an artificial intelligence was thinking like a human. The essential element of the test was that a person would communicate through text with either a person or a computer, and if it was not possible to distinguish between the two, then the computer could be displaying intelligence. In some studies, agents taking part in online conversations have successfully mimicked human behaviour sufficiently well to pass as human for a short while (Murray, 1997; 219-226, Donath, 2000; 302). Some participants, however, may make an inaccurate categorisation in the other direction. In a study reported by Slater and Steed (2002; 153) a participant:
“Formed the belief that the cartoon-like avatars were not embodying real people but were ‘robots’, and as a result she cut down her communication with them. It was only when they laughed (‘something a robot cannot do’) that she believed they were real.”
In the studies by Newman (2007; 98), in which participants were asked to converse with a teddy bear named Albert (actually operated by Newman’s research assistant) through a variety of media, several of the participants assumed that they were interacting with a non-player character in a game and “registered surprise when they realised that Albert was responding to them with human intelligence”.
From reading the transcripts of these interactions, it seems that participants were employing a form of Turing test, with varying degrees of accuracy, i.e. they had a high turing tendency, but a low turing ability.
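If I ever do get round to running those experiments, the basic set-up could be sketched roughly as follows. This is only a hypothetical illustration, not anything from the studies above: the names, canned replies and yes/no prompts are all my own invention. The idea is simply that a judge exchanges typed messages with an interlocutor who is, unseen to them, either a scripted bot or a person, and is then asked whether they wondered about it at all (the turing tendency) and what their guess was (the turing ability).

```python
import random

# A minimal, hypothetical sketch of the text-only set-up described above.
# The judge's side of the conversation is canned here purely to keep the
# sketch short; in practice the judge would type their own messages.

CANNED_REPLIES = [
    "That's interesting, tell me more.",
    "I'm not sure I follow - what do you mean?",
    "Ha, I was thinking the same thing.",
    "Why do you say that?",
]

def bot_reply(message: str) -> str:
    """A deliberately simple chatbot: canned responses chosen at random."""
    return random.choice(CANNED_REPLIES)

def human_reply(message: str) -> str:
    """A confederate types the reply by hand (the 'avatar' condition)."""
    return input(f"[confederate] reply to '{message}': ")

def run_trial(judge_messages, use_bot: bool) -> dict:
    """Run one short conversation, then record the judge's reflections."""
    respond = bot_reply if use_bot else human_reply
    for message in judge_messages:
        print(f"judge: {message}")
        print(f"other: {respond(message)}")

    # Tendency: did the judge even wonder? Ability: was their guess right?
    wondered = input("Did you wonder whether that was a person? (y/n) ") == "y"
    guess = input("Your guess - person or program? (person/program) ")
    correct = (guess == "program") == use_bot
    return {"wondered": wondered, "correct": correct, "was_bot": use_bot}

if __name__ == "__main__":
    # One trial with a randomly assigned interlocutor.
    result = run_trial(
        ["Hello, how are you?", "What did you think of the weather today?"],
        use_bot=random.random() < 0.5,
    )
    print(result)
```

A real study would obviously need many judges, many trials and proper controls, but even a toy version like this keeps the two measures separate: how often people stop to ask the question at all, and how often they get the answer right when they do.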
That’s as far as I got, but I’d be interested to hear your responses — do you ture? And how good are you at it?
References
Bayne, S. (2008) Uncanny spaces for higher education: teaching and learning in virtual worlds, ALT-J, Research in Learning Technology, Vol. 16, No. 3, September 2008, 197–205
Blascovich, J. and Bailenson, J. (2011) Infinite Reality, New York: HarperCollins
Childs, M. (2010) Learners’ Experiences of Presence in Virtual Worlds, PhD Thesis, University of Warwick, http://go.warwick.ac.uk/ep-edrfap/
Donath, J. (2000) Being Real: Questions of Tele-Identity, in Goldberg, K. (ed.) The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet (296–311) Cambridge, MA: MIT Press
Draude, C. (2011) Intermediaries: reflections on virtual humans, gender, and the Uncanny Valley, AI & Soc (2011) 26:319–327
Kaptein, M., Markopoulos, P., de Ruyter, B. and Aarts, E. (2011) Two acts of social intelligence: the effects of mimicry and social praise on the evaluation of an artificial agent, AI & Soc (2011) 26:261–273
Morgan, K. and Morgan, M. (2007) The Challenges of Gender, Age and Personality in E-Learning, in R. Andrews and C. Haythornthwaite (Eds.) The SAGE Handbook of E-learning Research, London: Sage, 328–346
Murray, J.H. (1997) Hamlet on the Holodeck: The Future of Narrative in Cyberspace, New York: The Free Press
Newman, K. (2007) An Investigation of Narrative and Role-playing Activities in Online Communication Environments, PhD Thesis, Griffith University, Queensland
Nowak, K.L. and Biocca, F. (2003) The Effect of the Agency and Anthropomorphism on Users’ Sense of Telepresence, Copresence, and Social Presence in Virtual Environments, Presence, Vol. 12, No. 5, October 2003, 481–494
Ring, P.S. and Van de Ven, A.H. (1994) Developmental processes of cooperative interorganizational relationships, Academy of Management Review, 19 (1): 90–118
Slater, M. and Steed, A. (2002) Meeting People Virtually: Experiments in Shared Virtual Environments, in Schroeder, R. (ed.) The Social Life of Avatars, London: Springer-Verlag