
Do you ture? Telling the difference between agents and avatars.

When I spent a lot of time in virtual worlds, it would sometimes surprise me that I'd be standing with a group of other avatars (i.e. figures being operated by people) and there'd be an agent or bot there (i.e. a figure operated by a software program), and the avatars would carry on chatting to the bot, completely oblivious to the fact that it wasn't a person. Recently I needed to contact eBay about shipping costs from the US, and the whole time I was chatting to their helpdesk I was wondering whether I was talking to a human or a machine. I couldn't really tell, which is an indication either of how good the programming is, or of how badly the guy I was talking to was doing at sounding human. I was talking to my brother about this a week or so ago and he said it had never occurred to him to think about it. And if it did, would it matter? I guess it doesn't. I got the answer I wanted, but it still felt a bit unnerving not to actually know.

One of the things I ended up looking for when studying user experiences in virtual worlds was this tendency, and ability, to apply the Turing test (which gets explained below). I called the process turing, so by back-formation (pretending it's a verb to start with) you get people who tend to ture, people who don't, and people who are bad at it.

I started to write a paper about this but never got far with it. One thing that I'd like to do, and haven't had the opportunity to, is actually run some experiments on this. So instead of struggling on with the paper, I've posted it here.

I like big bots and I cannot lie

The ability to tell the difference between agents and avatars may not actually be important to people. For example, Nowak and Biocca found that, when participants were asked about their perceptions of copresence with avatars or agents, the degrees of copresence reported were equally high for both.

“Given that the means in all conditions were well above the middle of the scale (representing relatively high levels of presence), it seems that users felt they had access to another mind and that the mind was attending to them and that they felt present in the virtual environment regardless of whether they interacted with an agent or avatar.” (2003; 490)

This presumes that, for people's interactions with agents to be highly effective, those interactions must be similar to their interactions with avatars. Evidence supporting this assumption can be seen in experiments such as Kaptein et al. (2011; 270), where it was found that social praise initiated by an agent contributed to the user liking the agent, but did not increase feelings of copresence, since this praise was occasionally mistimed.

Draude specifically identifies trust as promoting a bond between agents and users (2011; 322), which, when one considers the central role that trust plays in teamworking and collaboration (Ring and Van de Ven, 1994; 93), may be a better indicator of the effectiveness of human-agent relationships than copresence. In fact, trust in, or at least self-confidence in the presence of, agents may be stronger than that in avatars. In a study reported by Blascovich and Bailenson, users were asked to perform easy and hard tasks in front of no audience, in front of an audience of agents, and in front of an audience of avatars. The participants performed equally well on the easy tasks in all three circumstances, but on hard tasks they performed significantly worse in front of avatars. The conclusion was that the agents were not seen to be judging the performance, whereas the avatars were, and this inhibited the participants' performance (Blascovich and Bailenson, 2011; 92).

These factors influencing the relationship between humans and agents are not simply a function of the behaviour of the agent, however, as individual differences between participants also have a bearing on this interaction. Bayne (2008; 204) notes the differing degrees to which uncanniness affects students, and no matter where the design of the agent falls on the Uncanny Valley curve, some participants will always report feelings of unease at the thought of communicating with an artificial intelligence (Gemma Tombs, personal correspondence). For others, the relationship they have with the agent may not differ substantially from the one they have with humans; Morgan and Morgan (2007; 334) report the statements of Reeves and Nass that “suggest that participants respond to computers socially, or in ways that are similar to their responses to other humans”, and of Kiesler that people “keep promises to computer[s] in the same way that they do to real life human beings”, and, as stated above, some will actually feel more confident in the presence of agents than in front of humans.

In a final addition to the complexity of human-agent compared to human-avatar relationships, participants also differ in their tendency, or even their ability, to determine whether a character's agency is human or artificial in its origin; characteristics which have been referred to as a turing tendency and a turing ability (Childs, 2010; 72). The Turing test was first proposed by Alan Turing in 1950 (Donath, 2000; 300) as a means to determine whether an artificial intelligence was thinking like a human. The essential element of the test was that a person would communicate through text with either a person or a computer, and if it was not possible to distinguish between the two, then the computer could be displaying intelligence. In some studies, agents taking part in online conversations have successfully mimicked human behaviour sufficiently to pass as human for a short while (Murray, 1997; 219-226, Donath, 2000; 302). Some participants, however, may make an inaccurate categorisation in the other direction. In a study reported by Slater and Steed (2002; 153), a participant:

“[f]ormed the belief that the cartoon-like avatars were not embodying real people but were “robots”, and as a result she cut down her communication with them. It was only when they laughed (“something a robot cannot do”) that she believed they were real.”

In the studies by Newman (2007; 98), in which participants were asked to converse with a teddy bear named Albert (actually Newman's research assistant) through a variety of media, several of the participants assumed that they were interacting with a non-player character in a game and “registered surprise when they realised that Albert was responding to them with human intelligence”.

From reading the transcripts of these interactions, it seems that participants were employing a form of Turing test, to varying degrees of accuracy; that is, they had a high turing tendency, but a low turing ability.
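If I ever do get to run those experiments, the scoring at least is easy to pin down. Below is a minimal sketch in Python of how I'd separate the two measures; the trial structure and function names are entirely my own hypothetical invention, not anything from the studies cited above. Tendency is how often a participant makes any judgement at all about whether the thing they're talking to is human; ability is how often those judgements are correct.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Trial:
        """One conversation between a participant and a hidden interlocutor."""
        interlocutor_is_human: bool    # ground truth: avatar (True) or agent (False)
        guessed_human: Optional[bool]  # participant's guess; None if they never judged at all

    def turing_tendency(trials: list[Trial]) -> float:
        """Proportion of conversations in which the participant tured at all."""
        return sum(t.guessed_human is not None for t in trials) / len(trials)

    def turing_ability(trials: list[Trial]) -> Optional[float]:
        """Accuracy of the guesses, among the conversations where a guess was made."""
        judged = [t for t in trials if t.guessed_human is not None]
        if not judged:
            return None  # the participant never tured, so ability is undefined
        return sum(t.guessed_human == t.interlocutor_is_human for t in judged) / len(judged)

    # The Slater and Steed participant above: she judged every avatar, and judged
    # them all to be "robots" - a turing tendency of 1.0 but a turing ability of 0.0.
    trials = [Trial(interlocutor_is_human=True, guessed_human=False) for _ in range(4)]
    print(turing_tendency(trials))  # 1.0
    print(turing_ability(trials))   # 0.0

Splitting the measures this way keeps the Slater and Steed case distinct from someone who simply never wonders: both end up miscategorising people, but for entirely different reasons.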

That's as far as I got with the paper, but I'd be interested to hear your responses – do you ture? And how good are you at it?

References

Bayne, S. (2008) Uncanny spaces for higher education: teaching and learning in virtual worlds, ALT-J, Research in Learning Technology, Vol. 16, No. 3, September 2008, 197–205

Blascovich, J. and Bailenson, J. (2011) Infinite Reality, New York: HarperCollins

Childs, M. (2010) Learners’ Experiences of Presence in Virtual Worlds, PhD Thesis, University of Warwick, http://go.warwick.ac.uk/ep-edrfap/

Donath, J. (2000) Being Real: Questions of Tele-Identity, in Goldberg, K. (ed.) The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet (296–311), Cambridge, MA: MIT Press

Draude, C. (2011) Intermediaries: reflections on virtual humans, gender, and the Uncanny Valley, AI & Soc (2011) 26:319–327

Kaptein, M., Markopoulos, P., de Ruyter, B, and Aarts, E. (2011) Two acts of social intelligence: the effects of mimicry and social praise on the evaluation of an artificial agent, AI & Soc (2011) 26:261–273

Morgan, K. and Morgan, M. (2007) The Challenges of Gender, Age and Personality in E-Learning, in R. Andrews and C. Haythornthwaite (eds.) The SAGE Handbook of E-learning Research, London: Sage, 328-346

Murray, J.H. (1997) Hamlet on the Holodeck: The Future of Narrative in Cyberspace, New York: The Free Press

Newman, K. (2007) An Investigation of Narrative and Role-playing Activities in Online Communication Environments, PhD Thesis, Griffith University, Queensland

Nowak, K.L. and Biocca, F. (2003) The Effect of the Agency and Anthropomorphism on Users’ Sense of Telepresence, Copresence, and Social Presence in Virtual Environments, Presence, Vol. 12, No. 5, October 2003, 481–494

Ring, P.S. and Van de Ven, A.H. (1994) Developmental processes of cooperative interorganizational relationships, Academy of Management Review, 19 (1): 90-118

Slater, M. and Steed, A. (2002) Meeting People Virtually: Experiments in Shared Virtual Environments, in Schroeder, R. (ed.) The Social Life of Avatars, London: Springer-Verlag

6 thoughts on “Do you ture? Telling the difference between agents and avatars.”

  1. Sharing your liking for knowing, rather than not, I usually find a few random asides generate the answer: a bad pun, something the other’s name reminds you of, anything off the wall. Ignored completely, it’s either a robot or someone who might as well be one… I’ve never, to my knowledge, had a fun conversation with a robot. Has anyone?

    • Good tip. Actually one of those bits of research I mentioned had an example where the research assistant missed a pun the child he was chatting to made (it was a project to keep children entertained while they were in hospital). The child had said “broom” and meant both the sweeping tool and the car sound effect; the researcher didn't get it. After that point, nothing he could do or say would convince the child he was a real person. Humour seems to be the defining characteristic. When they create funny robots is when we're doomed as a species.

  2. I realise that remark missed the point somewhat 🙂 Is it “low turing ability” or just that most of the interactions with agents are designed to get a simple-ish task done, and if it works – like your brother – who cares if it was a computer…? (Only those of us who still like to think we're smarter than a computer?!)

    • Well I think tendency to ture is probably more to the point there than ability. Some people won't if they just want to get a task done, I agree, but I think being aware of the likelihood that whatever you're communicating with could be a bot has a lot to do with it. I'm not sure, in fact, why mine is so high, but in a lot of interactions online it's at the back of my mind. If I hadn't met you at various family get-togethers I might even be suspicious here too 🙂

      A lot of the more successful bots only work when they’re used in situations which require a limited range of conversational elements. In Hamlet on the Holodeck, Janet Murray mentions a few of the early ones, and those that people struggled the most with were those that emulated obsessive behaviours (selling cars, talking about a sport, etc.). As long as it’s emulating someone who’s likely to be stuck in a linguistic rut anyway, they work.
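      To make that concrete, here's a toy sketch of my own in Python – entirely made up, not code from any of the bots Murray describes: a one-track car-salesman bot needs nothing more than a few keyword responses and a fallback that steers the conversation back to cars. The linguistic rut does all the work.

        import random

        # A hypothetical one-track car-salesman bot, in the spirit of the early
        # obsessive chatterbots - not code from any actual system.
        RESPONSES = {
            "price": "I can do you a very good deal on that. Cash or finance?",
            "mileage": "Barely run in! One careful owner.",
            "colour": "It comes in red, and red is what you want.",
        }
        DEFLECTIONS = [
            "Never mind that - have you seen the new hatchbacks?",
            "Interesting. Anyway, about this lovely saloon...",
        ]

        def reply(message: str) -> str:
            """Answer a matching keyword; otherwise steer back into the rut."""
            for keyword, response in RESPONSES.items():
                if keyword in message.lower():
                    return response
            return random.choice(DEFLECTIONS)

        print(reply("What's the price?"))                  # on-topic: stock answer
        print(reply("Did you see the match last night?"))  # off-topic: deflected

      Anything off-topic just gets deflected, which is exactly why the random-aside trick in the first comment works so well as a test.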

      As an interesting sideline, each year fewer programs pass the Turing test. They are getting better at mimicking human conversations, but so too are the judges getting better at spotting the difference. It’s a sort of linguistic arms race. But as it progresses, I guess those of us who have a single track mind may have to work a bit harder at appearing human.

      • Have you perhaps read “Le Ton Beau de Marot” by Doug Hofstadter – he of “Gödel, Escher, Bach”? It’s at least as irritating as the latter, being both highly-organised in structure (various background models, like the fugue idea in GEB, governing its large-scale construction; page-turns carefully-managed, …) and quite disorganised in detail, in the sense that – to get those page-turns – Hofstadter allows himself to ramble pretty much as he wants! But its central point (or one of them) is about machine-translation & artificial intelligence; exploring how poetry, and poetry-in-translation even more, is perhaps the ultimate Turing test.

        As someone who translates poetry badly, i.e. into prose which is sometimes a bit poetic but often just prose(!), I think he has a point: poetry is all about selecting a word not for its meaning (alone) but for its halo of other meanings and images and half-remembered associations – which is a really tough AI programming task, perhaps unachievable: the ultimate Turing test!?!

      • I haven’t. I will put it on my Christmas wishlist. I’ve read GEB and thoroughly loved it, although probably because of the ramblings as much as anything. Come to think of it, everyone who quotes it rambles a bit too; I most recently heard it quoted at a Hayseed Dixie concert (heavy rock played in a hillbilly style).

        I went a bit into translation theory in one of my books. I’d been to a talk on translation, and in my usual way went off on a whole ramble drawing parallels between various things that probably only seem like they’re equivalent to me because I know so little about them. I saw what was being done in virtual worlds as a translation from the physical space to the virtual space. In translation theory, to translate something requires an abstraction of a word into its various meanings, and then a concretisation into another language, and the process of abstraction itself reveals a lot about the meaning – exactly what you were saying, I realise. It’s not always possible though, and so a translator is stuck between making the best of a bad job, or peppering the text with a lot of footnotes. In the book I use the example of my copy of “The Metamorphosis”, which takes the latter route, and has half a page translating “ungeheures Ungeziefer”, whereas most translators would just say “monstrous insect” and move on. It’s fascinating, and you get the whole weight of what Kafka meant (and the poetry of just the sound of the words), but it takes you out of the story. The relevance to virtual worlds is that you can supplement what the virtual world can’t portray by having lots of paradata dotted around, which adds to the understanding, but detracts from the sense of being drawn into what’s going on.

        You can see what I mean about rambling now 🙂
