One of the issues with MOOCs, and with the whole mass of OER in general, is that if you have thousands of people looking at the materials, who is going to give each of them the individual steer through them that many learners need? Bots are one of the things that may help with this. Bots, companion agents, AI tutors – they can be called any of these things (but NOT avatars; avatars are specifically online representations of humans, don’t get them mixed up). They are standalone programs, which can be purely text-based, but these days are usually a head-and-shoulders or even a full 3D representation (in which case they are embodied companion agents). In virtual worlds they are indistinguishable from avatars, until you start to talk with them. Even then, I’ve run workshops where one or more of the attendees have had long and increasingly frustrated conversations with a bot. There is a sort of intellectual arms race between humans and bots called the Turing test. The idea is that a person tries to work out, by having a conversation, whether something is human or computer-driven (a process called turing, i.e. they ture, they are turing, they have tured – actually only I call it that, but I’m trying to get it taken up by everyone else and eventually adopted by the OED). Although the programs are getting better, people are getting better at turing too, so the bar is rising faster than the programmers can match. At the moment.
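To make the "purely text-based" end of the spectrum concrete, here is a toy sketch (entirely hypothetical, nothing to do with the project itself) of the simplest kind of conversational bot: a keyword-matching script in the spirit of ELIZA. Real companion agents are far more sophisticated, but this is the basic shape of the thing learners end up turing at:

```python
# A minimal, hypothetical text-based bot: keyword-matching rules.
# The keywords and responses here are invented for illustration.

RULES = [
    ("hello", "Hello! Which part of the course shall we look at?"),
    ("stuck", "Which activity are you stuck on?"),
    ("thanks", "You're welcome - keep going!"),
]

def reply(message: str) -> str:
    """Return the first rule whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    # Fallback keeps the conversation going when nothing matches -
    # exactly the kind of evasion that eventually gives a bot away.
    return "Tell me more about that."

if __name__ == "__main__":
    print(reply("Hi, hello there"))
    print(reply("I'm STUCK on week 2"))
```

Anyone who has had one of those long, frustrated workshop conversations will recognise the fallback line: it is the vagueness of the non-matching case that people learn to probe for.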
In the work I’ve been doing with avatars, there’s a strong link between the affinity people feel with their avatar and their perception of how effective their learning is. In the project I’ve been doing with Ravensbourne College and Elzware, I started with the same hypothesis: if the learner feels more affinity with the bot that’s leading them through the content, will they experience their learning as more effective?
We’re not at that stage yet, but in the first phases – since the ethos of the project is that it is a user-centred design – we began with a series of workshops to identify which of a series of bot designs the learners would feel a greater affinity towards, and why.
The students selected a bot design that was not anthropomorphic, narrowly beating one that was. The reasons for this were various, but came down to a few main findings:
Bots that were realistic and too anthropomorphic were too creepy and too distracting.
Bots that were cartoony and too anthropomorphic weren’t creepy but were still distracting.
Bots that were realistic but not anthropomorphic were just right.
Bots that were cartoony and not anthropomorphic were unengaging.
“Realistic” here is being used in a very specific sense – engaging the breadth and/or depth of the senses – which is the sense in which people like Naimark and Steuer use it. So it could mean 3D rendering, a higher number of bits, more detail and so on. It also covers behavioural realism, and it was this aspect – having a personality (and not necessarily a pleasant one) – that students felt made the “realistic” but non-anthropomorphic bots the best tutors for them.
We still haven’t been able to put this to the test – the actual I in the AI is still being worked on – but we have, hopefully, put in place a design that will make the bot something the students want to learn from.