Social presence and bots


One of the issues with MOOCs, and with the whole mass of OER in general, is that if you have thousands of people looking at the materials, who is going to give each of them the individual steer through those materials that many learners need? Bots are one of the things that may help with this. Bots, or companion agents, or AI tutors – they can be called any of these things (but NOT avatars; avatars are specifically online representations of humans, so don’t get the two mixed up) – are standalone programs. They can be purely text-based, but these days they are usually a head-and-shoulders or even a full 3D representation (in which case they are embodied companion agents). In virtual worlds they are indistinguishable from avatars, until you start to talk with them. Even then, I’ve run workshops where one or more of the attendees have had long and increasingly frustrated conversations with a bot. There is a sort of intellectual arms race between humans and bots known as the Turing test. The idea is that a person tries to work out, by having a conversation, whether something is human or computer-driven (a process I call turing, i.e. they ture, they are turing, they have tured – actually only I call it that, but I’m trying to get it taken up by everyone else and eventually adopted by the OED). Although the programs are getting better, people are getting better at turing too, so the bar is rising faster than the programmers can match. At the moment.
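Purely text-based bots of the sort mentioned above are, at their simplest, pattern-matching programs in the ELIZA tradition. A minimal illustrative sketch (the rules and canned replies here are invented for this example, not taken from any of the projects described in this post):

```python
import re

# Each rule pairs a regular expression with a canned reply.
# These patterns and responses are purely illustrative.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I),
     "Hello! What would you like to study today?"),
    (re.compile(r"\bhelp\b", re.I),
     "I can point you to the next section of the course."),
    (re.compile(r"\bbye\b", re.I),
     "Goodbye, and good luck with the materials!"),
]

FALLBACK = "Interesting, tell me more."


def reply(message: str) -> str:
    """Return the first matching canned reply, or a fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK
```

Bots like this fail the Turing test quickly, of course, which is exactly why the frustrated workshop conversations happen: the fallback response only papers over a lack of understanding for so long.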

In the work I’ve been doing with avatars, there’s a strong link between the affinity people feel with their avatar and their perception of how effective their learning is. In the project I’ve been doing with Ravensbourne College and Elzware, I started with the same hypothesis: if the learner feels more affinity with the bot that’s leading them through the content, will they experience their learning as more effective?


We’re not at that stage yet, but in the first phases – since the ethos of the project is user-centred design – we began with a series of workshops to identify which of a series of bot designs the learners would feel the greatest affinity towards, and why.

The students selected a bot design that was not anthropomorphic, though it narrowly beat one that was. The reasons given were various, but came down to four broad findings:

Bots that were realistic and too anthropomorphic were too creepy and too distracting.

Bots that were cartoony and too anthropomorphic weren’t creepy but were still distracting.

Bots that were realistic but not anthropomorphic were just right.

Bots that were cartoony and not anthropomorphic were unengaging.


“Realistic” in this sense is a very specific usage, meaning engaging the breadth and/or depth of the senses – the sense in which people like Naimark and Steuer use it. So it could mean 3D rendering, higher bit depth, more detail and so on. It also covers behavioural realism, and it was this aspect – having a personality (and not necessarily a pleasant one) – that students felt made the “realistic” but non-anthropomorphic bots the best tutors for them.

We still haven’t been able to put this to the test – the actual I in the AI is still being worked on – but we have hopefully put in place a design that will make the bot something the students want to learn from.


Recent inspirations

Well, what am I working on at the moment? Three things this weekend. Yesterday I met with people from mediacore and The Flipped Institute, and I hope to be doing more work with them. The Flipped Institute is a site that acts as a focus for all of the discussion around the flipped classroom, the idea of which is to do all of the associated transmission-mode stuff outside of the class, so that the actual time spent in class can be spent discussing it, building on it, and getting the students to do activities around it. In other words, using teachers for what they’re best at. Finally activity-based learning is becoming mainstream (it left me quite Dewey-eyed … see what I did there). I first came across the concept around ’97/’98, when the director of a VLE project I was working on (anyone remember Broadnet? It’s now Learnwise), Steve Molyneux, produced an online module for his students to learn from, and then used the lecture time to answer their queries about it and provide one-to-one advice. I don’t think the word “flip” was around then.

Another thing is a project on creating a bot as an intelligent tutor, looking at how its design will encourage students to engage with it, and how its appearance and behaviour influence the affinity the students feel for it; the hypothesis being that the greater the affinity, the more effective the learning. My job atm? To design the evaluation, which the bot is to conduct itself. :-/

Third, I’m also working on some stuff for the performance artist Stelarc and two colleagues, Joff Chafer and Ian Upton. Last year they worked on a performance and installation in Coventry called Extract / Insert, and I’m writing it up for a chapter in my most recent book (Making Sense of Space, by Iryna Kuksa and me, coming soon …. ish). The work they did really challenged the distinction between real and virtual, and it was fascinating to see the way it connected with so many people (and didn’t connect with some, too).