Robots, AI and Ethics – An interview with Dr Christoph Bartneck

Gerry Gaffney

Download (mp3: 33MB, 34:24)

Christoph Bartneck on robots, racism and ethics



Gerry Gaffney

This is Gerry Gaffney with the User Experience podcast. My guest today is an associate professor and director of postgraduate studies at the Human Interface Technology (HIT) Lab at the University of Canterbury in New Zealand. He has a background in industrial design and human-computer interaction, and has researched and written extensively on robots and human-robot interaction. He has also worked at Lego!

Dr. Christoph Bartneck, welcome to the User Experience podcast.

Christoph Bartneck

Thank you for having me.


Now let’s jump into the deep end. Do you think robots will ever be awarded equal status with humans?


Well, that is speculation and we can only hypothesize. Hard to tell. The European Union recently started a debate, at least in the European Parliament, about some form of personhood that should or could be awarded to AIs. So a discussion like this has begun; where it will end I'm not quite sure.

Another viewpoint, interestingly, comes from the Geneva Convention: if you want to fight a just war, your combatants, your soldiers, must be held responsible for their actions. So by that logic, if you have an autonomous killing machine going around, then this machine must be accountable for its actions, which implies some form of personhood.

So there are, I guess, some angles from which one can look at this, but of course this is still very much science fiction, because we don't have anything even close to a strong AI that would justify any kind of personhood.


A few years back, Reeves and Nass published a fascinating book entitled The Media Equation. Their research showed that people tended to bring the same societal norms to their interactions with computers, and even with dumb media like television; they treated them as if they were human, in fact. Does your research tend to support this observation?


Very much so. That book was actually one of the inspirations for me when I started my academic career. And if you're completely rational about it, if you sit in a cinema and start crying at a sad scene in a movie, it doesn't make any sense at all. These are actors, they're not really suffering, this is just a story. This is just light projected on a surface; nobody's harmed. So there's no real point in being sad about any of it, if you think about it rationally.

But indeed people do tend to treat media as if it was a social actor, and this extends to robots as well. Even more so, because with robots you don't only have, let's say, a graphical representation of something, like a character on the screen in a computer game. When that character actually gets its own body, is part of your real world, and is even able to manipulate and operate in your real world, then this effect of people treating it as if it were a social actor, as if it were alive, becomes even stronger.


Some of their research is really interesting. I remember one instance where they just put labels on two television sets: one was called news and one was called entertainment. Even when the exact same information was presented on both televisions, the one labelled news was considered to be authoritative.


Yeah, they did lots of interesting studies in the original book. The one that stuck in my brain was the one with two web browsers. People were given a task to search for some information, but one web browser was artificially slowed down: it had some sort of cog wheels, and it put on a big show, as if it was working very hard. Afterwards, the computer would ask the participants for help calibrating its screen, an open-ended task where the user could keep judging colours for as long as they wanted. And as it happens, the people who had previously worked with the slowed-down web browser, with all those cog wheels turning, spent much more time helping the computer calibrate its screen. So the principle from human-human interaction that one hand washes the other is extended to human-computer interaction. And we've done studies on reciprocity with robots as well; we went into different types of game theory and looked at whether people reciprocate robotic behaviour in the same way as they would towards humans.


The terms robot and AI, Christoph, have been, I think, somewhat devalued by being applied very broadly. Would you dare to give us a definition of what we mean by both of those terms?


Normally I'm not a big fan of semantics because, to some degree, it's word games, at least that's what Wittgenstein taught us. So I'm quite hesitant, because for whatever definition I might come up with, there will always be a counter-example. Is your washing machine a robot? Well, strictly speaking, yes: it has a microcontroller, it has moving parts, it has sensors and actuators. So yes, it is a robot, but by that definition pretty much anything is a robot, so that's not really helpful. I guess what is more interesting to look at are machines designed with a biological pattern as the starting point. This could be a dog, or some other living thing, or even a humanoid, modelled after a human being. These kinds of robots are often designed specifically to interact with humans.

So these are not your industrial robotic arms that are used in manufacturing. These robots are made to interact with people and those are the kind of robots that I consider most interesting.

With AI, well, AI is currently, let's say, the boundary of human-robot interaction. We can do a lot of things with the body, so robots can do a lot of things: they can lift things, they can be animated. But the bottleneck really is the brain of the robots, and this is where AI comes in. In my research we always want the robots to be able to do all sorts of things, and very often that's not quite possible yet, or if we wanted to make it possible it would take an enormous amount of work. So we often end up doing Wizard of Oz experiments, where we essentially fake the AI. And we've become pretty good at faking AIs. It's almost a research topic in itself: how to make something look smart when it really isn't.


One of the things that you’ve had some news coverage about recently is you did some work with a colleague of yours at Monash University in Melbourne about racism and robots. Can you tell us a little bit about that?


Yes, of course. Rob Sparrow is a very interesting philosopher over at Monash University. Some years ago I saw one of his presentations, and he started off by saying that he had been looking for some brown robots for the presentation and couldn't find any. So go into Google right now, open image search, type in android or human-like robots, and what you'll see is white, white, white, a little bit of grey every once in a while, a bit of metallic, but that's pretty much it. And that seemed odd. From this observation we became curious about whether race is a concept that people would also ascribe to robots. Similar to The Media Equation, where we treat them as if they were social actors, you could think about a robot as also having a race.

Now we're of course entering the question of what it means for a robot to have a race. What do we mean by it? You could consider robots to be their own race in comparison to humans; robots might be their own species. But that would assume that robots have some form of reproduction, some form of genes, and of course right now they don't. So, rationally speaking, that kind of definition doesn't mean much. You could also think in terms of categories of races within robots: the industrial robots might be one race, the animal-like robots another. But again, this was something we were not particularly interested in.

What we were really after is whether, and how, people ascribe human race, or the idea of human race, to robots, and if they do, whether that changes the behaviour people have towards these robots.


And what did you find?


Well, to study these kinds of racial biases, or the perception of race, you have to run somewhat more sophisticated experiments. If we just asked people straight to their face, you know: Are you a racist? Or: Do you think that black Americans are criminals? Then of course everybody will say, no, I'm not racist; black people are just fine. So you can't really approach the problem this way, unless you interview members of the Ku Klux Klan, I guess. [Laughs.] But that might be a very specific subgroup. So what you have to do is come up with what we call implicit tests.

Implicit tests work by making participants do a certain task in a split second. They may have 850 milliseconds to make a decision, and that prevents them from using their rational brain; you can't rationalize and think about it if you only have a split second, you just have to react. In our particular case we used an implicit test called the shooter bias study, also known as the police officer's dilemma. The way this experiment works is that the participants are put in the role of a police officer, and the police officer is confronted with images of people. The people either hold a gun, in which case you have to decide to shoot, or they hold something like a soda can, in which case you have to decide not to shoot. In the original study these people were either white Americans or black Americans, and it became clear that people have a clear bias: they were much faster at shooting black people than white people. So this is a very established research methodology for looking at racial biases. And what is really striking is that even people who would normally claim, no, I'm absolutely not a racist, once you put them into this kind of implicit test, you see that even those people have these tendencies. So we took this original experiment and expanded it by adding a robotic condition: we did everything like the original study, plus we added white robots and black robots to the game.

And what we found is that people exhibited the same bias towards black robots as they exhibit towards black Americans. So the shooter bias exists for humans, and it exists for robots. The prejudices we have towards certain groups of people do actually transfer over to robots, which is again in line with The Media Equation in a way. And of course, if you start rationalizing and thinking about it, it doesn't make a lot of sense, because most people in the experiment will never have interacted with a robot. They have no experience; they don't even know whether brown robots would be any different from white robots. There's no rational foundation for it. It is purely the prejudices that people have against other races and ethnicities being transferred over.
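To make the shape of such an implicit measure concrete, here is a tiny, purely illustrative sketch of how shooter-bias reaction times could be summarised. This is not the study's actual analysis code, and the trial records and numbers are entirely made up.

```python
def mean_rt(trials, target, armed):
    """Mean reaction time (ms) over trials matching a target type and armed status."""
    rts = [t["rt_ms"] for t in trials if t["target"] == target and t["armed"] == armed]
    return sum(rts) / len(rts)

# Hypothetical trial records: each has a target type, whether a gun
# was shown, and the participant's reaction time in milliseconds.
trials = [
    {"target": "black_robot", "armed": True, "rt_ms": 420},
    {"target": "black_robot", "armed": True, "rt_ms": 440},
    {"target": "white_robot", "armed": True, "rt_ms": 470},
    {"target": "white_robot", "armed": True, "rt_ms": 490},
]

# A positive difference means participants were quicker to "shoot"
# the armed black robots than the armed white robots.
bias_ms = mean_rt(trials, "white_robot", True) - mean_rt(trials, "black_robot", True)
print(bias_ms)  # 50.0 on this toy data
```

In the real experiment the dependent measures are the reaction-time differences (and error rates) across all four race-by-armed conditions, tested for significance; the sketch only shows the core comparison.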


It’s fascinating and quite disturbing really isn’t it?


Well, racism in itself is disturbing, and racism in itself is a real problem. Robots might not care so much about being discriminated against, but for a lot of people in the world, suffering the consequences of racism means that your opportunities to get a good job, your opportunities to be a productive member of society, are limited simply by your race or your ethnicity. That is a real issue. And the disturbing part is that I don't necessarily think the engineers who build all these robots are particularly racist; they probably don't all hold memberships in the Ku Klux Klan. But still, everybody builds white robots, and in societies that are quite diverse, with many races and ethnicities, there's no real need for this. Particularly if you are developing the robots I talked about before, robots whose purpose is to interact with humans (I'm not talking about robotic arms, I'm really talking about humanoid robots), then there's really no need for all of them to be white. And I think this sets quite a bad example of how we create and design our environment. About a year ago, I guess, we had this big discussion about the emoticons that were all yellow, and people were asking, well, why are there no brown emoticons? Then Apple and others started to offer more racial variability in their emoticons, and I think that's the right thing to do. But again, why did it not occur to them to start with that? Why do we have to have such an uproar first, before they start considering that some people might not feel comfortable with being represented as yellow?


Indeed. I guess it's reminiscent of Sara Wachter-Boettcher's work. She was on UXpod a while back. Her book, Technically Wrong, suggested that the bias inherent in tech firms, and I guess Silicon Valley start-ups in particular, was pervading AI and algorithms in general.


Yeah. I looked up this book prior to this interview but didn't get a chance to read it, so I can't really say much about it. But then again, in Silicon Valley, I mean, I used to live there for a while, and there were plenty of people from all sorts of countries and backgrounds. It was not completely whitewashed. But even in that environment, the design that came out of it is predominantly white. So I think if we imagine a future where robots become more and more prevalent in our environment, our society and our homes, then there should at least be an option. You should at least have the opportunity to select or purchase a robot that you think suits you, or that you think would be a good member of your family or your community. And if the only option you ever have is white, then that's just bizarre.


Totally off topic, but Spike Lee's new movie is called BlacKkKlansman. It's about an undercover cop who joins the KKK. It's a true story, or based on a true story at least. And he became quite a respected senior member of the KKK before they realized that he was in fact a black man.


[Laughter.] What did he do to conceal his skin colour? Just the hood?


He joined up over the phone; this is going back to pre-mobile-phone days anyway. So most of his interactions were by phone, and they actually had a white cop impersonate him for in-person meetings, apparently. So I'm looking forward to the movie. I haven't seen it yet.


Yeah, well, I suppose that particular subgroup of people is not exactly well known for its intelligence, so such pranks, or such infiltrations, could happen, I suppose.


In the future it seems likely that we're going to interact with our machines increasingly using speech, and I know you've done some research indicating that robots may actually change our vocabulary. This seems to me counter-intuitive. Can you tell me a little bit about that work?


Alright. Yeah. What we were looking at is the phenomenon of language in general. Language changes all the time. It's a very organic, very alive thing that adapts constantly: we get new words, words change their meaning, and some words just fall out of fashion. So it's a very dynamic construct.

And of course people use language to influence public opinion. There's a particular reason why, let's say, the people who are against abortion call themselves pro-life. That's no coincidence, right? What you call things can have an impact, certainly does have an impact, on how you think about them. So what we were looking at is that more and more devices are being speech-enabled, and that of course includes robots, but at this point in time the biggest chunk of speech-enabled machines is our phones. We have Siri or other assistants that listen to us, but they also talk back to us, right? And the question is, what kind of vocabulary do they use? Look at Apple, for example. About a year ago, I think, somebody leaked the employee manual for people working at the Genius Bar, and they had clear instructions to never say that an iPhone crashes. No, you don't say an iPhone crashes; an iPhone never crashes, an iPhone stops responding, right? That's how you say it. So when you control the vocabulary, you can control meaning and you can control thinking; that all goes back to 1984, I suppose. And we did several studies in which robots interacted with humans, and in these interactions the robots would be very persistent in using one synonym over another. We looked at whether people would afterwards give in and also use that synonym, and the answer is yes: just by being extremely persistent, you can make people change the vocabulary they use.

And then we went one step further and looked at models, or simulations, of the adoption of new words. There's again a very established research methodology where you have essentially a network of people who talk to each other and have to agree on certain terminology.

We took an existing network based on real data (researchers had analyzed the interactions in a school for a whole day), so the simulation is grounded in the real world, and we added a robot to it, actually several robots. And what robots can do that humans cannot is synchronize their vocabulary within an instant. If they decide to call a cup a mug, if they decide that from now on everybody calls this thing a mug and not a cup, they can exchange this information over the Internet instantly. So within an instant, a huge number of speakers can suddenly decide to use one word over another. Of course this happens in the human world as well; since the arrival of mass media we can also have this sort of accelerated adoption of words, but it's nothing compared to an Internet of Things where everything can be updated instantaneously.

And so we looked at how many robots, in terms of percentage, it actually takes to push a certain synonym before the whole network of people gives in and everybody starts using it. It was, I think, around nine percent. So if nine percent of the people owned a robot, and these robots were interconnected and all agreed on one word and used it consistently, then essentially all the humans would adapt to the robots and start using it. And of course the really important ethical question then is: who is controlling the robots and the vocabulary they use? If, for example, you have purchased all Apple robots, and Apple robots don't crash, they only stop responding, then this becomes a serious ethical issue. There's already a big discussion going on about how chatbots may or may not be used on Facebook or other social media to influence political decisions and opinions. What's different about doing it through speech is that it's much more subtle. It's not so obviously in your face as a Facebook page that comes up and says: Donald Trump is the best president ever. That's a bit obvious.

But if you simply start to use different words, almost on a subliminal level, you can slowly nudge people in a certain direction. As always with manipulation, it can be used for good and for evil. The real question is that we need to be transparent about how this comes about. Who is in control of these vocabularies? Who is deciding what the robots say, what words they use? Right now this is completely unclear, and for robots especially there is no big company behind it; there's nothing comparable to mobile phones. Most people have a mobile phone that potentially has speech abilities, and I think for these kinds of devices this will become much more urgent, much more quickly.
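The tipping-point dynamic described a few turns back can be sketched roughly as a "naming game" with a committed minority: robots always say one word and never change, while humans update through pairwise interactions. This is only an illustrative toy, not the study's actual model; the population size, the fully mixed network, and the 20% robot share are assumptions chosen so the sketch converges quickly.

```python
import random

def run_naming_game(n_agents=50, robot_share=0.2, max_steps=200_000, seed=1):
    """Return the step at which all humans use only 'mug', or None if no consensus."""
    n_robots = int(n_agents * robot_share)
    robots = set(range(n_robots))
    # Humans start out saying only "cup"; robots are committed to "mug".
    inventories = {i: ({"mug"} if i in robots else {"cup"}) for i in range(n_agents)}
    rng = random.Random(seed)

    for step in range(max_steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        word = "mug" if speaker in robots else rng.choice(sorted(inventories[speaker]))
        if word in inventories[listener]:
            # Successful interaction: both humans collapse to the agreed word.
            if speaker not in robots:
                inventories[speaker] = {word}
            if listener not in robots:
                inventories[listener] = {word}
        elif listener not in robots:
            # Failed interaction: the human listener learns the new word.
            inventories[listener].add(word)
        # Stop once every human uses only "mug".
        if all(inventories[i] == {"mug"} for i in range(n_agents) if i not in robots):
            return step + 1
    return None

steps = run_naming_game()
print("converged" if steps is not None else "did not converge")
```

With a committed share well above the tipping point (around ten percent in the published committed-minority models), the whole population reliably ends up on the robots' word; below it, the humans' original word persists far longer.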


As Deckard says in Blade Runner, replicants are like any other technology: they are either a benefit or a hazard. Which kind of leads me on, and brings us full circle in a way…

Philip K. Dick was probably one of the most articulate explorers of what it means to be human in a society increasingly populated by sophisticated androids, and the dividing line often shifted in his books. You referred to Rob Sparrow a few moments ago; his paper, Robots, Rape, and Representation, talks about the scene in Blade Runner (which is based on a Philip K. Dick book, of course) where Deckard forces himself upon the android, Rachael. Putting aside the issue of whether Deckard himself is a replicant, this really points to a very muddy ethical space, doesn't it?


Yeah. I'm not quite sure if Rob was actually referring to Blade Runner; I made a little YouTube video where I pointed the scene out and referred back to his paper. I recently watched Blade Runner, the old one, again, simply because the new one came out and I thought it might be a good time; it had been quite a few years since I last watched it. And when I saw this scene it really ruined the movie for me, I have to admit. It's unbelievable. For the listeners who are not aware of it: Rachael and Deckard are at home, and Deckard is making his move, trying to seduce Rachael. She dashes off; she doesn't want to. She runs to the door. Deckard blocks her way, closes the door, forces her against the wall and then literally says: Tell me you love me. And starts kissing her. He's really forcing himself onto her.


It’s a very disturbing scene now. Whereas, you know, I recall seeing the movie when it first came out and I don’t remember having that type of reaction to it.


Yeah, exactly. Fortunately the times have changed, but these days it is of course problematic. The argument Rob Sparrow was essentially making is about the principle: if you have an android like Rachael, you could easily program it, for example, to have intimate relationships with people, and you could also program it in a way that it would not give consent. Then people could essentially live out their rape fantasies with the robot. In his paper he looks at what we can think or do about that, and where it stands ethically, because of course there are many different views on it. The robot doesn't necessarily have emotions; it's just a machine. You could argue that if a person is such a pervert, it's better that they do it to a robot than to a real woman.

There are all sorts of reasons one could bring up. But Rob Sparrow made a very strong case for saying, no, this is completely unacceptable, because these kinds of robots are representations of real women, and treating them in such a way is completely unacceptable. But then again, I was recently talking to a developer of sex robots, Realbotix over in California. They have a very long history of making sex dolls and are just now making the move over to robots. The most interesting part for me is that they started with the head; that was the first thing they animated. If you think about why that is, it's because of relationships. People who purchase these kinds of products are predominantly interested in relationships, because very often they have social anxiety, or are isolated, or have other issues, and for them, being able to interact with something is better than being completely alone.

So in that sense, having robots that can form relationships, even intimate relationships, can potentially be helpful for some people. But the idea that you would have a non-consensual sex situation with a robot does seem quite disturbing.


And of course, talking about relationships, there's that fantastic movie. Lars and the Real Girl, I think, might have been the title.


And there was Her, of course, the movie Her, where this person fell in love with his mobile phone. And to be honest, looking around these days, I go to a restaurant and I see so many couples who eat with the phone next to them. They don't talk to each other; they just eat and pay attention to the phone.

It fills my heart with sadness, how much damage mobile technology has done to our relationships.


Indeed, I think this is getting a bit deep for us, Christoph!


As robots become more… I do agree with you entirely mind you.

As robots become more common and more mainstream, it’s likely that UX professionals will need to understand human-robot interaction. How should they develop their skillset in this area do you think?


That's a good one. What makes interaction with robots different from, let's say, a standard 2D GUI is that here we're talking about acting. This is almost like theatre; in a way it is real theatre, an advanced form of puppeteering, I suppose. And the skill you need is to think about: how would you do this with a human? If you have a set interaction, you can easily prototype it that way, and then you can start thinking about whether you can enable the robot to do roughly the same thing. Unfortunately, robots, as I mentioned before, are often quite constrained in their abilities, so you have to find all kinds of work-arounds to make things happen. Take speech recognition, for example.

Yes, it works, but not the way a human interaction would, and not necessarily on the pure recognition part, but more on the language understanding. Really understanding what is meant by an utterance is something quite different from just being able to recognize the words. So designing these kinds of dialogues, designing the interactions, is quite difficult. You also have the whole additional component of embodiment.

If you interact with a computer, it's just standing there. But a robot can touch you. A robot can take you by the arm, pat you on the shoulder, take you by the hand and guide you somewhere, right? You have this whole aspect of embodiment that is normally not present in computers; I mean, unless you consider hacking away at the keyboard a particularly meaningful interaction, which it isn't, it's only symbolic of the meaning of what you type. So embodiment is something really new. As a matter of fact, we're currently writing the first textbook on human-robot interaction, which will be published by Cambridge University Press later this year, so that might be a good starting point for people who want to get into this. But other than that, I thoroughly enjoy, for example, collaborating with people from theatre studies.

We've done various projects in the past, and that was insightful in both directions. For the engineers, it opens their eyes to how you actually communicate something. If you look at good movies, take WALL-E: in the first 20 minutes not a single word is said, everything is done by body movement, right? And that works. This ability to just hint at things is amazing; you can do so much even without language. Let's say you want to communicate that the robot is low on battery; well, it can just lower its head. That's enough. The human will probably go: oh, what's up? Why are you sad? And the robot: oh yeah, my battery's running low.

Right? So there are a lot of things you can do there that are very, very subtle. Which is a blessing and a curse, because we humans are so trained to look at other humans that if anything is ever wrong, if we just have a slight limp, it shows up immediately and draws our attention. So the difficulty is real. Another really important aspect is to always match the appearance of the robot with its abilities. For example, if your robot has a speaker and talks, or plays back audio files, then people will automatically assume that the robot will also be able to listen, and not only to listen, but also to understand. And that's not a given, right? As soon as you introduce certain features, people start projecting and thinking about what the robot can do, and since we have these quite strong limitations in AI, very often people end up quite disappointed. So these days the most realistic embodiment for a robot is probably at the level of an animal, because that's the best robots can do at this point in time, and all the androids are really more research vehicles for now.


Indeed. Well unfortunately we’ve run out of time. I do suggest that listeners check out Christoph’s website.

Dr. Christoph Bartneck, thanks so much for joining me today on the User Experience podcast.


Thank you so much for having me.

