Explain the Uncanny Valley

A friend alerted me on Facebook to this YouTube video on the IEEE Spectrum page: roboticists struggling to define the Uncanny Valley in one minute. (Thanks Astrid!)

Somehow it is really amazing what a role the Uncanny Valley has come to play in pop culture – and in research as well. I remember that when I first became interested in the phenomenon, it was neither well known nor taken very seriously. A colleague whom I asked said, “… well, it is not really something that is taken very seriously among roboticists – perhaps you joke about it after a few glasses of sake.”

I was not so sure whether this was true, because there seemed to be a taboo against creating robots that were too human … the early work of David Hanson and, of course, Hiroshi Ishiguro being the obvious exceptions. Otherwise, robots would have blank plates for faces or display only the most symbolic facial features.

So, listening to the video, I am fascinated that people do not primarily focus on the fact that the Uncanny Valley is first and foremost a hypothesis, presented by Masahiro Mori in 1970. It is thus a theoretical construct, not a thing or a phenomenon. Whether the effect exists, for whom, for how long, and so on – these are empirical questions. At this point the evidence is mixed. What do you think?

(trying to get a poll here …)

Empathy with Military Robots?

A remote-controlled robot used by Explosive Ordnance Disposal Expeditionary Support Unit (EODESU) 1. (Photo credit: Wikipedia)

The Interwebz are buzzing with reports regarding the PhD work of Dr. Julie Carpenter at the College of Education at the University of Washington. She has interviewed Explosive Ordnance Disposal (EOD) personnel who work with robots to clear dangerous explosives. These machines are purpose-built for difficult terrain and for manipulating particular types of objects.

Talon IV (QinetiQ)

Not very cute, not very pretty – and yet, according to Dr. Carpenter’s work, handlers may feel quite strongly about their machines. They might assign them a gender, they give them names – and they feel bad when the machines are destroyed in the line of duty. Specifically, there appears to be a tendency to attribute human traits to these machines (some more details here).

INDIAN HEAD, Md. (Feb. 26, 2009) Warren Tibbs, a robot operator for the Navy Explosive Ordnance Disposal (EOD) Technical Division, Indian Head, Md., shows the many different robots that are developed on the base and used by EOD technicians and civilian police department SWAT team members. (U.S. Navy photo by Mass Communication Specialist 2nd Class Jhi L. Scott/Released) (Photo credit: Wikipedia)

I have not yet read the thesis itself, as the news just broke, and I do not want to go into detail about the potential military implications if the handlers of such machines feel empathy towards them – there is apparently already a discussion out there. Instead, I want to focus briefly on how time spent and experience with machines shape the development and expression of empathic sentiments, and on the implications for research on social relationships with robots.

Many studies on the uncanny valley, and others that deal with the relationship between humans and machines, confront relatively unprepared participants with devices of varying complexity in the laboratory or in laboratory-like contexts. Perhaps this is not such a great idea. Possibly, uncanny valley effects disappear quickly as people get used to machines that approach perfect human likeness – an idea that Astrid Rosenthal-von der Pütten, who works on empathic responses to robots, proposed when we discussed such phenomena. Perhaps the other side of the coin is that decidedly non-human machines may become very close to us if we work with them day in, day out.

If this is so, then we should take the results of research that studies only responses to unfamiliar stimuli with a grain of salt. By this I mean responses of people who are generally unfamiliar with robots, or of people who have some familiarity with robots but not with the particular type of robot being studied. I do not want to question the usefulness of such studies in general – in my own laboratory, for example, we investigate responses to artificial entities of various kinds and measure the behavioral, physiological, and subjective responses of student participants who have little experience with the types of stimuli we use. However, we and other researchers in this field should keep the influence of experience and familiarity in mind.

Repliee Q2, taken at Index Osaka. Note: the model for Repliee Q2 is probably the same as for Repliee Q1expo, Ayako Fujii, an announcer at NHK. (Photo credit: Wikipedia)

Remember how people first responded to cars, or the famous reports of how audiences responded to the early films of the Lumière brothers? Apparently, people were shocked, were scared, ran away (the extreme versions of this story are likely urban legend). These responses are difficult for us to imagine – so used have we become to the highly mechanized and mediatized environments we grow up in. Possibly, if we grew up in a robotized environment, almost-human machines would produce no odd sensation, and we might feel deeply for machines that are very much not humanoid. For me, the fictional version of this is the relationship between Freeman Lowell, the character played by Bruce Dern in the movie Silent Running, and the three little robot drones Huey, Dewey, and Louie. As he spends more and more time with them, his emotional bond grows stronger and stronger – and that means not just feeling warm and fuzzy, but displaying a whole range of emotions toward them and reacting very strongly to their actions.

Poster from the movie Silent Running, claiming fair use (does not detract from original work).

Anthropomorphic machines … from the 18th and 19th centuries

The idea of creating anthropomorphic robots is today often associated with the uncanny geminoids of Hiroshi Ishiguro. The idea of building anthropomorphic automatons is, of course, much older. Most famous is probably The Turk, the chess-playing automaton built by Wolfgang von Kempelen in 1769.

An illustration of the workings of the model. The various parts were directed by a human via interior levers and machinery. The drawing, based on Racknitz’s calculations, is distorted and shows an impossible design in relation to the actual dimensions of the machine. (Standage, p. 88; photo credit: Wikipedia)

Less well known is the French tradition of using anthropomorphic automatons for advertising. A recent BBC piece shows some rare and interesting clips from an exhibition at the Musée de l’automate de Souillac. Recommended! Here is a different version of the video.

… and if you cannot get enough of the early automatons … check this site out.

Artificial Emotions

Nautilus (Photo credit: Lebatihem)

Today, an intriguing new magazine saw the light of day: Nautilus.

Nautilus is a different kind of science magazine. We deliver big-picture science by reporting on a single monthly topic from multiple perspectives.

The first issue poses the question “What makes you so special?” It deals with what it might mean to be human – and how we might or might not be unique. One of the stories in the first issue deals with Artificial Emotions. As it happens, I had a long conversation with the story’s author, Neil Savage, and some of that conversation ended up in the article, alongside quotes from several of my colleagues reflecting on issues central to affective computing. This is why it fits wonderfully with the topic of this blog.

Nautilus (Photo credit: Guilli F P)

Every time I discuss artificial emotions with people – whether they are scientists, members of the media, or simply curious about affective computing – certain topics tend to come up again and again. One of my goals is to raise some of these over the course of the next weeks in this blog. For example:

  • How we might require much less emotion than we think in order to feel that machines are, or feel, emotional.
  • How psychological theories might be misleading efforts to make machines emotion-savvy.
  • How the concept of emotions as such might not always be very useful in this context.

In the meanwhile, check out Neil’s article – it is an interesting read – and I will check out the rest of Nautilus.