Anthropomorphism: What Science Fiction Can Tell Us About Yelling at Computers and Naming Roombas

HOLLYWOOD - APRIL 26: General view of atmosphere at the world premiere of Paramount Pictures and Marvel Entertainment's 'Iron Man 2' held at El Capitan Theatre on April 26, 2010 in Hollywood, California. (Photo by Kevin Winter/Getty Images)

The following is an excerpt from the new book "Make It So: Interaction Design Lessons from Science Fiction," published by Rosenfeld Media.

Humanness Is Transferable to Nonhuman Systems

Anthropomorphism is a common phenomenon in both sci-fi and interface design. Most sci-fi films and TV shows in the survey include examples of anthropomorphized technology.

In the film Moon, the lone inhabitant of a mining base on the Moon, Sam, knows that the computer system, GERTY, isn't human. So do we, the audience. Yet Sam treats GERTY as a companion, not merely a set of subroutines. Likewise, R2-D2 is one of the most beloved characters in the Star Wars films, yet this little droid neither looks nor sounds human. In the real world, we speak to our cars as if to coax them into making it to the gas station before they run out of gas, and we curse our computers when they behave unexpectedly or when their programming doesn't make sense to us. Like Sam, we know these systems aren't alive and don't understand us. Yet we treat them as if they were living beings.

The first thing to understand about anthropomorphism is that people can and do anthropomorphize almost everything -- from hurricanes, teddy bears, and pets, to furniture, tools, and machines. It seems we have evolved specific mental equipment to understand other humans that we bring to bear in our relationships with most everything around us.

Anthropomorphism is a fundamental psychological bias about which many books have been written. For our purposes, we only want to look at the ways in which this principle applies to technology. In their work at Stanford University, Clifford Nass and Byron Reeves have shown in controlled experiments that people, whether they realize it or not, tend to deeply anthropomorphize any sufficiently sophisticated technology -- whether a car, a microwave oven, or even a company. Their research, supported by B. J. Fogg at Stanford, has shown that people give computer systems a full range of social considerations, ascribe to them human motivations, assign demographic attributes to them such as age and gender, and react to them in social ways, such as through persuasion and flattery, all the while being unaware that they are doing so. The successful systems, these researchers explain, conform to human social norms. Designers and engineers aren't responsible for anthropomorphizing their systems -- users do that themselves. Instead, they are responsible for developing the systems so that they become acceptable characters, instead of annoying ones, by having them conform to social rules. Though a full examination of those norms and related social cognitive biases is beyond the scope of this book, it follows that designers who want to exploit this effect in the interfaces they design should look into them. (As a plus, you'll be able to charm cocktail acquaintances with terms such as "outgroup homogeneity bias" and the damning "Dunning-Kruger effect." Begin your search with the phrase "social biases.")

Some technologies are designed specifically to trigger this anthropomorphic sense. The ASIMO robot, for example, is designed to appear and move quite like a human. For less humanoid systems, people respond idiosyncratically: some people name their Roomba vacuums and speak of them almost as they would a pet, while others do not.

So what aspects of a system might provide those triggers? Humans are complex and have many qualities that a device could emulate. Our review of anthropomorphic examples in the survey tells us that they fit into the following broad categories: appearance, voice, and behavior. These categories are not mutually exclusive. Gort, the robot from The Day the Earth Stood Still, for instance, emulates human anatomy and behavior, but it has a silver visor where a face should be. The Minority Report "spyders" display the intention of finding and "eyedentiscanning" citizens as well as problem-solving skills, but physically they don't look at all human.

Once a system takes human form or adopts human-like behavior, it implies that the system has human-like capabilities. It also triggers social conventions we normally reserve for other people. Our shorthand mechanisms for dealing with each other kick in, and we start cajoling our cars, naming our computers, and treating that shopping agent as if it really knows what it's doing. These effects become more apparent when we look at the categories of things that trigger them in technology.

Lesson: Either design for absolute realism or stick to obvious representation.

Appearance

The first and most apparent aspect of humanness is in appearance. It can mean just the body, just the face, or just the eyes. It can vary from vaguely human to indistinguishable from human. In The Matrix films, programs are represented in a virtual reality as fully human characters, imparting greater impact, depth, and danger to the audience than almost any other representation could. The hunt-and-destroy program, called Agent Smith, feels more dangerous and capable than one would expect from a program. The prediction program, called the Oracle, seems wiser and more trustworthy when represented as a cookie-baking matron than it would as lines of code.

Agent Smith and the Oracle are programs represented as lifelike characters who think, react, show initiative, and emote on occasion right along with the actual humans in the Matrix, triggering characters and audiences alike to ascribe human motivations, intentions, and constraints to them, to their detriment.

Voice

Many interfaces in sci-fi feel human because they sound human. This can mean "having a voice" through the use of language, or it can mean audible expressiveness without formal language.

We need to be careful not to overgeneralize, though. Signs, books, and websites use language, but they aren't anthropomorphic. It is the interactive give-and-take of conversation that signals a human-like, responsive intelligence.

This responsive use of language can appear as text with no accompanying voice, as we see in the artificial intelligence called Mother in the movie Alien. It answers questions and displays a crude sense of intention.

More frequently, this use of language is embodied in sci-fi by an actual voice, which increases the sense of humanness well beyond the language alone. In the TV series Knight Rider, K.I.T.T. is the onboard artificial intelligence assistant in the car, and almost the entirety of its development as a character on the series is accomplished through its voice. There is a rarely used, minimal text interface, a voice-box light that glows as it speaks, a scanning red light at the front of the car, and K.I.T.T. occasionally controls the car itself, but that is the extent of its behavior. The bulk of our acceptance of K.I.T.T. as an independent character is due to its very human-like voice, with intonation, sophisticated phrasing, and natural cadence, timbre, and enunciation, unlike more robotic-sounding voices.

Lesson: Conversation casts the system in the role of a character.

If a computerized voice sounds more mechanical than human, it will be understood to be an artificial system; however, when a machine system has a natural human vocal representation, it can cause unexpected confusion. For example, when the Atlanta airport train (now called the Plane Train) opened in 1980, the cars had no human operators on board, but were equipped with a prerecorded human voice to give instructions and announce stops. What the designers didn't foresee is that some riders presumed the voice was coming from a human conductor able to make judgment calls when managing the trains. Riders would take more chances, such as rushing doors as they were closing, because they expected the conductor to see them and wait. The realistic voice created unrealistic expectations of how the system would behave. To solve the problem, computerized voice recordings (sounding just like the Cylons in the original Battlestar Galactica TV show) were substituted. These were not as "natural" or "comfortable" for riders, but they set the proper expectations. (This topic has been a hot one for more than 20 years. One of the authors was at a contentious panel discussion about this very subject at a conference way back in 1992!)

Behavior

At its heart, anthropomorphism is behavior-centric. People see faces in most everything, so appearance is an easy win, but human-like behavior increases the sense of anthropomorphism greatly. With human-like behavior, even the most mechanical things seem to gain personhood. One well-known example from sci-fi is in Iron Man, as Tony Stark's robotic helper -- affectionately called Dummy -- becomes a believable and endearing character simply through lifelike movements while responding to Stark's conversation, despite its utilitarian, industrial robot appearance and lack of sound or voice.

Anthropomorphism can even occur with behaviors limited to text responses and button clicks. For example, back in 1966, a computer program named Eliza created a stir (and controversy) by imitating a Rogerian psychologist who simply asked questions based on previous answers. It had a very simple algorithm and a pool of starter questions to ask but exhibited remarkable flexibility in "conversing" with people. In fact, some users were completely deceived into believing they were dealing with a highly sophisticated psychoanalysis program, and, even more remarkably, others reported personal insights despite knowing that the program was just a few lines of trick code. Even when we know the system is a simulation, it is still possible to build a successfully anthropomorphized experience -- if the purpose and constraints of the system are appropriately focused.
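
To see how little machinery this takes, here is a minimal, illustrative sketch of an Eliza-style responder in Python. The patterns, pronoun reflections, and fallback prompts below are stand-ins rather than Weizenbaum's original script, but they show the basic trick: match a phrase, reflect the pronouns, and fall back to a generic prompt when nothing matches.

import random
import re

# Pronoun swaps so the program can turn "my computer" into "your computer".
# These rules are illustrative, not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "yours": "mine", "are": "am"}

# A tiny pool of pattern -> response templates; {0} receives the reflected capture.
PATTERNS = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?", "What other reasons come to mind?"]),
]

# Generic "starter questions" used when nothing matches.
FALLBACKS = ["Please tell me more.", "How does that make you feel?",
             "Can you say more about that?"]

def reflect(fragment):
    # Swap first- and second-person words so the reply reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    # Match a pattern, reflect the captured text, or fall back to a stock prompt.
    for pattern, responses in PATTERNS:
        match = pattern.search(statement)
        if match:
            return random.choice(responses).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel ignored by my computer"))
# e.g. "Why do you feel ignored by your computer?"

Typing "I feel ignored by my computer" yields a reply like "Why do you feel ignored by your computer?": simple substitution, yet enough to invite a social response.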

The real world doesn't yet have enough artificial intelligence to create truly autonomous agents; for now, they remain products of sci-fi, and even there, good examples are rare. We do, however, see plenty of examples of guides outside sci-fi, in real or prototyped products such as Apple's Knowledge Navigator and the Guides 3.0 prototypes, Microsoft Bob, Microsoft Office's Clippy, and myriad "wizard" interfaces. It's a step outside of sci-fi, but to get interactive anthropomorphism right, we need to head there for a bit.

Two famous examples from Microsoft were unsuccessful guides, underscoring how difficult it can be to do these well. Despite the fact that much of the research behind the personalities of Microsoft Bob, a personal information manager, and Clippy, an assistant inside Microsoft Office, was based on the work of Nass and Reeves, the behaviors exhibited by these products were deemed nearly universally annoying by users. A clear mismatch existed between expectations and actions in terms of behavior. In particular, Clippy was often interruptive and presumptuous about its help, overpromising and underdelivering. It was difficult to turn off permanently. Software often transgresses social norms, but Clippy felt worse because it seemed like a social being that should have known better. It had two eyes, and it used conversational language in response to your behavior. It behaved like a real person -- a really annoying person.

Lesson: Anthropomorphized interfaces are difficult to create successfully.

It is possible to get guides and agents right, though. One real-world example comes from Apple. In Knowledge Navigator, an industrial film created in 1987 to show possible future technologies in action, Phil, a partly realistic animated agent, assists a college professor in a variety of tasks before his upcoming lecture. Phil works because he is easily interruptible, doesn't presume too much knowledge, and isn't represented with too much realism. This helps signal that although he's advanced, he's not as capable as a human assistant would be. He conforms to social conventions appropriate to his capabilities.

Anthropomorphism is a powerful effect that should be invoked carefully

People are used to interacting with other people, so this plays into our experiences of technology in both sci-fi and the real world. In sci-fi we see software, robots, cars, and search engines take on aspects of humanity to make it easier for the characters and easier for the audience to relate to and understand. Studying these examples shows that human-like appearance makes people more comfortable and helps interfaces communicate more expressively. Human-like behavior can make systems more instantly relatable and be well suited to assisting users with accomplishing tasks and achieving their goals.

Designers wanting to incorporate anthropomorphism should take great care to get it right, however. Anthropomorphism can mislead users and create unattainable expectations. Anthropomorphic elements aren't necessarily more efficient or easier to use. Social behavior may suit the way we think and feel, but such interfaces demand more cognitive, social, and emotional overhead from their users. They're also much, much harder to build. Finally, designers are social creatures themselves and must take care to avoid introducing their own cultural biases into their creations. These warnings lead us to the main lesson of this chapter.

Lesson: The more human the representation, the higher the expectations of human behavior.

Reference

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.
