Imagining Robots, Imagining Ourselves: Our Perception of Robots

Can you really ‘bully’ or ‘abuse’ a robot? According to media representations, it seems that you can. Google-owned Boston Dynamics’ human-like robot Atlas and animal-like robot Spot stirred debate not only because of their physical capabilities, but also because of how humans interacted with them. Boston Dynamics released several videos on YouTube showing how these robots were tested for stability and balance. In these videos, both robots are kicked and pushed by Boston Dynamics employees.

Mass media were not interested in the robots’ ability to keep their balance. They were more concerned with how Boston Dynamics employees treated these robots. Mainstream newspapers and media outlets described the balance test on the human-like Atlas, which involved kicking and pushing the robot, as “bullying”. They went on to question the same test on the animal-like Spot, asking whether it is cruel to kick a robot. This makes us curious about how the concepts we use to describe human-robot interactions are shaped.

While ‘bullying’ – a word normally reserved for interactions between humans – is employed to describe the Boston Dynamics employee’s treatment of the human-like Atlas, ‘abuse’ – a word used for humans and animals – is applied to the interaction between the employee and the animal-like Spot. These words would hardly be used to describe a human user trying to destroy a smart kitchen appliance. Thus, the answer to our question may lie in the physical appearance of robots, which influences our perceptions of them.

There is no single perception of robots that everyone agrees on. Engineers may consider a smart kitchen appliance or an automatic vacuum cleaner a robotic device. However, public perception of robots does not match engineers’ categorizations. Many people imagine robots as electro-mechanical machines that resemble organic life forms, and they associate a robot with the specific organic life form it resembles. Remember how the balance test on the human-like Atlas was represented as ‘bullying’, while the test on the animal-like Spot was conceived as a form of ‘cruelty’ and ‘abuse’?

In this public perception, physical appearance overshadows the functions and cognitive complexity of robots. IBM’s Watson, a question-answering computer system that may be seen as a form of artificial intelligence, won the quiz show Jeopardy! in 2011. Watson is arguably far more ‘intelligent’ than Atlas or Spot. Yet, because Watson has no body resembling a life form (it consists of processors housed in racks and cabinets), its victory over humans was discussed only in terms of IBM’s advanced technology and research program. In other words, Watson was seen as a piece of technology rather than a character or figure with which humans can identify. Had Watson had a human-like appearance, however, its victory might have fueled fears that artificial intelligence would one day surpass the human mind and govern human behavior. As it stands, Watson is seen as a technological counterpart that helps humans.

Imagine a scenario in which Watson’s cabinets are kicked by a human. Do you think it would spark the same controversial media debates that Atlas and Spot did? Would this change if Watson had a human or animal form? It seems that people do not imagine or associate themselves with a cognitive being unless it has a physical shape resembling an organic life form.

This is visible in public perceptions of every kind of electronic and robotic device. Consider a smart missile, a self-propelled precision-guided munition. In simpler terms, it contains both hardware and software systems that can “carry out a complex series of actions automatically” and can thus be considered a robot. Yet these smart missiles, which do not look like living beings, are conceptualized as weapons of war rather than as smart and destructive robots. No one worries about a scenario, or a future technological development, in which these smart missiles act on their own and take over the world. In other words, our perception depends greatly on the missiles’ shape, which leads us to conceive of them as tools rather than personas.

In a nutshell, our perception of robots is shaped not by the level of their “intelligence” (e.g. the Watson case) or their capabilities (e.g. the smart missile case), but by their appearance (e.g. the Atlas and Spot cases). We tend to perceive robots that look like humans as humans, those that look like dogs as dogs, those that look like missiles as tools, and those without a specific shape as a form of technology. Our perception of robots relies on our imagination of ourselves and our surroundings. We believe this perception will play a key role in shaping future human-robot interactions, from the public fear of robots constructed in popular culture to debates around robot rights and the ethics of integrating robots into our legal, social and economic systems.
