A few years ago I wrote about a study looking at the relationship between humans and robots. The researchers suggested that we are much more likely to form productive working relationships with robots when those robots have flaws and imperfections. With robots now a much more common feature of our lives, has that situation changed? A recent study suggests not.
The study, which was published in Frontiers in Robotics and AI, tested how humans respond to robots that are 'perfect' versus those with more human-like flaws. As before, the results suggest we take to the flawed robots more easily than the perfect ones.
"Our results show that decoding a human's social signals can help the robot understand that there is an error and subsequently react accordingly," the authors say.
This is crucial as robots are becoming increasingly social, but they aren't yet capable of behaving flawlessly. Those flaws aren't there by design, however; most development efforts currently underway aim to make robots functionally perfect.
To test how humans respond to robots of varying levels of social proficiency, the researchers deliberately programmed faulty behavior into a NAO robot's routine before letting it loose. They then recorded how humans rated the machine in terms of likability, anthropomorphism, and perceived intelligence, as well as how they responded when the machine made a mistake.
When the data was analyzed, it transpired that humans responded to 'faults' in the robot's behavior with quite clear social signals. These erroneous robots weren't regarded by the participants as less intelligent or anthropomorphic than their more perfectly performing peers. Rather, they actually scored higher, especially on likability.
"Our results showed that the participants liked the faulty robot significantly more than the flawless one. This finding confirms the Pratfall Effect, which states that people's attractiveness increases when they make a mistake," the authors say. "Specifically exploring erroneous instances of interaction could be useful to further refine the quality of human-robotic interaction. For example, a robot that understands that there is a problem in the interaction by correctly interpreting the user's social signals, could let the user know that it understands the problem and actively apply error recovery strategies."
The findings provide a further reminder that as robots are designed with social interaction in mind, it might be worthwhile to ensure that they retain the kind of imperfections we humans have rather than designing them out. Embracing the flaws in social robot technology might help make robots more believable, likeable, and natural, and therefore speed their acceptance within society.