Imagine the original job interview. The first one ever, back on the prehistoric savannahs of eastern Africa. It wouldn't have been exactly like a modern job interview, because early humans had no resumes or LinkedIn or letters of recommendation to guide them. There was very little in the way of personal or professional reputation to go on, so in that sense the exchange was much trickier. But the fundamental idea was the same: Somehow the interviewer had to judge, in a brief span of time, whether the applicant -- a complete stranger -- was worthy of trust. Is this a person to do business with, to entrust with your money and your financial future? What subtle, unintended signs might one detect in that initial, face-to-face interaction, to boost the odds of choosing a solid relationship and rejecting a dicey one?
What did our ancient ancestors look for in making these crucial judgments? Indeed, how do we make these judgments nowadays? It's advantageous to enter into cooperative business deals, but the risk of deceit is high. Every time you walk into a used car lot, or shop around for a home contractor or financial advisor, you are using your wits to pick someone trustworthy -- and to avoid scoundrels.
Since trust and cooperation are so essential to the smooth working of human society, it makes sense that people would have learned over eons both to send signals of trustworthiness and to interpret signs of malicious intent. Yet scientists have searched in vain for that single "golden cue" that predicts future cooperation or opportunism. Now there is a growing consensus that the idea of a single, isolated non-verbal signal of trustworthiness -- or deceit -- is simplistic. It's probably not this or that grimace or gesture, scientists are thinking, but rather a subtle constellation of signals emerging dynamically during brief encounters.
That's the idea behind some new research from psychological scientist David DeSteno of Northeastern University. Working with a large team of collaborators at MIT, Cornell and his own university, DeSteno ran a two-part experiment aimed at identifying the intertwined non-verbal cues that warn of opportunism in others. In the first part of the study, the scientists observed strangers during their first conversation -- either face-to-face or in a web-based chat -- figuring that if there is indeed a set of non-verbal cues that consistently conveys trustworthiness, then people should be better at judging others' intentions face-to-face.
They videotaped pairs of students, who didn't know one another, as they chatted for five minutes about ordinary topics like spring break, life in Boston, and so forth. Other student pairs had similar chats via the Internet, the only restriction being that they couldn't use emoticons. Then all the pairs played a game that measures cooperative and self-interested economic behavior. As expected, those who had chatted face-to-face beforehand were more accurate in predicting the trustworthiness -- or sleaziness -- of the stranger. That's presumably because they had gleaned non-verbal information about the opposing player. Something in the interaction -- something that was missing from the text-only web exchange -- had given away their opponents' intentions.
But what? To find out, the scientists asked two independent judges to analyze the videotaped interactions, identifying all the possibly meaningful cues: smiling, laughing, leaning, looking away, crossing the arms, nodding, head shaking, and touching. Then they isolated the specific cluster of cues that was actually present when volunteers successfully detected others' self-serving intentions. Again and again, it was a cluster of four cues: hand touching, face touching, crossing arms, and leaning away. None of these cues foretold deceit by itself, but together they combined into a highly accurate signal. And the more often the participants used this particular cluster of gestures, the less trustworthy they were in the subsequent financial exchange.
This finding was intriguing, but inconclusive. After all, people are constantly twitching and shifting, and sending out all sorts of random cues, so it's difficult to know if this particular cluster of cues -- and only these cues -- is the one involved in signaling malevolence. To test this more rigorously, the scientists needed to experimentally manipulate the suspect cues, and only those cues, and then see if they did indeed warn of self-interested behavior. But how is it possible to achieve that level of control?
Enter Nexi. Nexi is a robot, and in the second phase of the study, she replaced one of the partners in each pair. The remaining partner had a 10-minute "conversation" with Nexi, again about mundane topics, while the scientists, operating Nexi in Wizard-of-Oz fashion, made her lean back, touch her face and hands, and cross her arms. All of Nexi's cues were derived from examples of human motion, to make them as authentic as possible. The order varied, with some cues repeated, to simulate human fidgeting.
Other volunteers, the controls, also chatted with Nexi for 10 minutes, but during these conversations, Nexi used gestures other than the target gestures. The idea was that Nexi's expression of the suspect cue cluster would result in a diminished sense of trust in Nexi. And that's exactly what happened. As reported in a forthcoming issue of the journal Psychological Science, when Nexi used the target gestures -- but not when she used other gestures -- the volunteers reported feelings of distrust toward the robot. What's more, when they played Nexi in the economic exchange game, the volunteers expected to be treated poorly -- and treated Nexi less cooperatively in return.
Interestingly, these results were narrowly focused on trust. That is, even when Nexi's body language made people distrust her motives, they did not necessarily dislike her. This is actually a familiar human experience: Many of us know individuals whom we like well enough, but would never, ever trust with our money.