Automated Intimacy: To Err Is [Perhaps] Human

This post was published on the now-closed HuffPost Contributor platform.

Relationships are a lot of work.

Forming new bonds, maintaining friendships, and keeping up with acquaintances all require time and energy on our part. One of the most vexing issues facing social networks is how to balance our impulse and desire to maintain more relationships than is humanly possible against our finite time and attention.

Our solution: we are automating intimacy.

We have a finite amount of time and intimacy to give, while having a seemingly infinite number of new connections, acquaintances, and friends to maintain and cultivate. Relationships by their very nature depend on a certain level of time/energy reciprocity, and many of us feel incentivized to “hack intimacy.” Obviously we can always be more efficient in how we maintain relationships, but some of us are blurring the line by intentionally misleading recipients with automated intimacy. In an effort to scale, they are trying to pass off these automated messages as authentic.

150.

That is the approximate limit of stable relationships that a person has the cognitive ability to maintain. It’s referred to as Dunbar’s number, deriving from the work of British anthropologist Robin Dunbar.

Our social networks provide us with the desire (and seeming ability) to far exceed Dunbar’s number. So how do we do it? We are provided an array of shortcuts that allow us to project intimacy in order to better extend our finite level of time and mental bandwidth. Instead of spending a few minutes crafting a congratulatory message on LinkedIn, we are prompted to utilize automated intimacy tools that only need a few seconds. In other words, we want to show that we care but we don’t have the time and mental bandwidth to provide an authentic level of intimacy and care. [But we hope the recipient thinks we care.]

This conflict, of course, is not entirely new. Email newsletters and mass messages provide the classic example, where the sheer number of people we would like to contact greatly outnumbers our ability to provide one-on-one intimacy. Email newsletters typically try to create a level of intimacy by personalizing the message with a fill-in-the-blank name (“Hey David!”), creating a subtle illusion that the sender is addressing just the reader. The lack of personalized content throughout the rest of the newsletter, however, clearly indicates to the reader that this is a polite tactic as opposed to an act of deception.

In the future, we will blur the lines between bot and human—automated and authentic. This is a major problem for relationships.

Why? Human relationships rely on understanding the thought/time/value that a person is giving. Automating intimacy creates a Seinfeldian problem for the digital age—every time we receive a message, we have to wonder: “Did a human write this?” If we think that a human wrote the message, we are more likely to respond in kind. We are returning the level of care that we think went into the message. If we know it is automated, we don’t feel the same urge to respond using our valuable time, energy, and thought.

Humans are reciprocal by nature.

The Imitation Game for our modern social networks doesn’t just involve machines masquerading as humans, but often humans-acting-as-bots-projecting-as-humans. While some people are clearly upfront with their automated messages, others are drawn towards imitation. Let’s have a look at some recent Direct Messages (DMs) I have received on Twitter to illustrate the point.

Here we have a human being upfront with using an automated tool. I know they didn’t spend time thinking about me, and I respect their transparency.

This is the classic ambiguous message. I would assume that it is automated, but at the same time we often quickly communicate in surface-level broad generalities. The message, however, is not trying to imply that the sender spent any significant amount of time/energy/thought.

Okay, now this one reeks of being disingenuous. They are most likely hoping that in a desire to inflate my ego I will suspend all my critical thinking skills. As a human, I want people to care about my work and find me impressive. That feels good! The sender is clearly trying to trigger some level of goodwill towards them, when it would certainly seem to be a blanket message. As much as I want to think that my life warrants two exclamation points, I just can’t fall for this human-as-bot.

This one takes the cake. After starting the message stating that they checked out my profile, the sender ends the DM with a blatant typo. “Have a great dau!!,” followed by “Day* not dau, oops!”

To err is human, so they MUST be human. Right??

Wrong. The sender’s immediate DMs made me curious about the lengths to which humans, unable to adequately scale intimacy, will go in order to project it. So I followed the sender from a different Twitter account that I maintain. Lo and behold, the same typo and apology arrived.

He is an automated human projecting human error. We have reached a bizarro point where we, as humans, are having to imitate our own humanness. Perhaps my forgiveness of this bot-like behavior would be divine.

==

David Ryan Polgar is a three-time TEDx speaker, commentator, consultant, and co-founder of the global Digital Citizenship Summit. Often referred to as a “Tech Ethicist,” he explores our evolving relationship with social media and tech from an ethical, legal, and emotional perspective. TechEthicist.com
