Civic responsibility on social media channels is a topic eating away at corporate leaders and the general populace alike. The grueling election race surfaced this topic time and time again, yet how to manage “good manners” online is a touchy subject that has not been addressed head-on. Social media has shown an immense power to unite people and help bring about social change, demonstrated by events like the Arab Spring movement and capabilities like Facebook’s Safety Check. However, social media has also shown its power to divide us, as anger, racism, and rants of hate speech can run rampant. Where do we draw the line in how we talk to each other? Who owns civic responsibility when it comes to what is said on social channels?
The short answer? We all do. The anonymity of social media has for too long allowed people to speak without taking responsibility for the consequences of their speech. It’s all too easy to forget that behind every Twitter handle and Facebook profile is a real human being. It’s easy to blast away without truly considering how our words affect the other person or group. And it isn’t just a personal issue. For those of us involved in building the social platforms and communities people engage on, we have a responsibility to lead change.
What is hate speech?
We know that what offends some people doesn’t offend others. Yet an empathetic human should have the capacity to understand the wisdom of the Golden Rule (do unto others as you would have others do unto you). The Supreme Court of the United States has strongly protected freedom of speech, including hate speech, unless it incites imminent lawless action. There is debate about where exactly those lines fall and whether those who draw them introduce bias against other groups. That debate is beyond the scope of this article. Instead, what I’d like to focus on is our civic duty to protect the rights and well-being of everyone in the communities we are responsible for.
It doesn’t have to be law for it to be right
Recently, there was a meme that said “It’s no longer about whether they have decency, but about whether we do.” It referred to politics, but it’s directly applicable to this conversation. What kind of world do we want to live in? How do we want to relate to other human beings? What responsibility do we have, as leaders, to protect our customers? Sometimes doing the right thing is just the right thing to do, even if the legal system hasn’t yet caught up to define and enforce it.
There’s a fine line to walk. Outright censorship does not work. The perception that an entity is preventing people from expressing themselves turns people off and may anger them. Customers won’t tolerate the idea that someone “in corporate” is arbitrarily deleting what they have to say. But as brands, perhaps we’ve catered a bit too much to the preferences of our customers and, like lax parents, allowed inappropriate behavior to go on too long and too far. While we can’t alienate our customers, as leaders we can clearly define what is and isn’t tolerable behavior, and institute consequences.
Tech and social giants are already taking action. Facebook, Twitter, YouTube, and Microsoft have united in Europe to help mitigate hate speech across their channels. Twitter has deleted upwards of 350,000 accounts due to associations with terrorism. Facebook is working to evaluate reported posts and accounts quickly to help diminish hate speech. And realizing that outright censorship isn’t an option, they’ve begun taking unique and creative approaches – Facebook works closely with anti-hate speech activist groups by giving them ad credits, marketing resources, and strategic support. Periscope has taken what I think is a fantastic approach of crowd-sourced policing, in which they issue instant surveys to people in the conversation thread when a comment is potentially inappropriate, then based on the respondents’ feedback that person may be ejected from the conversation (or from Periscope altogether). With the social networks leading change, it’s time we as brands also step up.
So, how do you address inappropriate content, without going the route of flat-out censorship?
Understand the psychology. Most people are empathetic and receptive to calls to be more caring and compassionate. They want to be decent and treat others fairly. They respect brands that uphold human values. When they understand why you are stepping up measures to prevent inappropriate content, you give them a chance to be altruistic and rise to better behavior. Millennials especially relate to brands that demonstrate strong human values and openly declare their intention to make a positive difference in this world.
Be clear in what your policy is. Consult with your legal department and then draft and institute a policy that outlines an acceptable code of conduct for your community members and social channels. Share it openly and often. People need to know where the lines of accountability are, what constitutes crossing those lines, and what steps to take to report violations. Speak clearly about what the consequences of violations are, and inspire people to live up to the policy.
Put the right leadership and management in place. Every social community – online or offline – needs leaders who inspire positive human behavior, goodwill, equal rights, and kindness. The right leaders and managers make a critical difference by leading by example and living your community values. They should serve as the necessary guardrails that keep the community and social channels on track, focused on common goals, and free of the noise: hate speech, trolls, and the like. When inappropriate content surfaces or violations occur, the right management will handle it with grace and authority, and the community will back them.
Leverage technology, but don’t let it replace humans altogether. Technology can assist in identifying and addressing inappropriate content, with machine learning that flags it and facilitates reporting. But you also want to inspire community members to police themselves (as in the Periscope example). Brands cannot scale enough to identify all potential violations, so sharing reporting responsibilities with members is vital, as is making it easy to report.
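To make the idea concrete, here is a minimal, purely illustrative sketch of how machine flagging and community self-policing might fit together, loosely modeled on the Periscope-style approach described above. All names and thresholds are hypothetical, and the toxicity scorer is a crude keyword stand-in for a real machine-learning classifier, not any platform’s actual API.

```python
# Hypothetical sketch: a model flags borderline comments, then the
# community (not the brand alone) decides the outcome by voting.

FLAG_THRESHOLD = 0.8   # classifier score above which the community is polled
EJECT_THRESHOLD = 0.5  # fraction of "inappropriate" votes that ejects a user

def score_toxicity(text: str) -> float:
    """Stand-in for a real ML model: a crude keyword heuristic."""
    blocklist = {"slur1", "slur2"}  # illustrative placeholder terms
    words = text.lower().split()
    hits = sum(1 for w in words if w in blocklist)
    return min(1.0, 10 * hits / max(len(words), 1))

def moderate(comment: str, poll_votes: list[bool]) -> str:
    """Flag via the model, then defer the final decision to member votes."""
    if score_toxicity(comment) < FLAG_THRESHOLD:
        return "allowed"         # model sees nothing to flag
    if not poll_votes:
        return "pending-review"  # flagged; waiting on the community poll
    inappropriate_share = sum(poll_votes) / len(poll_votes)
    return "ejected" if inappropriate_share >= EJECT_THRESHOLD else "allowed"
```

The design point is that the model only triages; the judgment call stays with the people in the conversation, which keeps the brand out of the role of unilateral censor.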
Run campaigns to reinforce positive behavior. The rules need to be clear and well known to all. Don’t hide them in your terms-and-conditions fine print (no one reads those!). Run campaigns that put this issue out in the open, and run them often. Enlist people’s goodwill. Remind people why it matters to be respectful of others. Reward kindness and compliance.
We all must own the responsibility to make our digital communities better places to reside. We have the power to lead people toward respect, kindness, and empathy, and to teach our members how to listen to opposing viewpoints without resorting to personal attacks. Is it truly our responsibility to “parent” our customers? Perhaps not. But when you are given the opportunity to make life better for thousands, sometimes millions, of people, it just might be the right thing to do.