Can what you say on Facebook land you in a mental institution?
That's what happened to Shane Tusch of San Mateo. Shane was committed to a mental institution because of a post he made to test his First Amendment rights, and because of a "welfare check" call that Facebook either facilitated or made under its new suicide-prevention program.
Shane's story is just one example of what can go wrong when Facebook polices our posts as though it should be called Headbook. That's why I wrote CEO Mark Zuckerberg today to ask him to put some new safeguards in his program.
On the first day of the program's expansion, and in response to it, Shane published a post to his account as a social experiment, saying that his frustration with his bank made him want to hang himself from the Bay Bridge.
When worried Facebook "friends" contacted him about the post, he told them it was an experiment. Nonetheless, according to Shane, either Facebook or another Facebook member who was not his friend called the police. Shane was locked out of his account until he read suicide-prevention literature.
Police came to his house when he was not home. When he went to the police station to discuss a traffic notice they had placed on his car, which the police claimed was unrelated, he was asked about the Facebook post. Because he acknowledged making the post, even though he explained that it was a First Amendment experiment, he was handcuffed and held for 40 hours in a mental-health facility, where blood was drawn, then transferred to a hospital and "locked down" for another 30 hours. At the county mental institution he was forced to witness disturbing events that traumatized him.
Facebook announced the suicide-prevention program on Feb. 25. In a video, Facebook Product Manager and Community Operations Safety Specialist Rob Boyle described in detail how the tool works. He said that once a concerned user flags a post to bring it to Facebook's attention, that user is presented with four options, one of which is to request that Facebook take a look at the post.
From there Facebook will evaluate the post and do one of two things: 1) if it deems the post "worrisome but not imminent," Facebook will send the poster resources, such as self-care tips or a connection to a free, confidential chat line, or 2) if Facebook thinks there is an "imminent threat," it will reach out to local law-enforcement agencies to conduct a "welfare check." Facebook will then "follow up" with the person who flagged the post, but it has not specified what that follow-up would contain or whether it would take any further action.
I warned Zuckerberg about the myriad consequences of the program, including the new liability the company was facing because it chose to assume a new duty under California's Voluntarily Assumed Duty rule.
Because Facebook provides the suicide-prevention tool with every account, users will rely on Facebook not only to keep the tool operating but to deliver its services swiftly and adequately enough to save lives. Facebook is essentially putting the suicide-prevention tool on the same level as an emergency 911 call.
Imagine a post that erroneously tags a teenager as suicidal in an act of bullying, or a post made by one teenager on another's computer that leads to such a tag and locks the victim out of the account. For young people, such an unconscionable act may do more to prompt suicide than the absence of an "intervention."
Facebook runs the risk that the suicide-prevention services might cause harm unrelated to the potential suicide. Some examples: online bullying by users abusing the tool, false positives, account lockouts, reputation damage in the event of privacy breaches, or other harms yet to be found.
I'm a big proponent of the Prime Directive, the rule in the Star Trek series that says that technologically advanced people should butt out of the development of societies that they do not fully understand, because unforeseen issues could create more problems than they solve.
I wrote Zuckerberg that "Facebook should heed the warning of the prime directive because there are some problems, like suicide, that well-intentioned technologists can nonetheless exacerbate."
One thing's certain: If Facebook doesn't realize the power and responsibility it is assuming, the problem is bound to repeat itself.
As for all Facebook users, be careful what you post or what's posted by others on your account. It can and will be used against you.