Let me admit at the outset that I am anything but a fan of social media. I believe that in far too many cases they pander to our society's juvenile, obsessive need for instantaneous, narcissistic gratification. That said, in the case of the horrific Boston bombings, I also recognize that they played an invaluable role in apprehending the perpetrators. And I recognize that they are here to stay. I just wish that we would invent ones that are more ethical.
My greatest reservation about social media--Facebook in particular--is its unmitigated role in cyberbullying. Indeed, if we had set out intentionally to invent a tool for bullying the greatest number of kids incessantly, 24/7, we couldn't have concocted anything more powerful.
The fact that a number of teenage girls have recently committed suicide as a result of relentless bullying is enough to convince me that something is seriously wrong with how our society fosters the latest technologies without doing a proper "ethical assessment" of them beforehand. To be fair--if such an assessment is ever truly possible--it's not just social media with which I am concerned, but the Internet, cell phones, etc.
To be clear, as a former engineer, I am anything but hostile to technology. I am in fact a confirmed "techno-holic." I love the latest gadgets.
But more to the point, I am primarily a social philosopher/social scientist. That's why throughout my career, I have always been interested in the murky concept of "ethical technologies."
In brief, an ethical technology is one that incorporates ethics from its very inception. In other words, ethics is not brought in once the genie is out of the bottle, for often that's too late.
Ethical technology starts with a series of questions: What primary assumptions are we making about how our technology will be used? Whom will it hurt or harm? Whom will it benefit, and under which conditions? How can it be abused? What can and should we be doing to minimize the improper uses of our technology? Indeed, what counts as "improper"?
There are no easy answers to these questions, nor are there meant to be. Ethics is concerned fundamentally with raising the toughest questions possible about anything that humans do. In this regard, the notion of ethical technologies starts with a prime presumption: nothing is ever neutral when it affects people. In short, there are no ethically neutral technologies, period!
Take the question of assumptions. It's one thing to develop a Facebook for Harvard students. It's quite another to develop one for young, immature kids who can and will use it to spew out the worst obscenities to taunt their "enemies" with little or no remorse. One can't just assume that the general population will behave as Harvard undergraduates do. To assume so raises, or ought to raise, an "ethical flag."
Furthermore, don't tell me that, given all our experience with countless technologies, we couldn't have raised such questions beforehand. If we've learned anything, it's that such questions rarely, if ever, cross the minds of inventors.
I understand perfectly the feelings of those parents who insist that unless they can monitor their underage children's smartphones and Internet accounts for salacious content and hateful messages, they will not allow them to have them. But of course this is reactive. Furthermore, no one can monitor one's children's activities all the time, especially when they are away from home a great deal of the day.
How much better it would have been if Facebook and others had worked with parents, the ACLU, and similar groups to set up proper safeguards beforehand--monitoring groups, for example--that would at the same time respect privacy and free-speech rights.
At this point, all we can hope for is that our technologies will evolve.
Ian I. Mitroff is a Research Associate at the Center for Catastrophic Risk Management at UC Berkeley. He is an expert in Crisis Management and a social philosopher/social scientist.