Rubbing my eyes awake at 5 a.m. most mornings, I typically reach first for my iPhone to check my email and Facebook account. The other day, I was startled to see an angry and quite profane Facebook message from an old friend from high school. Pouring my morning coffee, I muttered to myself, "Wow. I wonder what I did to make her so mad."
Nothing, in fact. By my third cup, I realized the message was the work of a virus. Luckily, I didn't click on the link it carried; if I had, all of my friends might have received the same foul message, seemingly from me.
That's the Internet of 2011: along with all of the good things it brings, online criminals, hackers and "phishers" are finding increasingly sophisticated ways to hijack Facebook accounts. But it isn't just Facebook that's at risk: search engines, ISPs and everyday Internet users continue to fight an ongoing battle against deceptive and dangerous websites.
So what's the answer? There's no silver bullet, but Facebook is employing a new strategy: partnering with Web of Trust (also known as WOT), which aggregates the opinions and ratings of millions of independent users about websites' reliability. (According to WOT, the tool has been downloaded over 20 million times. My company, LegitScript.com, helps WOT by providing our feed of "rogue" Internet pharmacies to assist in its reputation rankings.)
How does it work? For WOT users, it's all about a "plug-in" for your browser: whether you use Firefox, Chrome, Safari, Opera or Internet Explorer, WOT is among the hundreds of add-ons you can download. Operating in the background, WOT turns your browser into a kaleidoscope of color: when you use a search engine, the aggregated opinion about each website's trustworthiness is reflected, via a color scheme, right next to the result. And WOT users can give back, rating websites on trustworthiness and other criteria.
It's the ultimate democratization of the Internet.
Infallible? No system is, but WOT's process controls for disreputable participants trying to game the system. Take the case of Internet pharmacies: our data indicates that only about 2% to 3% of online pharmacies are legitimate, and the rest fail to meet basic legal or safety standards. Sure, rogue Internet pharmacy operators selling fake drugs may try to rate their own websites as "trustworthy" in WOT. But it doesn't typically work, because WOT's system assumes that not all users are equally reliable, and reliability has to be earned. Over time, users whose ratings prove accurate gain reliability, and their ratings carry more weight; biased or dishonest reviewers, meanwhile, are called out, and their reliability ranking ends up in WOT's basement. In short, it's the wisdom of the crowd.
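The idea of earned reliability can be illustrated with a toy weighted-average model. To be clear, this is an assumption for illustration only: WOT has not published its actual algorithm, and the function and numbers below are hypothetical.

```python
# Illustrative sketch of reliability-weighted rating aggregation.
# NOT WOT's actual (unpublished) algorithm; it only shows the general
# idea that ratings from users with a proven track record count for
# more than ratings from new or unreliable accounts.

def aggregate_rating(ratings):
    """Aggregate (score, reliability) pairs into one weighted score.

    score:       0-100 trustworthiness rating from one user
    reliability: 0.0-1.0 weight earned by that user's track record
    """
    total_weight = sum(rel for _, rel in ratings)
    if total_weight == 0:
        return None  # no reliable input yet
    return sum(score * rel for score, rel in ratings) / total_weight

# A rogue pharmacy floods the system with fresh sock-puppet accounts
# (high scores, low reliability), while trusted users rate it poorly.
ratings = [(100, 0.05), (100, 0.05), (100, 0.05),  # sock puppets
           (5, 0.9), (10, 0.8)]                    # trusted users
print(round(aggregate_rating(ratings), 1))  # prints 14.9 -- still "poor"
```

Even with a three-to-two numerical majority, the sock puppets barely move the needle, because their earned weight is a fraction of the trusted users'.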
The result: when Internet users see an Internet pharmacy's (or any other website's) low rating, they are less likely to order drugs from it. This reduces the incentive, and the financial rewards, for rogue Internet pharmacies and other types of cybercrime.
So what does that mean for Facebook users? If you click on a link that leads to a page the WOT community has given a poor reputation rating, Facebook will show a warning message. You still have the freedom to visit the page if you want to, but it is a helpful warning tool.
What's interesting about this is what it says about the limits of automation in making the Internet secure. After all, WOT indicates that its crowd-sourcing model regularly identifies threats that automated processes miss. Facebook, Google and other major companies have increasingly advanced algorithms to fight cybercrime and fraud, and those are also indispensable tools. But at the end of the day, the Facebook-WOT deal, which ultimately relies on the wisdom of the crowd, is a reminder that the Internet belongs to all of us, and we're all responsible for it.
Now, back to my morning coffee... and to double-check that I deleted that profane message from my Facebook inbox.