The recent report of Facebook taking hours to respond to requests to remove a video, reportedly of a murder, is yet another example of digital social media's current inability to respond to real-world physical concerns in an appropriate and timely manner. It is one thing to claim a mission of an open, connected society, but if we want to protect the real society, then the values and norms of its "digital twin", the online world, need the controls and care they deserve as that world becomes entangled with real life.
Facebook does not lack the resources to do more on this, with 1.86 billion users and a market capitalization of $410 billion. The issue is dealing with a medium that works in seconds, not hours, and whose "network effects" amplify content around the globe. With 8 billion daily video views, roughly 5.6 million video views a minute, it needs artificial intelligence to assist in monitoring and management.
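A quick back-of-envelope check of those reported figures (the 8 billion daily views is the article's own number; the breakdown below is simple arithmetic):

```python
# Rough arithmetic on Facebook's reported video volume
daily_views = 8_000_000_000            # ~8 billion video views per day (reported figure)

per_minute = daily_views / (24 * 60)   # views per minute
per_second = per_minute / 60           # views per second

print(f"{per_minute:,.0f} views/minute")   # roughly 5.6 million
print(f"{per_second:,.0f} views/second")   # roughly 93,000
```

At that rate, even a few seconds of human review per flagged video would require a moderation workforce far larger than Facebook's entire headcount, which is the core of the scaling argument.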
Back in November 2016, after the American election, Facebook and Google said they would do more to stop fake news; they changed posting policies and recruited an army of human fact checkers to review content.
They are in a conundrum of their own making: they want to keep the platform "open" and committed to "free speech", but these heinous crimes, which is what they are, seem to reflect two key problems inside Facebook.
Firstly, Facebook's internal organization and culture, some 17,000 employees across 40 international and 15 US offices with 9 data centers, cannot monitor this volume of traffic in its current form. One office per country, presumably with some locally hired companies to help out, is not the right model. At these speeds and volumes of social data (over 90,000 video views a second in total), monitoring can only be done with artificial intelligence.
(A famous problem in computer science, the "dining philosophers" problem, shows how a group of agents sharing scarce resources must make trade-offs to avoid deadlocking each other. Another, the "Byzantine Generals Problem", now central to blockchain design, shows how the parts of a distributed system can fail to coordinate when communication between them is unreliable. These and other complexities face Facebook as it oversees its sprawling empire of clouds and crowds of interconnected, distributed data.)
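The dining philosophers problem mentioned above can be sketched in a few lines. Five philosophers share five forks; each needs the two adjacent forks to eat. If everyone grabs their left fork first, the system deadlocks. A minimal sketch of the classic fix, acquiring forks in a global order (lowest-numbered first), is shown below; the thread counts and round counts are illustrative choices, not anything from the article:

```python
import threading

NUM_PHILOSOPHERS = 5
forks = [threading.Lock() for _ in range(NUM_PHILOSOPHERS)]  # one fork between each pair
meals = [0] * NUM_PHILOSOPHERS

def philosopher(i, rounds=10):
    left, right = i, (i + 1) % NUM_PHILOSOPHERS
    # Deadlock avoidance: always acquire the lower-numbered fork first,
    # so a cycle of "everyone holds one fork and waits" cannot form.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # "eat" while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,))
           for i in range(NUM_PHILOSOPHERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # every philosopher ate all 10 rounds without deadlock
```

The point of the analogy: at Facebook's scale, many competing processes (moderation queues, ranking, delivery) contend for shared, scarce resources, and without carefully designed coordination rules the whole system can stall.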
Secondly, Facebook has been investing heavily in artificial intelligence, as it has publicly acknowledged since early 2016, and in February 2017 it said it would experiment with more sophisticated AI to monitor and flag fake and offensive news items and behavior.
What we are seeing is a failure of AI training data in natural language processing (NLP), image recognition, and semantic reasoning to "understand anomalies". These are frontier AI research topics, still in their early days, and clearly not yet capable of the real-time emergency response that would need to process petabytes of data per day to monitor what is going on. We are only just entering the stage some call the "Fourth Industrial Revolution" (4IR), an era of super-fast computers and the Internet of Things, but realistically it will be 2025 to 2030 before exascale computing speeds can handle these volumes.
It's one thing for a self-driving car to recognize a pedestrian, or for speech-to-text translation to work in milliseconds, but that is just one car and one person, a one-to-one interaction, not 9 billion of them every hour. The stakes have been raised, revealing the true nature of a hyperconnected, "post-Moore's-law" world that needs appropriately geared and scaled solutions, as espoused in the 4IR ideas. Facebook faces a growing need to reinvent its own architecture to include the safety checks and balances that the earlier age of innocence of the 1990s internet could barely have imagined.
Understanding and reasoning by AI need a lot of training data. Facebook clearly has this data on all of us, on the order of trillions of data points, but it is also on the front line of protecting our society's values, with a new level of technology that demands a new Facebook organization fit for the 21st century.