Why Facebook Actually Shut Down an Artificial Intelligence Program That Created Its Own Language

Was Facebook right to shut down its AI after it invented its own language? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by Paul King, Computational Neuroscientist, Data Scientist, Technology Entrepreneur, on Quora:

There’s a very misleading article making the rounds on social media: Researchers Shut Down AI That Invented Its Own Language. Facebook does not appear to have “shut down an AI” that invented its own language out of any fear that it would get out of control.

Here is a more realistic reporting of Facebook’s AI experiment: A Facebook AI Unexpectedly Created Its Own Unique Language.

In experimenting with language learning, a research algorithm ended up deviating from human language in a way that wasn’t really useful… it started generating what one might call “functional gibberish”: output that still carried information, but that wasn’t efficient or readable for humans.
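To make the idea concrete, here is a minimal sketch, purely a hypothetical illustration and not Facebook’s actual system, of what a “functional but inefficient” encoding could look like: quantity is conveyed by simply repeating a word, so the message still carries information even though it reads as gibberish to a person.

```python
# Toy sketch (NOT Facebook's actual code): an invented message scheme where
# quantity is conveyed by repeating a word. The message still carries
# information ("functional"), but it reads as gibberish to a human.

def encode_offer(offer):
    """Encode an offer as repeated item words, e.g. {'ball': 3} -> 'ball ball ball'."""
    return " ".join(word for item, count in offer.items() for word in [item] * count)

def decode_offer(message):
    """Recover the offer by counting how many times each word repeats."""
    counts = {}
    for word in message.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

if __name__ == "__main__":
    proposal = {"book": 1, "hat": 1, "ball": 3}    # hypothetical item counts
    message = encode_offer(proposal)
    print("sender says:   ", message)              # book hat ball ball ball
    print("receiver reads:", decode_offer(message))
```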

You could compare it to the weird routes that Google Maps sometimes generates… they might save thirty seconds of driving, but they involve ten turns down obscure side streets instead of three turns on main streets.

The result was intriguing because it showed the algorithm’s capacity for generating its own encoding scheme, and also showed what can happen with unconstrained feedback in an automated social language product.

The algorithm wasn’t “shut down” any more than any other algorithm is “shut down” when engineers revise it. Most likely it was only running on a test dataset and was never “live” or interacting with real humans.

Could this idea, taken to its logical extreme, lead one day to software being “alive” and “conscious”? Maybe. It is certainly an intriguing possibility.

This question originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world. You can follow Quora on Twitter, Facebook, and Google+.
