The Next Big Threat to Truth and Honesty on the Internet

Photo via Visual Hunt

From the early days of AOL and the Netscape browser to the present, the internet has always been a place where the accuracy and truth of online content have been questionable.

The famous 1993 New Yorker cartoon made the point well: a dog sitting at a computer, with the caption “On the internet, nobody knows you’re a dog.”

The value and utility of the internet are unquestionable. Still, the lingering question about any piece of information you retrieve online is whether it’s real, fake or somewhere in between.

Unfortunately, fake news isn’t the only concern we should be talking about. Recent advances in audio and visual digital technologies are making it increasingly difficult to tell whether what you saw or heard actually happened.

Over the last few months, commercial technologies have emerged that let people copy and alter anyone’s voice, including the voices of famous and noteworthy people, to make it appear that the person said something they never did.

For example, a new Adobe tool lets you take a recording of someone’s voice and make it say anything you want. The easiest way to think of this is as Photoshop for voice and video recordings.

Want to hear a famous actor’s voice say something obscene? A tool like this can make this fictitious event sound real.

A few startups have dipped their toes in the water as well. Companies like Lyrebird have developed voice tools that can copy any voice and have it speak any statement you choose. The company’s website demonstrates how the voices of famous people like Hillary Clinton and President Trump can be mimicked to talk about the company, as if it were really them speaking.

The resemblance and voice quality are surprisingly good.

As with other technologies, seeing what advancements are being made in research universities and institutions can speak volumes about what the future will bring.

A group of researchers from Stanford University, the University of Erlangen-Nuremberg and the Max Planck Institute for Informatics have developed a unique approach to what they refer to as “real-time facial reenactment,” which allows one person’s face to drive the facial expressions of someone else, such as a celebrity.

I assure you, their demonstration will likely shock you. The project’s webpage shows how the face of former President Bush or Arnold Schwarzenegger can be visually manipulated to mirror the exact facial expressions of the person running the demo.

To be clear, all of the companies and institutions behind these specific technologies developed them with legitimate and honest uses in mind. Unfortunately, as with so much else on the internet, these technologies also open an easy avenue for abuse.

The potential misuse of these tools by bad actors could end up being as destructive as what some people deem fake news in today’s environment. Consider the hazards of a viral video or audio clip in which a world leader, CEO or social influencer fraudulently appears to say things that shape the actions or beliefs of others.

Particularly as more content is created and optimized for audio and video, as the growing significance of on-demand video and social media material shows, the potential harm can’t yet be fully grasped.

Ultimately, the future may involve widespread community policing of content, similar to what is currently done on sites like Wikipedia. Accuracy and truth will likely be maintained only through community efforts, because the internet has become too big and decentralized for any one organization to monitor and censor.

This post was published on the now-closed HuffPost Contributor platform. Contributors control their own work and posted freely to our site. If you need to flag this entry as abusive, send us an email.