The Springtime of Artificial Intelligence

Artificial intelligence will either lead to robots that eat us alive, or it will make everything we do simpler and more productive. Those are the false choices we assume are the only options when we read Russia’s president making outrageous pronouncements about AI, and then hear Elon Musk of Tesla and SpaceX telling us to be wary. There is reason to take note of the advances in AI and machine learning, though.

But worry?

The improving viability of AI is the product of several factors. Cloud-based data storage and computational power, the advent of Graphics Processing Units (GPUs), and better data quality assurance have all enabled AI and machine learning (ML) to become enduring drivers of innovation. Software-as-a-service providers like Salesforce are turning to first-generation ML functionality to improve their current offerings, and even giants like Oracle, Microsoft, and SAP are beginning to adopt it.

Everything, however, is dependent upon data.

The misconception persists that AI and machine learning can evolve and mature without human involvement. In reality, both depend on the quality of their data to perform any task: data must be parsed, cleaned, and tagged, duplicates removed, and the datasets turned into a “ground truth” by human hands. Without clean data, algorithms cannot learn, and neither AI nor ML can perform effectively.
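To make that concrete, here is a minimal sketch of the kind of human-guided preparation described above, run on hypothetical customer records; the field names and normalization rules are illustrative assumptions, not any vendor’s actual pipeline.

```python
# A minimal sketch of human-guided data preparation: parsing,
# cleaning, and de-duplicating hypothetical customer records
# before any model sees them. Field names are illustrative.
raw = [
    {"name": " Alice Smith ", "email": "ALICE@EXAMPLE.COM"},
    {"name": "alice smith", "email": "alice@example.com"},  # duplicate
    {"name": "Bob Jones", "email": "bob@example.com"},
]

def normalize(rec):
    """Standardize casing and whitespace so duplicates become visible."""
    return {"name": rec["name"].strip().title(),
            "email": rec["email"].strip().lower()}

seen, ground_truth = set(), []
for rec in map(normalize, raw):
    if rec["email"] not in seen:  # de-dupe on a chosen key
        seen.add(rec["email"])
        ground_truth.append(rec)

print(ground_truth)  # two clean records: the "ground truth" a model can learn from
```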

Nathaniel Gates, CEO of Alegion, has bet his company’s future on that human involvement in the potential of AI and machine learning. Alegion is a cloud platform that crowdsources people to turn raw data into the clean information software needs to carry out assigned work correctly.

“Businesses now have big aggregated datasets from many systems,” Gates said. “And they’re using algorithms to try to make heads or tails of it. The vast majority of data is not in a state where machine learning and AI can produce good quality work from it. We clean up inconsistencies and dupes and often add a metadata layer so computers can be efficient with their algorithms. Most of what we are doing is applying our ‘crowd’ to go through at a very large scale and clean up data and add labels so AI initiatives can be applied.”

Nathaniel Gates, Alegion

The data indicates there will be more data, and companies like Alegion should have healthy business. Navidar, an Austin, Texas, technology investment bank serving the mid-American corridor, estimates that the amount of digital data will continue to double every two years. An even greater challenge is that 70 percent of the data enterprises use to make decisions is completely unstructured, coming from social media, wearables, the Internet of Things, and mobile devices.
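Compounded doubling adds up quickly. As a rough illustration, data that doubles every two years grows by a factor of 2 ** (t / 2) after t years; the baseline volume below is an assumed figure, not Navidar’s.

```python
# Rough illustration of the doubling estimate: volume grows by a
# factor of 2 ** (t / 2) after t years. The 10 ZB baseline is an
# assumed figure for illustration only.
start_zb = 10

for years in (2, 4, 10):
    print(f"{years} years: {start_zb * 2 ** (years / 2):.0f} ZB")
# 2 years: 20 ZB, 4 years: 40 ZB, 10 years: 320 ZB
```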

Jeff Houston, CFA, an analyst at Navidar who wrote a recent report for the bank entitled “Machine Learning Is Driving an Innovation Wave in SaaS Software,” does not see a time when AI will clean up its own data and machines will learn entirely on their own.

“One of my conclusions,” he said, “is that humans will always have to be a part of the process. What machine learning does is augment human decisions to make them better. Everything, though, is built on quality data, and there is no foreseeable way that machine learning or AI can clean its own data. I don’t know if anyone believes that can ever happen.”

Jeff Houston, Navidar

An understanding of the distinction between AI and machine learning is necessary to make effective use of the advancing technology. Ian Clarke, the practice lead for AI and ML at the Austin business intelligence company Blacklight Solutions and an early pioneer in AI, points out that the two fields overlap.

“Everything that is machine learning is also AI, but everything that is AI is not machine learning,” he said. “AI is trying to solve problems like getting computers to understand spoken words while machine learning is simply looking for patterns in data that can help us understand things like behaviors. With machine learning you typically don’t tell a machine how to solve a problem; rather, you give the machine a lot of data and let it figure out how to solve the problem itself. But when you punch an address into a navigation app on your phone and it figures out the fastest way to get there, the algorithm it is using on that data falls more under AI.”

Ian Clarke, Blacklight Solutions
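A minimal sketch of Clarke’s distinction, using toy data and hypothetical numbers: the first half learns a “spam or not” rule purely from labeled examples, while the second hand-codes a shortest-path search of the kind a navigation app relies on.

```python
# Part 1: machine learning -- the program is never told the rule;
# it infers "spam or ham" from labeled examples (1-nearest neighbor).
# Part 2: classic AI search -- the solution method (Dijkstra's
# shortest path) is fully specified by the programmer.
import heapq

# Hypothetical features: (exclamation marks, dollar signs) per message.
examples = [((0, 0), "ham"), ((1, 0), "ham"), ((5, 3), "spam"), ((7, 4), "spam")]

def classify(point):
    """Label a new point by its single nearest labeled example."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], point))[1]

print(classify((6, 2)))  # -> "spam": the rule was learned, never written down

# Hypothetical travel times between intersections.
graph = {"A": {"B": 4, "C": 1}, "B": {"D": 1}, "C": {"B": 2, "D": 6}, "D": {}}

def shortest_time(start, goal):
    """Dijkstra's algorithm: every step is hand-coded by a human."""
    queue, seen = [(0, start)], set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph[node].items():
            heapq.heappush(queue, (cost + weight, nxt))

print(shortest_time("A", "D"))  # -> 4, via A -> C -> B -> D
```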

Which, as always, brings the discussion back to data quality. Companies like Alegion and CrowdFlower are virtual linchpins of the success of AI and ML. Using proprietary, technology-enabled micro-tasking platforms that crowdsource the work, they create intelligent workflows that can label tens of millions of data records in weeks or months instead of years.
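The core pattern behind those workflows can be sketched in a few lines. The records, canned worker answers, and three-vote consensus below are illustrative assumptions, not a description of either company’s platform.

```python
# A minimal sketch of the micro-tasking pattern: split records into
# small tasks, send each to several workers, and keep the majority
# label. Real platforms add routing, quality control, and payment.
from collections import Counter

records = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # hypothetical items

def ask_worker(worker_id, record):
    # Stand-in for a real crowd worker; here, a canned answer table.
    canned = {"img_001.jpg": "cat", "img_002.jpg": "dog", "img_003.jpg": "cat"}
    return canned[record]

def label_with_consensus(record, n_workers=3):
    """Send one record to several workers; keep the majority label
    and report agreement as a crude confidence score."""
    votes = Counter(ask_worker(w, record) for w in range(n_workers))
    label, count = votes.most_common(1)[0]
    return label, count / n_workers

for rec in records:
    label, confidence = label_with_consensus(rec)
    print(rec, label, confidence)  # e.g. img_001.jpg cat 1.0
```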

“Every corporation is trying to monetize analytics and their data,” said Alegion’s Gates. “But judgments coming out of algorithms will be flawed unless you have properly trained them with good data and models to get to a certain confidence level. We are still in a ‘garbage in and garbage out’ world, so building automation requires perfect training data sets in the models.”
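Gates’s “confidence level” point can also be made concrete. In the sketch below, the scores and the 0.90 cutoff are assumed values; anything under the cutoff is routed back to a person rather than automated.

```python
# A minimal sketch of confidence-gated automation with hypothetical
# model scores. Low-confidence judgments go back to human review.
predictions = [("invoice_17", "approve", 0.97), ("invoice_18", "approve", 0.61)]

THRESHOLD = 0.90  # assumed cutoff; real systems tune this per task

for record, decision, score in predictions:
    if score >= THRESHOLD:
        print(record, "automated:", decision)
    else:
        print(record, "sent to human review")
```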

The human mind will continue to be required for more than just data preparation. Analysts and data scientists will be involved along with programmers, subject-matter experts, and business and consumer users, though ML and AI will increasingly automate and complement human decisions.

Musk’s argument is that AI is already a fundamental risk to human civilization in a way that “accidents, airplane crashes, faulty drugs or bad food were not,” because those endangered sets of individuals within a society rather than society as a whole. Blacklight’s Clarke, who founded Freenet and built early technology for real-time bidding and targeted advertising, believes Musk’s fears will not be realized if humans build an artificial general intelligence (AGI) before one constructs itself.

“We might build it by accident and not realize we’ve created it,” said Clarke. “What if the Facebook feed was an artificial general intelligence and we didn’t know it yet? That would be a pretty dangerous scenario. If it was in any way malevolent and didn’t want to share the planet with us, we’d be screwed as a species. I think it’s important we build one deliberately rather than accidentally and build in constraints like a moral framework. But regulating AI is equally dangerous because it increases the possibility someone illegally builds a general intelligence that could become a malevolent force.”

Clarke predicts that an AGI will be built in the next 10 to 40 years, but he’s not worried about a Terminator or a Commander Data taking over the world.

Because humans still control the data.
