Asking the Experts: Artificial Intelligence Leaders Answer AI's Most Burning Questions (Part II)

Welcome to Part II of this two-part Q&A series, where I tapped my AI peers for their expert views on the hottest questions surrounding this technology.

(In Part I, we explored a-ha moments in AI research, the surprises and discoveries around the technology, and some of the most unusual use cases for AI.)

Here, we'll dive into the biggest myths around AI, how the current fear of the technology may evolve, and what new developments in AI research may bring.

The experts participating in this discussion, again, are:

Adam Cheyer, Co-founder of Viv
David Ackley, Associate Professor of Computer Science at the University of New Mexico
Dr. Dileep George, Co-founder of Vicarious

What are some of the biggest myths you've seen propagated about AI?

Cheyer: I don't think there really are any myths about AI -- if we can imagine it, I think it is in the realm of possibility for the future. However (and it's a big however), I think people often grossly underestimate how hard a task it is to create human-like intelligence, and overestimate how close we are to achieving it. There is a lot of press today with reputable scientists such as Elon Musk and Stephen Hawking raising fears that make it seem like true AI and consciousness are just around the corner. Ray Kurzweil has publicly made the claim that within the next 30 years, "$1000 buys a computer a billion times more intelligent than every human combined." Ray's assumptions are primarily hardware-based, e.g., comparing the number of calculations a human brain can make to the processing speed of computers. I would argue that for intelligence, software (i.e., what the instructions are) matters more than processing speed (how fast instructions are executed), and that even the most advanced scientists in computer science and neuroscience have barely the faintest glimmer of how human intelligence really works. My prediction of when we will achieve automated human-level intelligence would be measured in centuries or millennia rather than in years or decades.

George: There are two extremes I've seen perpetuated from time to time: that general artificial intelligence will never happen (because the brain is too complex to ever understand), or that it'll happen tomorrow and spiral out of control. Also, news stories tend to get a lot of what happens in AI wrong. Many times the headlines will overstate what has actually been achieved, or the implications of a discovery, because bigger stories generate more clicks.

How do you think the "fear around AI" will evolve or shift in the future?

Cheyer: I have been working on AI long enough to have experienced several cycles of the "AI Sentiment Sine Wave." In the 1980s, expert systems were hot and were going to create intelligent machines everywhere. But then AI fell out of fashion somewhat in the 1990s and early 2000s, when the reality didn't live up to the outsized expectations. Since the mid-2000s, AI has been back in a big way, and for good reason! Real advances have been made in fields such as image processing, speech recognition, virtual personal assistants, autonomous vehicles, machine learning, and so forth. This convergence of new successes brings outsized expectations (and fears) about what is next. However, I would guess that when reality settles in, this fear (and expectation) around AI will fade for another decade or so... Going back to the late 1950s and early '60s (hot!), the 1970s (not!), and on to today, I think we see continued, incremental advances while public sentiment swings up and down over time.

Ackley: The notion that there will be a rapid and incomprehensible "technological singularity" due to AI is a myth, in some ways plausible but ultimately another example of intelligence overestimating itself. The dynamics of change in the real world are rate-limited by everything from time and distance to mass and energy to law, money, politics, spite, and so on. Intelligence likes to envision itself as steering the boat, but reality is a darn big boat, with a lot of forces in a lot of directions, and in truth, intelligence is often less the ship's captain than its historian. If, down the road, we produce machines even in the ballpark of our own broad capabilities, our concerns will be less about competing with them and more about raising them right. That said, it is also true, in the nearer term at least, that AI technologies are going to continue and accelerate the economic and workforce dislocations that successful technologies often cause, and as a society we mostly have yet to deal with that. Similarly, autonomous weapons seem both imminent and risky, though much of that risk adheres to the weapon -- the machine with a simple switch to produce extreme damage -- rather than to the warrior, terrorist, lunatic, or imperfect machine holding the switch.

George: Since the invention of fire, new technologies have always had the possibility of being used to help or harm. Part of the responsibility of building something new is ensuring that what you create is a net positive for humanity. I think there's a lot of extra fear around AI right now because it's so poorly understood. I see a big disconnect between the capabilities of the AI research community and the fears of Hollywood or the press. I think once people get a better understanding of how these systems work and their limitations, they will feel a lot more comfortable.

What's the next big AI research field that you see taking center stage?

Cheyer: Right now, deep learning (i.e., deep neural networks) is probably the hottest topic in AI, given its recent successes in image recognition and speech recognition. In my view, deep learning is predominantly associated with what I would consider "perceptive" functions, such as recognizing words, faces, and objects. I would expect that as machines start to master these lower-level functions, more attention will soon turn to higher-order capabilities such as planning and reasoning, which are at the core of much of human cognition. At Viv Labs, we are working on technologies that allow a computer to solve complex tasks through learning-enhanced automatic program synthesis, satisfying an exponentially larger set of use cases than has previously been possible with hand-coding by human programmers.

Ackley: The work will have many names and look different case by case, and there will be future plateaus in research progress, but for the next decade anyway it's "Machine Learning All The Things": Applying large artificial neural networks and similar techniques to every conceivable task for which sufficient training data can be obtained. Hardware for machine learning will become much stronger at the top and much cheaper at the bottom, increasing its research and development range. At first the results will emerge from the giant data centers and pervade the internet, where no one knows that you're a robot, and increased machine competence at nearly every well-defined information processing task -- though not fluent mastery of them -- will be expected, boring, invisible almost before we know it.

George: High-level concepts and sensory-motor generative models. The advances deep learning has brought to image classification have created a lot of excitement, and there is quite a lot of good work being done extending those advances to other aspects of the vision problem -- detection, semantic segmentation, etc.

I hope you enjoyed this two-part series. The goal of this piece was to share a more balanced view of some of the top questions in AI, directly from the AI community.

Again, please reach out to me at @SentientDAI if you'd like to discuss these topics directly.
