Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven't even landed on the planet yet!
AI has made tremendous progress, and I'm wildly optimistic about building a better society that is infused top to bottom with machine intelligence. But AI today is still very limited. Almost all of the economic and social value of deep learning still comes through supervised learning, which is limited by the amount of suitably formatted (i.e., labeled) data. Even though AI is helping hundreds of millions of people already, and is well poised to help hundreds of millions more, I don't see any realistic path to AI threatening humanity.
Looking ahead, there are many other types of AI beyond supervised learning that I find exciting, such as unsupervised learning (where we have a lot more data available, because the data does not need to be labeled). There's a lot of excitement about these other forms of learning in my group and others. All of us hope for a technological breakthrough, but none of us can predict when one will come.
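To make the distinction concrete, here is a minimal sketch (with hypothetical toy data, not any real system) of what separates the two settings: supervised learning needs a label attached to every example, while unsupervised learning works on the raw examples alone, which is why so much more data is available for it.

```python
# Supervised learning: every example must come with a human-provided label.
labeled_data = [([5.0], "spam"), ([1.0], "ham"), ([4.5], "spam"), ([0.5], "ham")]

def predict(x, data):
    """A trivial classifier: return the label of the nearest training example."""
    return min(data, key=lambda pair: abs(pair[0][0] - x[0]))[1]

print(predict([4.0], labeled_data))  # nearest labeled point is 4.5 -> "spam"

# Unsupervised learning: the raw, unlabeled examples alone -- no labels needed,
# so this kind of data is far more plentiful.
unlabeled_data = [[5.0], [1.0], [4.5], [0.5], [4.8], [0.9]]

# A trivial "structure-finding" step: split the points around the overall mean.
mean = sum(x[0] for x in unlabeled_data) / len(unlabeled_data)
clusters = {
    "high": [x for x in unlabeled_data if x[0] >= mean],
    "low": [x for x in unlabeled_data if x[0] < mean],
}
print(clusters)
```

The point of the sketch is only the shape of the data: the supervised path cannot even start without the `"spam"`/`"ham"` labels, while the unsupervised path consumes every example as-is.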
I think fears of "evil killer AI" are already causing policy makers and leaders to misallocate resources to address a phantom. There are other problems that AI will cause, most notably job displacement. Even though AI will help us build a better society in the next decade, we as AI creators should also take responsibility for solving the problems we'll cause in the meantime. I hope MOOCs (Coursera) will be part of the solution, but we will need more than just education.