Should artificial intelligence be regulated? No.
I understand how recent advances and the associated hype can be scary for people, especially since AI doomsday scenarios have been part of our popular culture for many decades. I also understand, to address one of Ben Y. Zhao’s concerns, that my opinion might come across as that of one of those “dismissive insiders.” However, I think there are at least three good reasons not to regulate AI.
AI is a fundamental technology
Artificial Intelligence is a field of research and development. You can compare it to quantum mechanics, nanotechnology, biochemistry, nuclear energy, or even math, just to cite a few examples. Fundamental research fields or technologies should not be regulated. All of them could have scary or evil applications, but regulating them at the fundamental level would inevitably hinder advances, some of which could have a much more positive impact than we can envision now.
To put this into perspective, perhaps the only fundamental research field that is highly regulated as of today is medicine. Of course, medicine has been developed for centuries, and has intrinsic issues in that it needs to be directly tested on humans or at least living creatures. Even taking that into account, it is widely accepted that strong regulation in medicine makes it extremely hard and costly to innovate. Because of this, the medical field as a whole is clearly years behind in its adoption of basic technological advances.
Therefore, AI as such should not be regulated. What should be heavily regulated is its use in dangerous applications, such as guns or weapons.
It is way too early
If you ask any expert today what should be regulated in AI, the answer would inevitably be “we don’t know.” If you take a look at the research being carried out on the topic at the Elon Musk–backed Future of Life Institute, you will realize that all the projects are researchy in nature (e.g. see this one about how to better estimate probabilities for self-driving cars) and most of them are in their infancy (see the description of this project on how to teach Deep Learning about moral concepts).
Honestly, if we had to regulate AI any time soon, we would not know how to do it. What’s even worse, we could let people with absolutely no understanding of the technology do it. If we connect this to the previous idea of AI being a fundamental technology, we have a recipe for disaster. This would be worse than having let governments regulate the Internet in the 80s.
Regulate, at what level?
OK, let’s pretend the two previous reasons haven’t convinced you and you still insist on regulating. My question would be: at what level would you do this? Would you want the US government to regulate AI research and deployment in general while other countries (including perhaps North Korea) freely continue to innovate and deploy their latest advances? Clearly not. I am guessing that people like Musk who propose regulation have not even thought about this. Or are they thinking of regulation at the UN level? Good luck with that.
As far as I know, health, again, is the only example of international regulation at this level. The World Health Organization managed to convince most countries in the world to sign the International Health Regulations (IHR) after over 40 years of work. Many countries, including the US, signed with reservations.
So, to summarize: AI should not be regulated because it is a fundamental technology, and at this point we would not know what to regulate or how to get enough international support for regulation to happen. To be fair to Musk and the others, though, given that it is likely to take 50 years at best to get anything done, it might be OK to have a few loud voices pushing for it now. I just hope they don’t get heard too soon, leaving us in a situation where people with no real understanding of the technology deprive us of a better future by compromising innovation in such a key area of human development.
Wow. This is a big question, and more than a bit above my pay grade. But I’ll do my best to give an opinion.
Caveats: I use ML, build systems with ML, and try to poke holes in ML. But I am not an ML expert by any stretch of the imagination. It’s quite possible that I understand “just enough to be dangerous.” But for a question like this, it’s not clear who is best placed to answer: true experts, who are likely biased by their own interpretation of the space, or those who know little about the space and therefore have completely mistaken impressions of what the problems are?
Back when I took an AI class during my undergrad days at Yale, AI was nothing mysterious and, in some ways, a bit of a letdown. I went in hoping to understand how to build “intelligence,” and came out with a novice-level understanding of how to “solve” problems by treating everything as a massive search problem, e.g. how you can play chess by mapping out all possible combinations of moves N steps down the road and choosing the path that maximizes the overall probability of winning, or a positive position captured by some metric. Today’s statistical machine learning and deep neural nets are completely different.
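That search-based view of classic AI can be sketched in a few lines. Below is a minimal, hypothetical example: minimax search over a toy game (take 1 or 2 stones from a pile; whoever takes the last stone wins) standing in for chess. The game and the function are my own illustration, not from any real library.

```python
def minimax(pile, maximizing):
    """Score a state by searching the full game tree.

    Returns +1 if the maximizing player can force a win from this
    state, -1 otherwise. `pile` is the number of stones left and
    `maximizing` says whose turn it is.
    """
    if pile == 0:
        # The player who just moved took the last stone and won,
        # so the player now to move has lost.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= pile]
    scores = [minimax(pile - m, not maximizing) for m in moves]
    # Maximizer picks the best outcome for itself; minimizer the worst.
    return max(scores) if maximizing else min(scores)
```

In this game, piles that are multiples of 3 are losses for the player to move, which the search discovers by brute force, just as the chess example above explores moves N plies deep and picks the best-scoring path.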
I’d like to think I have a reasonable understanding of basic ML classifiers and how to use them. But like most of us, I do not have a good intuition for why DNNs (CNNs, RNNs, LSTMs, etc.) do so well at particular problems. For quite some time, I read opposing opinions on this question, e.g. Musk vs. Ng/Dean/…, and I found myself on the fence. That lasted until Jeff Dean came to UCSB and gave a distinguished lecture on Google’s ML/AI efforts. Aside from the usual cool applications, two things stuck out in my mind: a) massive efforts to accelerate how fast DNN models can take in training data (# of “sensors”), and b) self-correcting/optimizing models (although this part was admittedly in the early stages).
So from a high level, my simple intuition says AI regulation is a good idea. ML experts are pushing hard to solve problems that would a) accelerate how fast models learn, and b) help models self-improve by fixing problems that produce suboptimal performance w.r.t. specific metrics. Combine the two, and it seems like you have all the makings of a runaway train. From a conservative standpoint, regulation makes a lot of sense, because this is a problem that, in the worst case, has world-ending implications. And if we want any hope of doing it “right,” we need to give it time. The FDA, the FCC, and the other regulatory agencies (if they are indeed apt analogies) all took a long time to settle into what they are today. Any attempt to regulate AI would require significant effort to educate policy makers, and much more time to come up with some understanding of what “regulation” even means in the world of AI.
I’ve been somewhat surprised by how dismissive some of the opposing arguments have been. The general answer has been “we are so far away from world-ending AI, e.g. Skynet, and there are much more important problems in the world today.” But I find that argument unconvincing. The security community has seen this mistake made over and over for decades: security considerations and mechanisms cannot be added after the fact, yet system builders routinely ignore this and build systems with critical design flaws. Even today in AI, research on the negative sides of AI/ML (either attackers using AI, or adversarial ML) lags far behind the core ML efforts. It’s great for people like me who work in security (we have an upcoming paper at CCS 2017 showing how DNNs can be used as a powerful weapon to generate false online reviews that are undetectable by human users and software alike, and we have multiple other fun projects in progress). Yet unlike other computing systems, the worst-case failure mode for the Singularity problem ranges from catastrophic to world-ending. So if it requires an overabundance of cautionary measures, it’s probably worth it. The argument of “we have other, more important problems” fails because it assumes a level of rationality and liquidity of resources that does not exist: Congress will not become more productive on climate change because it freed up resources from regulating AI.
What I personally worry about is not Skynet or the Matrix, which is admittedly quite far away. I worry more about the complex DNNs that self-optimize into unpredictable outcomes with “bugs” that are undetectable by users or engineers, simply because of how opaque they are. Once deployed, they would be vulnerable to producing disastrous output when prompted by knowledgeable attackers or just random inputs. The rush to use the powerful results of AI seems to always outpace the security understanding of AI, e.g. facial recognition is being deployed widely in China, yet we are still writing papers about how a pair of glasses or facial markings can transform one recognition result into another.
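To make that worry concrete, here is a toy sketch (my own illustration, not from the papers mentioned above) of the adversarial-input problem: for a simple logistic classifier with fixed, made-up weights, nudging every input feature by a small epsilon in the direction of the loss gradient (a fast-gradient-sign-style step) is enough to flip the predicted label.

```python
import math

def predict(w, b, x):
    """Probability of class 1 under a logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# "Trained" weights and a clean input -- both invented for illustration.
w, b = [2.0, -3.0, 1.5], 0.1
x = [0.4, 0.3, 0.2]
p_clean = predict(w, b, x)          # > 0.5, so predicted class 1

# For class 1, the loss gradient w.r.t. x_i has sign -sign(w_i),
# so stepping each feature by eps in that direction raises the loss.
eps = 0.2
x_adv = [xi - eps * math.copysign(1, wi) for wi, xi in zip(w, x)]
p_adv = predict(w, b, x_adv)        # < 0.5: the label has flipped
```

The same gradient-following idea, scaled up to deep networks and constrained to look like eyeglass frames or facial markings, is what changes one face-recognition result into another.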
So I guess my answer is yes, I think regulation in AI is a good idea, and likely inevitable. Whether it’s to stave off the singularity or just to manage poorly understood and widely deployed AI tools, I think regulation is necessary, and needs to start soon to have any chance of keeping up with innovation in the field of AI/ML.
This question originally appeared on Quora.