AI Regulation Is Coming - Here's How to Do It Right


Elon Musk famously compared artificial intelligence (AI) to “summoning the demon,” citing such potential threats as robots harming or killing humans. And images from the Terminator movies have been frequently used in discourse about the need to regulate AI.

But consumer threats from AI need not be so dramatic to demonstrate the need for regulation. Consider the following examples:

· A self-driving car causes an accident, killing the other vehicle’s driver.

· A smart thermostat in a consumer’s home malfunctions, cooling the house to 30 degrees Fahrenheit and killing the homeowner’s plants and cat.

· A bank uses AI to screen mortgage applications and is found to reject applications from under-represented minorities at a significantly higher rate.

· A customer service chatbot is found to be making sexist comments about women.

All of these scenarios, or ones very similar to them, have already occurred. And while these examples lack the drama of autonomous killing machines attempting to annihilate the human race, they nonetheless highlight real, present threats posed by AI.

Given that the threats from AI are indisputably real, regulation is inevitable. Moreover, absent regulation by legislation, AI could wind up regulated through case law arising from litigation. Not only could regulation by case law create inconsistent directives for businesses, it also runs the risk that a few individual judges, who may not be well versed in technology or AI, could disproportionately shape the industry without sufficient input from stakeholders.

Earlier this fall at the Influencer Series, a group of Silicon Valley-based technology executives – most of whom lead or invest in AI companies – overwhelmingly concurred that AI regulation is coming. The question we debated was: how should AI be regulated?

One possibility is to model the regulation of AI on the regulation of science, wherein the science itself is not regulated; rather, the outputs derived from it are. Like science, AI is neither inherently helpful nor harmful; it is the specific application of AI that can be helpful or harmful. Thus, regulating outcomes, rather than the algorithms themselves, would be in keeping with precedent and more tractable from an enforcement perspective.

To avoid a single overarching regulation whose enforcement across industries becomes impractical, regulation of AI should be industry-specific. In practical terms, this means that the Department of Transportation would regulate AI used in airplanes and automobiles, the Food and Drug Administration would regulate AI used in the pharmaceutical industry, and so on.

Leveraging the existing industry-specific regulatory framework offers a number of advantages, including prescribed methods for soliciting inputs from stakeholders. Additionally, the application of AI – and potential harm to consumers from AI – might vary greatly from industry to industry, necessitating different regulations and remedies.

Such a framework has the added benefit of clarifying questions of liability. Consider the self-driving car example above, in which the autonomous vehicle causes an accident that kills the driver of the other car: who assumes liability for the damages? In the proposed framework, liability falls to the owner of the autonomous vehicle, because existing regulations state that the owner of the vehicle at fault is the party responsible for the damages.

Absent such a framework, liability becomes less clear, and could fall to several different parties, including the owner of the autonomous vehicle, the manufacturer of the vehicle, the manufacturer of the engine (if the engine caused the crash), the person who created the AI algorithm, the person responsible for maintaining the AI algorithm, and so on.

However, we must be cautious: heavy industry-specific regulation of AI could pose risks as well. Consider the nuclear industry: a litany of industry-specific regulation has so dramatically handicapped the industry that many nuclear power plants in the U.S. have closed in recent years. This is not to say that regulation of the nuclear industry was unwarranted, but rather that even well-intentioned industry-specific laws can have far-reaching consequences.

Additionally, as we seek to balance vital protections for American consumers with a framework that supports AI innovation, the global impact must be considered: were America to adopt an overly stringent regulatory framework, we risk losing the global AI arms race to countries like India and China, where there are few or no restrictions on developing AI. Countries with little or no AI regulation may also begin to attract a deluge of investment and talent in adjacent technology areas. Thus, while some regulation of AI may be necessary, it must be balanced against preserving our competitive advantage in the global marketplace.

We need not have realized the science fiction of robot armies to think prescriptively about AI. Industry-specific regulations, derived with input from both businesses and consumers, can enhance innovation in AI, while providing society with sensible protections.
