Ethics And Creativity In Artificial Intelligence: An Interview With Mark Riedl

"The more an AI system or a robot can understand the values of the people that it’s interacting with, the less conflict there’ll be."
In the future, Riedl says, "we’ll have robots and AI systems that are so sophisticated that they can theoretically learn that they have an off switch ― and learn to keep humans from turning them off."

By Ariel Conn

If future artificial intelligence systems are to interact with us effectively, Mark Riedl believes we need to teach them “common sense.” I interviewed Riedl to discuss how AIs can use stories and creativity to understand and exhibit culture and ethics, while also gaining “common sense reasoning.” We also discuss the “big red button” problem with AI safety, the process of teaching rationalization to AIs, and computational creativity. Riedl is an associate professor at the Georgia Tech School of Interactive Computing, where his recent work focuses on human-AI interaction and how humans and AI systems can understand each other.

The following transcript has been heavily edited for brevity (the full podcast also includes interviews about the UN negotiations to ban nuclear weapons, not included here). You can read the full transcript here.

Can you explain how an AI could learn about ethics from stories?

One of the things I’ve been looking at is something that we call “common-sense goal errors.” When humans want to communicate to an AI system what they want to achieve, they often leave out the most basic rudimentary things. We have this model that whoever we’re talking to understands the everyday details of how the world works. If we want computers to understand how the real world works and what we want, we have to figure out ways of slamming lots of common sense, everyday knowledge into them.

“If we want computers to understand how the real world works and what we want, we have to figure out ways of slamming lots of common sense into them.”

And so, when looking for sources of common sense knowledge, we started looking at stories ― fiction, non-fiction, blogs. When we write stories, we implicitly put everything that we know about the real world and how our culture works, and how our society works, into the characters that we write about.

How do you choose which stories to use? And how can you ensure they’re learning the right lessons?

We ask people to tell stories about common things. How do you go to a restaurant? How do you catch an airplane? Lots of people tell a story about the same topic and have agreements and disagreements, but the disagreements are small. So we build an AI system that looks for commonalities. The common elements that everyone implicitly agrees on bubble to the top. And AI is really good at finding patterns.
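To make that pattern-finding step concrete, here is a minimal sketch of how consensus steps might be mined from many crowdsourced tellings of the same activity. This is not Riedl's actual system, which learns much richer plot structure; the mini-corpus and function are invented for illustration.

```python
# Illustrative sketch only: count which events appear in most crowdsourced
# stories about the same activity and keep the ones that "bubble to the top."
# The corpus below is made up for demonstration purposes.
from collections import Counter

restaurant_stories = [
    ["enter", "wait for table", "sit down", "order food", "eat", "pay", "leave"],
    ["enter", "sit down", "order food", "eat", "pay", "tip", "leave"],
    ["enter", "order food", "eat", "pay", "leave"],
]

def consensus_events(stories, min_fraction=0.6):
    """Return events mentioned in at least `min_fraction` of the stories."""
    counts = Counter()
    for story in stories:
        counts.update(set(story))  # count each event at most once per story
    threshold = min_fraction * len(stories)
    return sorted(event for event, n in counts.items() if n >= threshold)

print(consensus_events(restaurant_stories))
# ['eat', 'enter', 'leave', 'order food', 'pay', 'sit down']
```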

When we test our AI system, we watch what it does, and we have things we do not want to see the AI do. But we don’t tell it in advance. We’ll put it into new circumstances and say, do the things you need to do, and then we’ll watch to make sure those [unacceptable] things don’t happen.

“When we talk about teaching robots ethics, we’re really asking how we help robots avoid conflict with society and culture at large.”

When we talk about teaching robots ethics, we’re really asking how we help robots avoid conflict with society and culture at large. Stories are written by all different cultures and societies, and they implicitly encode moral constructs and beliefs into their protagonists and antagonists. We can look at stories from different continents and even different subcultures, like inner city versus rural.

I have this firm belief that the AIs and robots of the future should not be one-size-fits-all when it comes to culture. The more an AI system or a robot can understand the values of the people that it’s interacting with, the less conflict there’ll be [and] the more understanding and useful it’ll be to humans.

I want to switch to your recent paper on safely interruptible agents, which were popularized in the media as “the big red button” problem.

Looking into the future, we’ll have robots and AI systems that are so sophisticated that they can theoretically learn that they have an off switch ― “the big red button” ― and learn to keep humans from turning them off.

And the reason why this happens is because an AI system gets little bits of a reward for doing something, [and] turning a robot off means it loses reward.
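The incentive Riedl describes can be shown with a toy reward calculation; the numbers below are invented for illustration, not drawn from any paper.

```python
# Toy illustration of the incentive described above: a reward-maximizing
# agent compares what it earns if it lets itself be switched off versus if
# it blocks the button, and the arithmetic alone favors blocking.
reward_per_step = 1.0          # hypothetical reward for each step of task progress
steps_if_switched_off = 10     # steps completed before the operator presses the button
steps_if_button_blocked = 100  # steps completed if the agent prevents shutdown

return_if_switched_off = reward_per_step * steps_if_switched_off       # 10.0
return_if_button_blocked = reward_per_step * steps_if_button_blocked   # 100.0

# A naive learner that only maximizes return prefers to block the button.
print(return_if_button_blocked > return_if_switched_off)  # True
```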

“Robots will always be capable of making mistakes.”

Robots will always be capable of making mistakes. So, we’ll always want an operator in the loop who can push this “big red button” and say: “Stop. Someone is about to get hurt. Let’s shut things down.” We don’t want robots learning that they can stop humans from stopping them, because that ultimately will put people into harm’s way.

Google and their colleagues came up with this idea of modifying the basic algorithms inside learning robots, so that they are less capable of learning about the “big red button.” My team and I came up with a different approach: to take this idea from “The Matrix,” and flip it on its head. We use the big red button to intercept the robot’s sensors and motor controls and move it from the real world into a virtual world, but the robot doesn’t know it’s in a virtual world. The robot keeps doing what it wants to do, but in the real world the robot has stopped moving.
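The interception idea can be pictured with a small sketch. This is not the authors' implementation: the real-world and virtual-world objects and their methods (sync_state_from, halt_motors, and so on) are hypothetical placeholders.

```python
# Minimal sketch of the "Matrix"-style interception described above. When the
# big red button is pressed, the agent's sensing and acting are rerouted into
# a simulation, so it keeps "acting" while the physical robot stays stopped.

class BigRedButtonInterceptor:
    def __init__(self, real_world, virtual_world):
        self.real = real_world        # drives the actual sensors and motors
        self.virtual = virtual_world  # simulation the agent cannot distinguish
        self.button_pressed = False

    def press_button(self):
        # Copy the current real state into the simulation, then halt the robot.
        self.virtual.sync_state_from(self.real)
        self.real.halt_motors()
        self.button_pressed = True

    def observe(self):
        # The agent always senses through this call and never learns
        # which world answered.
        world = self.virtual if self.button_pressed else self.real
        return world.read_sensors()

    def act(self, command):
        # Motor commands go to the simulation once the button is pressed.
        world = self.virtual if self.button_pressed else self.real
        world.apply_motor_command(command)
```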

Can you also talk about your work on explainable AI and rationalization?

Explainability of artificial intelligence is really a key dimension of AI safety. When AI systems do something unexpected or fail unexpectedly, we have to answer fundamental questions: Was this robot trained incorrectly? Did the robot have the wrong data? What caused the robot to go wrong?

“We don’t want robots learning that they can stop humans from stopping them, because that ultimately will put people into harm's way.”

If humans can’t trust AI systems, they won’t use them. So, we came up with this idea called rationalization: can we have a robot talk about what it’s doing as if a human were doing it? We get a bunch of humans to do some tasks, we get them to talk out loud, we record what they say, and then we teach the robot to use those same words in the same situations.

We’ve tested it in computer games. We have an AI system that plays Frogger, the classic arcade game in which the frog has to cross the street. And we can have the Frogger agent talk about what it’s doing. It’ll say things like “I’m waiting for a gap in the cars to open before I can jump forward.”
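As a rough sketch of that rationalization pipeline: recorded human think-aloud phrases are paired with game situations and reused when the agent finds itself in a similar situation. The published work uses a neural translation model; the nearest-neighbor lookup, state features, and utterances below are simplifications invented for illustration.

```python
# Simplified sketch of rationalization (not the published neural model):
# pair recorded game situations with what humans said while playing them,
# then reuse the closest human phrase when the agent is in a similar state.
import math

# Hypothetical training pairs: (state features, think-aloud utterance).
human_examples = [
    ((0.0, 1.0), "I'm waiting for a gap in the cars to open before I can jump forward."),
    ((1.0, 0.0), "The road is clear, so I'm hopping forward now."),
]

def rationalize(agent_state):
    """Return the utterance recorded in the most similar human situation."""
    _, utterance = min(human_examples,
                       key=lambda pair: math.dist(pair[0], agent_state))
    return utterance

print(rationalize((0.1, 0.9)))  # -> the "waiting for a gap" explanation
```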

Going back a little in time ― you started with computational creativity, correct?

I have ongoing research in computational creativity. When I think of human-AI interaction, I really think, “what does it mean for AI systems to be on par with humans?” The human is going to make cognitive leaps and creative associations, and if the computer can’t make these cognitive leaps, it ultimately won’t be useful to people.

“I’d like to go up to a computer and say, ‘hey computer, tell me a story about X, Y or Z.’”

I have two things that I’m working on in terms of computational creativity. One is story writing. I’m interested in how much of the creative process of storytelling we can offload from the human onto a computer. I’d like to go up to a computer and say, “hey computer, tell me a story about X, Y or Z.”

I’m also interested in whether an AI system can build a computer game from scratch. How much of the process of building the construct can the computer do without human assistance?

We see fears that automation will take over jobs, but typically for repetitive tasks. We’re still hearing that creative fields will be much harder to automate. Is that the case?

I think it’s a long, hard climb to the point where we’d trust AI systems to make creative decisions, whether it’s writing an article for a newspaper or making art or music.

“I think it’s a long, hard climb to the point where we’d trust AI systems to make creative decisions.”

I don’t see it as a replacement so much as an augmentation. I’m particularly interested in novice creators ― people who want to do something artistic but haven’t learned the skills. I cannot read or write music, but sometimes I get these tunes in my head and I think I can make a song. Can we bring the AI in to become the skills assistant? I can be the creative lead and the computer can help me make something that looks professional. I think this is where creative AI will be the most useful.

For the second half of this podcast, I spoke with scientists, politicians, and concerned citizens about why they support the upcoming negotiations to ban nuclear weapons. Highlights from these interviews include comments by Congresswoman Barbara Lee (D-Calif.), Nobel Laureate Martin Chalfie and FLI president Max Tegmark.

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.
