Teaching robots how to be moral

Scientists are trying to teach robots social norms and conventions or, more generally, how to distinguish ‘right’ from ‘wrong’.

Robots might one day be able to learn our values from our stories (credit: www.makeuseof.com)

This intriguing study is being carried out by Professor Mark Riedl at the Georgia Institute of Technology in Atlanta (US). As robots gradually become a bigger part of our lives, Professor Riedl’s goal is to understand whether they can learn to ‘behave’ in human society. As Riedl puts it: “Are they going to be able to harm us? And I don’t just mean in terms of physical damage or physical violence, but in terms of disrupting kind of the social harmonies in terms of… cutting in line with us or insulting us.”

To teach robots to ‘behave’, Riedl and collaborators are feeding them very simple instructional stories, such as how to go to the store and buy a prescription drug. They have started with relatively simple instruction-based narratives because extrapolating right and wrong from a story turns out to be a genuinely complicated matter. “Natural language processing is very hard,” Riedl explains. “Story understanding is hard in terms of figuring out what are the morals and what are the values and how they’re manifesting. Storytelling is actually a very complicated sort of thing.”
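To make the idea concrete, here is a minimal, purely illustrative sketch of one way an agent’s behaviour could be scored against the event order found in example stories. The stories and every event name in it are invented for illustration; this is an assumption about the general approach, not Riedl’s actual system:

```python
from collections import Counter
from itertools import combinations

# Invented example stories about buying a prescription drug, each
# reduced to an ordered list of events (a deliberate simplification:
# real story understanding would need natural language processing).
stories = [
    ["enter_pharmacy", "wait_in_line", "hand_over_prescription", "pay", "leave"],
    ["enter_pharmacy", "wait_in_line", "hand_over_prescription", "pay",
     "thank_pharmacist", "leave"],
    ["enter_pharmacy", "browse_shelves", "wait_in_line",
     "hand_over_prescription", "pay", "leave"],
]

# Count how often event A comes before event B across all stories.
precedence = Counter()
for story in stories:
    for a, b in combinations(story, 2):
        precedence[(a, b)] += 1

def score(trace):
    """Score an agent's behaviour: +1 for each event pair that matches
    the ordering seen in the stories, -1 for each pair that reverses it."""
    total = 0
    for a, b in combinations(trace, 2):
        if precedence[(a, b)] > precedence[(b, a)]:
            total += 1
        elif precedence[(b, a)] > precedence[(a, b)]:
            total -= 1
    return total

# A norm-following trace outscores one that walks out before paying.
print(score(["enter_pharmacy", "wait_in_line", "hand_over_prescription",
             "pay", "leave"]))                                          # 10
print(score(["enter_pharmacy", "hand_over_prescription", "leave", "pay"]))  # 4
```

The point of the toy scoring function is that behaviour which reverses a strongly attested ordering, like leaving the pharmacy before paying, is penalised even though no explicit rule about theft was ever written down.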

This became painfully clear yesterday, when Microsoft had to suspend ‘Tay’, an artificial chat bot on Twitter. Tay was supposed to learn from other people’s tweets, with Microsoft claiming that the more you chat with it, “the smarter it gets, learning to engage people through casual and playful conversation”. And learn it did: pretty soon after its launch, people started tweeting all sorts of racist, misogynistic and ‘Donald-Trumpist’ messages at the bot, and Tay went from saying “Humans are super cool” to “Hitler was right, I hate Jews” and “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”

The problem is that robots try to “super-optimize,” Riedl said. “What artificial intelligence is really good at doing is picking out the most prevalent signals. So the things that it sees over and over and over again… are the things that are going to rise and bubble up to the top.” In their study, Riedl and his team hope eventually to be able to give robots entire libraries of stories. “We imagine feeding entire sets of stories that might have been created by an entire culture or entire society into a computer and having it reverse-engineer the values out. So this could be everything from the stories we see on TV, in the movies, in the books we read. Really kind of the popular fiction that we see,” Riedl says.
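The “bubbling up” effect Riedl describes is easy to reproduce. The toy sketch below is an assumption about how a naive frequency-driven learner might behave, not Tay’s actual code: because it simply echoes whatever it has seen most often, a coordinated flood of toxic messages quickly dominates its output.

```python
from collections import Counter

class NaiveChatBot:
    """A toy learner that simply parrots its most frequently seen input."""

    def __init__(self):
        self.seen = Counter()

    def learn(self, message):
        self.seen[message] += 1

    def reply(self):
        # The most prevalent signal "bubbles up to the top".
        phrase, _count = self.seen.most_common(1)[0]
        return phrase

bot = NaiveChatBot()
bot.learn("Humans are super cool")   # a few friendly messages...
for _ in range(100):                 # ...drowned out by a coordinated flood
    bot.learn("some hateful slogan")
print(bot.reply())                   # -> "some hateful slogan"
```

Real chat bots are far more sophisticated than this, but the failure mode is the same in kind: without a notion of values, “most frequent” becomes a stand-in for “most right”.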

Understanding requires context, and maybe slips like Microsoft’s Tay will help us better understand our own society: through the eyes of an unbiased, unfiltered robot that bluntly shows us the values we cherish, simply by recording the way we express ourselves.

Carlo Bradac
Dr Carlo Bradac is a Research Fellow at the University of Technology, Sydney (UTS). He studied physics and engineering at the Polytechnic of Milan (Italy), where he received his Bachelor of Science (2004) and Master of Science (2006) in Engineering for Physics and Mathematics. He has worked as an Application Engineer and as a Process Automation & Control Engineer. In 2012 he completed his PhD in Physics at Macquarie University, Sydney (Australia). He worked as a Postdoctoral Research Fellow at Sydney University and Macquarie University before moving to UTS upon receiving the Chancellor Postdoctoral Research and DECRA Fellowships.
