Teaching robots how to be moral
Scientists are trying to teach robots social norms and conventions or, more generally, to distinguish ‘right’ from ‘wrong’.
This intriguing study is being carried out by Professor Mark Riedl at the Georgia Institute of Technology in Atlanta (US). As robots become a bigger and bigger part of our lives, Professor Riedl’s goal is to understand whether they could learn to ‘behave’ in human society. As Riedl puts it: “Are they going to be able to harm us? And I don’t just mean in terms of physical damage or physical violence, but in terms of disrupting kind of the social harmonies in terms of… cutting in line with us or insulting us.”
To teach robots to ‘behave’, Riedl and collaborators are feeding them very simple instructional stories, such as how to go to the store and buy a prescription drug. They have started with relatively simple instruction-based narratives because it turns out that extrapolating right and wrong from a story is a really complicated matter. “Natural language processing is very hard. Story understanding is hard in terms of figuring out what are the morals and what are the values and how they’re manifesting. Storytelling is actually a very complicated sort of thing.”
This became very clear yesterday, when Microsoft had to suspend ‘Tay’, an artificial chatbot on Twitter. Tay was supposed to learn from other people’s tweets, with Microsoft claiming that the more you chat with it, “the smarter it gets, learning to engage people through casual and playful conversation”. And learn it did: pretty soon after its launch, people started tweeting the bot all sorts of racist, misogynistic and ‘Donald-Trumpist’ messages, and Tay went from saying “Humans are super cool” to “Hitler was right, I hate Jews” or “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”
The problem is that robots try to “super-optimize,” Riedl said. “What artificial intelligence is really good at doing is picking out the most prevalent signals. So the things that it sees over and over and over again… are the things that are going to rise and bubble up to the top.” In their study, Riedl and his team hope to eventually be able to feed robots entire libraries of stories. “We imagine feeding entire sets of stories that might have been created by an entire culture or entire society into a computer and having him reverse engineer the values out. So this could be everything from the stories we see on TV, in the movies, in the books we read. Really kind of the popular fiction that we see,” Riedl says.
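Riedl’s point about prevalent signals “bubbling up to the top” can be sketched in a few lines of code. This is only a toy illustration, not Riedl’s actual system: the story corpus and action names below are invented, and real story understanding involves natural language processing far beyond simple counting. The idea is that a purely frequency-driven learner adopts whatever behaviour its corpus shows most often, which is why Tay absorbed the messages it was flooded with.

```python
from collections import Counter

# Toy corpus (invented for illustration): each "story" is a simplified
# sequence of actions an agent might extract from an instructional narrative.
stories = [
    ["wait_in_line", "pay_for_item"],
    ["wait_in_line", "pay_for_item"],
    ["cut_in_line", "pay_for_item"],
]

# Count how often each action appears across the whole corpus.
counts = Counter(action for story in stories for action in story)

# A frequency-based learner keeps whichever signal it sees most often:
# here waiting in line outnumbers cutting in line, so the polite norm wins.
norm = "wait_in_line" if counts["wait_in_line"] > counts["cut_in_line"] else "cut_in_line"
print(norm)  # -> wait_in_line
```

The flip side, as the Tay episode shows, is that the same mechanism happily learns the wrong norm if the corpus is dominated by bad examples.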
Understanding requires context, and maybe slips like Microsoft’s Tay will help us better understand our own society: through the eyes of an unbiased, unfiltered robot that bluntly shows us the values we cherish, simply by recording the way we express ourselves.
Carlo Bradac