Monday, May 06, 2013
We interrupt this long-form essay to report on my afternoon at our Second Annual Symposium on Human-Robot Interaction. Really. I was there because I study human-human interaction and I've been roped in -- well, alright, I didn't really mind, it's kind of interesting -- to letting computer scientists play with my concepts, which might be useful for getting machines to communicate with us more effectively.
I won't go into that in a lot of depth here, but what I do want to talk about is where the nerds think this whole thing is headed. You may or may not like it. One of the potential applications for interacting robots is to be companions and caregivers for elderly people. This actually gets talked about a lot. The social problem is that more and more people are living to be old and frail and widowed and socially isolated. It's too expensive to give them homemakers and home health aides, plus they're lonely. So maybe we can give them a robot.
I don't know about you but I find that fairly icky. Of course, if you could make such a robot, it could also be a house servant for able-bodied families, a janitor, a waiter -- lots of jobs. Even, yes, a nanny, and they were talking about robots being essentially Head Start teachers as well. Is this good?
Where I come in technically is that basically, Siri works, kinda, because all you do is ask her -- excuse me, it -- questions, and maybe give some basic instructions in a limited domain, such as calling a number. But your robot companion has to accurately interpret much more complex domains of speech -- what we call the full range of illocutionary acts, such as all the various kinds of questions, promises, and expressions of feeling, even jokes; figure out your intentions, desires, and state of mind; and respond appropriately. Note that I didn't say the robot has to understand anything -- that's different. In fact, what we've learned from decades of failure in artificial intelligence is that we have much more success getting computers to respond appropriately to language inputs if we forget about understanding and just automate the responses based on statistical correlations of language content with illocutions.
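To make that last point concrete, here is a toy sketch of what "statistical correlations of language content with illocutions" can mean in practice. This is purely my own illustration, not anything presented at the symposium: a tiny naive Bayes classifier that guesses an utterance's illocutionary type (question, commitment, expressive, directive) from word counts alone. The training examples and labels are invented for the demo. Notice that nothing here "understands" anything; it just counts words.

```python
# Toy illocutionary-act classifier: a hypothetical illustration of labeling
# utterances by statistical correlation, with no understanding involved.
from collections import Counter, defaultdict
import math

# Invented mini training set: utterances paired with speech-act labels.
TRAINING = [
    ("what time is it", "question"),
    ("where did you put my glasses", "question"),
    ("i promise i will call tomorrow", "commitment"),
    ("i will definitely be there", "commitment"),
    ("i feel so lonely today", "expressive"),
    ("that makes me happy", "expressive"),
    ("please call my daughter", "directive"),
    ("turn off the lights", "directive"),
]

def train(examples):
    """Count how often each word co-occurs with each illocution label."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label whose word statistics best match the utterance
    (naive Bayes with add-one smoothing over the shared vocabulary)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING)
print(classify("where is my book", word_counts, label_counts))
```

Real systems are enormously more sophisticated, of course, but the principle is the same: surface statistics stand in for comprehension, which is exactly why the approach succeeds where "understanding" failed.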
Fortunately, we are so far from this that I'm not worried about it happening any time soon. I think. But if we do give robots more and more autonomy and behavioral flexibility, then we have to start worrying about robot ethics. Also, does tossing people a robot as a substitute for human companionship or nurture mean we are meeting a social need, or consigning people to a kind of hell?