Monday, May 06, 2013
Yo, Robot!
We interrupt this long-form essay to report on my afternoon at our Second Annual Symposium on Human-Robot Interaction. Really. I was there because I study human-human interaction and I've been roped in -- well, alright, I didn't really mind, it's kind of interesting -- to letting computer scientists play with my concepts, which might turn out to be useful for getting machines to communicate with us more effectively.
I won't go into that in a lot of depth here, but what I do want to talk about is where the nerds think this whole thing is headed. You may or may not like it. One of the potential applications for interacting robots is to serve as companions and caregivers for elderly people. This actually gets talked about a lot. The social problem is that more and more people are living to be old, frail, widowed, and socially isolated. It's too expensive to give them homemakers and home health aides, and on top of that they're lonely. So maybe we can give them a robot.
I don't know about you but I find that fairly icky. Of course, if you could make such a robot, it could also be a house servant for able-bodied families, a janitor, a waiter -- lots of jobs. Even, yes, a nanny, and they were talking about robots being essentially Head Start teachers as well. Is this good?
Where I come in technically is that basically, Siri works, kinda, because all you do is ask her -- excuse me it -- questions and maybe give some basic instructions in a limited domain, such as calling a number. But your robot companion has to accurately interpret much more complex domains of speech, what we call the full range of illocutionary acts -- such as all the various kinds of questions, promises, and expressions of feeling, even jokes; figure out your intentions, desires, and state of mind; and respond appropriately. Note that I didn't say the robot has to understand anything -- that's different. In fact, what we've learned from decades of failure at artificial intelligence is that we have much more success getting computers to respond appropriately to language inputs if we forget about understanding and just automate the responses based on statistical correlations of language content with illocutions.
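To make that last point concrete, here's a toy sketch of my own -- nothing the symposium folks actually showed, and the example utterances, labels, and canned replies are all invented for illustration. The idea is simply: label some utterances by illocutionary act, count which words tend to go with which act, and then hand a new utterance the canned response for whichever act it most resembles. No understanding anywhere in sight.

    # Toy sketch of "responding to illocutions without understanding":
    # count how often words co-occur with hand-labeled illocutionary acts,
    # then answer a new utterance with a canned response for whichever
    # act scores highest. All training examples and replies are made up.
    from collections import Counter, defaultdict

    TRAINING = [
        ("what time is my appointment", "question"),
        ("when does the bus come", "question"),
        ("i promise i will take my pills", "commitment"),
        ("i will call my daughter tomorrow", "commitment"),
        ("i feel so lonely today", "expressive"),
        ("my knees hurt and i am tired", "expressive"),
    ]

    CANNED_RESPONSES = {
        "question": "Let me look that up for you.",
        "commitment": "Good, I will remind you about that.",
        "expressive": "I'm sorry to hear that. Would you like to talk about it?",
    }

    # Count word/label co-occurrences -- the "statistical correlation" part.
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in TRAINING:
        label_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1

    def classify(utterance):
        """Score each illocution by how well the utterance's words match it."""
        scores = {}
        for label in label_counts:
            total = sum(word_counts[label].values())
            score = label_counts[label]
            for word in utterance.lower().split():
                # Add-one smoothing so unseen words don't zero out the score.
                score *= (word_counts[label][word] + 1) / (total + 1)
            scores[label] = score
        return max(scores, key=scores.get)

    def respond(utterance):
        return CANNED_RESPONSES[classify(utterance)]

    print(respond("when is lunch served"))        # treated as a question
    print(respond("i feel sad about my garden"))  # treated as an expression of feeling

That's the whole trick, scaled up enormously in the real systems: correlations between surface features and illocutions, plus a response policy. The machine never has to know what loneliness is to produce a sympathetic-sounding sentence.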
Fortunately, we are so far from this that I'm not worried about it happening any time soon. I think. But if we do give robots more and more autonomy and behavioral flexibility, then we have to start worrying about robot ethics. Also, does tossing people a robot as a substitute for human companionship or nurture mean we are meeting a social need, or consigning people to a kind of hell?
4 comments:
Yikes! RUR, and all that. Even more important, from a human perspective, is that perceiving we are having our needs met by an automaton seems akin to a cat getting "affection" by rubbing on a table leg . . . the problem is, we are not cats. (And I know at least one cat that prefers me to a table leg.)
And I feel bad that my mom is in an assisted living facility with other humans, but not with family. I think I'll call her and tell her it could be much much worse.
Only the very wealthy will have robots that can fake empathy attending to grandpa. The rest of us will have to make do with family who can't understand us.
Au contraire, this nerd thinks it will happen within the next 10 years. It will be a good thing, too: robots and pets will never replace my loved ones, but if I outlive them I'll be happy enough with HAL and Rover.
Tech development is approaching the singularity.