No thanks to Steven Spielberg, robotics research has moved from the realm of science fiction into that of academia in recent years. At Yale this move is manifested in part by the presence of Nico, a short, baseball-capped addition to the Computer Science Department.

Nico is a robot that computer science student Kevin Gold GRD ’09 uses to run experiments dedicated to topical issues in the field of artificial intelligence. Gold, a member of professor Brian Scassellati’s social robotics lab, develops models of language acquisition and self-recognition with the diminutive robot. So far, Nico has been successful in learning to identify its own mirror image and to differentiate between speaker and addressee, giving Gold hope of future discoveries involving learning and interaction between humans and robots.

Self-recognition has traditionally been considered a sign of superior intelligence, since only a handful of species, including chimpanzees, orangutans, dolphins and humans, have demonstrated it. In a well-known cognitive science experiment, sedated chimps are marked with rouge and, upon awakening, see their own reflections in a mirror. They then reach up and try to wipe the rouge from their faces, indicating that they associate the reflection with themselves.

Researchers have argued that because chimps can recognize themselves in the mirror, they have a notion of what other chimps see — an important point in theories of how social hierarchies are formed. But there are conflicting opinions on what the mirror test really means.

According to Gold, there are two hypotheses about how chimps recognize themselves: Either they have a sense of how they appear to others and recognize that in the mirror, or they are able to identify themselves by analyzing the motion of the figure in the mirror. Gold says that within the latter hypothesis it’s not clear what type of motion a chimp would be analyzing: the presence or absence of motion, or some other quality.

What is clear from Gold’s experiments, though, is that Nico is able to recognize itself in a mirror using only its own motion. This suggests that chimps that seem to recognize themselves may simply be tracking motion as well.

The robot classifies everything it sees into one of three categories — self, other and inanimate — which it generates from parameters that Gold has given it and from observing its surroundings. It uses visual feedback information to modify the parameters so it can classify objects with higher confidence, and then compares the motion it sees in the room to the parameters established for the three categories. To identify itself, Nico merely moves its arm and tags the object whose motion matches the parameters as “self.” This is how the robot is able to identify its own movement in the mirror in a way that seems similar to that of a self-aware being.
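The idea can be illustrated with a minimal sketch. The code below is not Nico’s actual software (Gold’s model accumulates statistical evidence over visual feedback); the function names, thresholds and the simple match-counting scheme are assumptions made purely for illustration of the self/other/inanimate distinction.

```python
# Illustrative sketch of motion-based self-recognition: tag as "self" the
# tracked object whose motion best matches the robot's own motor commands.
# Thresholds and the matching scheme are assumptions, not Nico's parameters.

def correlation(motor_commands, observed_motion):
    """Fraction of time steps where the object's motion (1 = moving,
    0 = still) agrees with the robot's own motor activity."""
    matches = sum(1 for m, o in zip(motor_commands, observed_motion) if m == o)
    return matches / len(motor_commands)

def classify(motor_commands, observed_motion,
             self_threshold=0.9, animate_threshold=0.1):
    """Tag a tracked object as 'self', 'other' or 'inanimate'."""
    if correlation(motor_commands, observed_motion) >= self_threshold:
        return "self"        # moves exactly when the robot moves: mirror image
    if sum(observed_motion) / len(observed_motion) >= animate_threshold:
        return "other"       # moves, but independently of the robot
    return "inanimate"       # barely moves at all

# The robot waves its arm on a known schedule...
arm    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
mirror = list(arm)                        # the mirror image copies the arm
person = [0, 1, 1, 0, 1, 0, 1, 1, 0, 0]  # a person moving independently
chair  = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # nothing moves

print(classify(arm, mirror))  # self
print(classify(arm, person))  # other
print(classify(arm, chair))   # inanimate
```

In this toy version, the mirror image is tagged “self” simply because its motion is perfectly correlated with the robot’s arm commands, which is the core of the motion-analysis hypothesis described above.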

Gold is also working on how robots can learn the meanings of “I” and “you.” Usually, machines learn language by connecting terms to a particular visual image. “I” and “you,” however, don’t correspond to images, but rather to roles (the speaker, and the person spoken to, respectively).

Instead, the robot learns the correct use of “I” and “you” by watching two people play a game of catch, in which whoever has the ball says “I got the ball,” and whoever throws it says “You got the ball.” The robot then uses a phrase it already knows, namely “got the ball,” to infer which conversational role (speaker or person spoken to) each unknown word goes with. Those roles are then associated with the words “I” and “you.”
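That role-inference step can be sketched in a few lines. This is an assumed toy representation, not Gold’s actual model: each observation records who spoke and who held the ball, and the unknown words are simply tallied against the role that held the ball when they were uttered.

```python
from collections import defaultdict

# Illustrative sketch of learning "I"/"you" from the game of catch described
# above. The event format and counting scheme are assumptions for this example.

# Each observation: (utterance, speaker, ball_holder).
observations = [
    ("I got the ball", "Alice", "Alice"),
    ("You got the ball", "Bob", "Alice"),
    ("I got the ball", "Bob", "Bob"),
    ("You got the ball", "Alice", "Bob"),
]

known = {"got", "the", "ball"}  # words the robot already understands

# For each unknown word, count which conversational role held the ball
# when the word was uttered: the speaker or the addressee.
counts = defaultdict(lambda: {"speaker": 0, "addressee": 0})
for utterance, speaker, holder in observations:
    for word in utterance.lower().split():
        if word in known:
            continue
        role = "speaker" if holder == speaker else "addressee"
        counts[word][role] += 1

# A word's meaning is the role it most often co-occurs with.
meanings = {w: max(c, key=c.get) for w, c in counts.items()}
print(meanings)  # {'i': 'speaker', 'you': 'addressee'}
```

The key point mirrors the article: “I” and “you” never get tied to a fixed image, only to whichever role (speaker or addressee) the already-known phrase “got the ball” picks out in each exchange.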

Gold said it was tricky to present findings from this kind of research because of pre-existing conceptions about robots’ abilities.

“A lot of people try to overstate the abilities of robots’ programs,” he said. “AI has had a bad history with this, with what these systems can and can’t do.”

From here, there are two interesting directions that Gold’s research could take. The first one is getting the robot to think about its own goals as well as those of others, and, by differentiating between the two, to understand other people’s actions through its own. If the robot’s self-model becomes complex enough, the machine could use it to predict what to expect from a person in a given situation.

The second area Gold could investigate is language acquisition. He is interested in exploring further what robots can do with language — whether machines can learn the meaning of objects and their uses based on how people manipulate them.

Nico is not the only project underway at the social robotics lab. Wilma Bainbridge ’09, who works at the lab, said that researchers are planning to add new robots and improve old ones.

“The lab is currently working on a project that involves prosody, which refers to the way the pitch, stress and intonation of one’s voice varies,” she said. “The goal is to eventually create robots able to interpret prosody.”

Scassellati outlined the next step in robotics research — the creation of robots that can communicate with people on a higher level that involves mutual feedback.

“Many research groups, including our group at Yale, are focusing on how to build robots that can interact with people using the same natural social cues that we as humans use so effortlessly to interact with each other,” he said in an e-mail.

Scassellati also reflected on the rapid integration of robots in human life that has taken place in the recent past.

“In the last decade, we have seen robots begin to move out of the laboratory and the assembly line and into our homes and our hospitals,” he said. “Robots vacuum our floors and mow our lawns, aid in delicate surgeries and deliver medicine, and even give directions through museums and shopping malls.”

Sometimes, Spielberg doesn’t seem quite so far-fetched.