While most science fiction stories depict a terrifying dystopian future with humans overtaken by robots, the University’s Interactive Machines Group — led by computer science professor Marynel Vázquez — is challenging this narrative by researching robot abuse instigated by humans.

The Interactive Machines Group found that, in a controlled laboratory setup, people are actually willing to help a robot mistreated by another person (a confederate of the experiment), especially if other robots express sadness in response to the mistreatment. In some instances, participants intervened to help a robot right after it was mistreated, or even right before the abuse occurred. Promisingly, some participants told the person mistreating the robot to stop, or changed the dynamics of the task to prevent the robot from doing something that could invite further mistreatment. The implications of this research could influence the future of human-robot relations.

“Motivating human bystanders to intervene in this kind of situation could be a viable way for robots to reduce the negative effects of robot mistreatment,” Vázquez wrote in an email to the News. “We do not know, though, what underlying psychological mechanism is motivating people to help … We want to continue our research to answer these questions.”

Vázquez’s past research had shown that it is better for robots to shut down for a few seconds than to react emotionally, since shutting down increased bystanders’ perception that the robot was being mistreated. Building on that finding, her team devised a different approach: a separate robot that motivated human bystanders to intervene by displaying emotions they could recognize and empathize with.

This study was motivated in part by Vázquez’s prior research experience in human-robot interaction, as well as by recent examples of robot mistreatment in the news, from the destruction of the Canadian hitchhiking robot “HitchBOT” to children abusing a non-confrontational robot at a mall. Vázquez set out to determine whether human participants in a study could recognize robot abuse and to gauge their response when other robots displayed sadness in reaction to that abuse.

“As a roboticist, I spend a lot of time making robots that hopefully, someday, will be able to help people in a variety of ways,” Vázquez said. “But if people act adversarially towards these robots, they won’t be able to help them as originally intended.” 

This research could also potentially be used to de-escalate robot mistreatment, which benefits both the robot and any humans who could accidentally be hurt when acting adversarially towards a machine. Furthermore, the research has promising implications for anti-bullying programs in schools. “The idea that human bystanders can help robots when they are mistreated by someone,” Vázquez said, “could be applied to create robots that help with human bullying.”

Yale students have diverse opinions on whether robot abuse is a pertinent problem. 

“Currently, I don’t think [robot abuse] is a big problem,” Alex Johnson ’24, a Yale undergraduate student with eight years of experience in the robotics field, wrote to the News. “At least with modern technology, robots are a combination of wires, motors and code, to say it simply … With advancements in the complexity of [artificial intelligence], I think problems could arise. Humanity may eventually arrive at a point where robots have some degree of sentience … While it is again very complex, at that point I believe (sentient) robots should be treated no less than how humans treat animals, most likely even treating robots how one might treat a person.”

Another undergraduate student, Rachel Folmar ’24, who is prospectively majoring in mechanical engineering and has four years of robotics experience, echoed some of Johnson’s sentiments. 

“‘Abuse’ of robots is pretty inconsequential to me as long as those robots don’t feel pain or have any real emotions,” Folmar wrote. “However, if abuse of humanoid robots becomes a way to represent abuse of humans, threaten abuse of humans, or numbs the abuser to the consequences of their actions, then of course there’s potential for ethical problems.”

Students who performed the core of the research work on this project with Vázquez included Joe Connolly ’22, Nicole Salomons GRD ’22, Viola Mocz GRD ’24, Nathan Tsoi GRD ’25 and Joseph Valdez. These students piloted the experimental protocol, conducted data collection, analyzed results and wrote the paper. In addition, students Katharine Li ’21 and Ananya Parthasarathy ’20 helped the group early on with the experimental design. Vázquez’s role in the project was to lead the team, ensuring that all members were on the same page about next steps in the study while resolving any problems that arose along the way. This project is a testament to the hands-on research that Yale undergraduate students can pioneer themselves.

“It’s not hard for undergraduates to get involved in robotics research,” Vázquez wrote. “I often hold meetings at the beginning or the end of the semester to pair students with current projects. All that students need to do is to contact me to learn about potential opportunities.”

Vázquez began studying robotics during her undergraduate years in Venezuela.

Anjali Mangla is a Science & Technology Editor for the News. She previously covered the intersection of STEM and social justice. Anjali is a sophomore in Ezra Stiles College planning to study Neuroscience, Global Affairs and Global Health Studies.