DARPA-funded explainable AI program aims to build trust in robot task-execution
A new study by UCLA computer scientists, statisticians and psychologists shows potential for robots powered by artificial intelligence to earn the trust of humans.
Published in Science Robotics, the study shows that a robot not only learned how to open pill bottles with safety locks after a few rounds of human demonstration, but could also explain its behaviors in multiple ways in real time. The study was supported by the Defense Advanced Research Projects Agency, also known as DARPA.
“In the past, machines were designed to do exactly what they are supposed to do, and in restricted workspaces under human control,” said Song-Chun Zhu, a UCLA professor of computer science and statistics. “As we enter a new age of AI, and rely on data-driven machines to make decisions and recommendations, they cannot yet explain those decisions and actions to human users. This has impeded the general acceptance of AI and robotics in critical tasks.”
Finding an effective way for robots to earn human trust is what Zhu and a team of UCLA researchers set out to do. “Instead of focusing on performance alone, the explainable AI demonstrated through this study can foster human trust and help humans predict robots’ future actions,” said study co-lead author Mark Edmonds, a UCLA doctoral student in computer science. “This will allow robots to explain their behaviors effectively when handling complex tasks and become more trustworthy to the human mind.”
In their experiments, the researchers first showed a robot how to open a pill bottle – the kind with a safety lock that must be pushed down before twisting to remove the cap. The demonstrators wore a glove fitted with sensors that let the robot track finger placement and the tactile forces applied. After several attempts, the robot learned how to open the bottle by itself.
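In broad strokes, learning from such demonstrations means turning several recorded glove traces into one target profile the robot can replay. The sketch below is purely illustrative – the sensor fields, the per-timestep averaging, and the toy numbers are assumptions for this example, not the study’s actual method:

```python
from statistics import mean

def learn_grasp_profile(demonstrations):
    """Average finger poses and applied forces across demonstrations.

    Each demonstration is a list of timesteps; each timestep is a dict
    with hypothetical 'pose' and 'force' readings from the glove.
    Returns a per-timestep target profile for the robot to replay.
    """
    n_steps = min(len(d) for d in demonstrations)
    profile = []
    for t in range(n_steps):
        profile.append({
            "pose": mean(d[t]["pose"] for d in demonstrations),
            "force": mean(d[t]["force"] for d in demonstrations),
        })
    return profile

# Two toy demonstrations of a push-then-twist opening motion:
# first press down hard, then rotate with lighter force.
demos = [
    [{"pose": 0.1, "force": 5.0}, {"pose": 0.9, "force": 2.0}],
    [{"pose": 0.3, "force": 7.0}, {"pose": 1.1, "force": 4.0}],
]
profile = learn_grasp_profile(demos)
print(profile)  # averaged pose/force targets per timestep
```

A real system would segment the traces into discrete actions (grasp, push, twist, pull) rather than averaging raw samples, but the averaging step conveys the basic idea of distilling several human attempts into one executable plan.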
The second part of this study looked at how much humans trusted that the robot performed the task. A video of the robot opening the bottle was shown to 150 UCLA students. They were then asked to rate how much they trusted that the robot was handling the task on its own.
“Traditionally, robotics research has not been closely linked to psychology, but that is changing,” said study co-author Hongjing Lu, a UCLA professor of psychology and statistics. “To understand what makes people trust a robot, you have to study people.”
The psychological test asked the students, who were split into five groups, to observe the robot performing the task of opening a pill bottle on video. Each group then received a different form of explanation of the robot’s actions: a baseline with no explanation; a symbolic explanation (the robot’s action sequence); a haptic explanation (poses and forces); a combination of the symbolic and haptic explanations; and a text-only summary describing the robot’s actions.
Using a 100-point scale, the participants then rated how much they trusted that the robot had performed the task by itself. Participants showed the most trust in the robot when presented with the combined symbolic and haptic explanations, whereas the group that received no explanation of the robot’s behaviors gave the lowest trust ratings.
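The comparison across conditions comes down to a per-group average of the trust ratings. The sketch below uses made-up numbers chosen only to mirror the reported ordering – the actual ratings come from the 150 participants in the study:

```python
from statistics import mean

# Hypothetical 100-point trust ratings per explanation condition;
# the group names mirror the five conditions described above.
ratings = {
    "baseline (no explanation)": [40, 35, 50],
    "symbolic":                  [60, 65, 70],
    "haptic":                    [55, 60, 62],
    "symbolic + haptic":         [75, 80, 85],
    "text summary":              [50, 55, 52],
}

# Rank conditions by average trust, highest first.
ranked = sorted(ratings, key=lambda g: mean(ratings[g]), reverse=True)
print(ranked[0])   # combined explanation earns the most trust
print(ranked[-1])  # the no-explanation baseline earns the least
```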
In a follow-up experiment, the participants were asked to predict the robot’s next actions as it attempted to open a similar bottle. They were shown step-by-step actions and asked to predict which one the robot would take. Those who accurately predicted the robot’s actions were also the ones who trusted the robot the most.
“In hindsight, this makes sense: if I gave you a complex IKEA furniture assembly instruction pamphlet, it’s unlikely you’d trust that I could complete the assembly from that,” Edmonds said. “But if I were to lay out a specific, step-by-step process, it’d be much more likely you’d believe I could assemble the furniture.”
The other lead authors on the paper include UCLA graduate students Feng Gao, Hangxin Liu, and Xu Xie, all members of Zhu’s research group, the Center for Vision, Cognition, Learning, and Autonomy. The study’s other authors, all from UCLA, were graduate student Siyuan Qi; postdoctoral scholar Yixin Zhu; Ying Nian Wu, professor of statistics; and former doctoral student Brandon Rothrock, now at NASA’s Jet Propulsion Laboratory.
UCLA Samueli is a tightly knit community of 185 full-time faculty members, more than 6,000 undergraduate and graduate students, as well as 40,000 active alumni. Known as the birthplace of the internet, UCLA Samueli is also where countless other fields took some of their first steps – from artificial intelligence to reverse osmosis, from mobile communications to human prosthetics. UCLA Samueli is consistently ranked in the Top 10 among U.S. public engineering schools. The school’s online master’s program ranks in the Top 3.