Robots and AI devices are becoming increasingly ubiquitous in the developed world. As they become more prevalent, we are becoming more fearful of their impact: one study in 2017 found that 72 percent of Americans said they were at least somewhat worried about a world where machines perform many of the tasks traditionally done by humans. When we think of robots, we often picture human-like machines capable of movement and independent decision making, but the vast majority are stationary robotic arms or computer-controlled tools that perform repetitive tasks with clockwork precision, time after time. Recent research has found that people demand a much higher success rate from robots than from their flesh-and-blood counterparts. That's the case even though the overwhelming evidence suggests these new technologies perform better than humans at some of the same tasks.
When humans make mistakes, we have mechanisms to apportion blame or to re-evaluate systems and processes to limit errors, but we accept that humans will err. When a robot or an AI makes mistakes, we are far less understanding or accepting. In one study, researchers asked a group of undergraduate students to forecast scheduling for hospital rooms. The students could get help from either an "advanced" computer system or a human specialist. After both began making bad predictions, however, the students were much quicker to disregard the computer in subsequent trials than the human. They found it in themselves to accept the flaws in other people, but not in the machine.
We forget that robots and AI are designed by humans, so they can inherit our flaws in reasoning too. In the US, some areas used a computer algorithm to estimate the likelihood that prisoners released early would reoffend. But some of the data points that fed into those estimates, like the defendant's ZIP code, were believed to reinforce racial biases. Come along to the Banshee Labyrinth tonight at 7:30 to hear Prof. Ruth Aylett talk about her work with robots and AI.