Andrew Keane Woods in “Robophobia”, University of Colorado Law Review:
[D]octors continue to privilege their own intuitions over automated decision-making aids. Since Meehl’s time, a growing body of social psychology scholarship has offered an explanation: bias against nonhuman decision-makers…. As Jack Balkin notes, “When we talk about robots, or AI agents, or algorithms, we usually focus on whether they cause problems or threats. But in most cases, the problem isn’t the robots. It’s the humans.”
Making decisions that go against our own instincts is very difficult (see also the List of cognitive biases), and deferring to data and algorithms over intuition is no exception.
A major challenge of AI ethics is figuring out when to trust the AIs.
Andrew Keane Woods suggests (1) defaulting to the use of AIs; (2) anthropomorphizing machines to encourage us to treat them as fellow decision-makers; (3) educating against robophobia; and, perhaps most dramatically, (4) banning humans from the loop. 😲