As it becomes increasingly apparent that we cannot tell an artificial intelligence precisely what its goal should be, a growing number of researchers and ethicists are throwing up their hands and asking the AIs to learn that part as well.
Machines that have our objectives as their only guiding principle will be necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines.
Uncertainty about objectives might sound counterproductive, but it is actually an essential feature of safe intelligent systems. It implies that no matter how intelligent they become, machines will always defer to humans. They will ask permission when appropriate, they will accept correction, and, most important, they will allow themselves to be switched off — precisely because they want to avoid doing whatever it is that would give humans a reason to switch them off.
From Stuart Russell's op-ed "How to Stop Superhuman A.I. Before It Stops Us"
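To make that argument concrete, here is a toy numerical sketch (in Python, with made-up numbers; an illustration, not Russell's actual formalism): a machine that is uncertain about how much the human values its proposed action does at least as well, in expectation, by deferring to the human as it does by acting unilaterally or by switching itself off.

```python
import numpy as np

# Toy model of the "off-switch" reasoning: the machine proposes an action
# whose value U to the human is unknown to it; it only holds a belief
# distribution over U. All numbers here are invented for illustration.
rng = np.random.default_rng(0)
belief_over_U = rng.normal(loc=0.5, scale=2.0, size=100_000)  # uncertain payoff

# Option 1: act immediately, ignoring the human.
act_now = belief_over_U.mean()

# Option 2: switch itself off unilaterally (defined here as zero value).
switch_off = 0.0

# Option 3: defer to the human, who allows the action only when it is
# actually good (U > 0) and hits the off switch otherwise.
defer = np.maximum(belief_over_U, 0.0).mean()

print(f"act now:    {act_now:.3f}")
print(f"switch off: {switch_off:.3f}")
print(f"defer:      {defer:.3f}")
# Since E[max(U, 0)] >= max(E[U], 0), deferring, and therefore leaving the
# off switch in human hands, is never worse in expectation and is strictly
# better whenever the machine is genuinely uncertain about what we want.
```

The inequality in the final comment is what carries the argument: deferring stops being strictly better only when the machine is already certain about our objectives, which is precisely the situation the passage above warns against.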
This raises a lot of questions, not least of which: what are our objectives? But it turns out we have the same problem describing what we want as we have describing how we perceive. We’re just going to have to show you.
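"Showing" has a concrete computational reading: the machine treats our behavior as evidence about what we value and updates a belief over hypotheses, rather than being handed a written-down objective. Below is a minimal Bayesian sketch along those lines; the hypotheses, likelihoods, and observations are all invented for illustration.

```python
import numpy as np

# The machine starts with a uniform prior over two hypotheses about what a
# person values and updates it from observed choices instead of from a
# stated objective. Names and numbers are hypothetical.
hypotheses = ["prefers coffee", "prefers tea"]
prior = np.array([0.5, 0.5])

# Likelihood of each observation under each hypothesis: people are assumed
# to act on their underlying preference 80% of the time.
likelihood = {"chose coffee": np.array([0.8, 0.2]),
              "chose tea":    np.array([0.2, 0.8])}

observed = ["chose coffee", "chose coffee", "chose tea", "chose coffee"]

posterior = prior.copy()
for choice in observed:
    posterior *= likelihood[choice]   # Bayes' rule, unnormalized
    posterior /= posterior.sum()      # renormalize

for h, p in zip(hypotheses, posterior):
    print(f"{h}: {p:.2f}")
# The machine is never told the objective; its belief about what we want
# is carried entirely by the evidence of our behavior.
```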