No one knows. But lots of folks are asking.
Microsoft has one answer. Google has another, similar answer. The Future of Life Institute has yet another: a 23-point list of ethical principles.
These rules have a lot of overlap, but also a lot of noise. Of course systems should be safe, reliable, just, and secure; that is marketing noise, and no one disagrees. We need to figure out the hard rules. How transparent should we require AI systems to be? How explainable? Those questions are hard.
In any case, this seems to be the year for forming advisory boards to figure out what rules we should have around (1) letting AIs defend or kill us; (2) letting AIs treat us; and (3) maintaining US dominance in AI.
Yet this peculiar retreat was venomous: No matter how Stockfish replied, it was doomed. It was almost as if AlphaZero was waiting for Stockfish to realize, after billions of brutish calculations, how hopeless its position truly was, so that the beast could relax and expire peacefully, like a vanquished bull before a matador. Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence.

From “One Giant Step for a Chess-Playing Machine”
When I was a graduate student in AI, I often pointed to the Deep Blue v. Kasparov series as an illustration of how clueless we were in AI: grandmasters usually considered only a few moves in their two minutes of thinking, while Deep Blue churned through millions. Yet we had no idea how grandmasters came up with those few moves for consideration.
AlphaZero is a revelation. I hope to post more on it.