No one knows. But lots of folks are asking.
Microsoft has one answer. Google has another, similar answer. The Future of Life Institute has yet another: a 23-point list of ethical principles.
These lists overlap a lot, but they also contain a lot of noise. Of course systems should be safe, reliable, just, and secure — that part is marketing language no one disagrees with. What we need to figure out are the hard rules. How transparent should we require AI systems to be? How explainable? That part could be hard.
In any case, this year seems to be the one for forming advisory boards to figure out what rules we should have around (1) letting AIs defend or kill us; (2) letting AIs treat us; and (3) maintaining US dominance in AI.