The White House has released draft “guidance for regulation of artificial intelligence applications.” The memo states that “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.”
"Agencies should consider new regulation only after they have reached the decision . . . that Federal regulation is necessary."
Nevertheless, the memo enumerates ten principles that agencies should take into account should they ultimately take action affecting AI:
- Public Trust in AI. Don’t undermine it by allowing AI systems to pose risks to privacy, individual rights, autonomy, and civil liberties.
- Public Participation. Don’t block public participation in the rulemaking process.
- Scientific Integrity and Information Quality. Use scientific principles.
- Risk Assessment and Management. Use risk management principles.
- Benefits and Costs.
- Flexibility. Be flexible and ensure American companies are not disadvantaged by the United States’ regulatory regime.
- Fairness and Non-Discrimination.
- Disclosure and Transparency.
- Safety and Security.
- Interagency Coordination. Agencies should coordinate.
Overall, the memo is a long-winded directive that agencies should not regulate, but that if for some reason they feel they must, they should consider the same basic principles everyone else is listing as AI concerns: safety, security, transparency, and fairness.