European Commission publishes “framework for achieving Trustworthy AI”

Like many recent frameworks, this assessment from the Commission's High-Level Expert Group on AI provides a list of fairly vague but nevertheless laudable principles that AI developers should respect:

Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

1. it should be lawful, complying with all applicable laws and regulations;

2. it should be ethical, ensuring adherence to ethical principles and values; and

3. it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

Ethics Guidelines for Trustworthy AI (via Commission reports website)

Great: lawful, ethical, and robust. Ok, how do we do that? Well, the report also lays out four ethical principles to help achieve Trustworthy AI:

  • Respect for human autonomy
  • Prevention of harm
  • Fairness
  • Explicability

Ok, great: lawful, ethical, and robust. And ethical means respect human autonomy, prevent harm, be fair, explain what the AI is doing. Got it. No wait, there are seven more (non-exhaustive) requirements for Trustworthy AI:

  • Human agency and oversight
  • Technical robustness and safety (robustness duplicate!)
  • Privacy and data governance (lawfulness duplicate!)
  • Transparency (explicability duplicate!)
  • Diversity, non-discrimination and fairness (fairness duplicate!)
  • Societal and environmental wellbeing (prevention of harm duplicate?)
  • Accountability

Ok, nail all these and we’re good? No, no, the report also recognizes that, “Tensions may arise between the above principles, for which there is no fixed solution.” For example, “trade-offs might have to be made between enhancing a system’s explainability (which may reduce its accuracy) or increasing its accuracy (at the cost of explainability).” And what should we do if tensions arise? “[S]ociety should endeavour to align them.”
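
That explainability/accuracy tension, at least, is real and easy to demonstrate. Here's a minimal sketch (the dataset and model choices are my illustration, not the report's): a shallow decision tree whose entire decision logic can be printed and read by a human, next to an ensemble of hundreds of trees that typically scores higher but resists inspection.

```python
# Illustrative sketch of the explainability/accuracy trade-off the
# report alludes to. Dataset and models are assumptions for the demo,
# not anything the Guidelines themselves prescribe.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable: a depth-2 tree whose full decision logic fits on a screen.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print(export_text(tree))  # prints the entire model, human-readable
print("tree accuracy:", tree.score(X_test, y_test))

# Usually more accurate, but effectively a black box of 300 trees.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))
```

Neither model is "Trustworthy AI" on its own, of course; the point is simply that the two goals the report asks society to "align" genuinely pull in opposite directions, and the report offers no guidance on how to pick.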

Clear as mud. Of course, to be fair, no one else is doing any better.