In this Gradient article, I explain why universal AI ethics principles are unlikely to be found and why organizations should not adopt dominant trends as a default. Instead, I suggest that each organization should articulate its own AI ethics principles, and I sketch ways to do so responsibly.
➤ What are AI ethics principles?
AI ethics principles describe what features are desirable in an ethical AI system.
More and more organizations are now writing their own principles. For example, Google states that it believes that AI systems should “be socially beneficial,” “avoid creating or reinforcing unfair bias,” “be built and tested for safety,” “be accountable to people,” and so on.
These principles are important because they can be the first step towards designing and deploying AI systems more responsibly. They define what the organization aims for.
➤ Why should organizations write their own AI ethics principles?
AI ethics principles are an expression of organizational values. Organizations should decide on their own values when it comes to AI, just like they do for other codes of ethics.
➤ How can organizations formulate AI ethics principles responsibly?
The process of formulating AI ethics principles should involve the following components:
1. Incorporating feedback from experts, especially experts in AI ethics and organizational behavior, and diverse stakeholders
2. Publicizing the principles and the process behind their formation
3. Periodic revisions of the principles, incorporating additional feedback from experts and stakeholders
Focusing on these aspects of the formulation process can help organizations avoid crafting superficial, incomplete, or otherwise problematic AI ethics principles.
➤ Formulating principles is the beginning, not the end
No less important, of course, is how the organization acts on its principles. After all, principles posted on websites don't make an impact on AI systems; actions do. I am also working on frameworks to evaluate the implementation of AI ethics. More about that in the future!
➤ Read the discussion on LinkedIn here