How to Identify AI Risks

How can organizations determine which risks their AI poses to people and society? Here is a list of questions organizations can ask themselves when thinking it through. Feedback welcome!

Which AI risks are relevant to your company

➤ The questions are in the attached document, and you can also find them on my website. Link in the comments.

➤ As you will see, the questions are grouped into four categories:

  • Fairness and Non-Harm

  • Transparency and Explainability

  • Data Protection

  • Human Autonomy and Control

These categories are based on a research paper mapping dominant themes in AI ethics. (link in the comments)

➤ These categories, and the questions themselves, are far from exhaustive. However, they can help organizations get started.

Companies that develop AI can use the list to reflect on their product. Investors and companies that procure AI systems can use it as part of their due diligence.

➤ The final version of this list will be included in a handbook for investors that I am writing, which will be out soon.

➤ Suggestions for other questions, better ways to formulate the questions, and any other feedback are very welcome!
