What do investors need to know about AI risks?




What do investors need to know about AI risks? A few thoughts following a workshop I led at Principles for Responsible Investment. Here is a link to the full blog post about the workshop on the PRI blog.

➤ AI is booming. Most companies already employ AI, and investments in AI are on the rise. The vast majority of companies are likely to use AI in the coming years.

➤ AI presents many environmental and social (ESG) risks. Examples include exacerbating social inequalities, disrupting democratic processes, and high carbon emissions. Responsible AI is part of ESG.

➤ Regulation efforts are picking up steam, and some are already in effect.

➤ Here are three approaches to evaluating the environmental and social risks that a particular AI system or company poses:

1. By application type - assign coarse-grained risk levels based on the application type, following the risk classification presented in the EU AI Act.

2. Evaluate the company's responsible AI maturity - companies that develop and deploy AI responsibly are more likely to detect AI ethics problems and fix them.

3. Third-party evaluation - for mature companies, consider engaging third-party auditors with relevant technical, ethical, and legal expertise.

➤ Thank you to Daram Pandian, who co-authored this blog post with me; to Peter Dunbar, CFA, who invited me to lead this workshop; to Eline Sleurink, who helped organize; and to the workshop's participants for a wonderful discussion.





