"Why should investors care about AI ethics?" at the AI Ethics International Colloquium

On July 23, 2022, I participated in a panel at the Artificial Intelligence Ethics International Colloquium, where I talked about why investors should care about AI ethics. The topic of the panel was: "AI Ethical Codes, Discussion and Implementation Prospects".


Here are the points I touched on:


➤ Investors should care about AI ethics for both social and financial reasons


➤ Social reasons

AI is spreading rapidly and is very likely to be embedded in every company that uses technology in the coming years. While AI can be very helpful, it can also be very destructive. AI ethics helps maximize the good that the technology can do and minimize the harm.


➤ Financial reasons

Attention to AI ethics can improve return on investment for five reasons:


AI ethics improves AI systems.

AI ethics improves adoption.

AI ethics attracts talent.

AI ethics is crucial for compliance.

AI ethics is key for building a positive reputation.


➤ My co-panelists were:


-- Elizabeth D. Gibbons – Instructor, FXB Center for Health & Human Rights, Harvard University, USA. "Critique of AI codes of ethics from a human rights perspective"


-- Jean Pierre Llored – Professor-researcher-HDR in human and social sciences, École Centrale Casablanca, Morocco; École Centrale Supélec, France. "From bioethics to AI ethical codes: limits and perspectives"


-- Ayman Boughanmi – Professor at the University of Kairaouane, Tunisia. "The ethical dilemma of AI: between irresistible evolutions and improbable regulations"


-- Nicolás Duque Buitrago – Professor at the University of Caldas, Colombia. "Artificial intelligence and human rights: an ethical reflection on universalism and the Modus vivendi"


-- Yves Demazeau – Director of research at the CNRS and former president of the French Association of Artificial Intelligence, France. "Ethical design of large-scale AI systems"



➤ No recording is available, unfortunately.
