
"Longtermism" and "neartermism" in AI Ethics

What is the "long-term" camp in AI ethics? And what does it have in common with other AI ethics groups? I talked about these questions at the Responsible AI symposium on June 16, 2022.


No recording is available, unfortunately, but you can read the LinkedIn discussion here.


Some of my thoughts:


➤ What is "longtermism"?


Longtermism is an approach that focuses on the fate of humanity in the very far future, thousands or millions of years from now.


➤ What is longtermism in AI ethics?


In AI ethics, the longtermist camp is concerned with how AI might impact humanity in the very far future.


For example, what can we do today to prevent superintelligent AI from enslaving, or even exterminating, humans in the very far future? Longtermists argue that while the chance of this happening is very small, the consequences would be so grave that we must act to avoid it.


The approach is prevalent within the effective altruism movement and is associated with Silicon Valley culture, especially tech billionaires.


➤ The dispute about longtermism in AI ethics


Some of those who oppose longtermism, myself included, argue that it is a distraction from the racism, sexism, human rights violations, and other harms that AI is causing right now.


The debate between longtermists and other AI ethicists has sometimes been very heated. For example, in a tweet, Timnit Gebru wrote: "As far as I'm concerned [effective altruism and longtermism] is the religion of the billionaires & it would have been bizarre and funny if it weren't real."


➤ What do longtermists have in common with other AI ethicists?


Bridging between longtermists and others can strengthen AI ethics. That is the reason for Thursday's panel.


In my opinion, understanding what longtermists have in common with others starts with understanding what the disagreement is really about.


Longtermists call their opponents "near-termists". I think this label misrepresents the disagreement.


The harms that AI is causing now can also lead, with some small probability, to the extermination of humanity. To give one example, the spread of misinformation and hate facilitated by AI could lead to global nuclear war or to broader denial of climate change.


I would be interested in hearing what longtermists think of this way of framing the debate and whether it brings them closer to the concerns that other AI ethicists are raising.

