Published: The Values Encoded in Machine Learning Research (academic paper)

➤ My co-authors are Abeba Birhane, Pratyusha (Ria) Kalluri, Dallas Card, William Agnew, and Michelle Bao.

➤ The paper in one sentence: Social and political values are deeply embedded in machine learning

➤ The paper was published at FAccT, the top publication venue in AI ethics, and also won the Distinguished Paper Award.

TL;DR of the paper:

➤ People think that machine learning is "objective" or "value-neutral". "It's just math and statistics," some might say. It is not.

➤ We found evidence that machine learning centralizes power.

Top cited papers most frequently justify themselves in terms of Performance, Generalization, Quantitative evidence, Efficiency, Building on past work, and Novelty.

These may sound like purely technical concepts. However, systematic textual evidence shows that they are used in ways that centralize power.

➤ We found an increasing influence of big tech and elite universities in machine learning papers.

-- Top-cited papers with authors affiliated with corporations increased from 24% to 55% between 2008 and 2019

-- Top-cited papers with authors affiliated with "big tech" increased from 13% to 47% between 2008 and 2019

-- Top-cited papers with either corporate authors or corporate funding increased from 45% to 79% between 2008 and 2019

➤ We found a lack of connection to human needs in machine learning papers.

Few of the top-cited papers connect their project to a societal need (15%), and far fewer discuss potential negative impacts (1%).
