Research Interests

My main research areas are epistemology, philosophy of science, and philosophy of machine learning. I am also interested in feminist philosophy and philosophy of religion:

  • My main research program is about the concept of evidence. I develop a view on which what is evidence for what is constituted through norm-conforming social deliberation. One consequence of my view is that evidence is political.

  • I collaborate on projects in the philosophy of machine learning with computer scientists. My collaborations are aimed at analyzing epistemic practices in the discipline of machine learning and understanding their social and political aspects.

  • My interests in feminist philosophy include oppression and the institution of legal marriage. My interests in the philosophy of religion include conversions.

Peer-Reviewed Publications

"Theory choice, non-epistemic values, and machine learning"

In: Synthese (2020)

Winner of the Fink Prize (UC Berkeley, 2019), for the best paper written by a philosophy graduate student

Preprint, Official version

Can we choose between theories without relying on non-epistemic values, such as social and political values? I use a theorem from machine learning to support the claim that we can’t.

"Value-laden disciplinary shifts in machine learning"

With Smitha Milli (UC Berkeley, Computer Science)

In: Conference on Fairness, Accountability, and Transparency (FAccT, 2020) 

People often think that deep learning, the currently predominant approach in machine learning, is “objectively” better than its competitors, in the sense that favoring it is politically neutral. However, we argue that the rise and fall of model types, such as deep learning, is value-laden, and we reveal the social and political values implicit in favoring deep learning over its competitors.

Non-Peer-Reviewed Publications

Current Controversies in Philosophy of Science

Edited with Shamik Dasgupta and Brad Weslake

Routledge (2021), Table of Contents

A collection of papers in contemporary philosophy of science. The book is organized around six questions in the philosophy of science, with two papers addressing each question. The book also contains study questions and recommendations for further reading, meant to provide guidance for students.

With Shamik Dasgupta

In: Current Controversies in Philosophy of Science

An overview of the questions and papers appearing in the book.

Work in Progress

Under Review: A paper using the social view of evidence to analyze an epistemic phenomenon

Under Review: A paper giving a philosophical analysis of disciplinary shifts in machine learning (co-authored)

"The social view of evidence"

I develop a social view of evidence. On this view, evidential relations, e.g. “e is evidence for h”, are constituted in group deliberations that satisfy norms of deliberation. This paper explains how the view works, demonstrates its power to explain phenomena related to evidence, and uses it both to show that evidence is political and to illuminate long-standing epistemic puzzles, including Hume’s problem of induction and Cartesian skepticism.

"The values of the machine learning discipline"

With Ria Kalluri (Stanford, Computer Science), Dallas Card (Stanford, Computer Science), William Agnew (University of Washington, Computer Science), Abeba Birhane (University College Dublin, Cognitive Science), and Michelle Bao (Stanford, Computer Science)

Machine learning (ML) currently exerts an outsized influence on the world, increasingly affecting communities and institutional practices. It is therefore critical that we question vague conceptions of the field as value-neutral or universally beneficial, and investigate what specific values the field is advancing. In this paper, we present a rigorous examination of the values of the field by quantitatively and qualitatively analyzing 100 highly cited ML papers published at premier ML conferences, ICML and NeurIPS. We annotate key features of papers which reveal their values: how they justify their choice of project, which aspects they uplift, their consideration of potential negative consequences, and their institutional affiliations and funding sources. We find that societal needs are typically very loosely connected to the choice of project, if mentioned at all, and that consideration of negative consequences is extremely rare. We identify 67 values that are uplifted in machine learning research, and, of these, we find that papers most frequently justify and assess themselves based on performance, generalization, efficiency, researcher understanding, novelty, and building on previous work. We present extensive textual evidence and analysis of how these values are operationalized. Notably, we find that each of these top values is currently being defined and applied with assumptions and implications generally supporting the centralization of power. Finally, we find increasingly close ties between these highly cited papers and tech companies and elite universities.

"Evidence and testimony"

I use the social view of evidence to analyze testimony of evidence, especially expert testimony.

"Evidence and conversions"

I use the social view of evidence to analyze the phenomenon of conversion.