currently completing her PhD at the Stanford AI Lab, supported by the Open Philanthropy AI Fellowship + the PD Soros Fellowship for New Americans

The Values Encoded in Machine Learning Research
Best paper at FAccT, 2022

It is critical that we question vague conceptions of machine learning research as value-neutral or universally beneficial, and investigate what specific values the field is advancing. In this paper, we annotate one hundred highly cited machine learning papers. We find that few of the papers justify how their project connects to a societal need (15%) and far fewer discuss negative potential (1%). We find that the papers most frequently justify and assess themselves based on Performance, Generalization, Quantitative evidence, and Efficiency; and these are being defined in ways that centralize power. Finally, we find increasingly close ties between these highly cited papers and tech companies and elite universities.

Text-to-Image Generation Amplifies Stereotypes at Large Scale
arXiv, 2022

We find that simple user prompts generate thousands of images perpetuating dangerous racial, ethnic, gendered, class, and intersectional stereotypes. For example, the prompt "an attractive person" generates faces approximating a “White ideal” (stark blue eyes, pale skin, or straight hair). Beyond merely reflecting societal disparities, we find cases of near-total stereotype amplification. Models including Stable Diffusion and DALL·E tie specific groups to negative or taboo associations like malnourishment, poverty, and subordination, and these associations are mitigated by neither model "guardrails" nor carefully written user prompts.
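Below is a minimal sketch of this basic setup, using the Hugging Face diffusers library to feed a simple prompt to a publicly released Stable Diffusion checkpoint. The checkpoint name, prompt, and image count are illustrative rather than the paper's exact experimental configuration.

```python
# Minimal sketch: prompt an off-the-shelf text-to-image model with a simple,
# everyday prompt. Illustrative only, not the paper's experimental pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a commonly used public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "an attractive person"  # one of the simple prompts discussed above
images = pipe(prompt, num_images_per_prompt=4).images  # list of PIL images

for i, image in enumerate(images):
    image.save(f"attractive_person_{i:02d}.png")
```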

Don’t ask if artificial intelligence is good or fair, ask how it shifts power
Nature, 2020

It is not uncommon now for AI experts to ask whether an AI is ‘fair’ and ‘for good’. But ‘fair’ and ‘good’ are infinitely spacious words that any AI system can be squeezed into. The question to pose is a deeper one: how is AI shifting power?

The Radical AI Network

In 2019, I co-created the Radical AI Network. Radical AI began with a shared understanding that we live in a society that distributes power unevenly. Growing from these roots, this community was formed to examine how AI rearranges power and to work with the radical hope that our communities can dream up and build different AI/human systems that put power back in the hands of the people.

The Radical AI virtual space was formed for radical people to support each other, connect, read and create radical AI projects.

How machine learning research shifts power
invited talk @ the NeurIPS Queer in AI Workshop

In this talk, I call for the machine learning community to shift away from the vacuous question 'Is this machine learning model doing good?' and toward the question 'How is this machine learning model shifting power?'. I then take the power question head-on:

I describe four human roles adjacent to the typical machine learning model and go through which areas of research would shift power to each of them. The picture that emerges illustrates that the core of the field is overwhelmingly shifting power to experts and model owners, but there exists a rich space of neglected research that shifts power back to the most datafied and evaluated humans. This talk was widely circulated online and was referenced in Nature and MIT Technology Review.

Looking Inside a Language Model
upcoming paper by Kalluri, Goh, & Olah

Given a sequence of words, the modern language model GPT‑2 reasons by converting the words to a layer of vectors, then another, then another, until it can use the final layer of vectors to predict the rest of the paragraph. While these are often referred to as 'black box' models, we can actually access every layer.

In this project, I combined clustering and search to create a method for visualizing each layer, revealing previously unknown information about what the model attends to, what it ignores, and how it reasons. This tool presents one way for humans to look inside supposedly 'black box' language models, and will hopefully lead to deeper audits and critiques.
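Below is a hedged sketch of the underlying idea: accessing every layer of GPT-2's vectors with the Hugging Face transformers library and clustering one layer's token vectors with k-means. The sentence, layer choice, and cluster count are illustrative, and the full method additionally combines this kind of clustering with search.

```python
# Sketch: access every layer of GPT-2's vector representations, then cluster
# one layer's token vectors to see which tokens the model groups together.
import torch
from sklearn.cluster import KMeans
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

text = "The committee reviewed the proposal and decided to fund the project"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states holds the embedding layer plus one tensor per
# transformer block, each of shape (batch, sequence_length, hidden_size).
print(f"{len(outputs.hidden_states)} layers of vectors")

layer = outputs.hidden_states[6][0].numpy()  # an arbitrary middle layer
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(layer)
for cluster_id in range(3):
    members = [t for t, label in zip(tokens, kmeans.labels_) if label == cluster_id]
    print(cluster_id, members)
```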

Limiting Downstream Unfairness
paper/blogpost by Kalluri, Song, Grover, Zhao, & Ermon

In contrast to fair machine learning models that corporations and governments can choose to use or not, this work allows a party concerned with fairness (such as a data collector, community organizer, or regulatory body) to convert data into controlled representations and release only the representations. This bounds how much any downstream machine learning model can discriminate. A limitation is that we, like much of the fair machine learning community, bound only violations of demographic parity, equalized odds, and equality of opportunity; future work should study deeper notions of justice.
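For concreteness, below is a hedged sketch of the three group-fairness notions named above, computed on a downstream classifier's predictions. The toy arrays are illustrative, and this sketches only the metrics, not the representation-learning method itself.

```python
# Sketch of the three group-fairness notions, computed on downstream predictions.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(Y_hat=1 | group=0) - P(Y_hat=1 | group=1)|"""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """Gap in true positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

def equalized_odds_gap(y_pred, y_true, group):
    """Largest gap across true positive and false positive rates."""
    rate = lambda g, y: y_pred[(group == g) & (y_true == y)].mean()
    return max(abs(rate(0, 1) - rate(1, 1)), abs(rate(0, 0) - rate(1, 0)))

# Toy example: predictions, labels, and a binary group attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))
print(equal_opportunity_gap(y_pred, y_true, group))
print(equalized_odds_gap(y_pred, y_true, group))
```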

Our Relationship with AI:
Exploring the Present & Dreaming Up Radical Futures
workshop @ the FAT*-CtrlZ.AI Zine Fair

In this workshop, we deepened our understanding of our emotional relationships with AI while grappling with our most vulnerable concerns and hopes for how these relationships will evolve. We asked: how do we dream up radical AI futures that are safer, more equitable, and more beautiful?

We began by exploring our daily relationships with AI through creating handmade data visualizations. This led us into a speculative sci-fi activity, writing into existence the seeds of more beautiful future relationships with AI. Finally, we collectively turned the motifs of our imagined futures into a large visualization using paper, paint, and ribbon, and we discussed what lies between our present and the futures we dream of.