currently completing their PhD at the Stanford AI Lab,
supported by the Open Philanthropy AI Fellowship
+ the PD Soros Fellowship for New Americans

The Radical AI Research Community
email the.radical.ai.network@gmail.com to request to join

In 2019, I founded the Radical AI Research Community. Radical AI begins with a shared understanding that we live in a society that distributes power unevenly. Growing from these roots, radical AI examines how AI rearranges power and works with radical hope that our communities can dream up and build different AI/human systems that put power back in the hands of the people.

In our virtual space, we connect, read my Radical AI Reading List, and work on radical AI projects. We also host in-person events and workshops.

How machine learning research shifts power
invited talk @ The NeurIPS Queer in AI Workshop

In this talk, I call for the machine learning community to shift away from the vacuous question 'Is this machine learning model doing good?' and toward the question 'How is this machine learning model shifting power?' I then take the power question head-on:

I describe four human roles adjacent to the typical machine learning model and map out which areas of research would shift power to each of them. What emerges is a compelling critique: the core of the field overwhelmingly shifts power to experts and model owners, while a rich space of neglected research could shift power back to the most datafied and evaluated humans. This talk was widely circulated online and was referenced in Nature and MIT Tech Review.

Looking Inside a Language Model
upcoming paper by Kalluri, Goh, & Olah

Given a sequence of words, the modern language model GPT-2 reasons by converting the words to a layer of vectors, then another, then another, until it can use the final layer of vectors to predict the rest of the paragraph. While these are often referred to as 'black box' models, we can actually access every layer.
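
As a rough illustration (not part of the paper), here is one way to pull out every layer's vectors for a short text using the Hugging Face transformers library; the model size, example sentence, and variable names are assumptions for demonstration only:

```python
# Sketch: access every layer of GPT-2's vectors for a short text (illustrative).
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

inputs = tokenizer("The kettle on the stove began to", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states holds the embedding layer plus one tensor per transformer
# layer, each of shape (batch, num_tokens, hidden_dim) -- every layer is available.
for i, layer in enumerate(outputs.hidden_states):
    print(f"layer {i}: {tuple(layer.shape)}")
```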

In this project, I combined clustering and search to create a color-based visualization of each layer, revealing previously unknown information about what the model attends to, what it ignores, and how it reasons. This tool presents one way for humans to look inside supposedly 'black box' language models, and will hopefully lead to deeper audits and critiques.
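
To give a concrete (hypothetical) sense of the clustering half of this kind of visualization, the sketch below clusters each layer's token vectors and assigns each cluster a color; the model call, cluster count, and palette are assumptions, not the paper's exact pipeline:

```python
# Sketch: color tokens by clustering each layer's vectors (illustrative, not the paper's method).
import torch
from sklearn.cluster import KMeans
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

text = "The kettle on the stove began to whistle"
inputs = tokenizer(text, return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

with torch.no_grad():
    hidden_states = model(**inputs, output_hidden_states=True).hidden_states

palette = ["red", "orange", "green", "blue", "purple"]
for layer_idx, layer in enumerate(hidden_states):
    vectors = layer[0].numpy()                      # (num_tokens, hidden_dim)
    n_clusters = min(len(palette), len(tokens))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vectors)
    # Tokens that land in the same cluster share a color at this layer.
    print(f"layer {layer_idx}: " + " ".join(f"{t}({palette[c]})" for t, c in zip(tokens, labels)))
```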

Limiting Downstream Unfairness
paper/blogpost by Kalluri, Song, Grover, Zhao, & Ermon

In contrast to fair machine learning models that corporations and governments can choose to use or not, this work allows a party concerned with fairness (such as a data collector, community organizer, or regulatory body) to convert data into controlled representations and release only those representations. This limits how much any downstream machine learning model can discriminate. A limitation is that we, like much of the fair machine learning community, constrain only demographic parity, equalized odds, and equality of opportunity; future work should study deeper notions of justice.
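
As a loose illustration of the general idea (an adversarial sketch, not the method in our paper), one can train an encoder so that the released representations stay useful while a sensitive attribute is hard to recover from them; every name, number, and the toy data below are assumptions for demonstration:

```python
# Sketch: learn representations z of data x that remain useful (low reconstruction
# error) while an adversary cannot recover the sensitive attribute a from z,
# limiting how much any downstream model trained on z can discriminate.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, k = 1024, 10, 4
a = torch.randint(0, 2, (n, 1)).float()        # toy sensitive attribute
x = torch.randn(n, d) + 2.0 * a                # toy features correlated with a

encoder = nn.Sequential(nn.Linear(d, k), nn.Tanh())
decoder = nn.Linear(k, d)
adversary = nn.Sequential(nn.Linear(k, 8), nn.ReLU(), nn.Linear(8, 1))

enc_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
adv_opt = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce, mse, lam = nn.BCEWithLogitsLoss(), nn.MSELoss(), 1.0

for step in range(2000):
    z = encoder(x)

    # 1) The adversary learns to predict a from the (frozen) representation.
    adv_loss = bce(adversary(z.detach()), a)
    adv_opt.zero_grad(); adv_loss.backward(); adv_opt.step()

    # 2) The encoder/decoder keep z informative about x while fooling the adversary.
    enc_loss = mse(decoder(z), x) - lam * bce(adversary(z), a)
    enc_opt.zero_grad(); enc_loss.backward(); enc_opt.step()

# The fairness-concerned party releases only z = encoder(x); downstream
# models never see x or a directly.
```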

Our Relationship with AI:
Exploring the Present & Dreaming Up Radical Futures
workshop @ The FAT*-CtrlZ.AI Zine Fair

In this workshop, we deepened our understanding of our emotional relationships with AI while grappling with our most vulnerable concerns and hopes for how these relationships will evolve, asking: how do we dream up radical AI futures that are safer, more equitable, and more beautiful?

In the lead-up to the workshop, we explored our daily relationships with AI by creating handmade data visualizations. At the workshop, we engaged in a speculative sci-fi activity, writing into existence the seeds of more beautiful future relationships with AI. Finally, we collectively turned the motifs of our imagined futures into a large visualization using paper, paint, and ribbon, and discussed what lies between our present and the futures we dream of.