Ben Cohen-Wang

bencw@mit.edu | Google Scholar | LinkedIn

I'm a second-year PhD student in EECS at MIT, where I am fortunate to be advised by Aleksander Mądry. I received my BS in computer science from Stanford, and spent two great years at Robust Intelligence before starting my PhD.

I am interested in developing machine learning models that can be safely deployed, with a focus on robustness to distribution shifts. In particular, I would like to understand how we can harness large-scale pre-training (e.g., CLIP, GPT) to develop robust task-specific models. Outside of research, I enjoy climbing, skiing, tennis, and volleyball.

Research

Ask Your Distribution Shift if Pre-Training is Right for You

Benjamin Cohen-Wang, Joshua Vendrow, Aleksander Mądry

Pre-training on a large and diverse general-purpose dataset and then fine-tuning on a task-specific dataset can be an effective way to develop models that are robust to distribution shifts. In practice, however, this approach helps significantly in some cases and not at all in others. In this work, we characterize the failure modes that pre-training can and cannot address.

Paper | Blog Post #1 | Blog Post #2 | Code

Comparing the Value of Labeled and Unlabeled Data in Method-of-Moments Latent Variable Estimation

Mayee Chen*, Benjamin Cohen-Wang*, Stephen Mussmann, Frederic Sala, Christopher Ré

Paper | Code

Interactive Programmatic Labeling for Weak Supervision

Benjamin Cohen-Wang, Stephen Mussmann, Alex Ratner, Christopher Ré

Paper