I'm a second-year PhD student in EECS at MIT, where I am fortunate to be advised by Aleksander Mądry. I received my BS in computer science from Stanford and spent two great years at Robust Intelligence before starting my PhD.
I am interested in developing machine learning systems that can be deployed safely. Outside of research, I enjoy climbing, skiing, tennis, and volleyball.
Benjamin Cohen-Wang*, Harshay Shah*, Kristian Georgiev*, Aleksander Mądry
Language models often need external information to respond to a given query. We provide this information as context and expect the model to use it when responding. But how do we know whether the model actually used the context, misinterpreted it, or made something up? We present ContextCite, a method for attributing statements generated by language models back to specific pieces of information provided in-context.
Paper | Blog Post #1 | Blog Post #2 | Demo | Python package
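To give a sense of what this looks like in practice, here is a minimal sketch using the Python package linked above. The class name ContextCiter, the from_pretrained constructor, and get_attributions follow the package's README, but treat the exact signatures as assumptions rather than a definitive reference; the example context and query are made up for illustration.

```python
# Minimal sketch of attributing a model's response to its context with
# the context-cite package (pip install context-cite). Names below are
# taken from the package's README; exact signatures are assumptions.
from context_cite import ContextCiter

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any Hugging Face causal LM

context = (
    "The Eiffel Tower is a wrought-iron lattice tower in Paris. "
    "It was designed by Gustave Eiffel's company and completed in 1889."
)
query = "When was the Eiffel Tower completed?"

# Wraps the model, generates a response to the query given the context,
# and prepares everything needed for attribution.
cc = ContextCiter.from_pretrained(model_name, context, query)

# The generated response (computed lazily on first access).
print(cc.response)

# Attribution scores for the segments of the context that most
# influenced the response, as a pandas DataFrame.
print(cc.get_attributions(as_dataframe=True, top_k=3))
```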
Benjamin Cohen-Wang, Joshua Vendrow, Aleksander Mądry
Pre-training on a large and diverse general-purpose dataset and then fine-tuning on a task-specific dataset can be an effective approach for developing models that are robust to distribution shifts. In practice, this approach helps significantly in some cases but not at all in others. In this work, we characterize the failure modes that pre-training can and cannot address.
Paper | Blog Post #1 | Blog Post #2 | Code
Mayee Chen*, Benjamin Cohen-Wang*, Stephen Mussmann, Frederic Sala, Christopher Ré
Benjamin Cohen-Wang, Stephen Mussmann, Alex Ratner, Christopher Ré