Tailored Data Augmentation to Mitigate Model Failures

We demonstrate how to use Stable Diffusion to target a model's failure modes.

ModelDiff: A Framework for Comparing Learning Algorithms

We introduce a framework for comparing ML models trained with different learning algorithms.

Raising the Cost of Malicious AI-Powered Image Editing

Inspired by an episode of The Daily Show, we hacked together a technique for "immunizing" images against being edited by diffusion models.

A Data-Based Perspective on Transfer Learning

We present a framework for pinpointing the impact of the source dataset in transfer learning.

When Does Bias Transfer in Transfer Learning?

We demonstrate that biases from pre-trained models can persist even after fine-tuning.

Distilling Model Failures as Directions in Latent Space

We demonstrate how to distill patterns of model errors as directions in a latent space.

Uncovering Brittleness with Datamodels

In the second part of our datamodels series, we use datamodels to identify and study a new form of model brittleness.

Missingness Bias in Model Debugging

We demonstrate how current missingness approximations introduce biases into model debugging.