Unadversarial Examples: Designing Objects for Robust Vision
We show how to design objects to help, rather than hurt, the performance of vision systems; the resulting objects improve performance on natural and distribution-shifted data.

Benchmarks for Subpopulation Shift
We develop a methodology for constructing large-scale benchmarks to assess the robustness of standard models to subpopulation shift.

Transfer Learning with Adversarially Robust Models
We find that adversarially robust neural networks are better for downstream transfer learning than standard networks, despite having lower accuracy.
Noise or Signal: The Role of Backgrounds in Image Classification
To what extent do state-of-the-art vision models depend on image backgrounds?
From ImageNet to Image Classification
We take a closer look at the ImageNet dataset and identify ways in which it deviates from the underlying object recognition task.
Identifying Statistical Bias in Dataset Replication
Statistical bias in dataset replication studies can lead to skewed outcomes and observations.
Robustness beyond Security: Computer Vision Applications
An off-the-shelf robust classifier can be used to perform a range of computer vision tasks beyond classification.

Robustness beyond Security: Representation Learning
Representations induced by robust models align better with human perception, and allow for a number of downstream applications.