Unadversarial Examples: Designing Objects for Robust Vision
We show how to design objects to help, rather than hurt, the performance of vision systems; the resulting objects improve performance on natural and distribution-shifted data.

Benchmarks for Subpopulation Shift
We develop a methodology for constructing large-scale benchmarks to assess the robustness of standard models to subpopulation shift.

Transfer Learning with Adversarially Robust Models
We find that adversarially robust neural networks are better for downstream transfer learning than standard networks, despite having lower accuracy.

Noise or Signal: The Role of Backgrounds in Image Classification
To what extent do state-of-the-art vision models depend on image backgrounds?

From ImageNet to Image Classification
We take a closer look at the ImageNet dataset and identify ways in which it deviates from the underlying object recognition task.

Identifying Statistical Bias in Dataset Replication
Statistical bias in dataset replication studies can lead to skewed results and conclusions.

Robustness beyond Security: Computer Vision Applications
An off-the-shelf robust classifier can be used to perform a range of computer vision tasks beyond classification.

Robustness beyond Security: Representation Learning
Representations induced by robust models align better with human perception, and allow for a number of downstream applications.