Oct 14, 2021
Certified Patch Robustness Via Smoothed Vision Transformers (Part 1)
We give an overview of smoothing-based defenses for certified robustness to adversarial attacks, and how they can be used to defend against adversarial patches.
Jun 8, 2021
3DB: A Framework for Debugging Vision Models
We introduce 3DB, an easy-to-use and extensible framework for debugging vision models with 3D rendering.
May 12, 2021
Debuggable Deep Networks: Usage and Evaluation (Part 2)
We show how debuggable deep networks are more amenable to human interpretation while enabling the discovery of unexpected model behaviors.
May 12, 2021
Debuggable Deep Networks: Sparse Linear Models (Part 1)
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks while remaining highly accurate.
Dec 22, 2020
Unadversarial Examples: Designing Objects for Robust Vision
We show how to design objects to help, rather than hurt, the performance of vision systems; the resulting objects improve performance on natural and distribution-shifted data.
Aug 12, 2020
Benchmarks for Subpopulation Shift
We develop a methodology for constructing large-scale benchmarks to assess the robustness of standard models to subpopulation shift.
Jul 20, 2020
Transfer Learning with Adversarially Robust Models
We find that adversarially robust neural networks are better for downstream transfer learning than standard networks, despite having lower accuracy.
Jun 18, 2020
Noise or Signal: The Role of Backgrounds in Image Classification
To what extent do state-of-the-art vision models depend on image backgrounds?