Certified Patch Robustness via Smoothed Vision Transformers (Part 1)

We give an overview of smoothing-based defenses for certified robustness to adversarial attacks, and show how they can be used to defend against adversarial patches.

3DB: A Framework for Debugging Vision Models

We introduce 3DB, an easy-to-use and extensible framework for debugging vision models with 3D rendering.

Debuggable Deep Networks: Usage and Evaluation (Part 2)

We show how debuggable deep networks are more amenable to human interpretation while enabling the discovery of unexpected model behaviors.

Debuggable Deep Networks: Sparse Linear Models (Part 1)

We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks while remaining highly accurate.

Unadversarial Examples: Designing Objects for Robust Vision

We show how to design objects to help, rather than hurt, the performance of vision systems; the resulting objects improve performance on natural and distribution-shifted data.

Benchmarks for Subpopulation Shift

We develop a methodology for constructing large-scale benchmarks to assess the robustness of standard models to subpopulation shift.

Transfer Learning with Adversarially Robust Models

We find that adversarially robust neural networks are better for downstream transfer learning than standard networks, despite their lower source-task accuracy.

Noise or Signal: The Role of Backgrounds in Image Classification

To what extent do state-of-the-art vision models depend on image backgrounds?