Combining Diverse Feature Priors
We explore how a diverse set of feature priors can be leveraged to improve model generalization.

Certified Patch Robustness Via Smoothed Vision Transformers (Part 2)
We demonstrate how vision transformers lead to strong certified patch defenses with standard accuracy and inference times comparable to standard (non-robust) models.

Certified Patch Robustness Via Smoothed Vision Transformers (Part 1)
We give an overview of smoothing-based defenses for certified robustness to adversarial attacks, and how they can be used to defend against adversarial patches.

3DB: A Framework for Debugging Vision Models
We introduce 3DB, an easy-to-use and extensible framework for debugging vision models with 3D rendering.

Debuggable Deep Networks: Usage and Evaluation (Part 2)
We show how debuggable deep networks are more amenable to human interpretation while enabling the discovery of unexpected model behaviors.

Debuggable Deep Networks: Sparse Linear Models (Part 1)
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks while remaining highly accurate.

Unadversarial Examples: Designing Objects for Robust Vision
We show how to design objects to help, rather than hurt, the performance of vision systems; the resulting objects improve performance on natural and distribution-shifted data.

Benchmarks for Subpopulation Shift
We develop a methodology for constructing large-scale benchmarks to assess the robustness of standard models to subpopulation shift.