Transfer Learning with Adversarially Robust Models

We find that adversarially robust neural networks transfer better to downstream tasks than standard networks, despite attaining lower accuracy on the source task.

Noise or Signal: The Role of Backgrounds in Image Classification

To what extent do state-of-the-art vision models depend on image backgrounds?

From ImageNet to Image Classification

We take a closer look at the ImageNet dataset and identify ways in which it deviates from the underlying object recognition task.

Identifying Statistical Bias in Dataset Replication

Statistical bias in dataset replication studies can skew results and the conclusions drawn from them.

Robustness Beyond Security: Computer Vision Applications

An off-the-shelf robust classifier can be used to perform a range of computer vision tasks beyond classification.

Robustness Beyond Security: Representation Learning

Representations induced by robust models align better with human perception, and allow for a number of downstream applications.

Adversarial Examples Are Not Bugs, They Are Features

A new perspective on adversarial perturbations.

A Closer Look at Deep Policy Gradients (Part 3: Landscapes and Trust Regions)

In the third part of our analysis, we examine the optimization landscapes of deep policy gradient methods and the role of trust regions.