Time for a Background Check! Uncovering the impact of Background Features on Deep Neural Networks

Vikash Sehwag
Princeton University
Rajvardhan Oak
Microsoft Redmond
Mung Chiang
Princeton University
Prateek Mittal
Princeton University
ICML Workshop on Object-Oriented Learning (OOL), 2020

Overview

With increasing expressive power, deep neural networks have significantly improved the state of the art on image classification datasets, such as ImageNet. In this paper, we investigate to what extent the increasing performance of deep neural networks is impacted by background features. In particular, we focus on background invariance, i.e., accuracy unaffected by switching background features, and background influence, i.e., the predictive power of background features themselves when the foreground is masked. We perform experiments with 32 different neural networks, ranging from small networks (such as MobileNets) to large-scale networks trained with up to one billion images. Our investigation reveals that the increasing expressive power of DNNs leads to higher influence of background features, while simultaneously increasing their ability to make the correct prediction when background features are removed or replaced with a randomly selected texture-based background.

BibTeX

@article{sehwag2020backgroundCheck,
  title={Time for a Background Check! Uncovering the impact of Background Features on Deep Neural Networks},
  author={Sehwag, Vikash and Oak, Rajvardhan and Chiang, Mung and Mittal, Prateek},
  journal={ICML workshop on Object-Oriented Learning (OOL)},
  year={2020}
}