Vikash Sehwag

PhD Candidate, Princeton University

I am a PhD candidate in Electrical Engineering at Princeton University. I am interested in research problems at the intersection of security, privacy, and machine learning. Some topics I have worked on include adversarially robust supervised/self-supervised learning, adversarial robustness in compressed neural networks, self-supervised detection of outliers, robust open-world machine learning, and privacy leakage in large-scale deep learning.

I am co-advised by Prateek Mittal and Mung Chiang. Before joining Princeton, I completed my undergraduate degree in E&ECE (with a minor in CS) at IIT Kharagpur, India. Earlier, I had an amazing summer internship at Microsoft Research, Redmond. Before that, I spent a wonderful summer working with Heinz Koeppl at TU Darmstadt. I was also fortunate to receive the Qualcomm Innovation Fellowship in 2019.

Feel free to reach out if you find my work interesting or are looking for collaborative opportunities.


News

1/2021
Self-supervised outlier detection (SSD) paper accepted at ICLR 2021 (pdf).
10/2020
Work on self-supervised outlier detection (SSD) accepted at NeurIPS SSL workshop (pdf).
10/2020
Releasing RobustBench, a standardized benchmark for adversarial robustness.
10/2020
Work on fast-convergent federated learning to appear in IEEE JSAC (arxiv, led by Hung T. Nguyen).
07/2020
Paper on background check of deep learning accepted at ICML OOL workshop (pdf, slides, video).
07/2020
Work on separability of self-supervised representations, and another on critical evaluation of open-world machine learning, accepted at ICML UDL workshop.
06/2020
Volunteered as junior mentor at Princeton-OLCF-NVIDIA GPU Hackathon.
05/2020
Releasing PatchGuard, a provable defense against adversarial patches (led by Chong Xiang) (pdf, code).
04/2020
Work on pruning robust networks accepted at ICLR TTML workshop (slides, video, full paper).
01/2020
Taught a mini-course on adversarial attacks & defenses in Wintersession 2020 (slides, Colab notebook).
09/2019
Finished an amazing summer research internship at Microsoft Research, Redmond.
08/2019
Paper on robust open-world machine learning accepted at AISec 2019 (Slides).

Publications

Work in Progress

Understanding the effect of datasets on adversarial robustness

with Saeed Mahloujifar, Mung Chiang, and Prateek Mittal

Analyzing and Improving Self-Supervised Representations (Workshop paper)

with Mung Chiang and Prateek Mittal

Adversarial attacks on deepfake detectors

with Chong Xiang, Mung Chiang and Prateek Mittal

Adversarial attacks and defenses beyond \( \ell_p \) norms

with Jay Stokes and Cha Zhang

On Analyzing and Mitigating Privacy Leakage in Large Scale Deep Learning

with Kavya Chandran, Liwei Song, Mung Chiang, and Prateek Mittal


SSD: A Unified Framework for Self-Supervised Outlier Detection

Vikash Sehwag, Mung Chiang, Prateek Mittal

Accepted at ICLR 2021; short version accepted at NeurIPS SSL workshop, 2020

Using only unlabeled data, we develop a highly successful framework to detect outliers or out-of-distribution samples.


RobustBench: A Standardized Adversarial Robustness Benchmark

Francesco Croce, Maksym Andriushchenko, Vikash Sehwag,
Nicolas Flammarion, Mung Chiang, Prateek Mittal, Matthias Hein

Arxiv, 2020

We provide a leaderboard to track progress + a library for unified access to SOTA defenses against adversarial examples.


Time for a Background Check! Uncovering the Impact of Background Features on Deep Neural Networks

Vikash Sehwag, Rajvardhan Oak, Mung Chiang, Prateek Mittal

ICML workshop on Object-Oriented Learning, 2020

We investigate background invariance and influence across 32 deep neural networks on the ImageNet dataset.


On Separability of Self-Supervised Representations

Vikash Sehwag, Mung Chiang, Prateek Mittal

ICML workshop on Uncertainty & Robustness in Deep Learning, 2020

We compare the representations learned by several self-supervised methods with those of supervised networks.

Theme: How to design robust yet compact neural networks?

HYDRA: Pruning Adversarially Robust Neural Networks

Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana

To appear in NeurIPS 2020, Short paper in ICLR workshop on Trustworthy Machine Learning, 2020

We achieve state-of-the-art accuracy and robustness for pruned networks (pruning up to 100x).

Towards Compact and Robust Deep Neural Networks

Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana

Arxiv, 2019

We investigate the impact of network pruning on both empirical and provable adversarial robustness.


PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields

Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal

Arxiv, 2020

A general defense framework to achieve provable robustness against adversarial patches.


Fast-Convergent Federated Learning

Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, Mung Chiang, H. Vincent Poor

To appear in IEEE Journal on Selected Areas in Communications (J-SAC) - Series on Machine Learning for Communications and Networks

We propose a fast-convergent federated learning algorithm, called FOLB, which improves convergence speed through intelligent sampling of devices in each round.

Theme: Robust open-world machine learning: making neural networks learn what they do and don't know, even in the presence of an adversary!

A Critical Evaluation of Open-World Machine Learning

Liwei Song, Vikash Sehwag, Arjun Nitin Bhagoji, Prateek Mittal

ICML Workshop on Uncertainty & Robustness in Deep Learning, 2020

We discover a conflict between the objective of open-world machine learning and adversarial robustness.

Analyzing the Robustness of Open-World Machine Learning

Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, Prateek Mittal

ACM Workshop on Artificial Intelligence and Security (AISec), 2019

We demonstrate the vulnerability of open-world ML to adversarial examples and propose a defense.


Research Work in Undergraduate

A Parallel Stochastic Number Generator With Bit Permutation Networks with N. Prasad and Indrajit Chakrabarti

IEEE Transactions on Circuits and Systems II: Express Briefs, 2017 (Pdf)

Variation Aware Performance Analysis of TFETs for Low-Voltage Computing with Saurav Maji and Mrigank Sharad

IEEE International Symposium on Nanoelectronic and Information Systems (iNIS), 2016 (Pdf)

TV-PUF: a fast lightweight analog physical unclonable function with Tanujay Saha

IEEE International Symposium on Nanoelectronic and Information Systems (iNIS), 2016 (Pdf)

A Study of Stochastic SIS Disease Spreading on Random Graphs with Wasiur R. KhudaBukhsh and Heinz Koeppl, 2016 (Pdf)

Academic Services

Teaching and Mentoring

Taught a mini-course on adversarial attacks & defenses (Wintersession 2020)

Teaching assistant for ELE 535: Machine Learning and Pattern Recognition (Fall 2019)

Mentoring Princeton undergraduates for their senior independent research work
Tinashe Handina (B.S.E., Electrical Engineering 2021); Matteo Russo (B.S.E., Computer Science 2020)

Other Services

One of the three core maintainers of Adversarial Robustness Benchmark (robustbench.github.io)

Volunteered as junior mentor at Princeton-OLCF-NVIDIA GPU Hackathon (June 2020)

Reviewer for ACM Transactions on Privacy and Security (TOPS), PLOS One

Sub-reviewer for USENIX Security 2018, 2019