I am a PhD candidate in Electrical Engineering at Princeton University. I build trustworthy machine learning systems, especially using generative models.
I am advised by Prateek Mittal and Mung Chiang. Before coming to Princeton, I completed my undergraduate studies at IIT Kharagpur, India. I have previously interned at Microsoft Research and am currently interning at Facebook AI. I have also been fortunate to receive the Qualcomm Innovation Fellowship.
Major update: I am leading the organization of a virtual seminar series on Security & Privacy in Machine Learning (SPML).
Generating High Fidelity Data from Low-density Regions using Diffusion Models
We improve the sampling process of diffusion models to generate high-fidelity hard synthetic images, i.e., images from low-density regions of the data distribution.
Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
We show that synthetic data from diffusion models provides a tremendous boost in the generalization performance of robust training.
Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
We provide lower bounds on the cross-entropy loss in the presence of test-time adversaries on basic vision datasets.
SSD: A Unified Framework for Self-Supervised Outlier Detection
ICLR 2021; short version accepted at NeurIPS SSL Workshop, 2020
Using only unlabeled data, we develop a highly successful framework to detect outliers, i.e., out-of-distribution samples.
RobustBench: A Standardized Adversarial Robustness Benchmark
We provide a leaderboard to track progress and a library for unified access to state-of-the-art defenses against adversarial examples.
Time for a Background Check! Uncovering the Impact of Background Features on Deep Neural Networks
ICML Workshop on Object-Oriented Learning, 2020
We investigate background invariance and influence across 32 deep neural networks on the ImageNet dataset.
On Separability of Self-Supervised Representations
ICML Workshop on Uncertainty & Robustness in Deep Learning, 2020
We compare the representations learned by several self-supervised methods with those of supervised networks.
HYDRA: Pruning Adversarially Robust Neural Networks
NeurIPS 2020; short paper at ICLR Workshop on Trustworthy Machine Learning, 2020
We achieve state-of-the-art accuracy and robustness for pruned networks (pruning up to 100x).
PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields
A general defense framework to achieve provable robustness against adversarial patches.
Fast-Convergent Federated Learning
To appear in IEEE Journal on Selected Areas in Communications (J-SAC) - Series on Machine Learning for Communications and Networks
We propose a fast-convergent federated learning algorithm, FOLB, which improves convergence speed through intelligent sampling of devices in each round.
A Critical Evaluation of Open-World Machine Learning
ICML Workshop on Uncertainty & Robustness in Deep Learning, 2020
We discover a conflict between the objective of open-world machine learning and adversarial robustness.
Analyzing the Robustness of Open-World Machine Learning
ACM Workshop on Artificial Intelligence and Security (AISec), 2019
We demonstrate the vulnerability of open-world ML to adversarial examples and propose a defense.
Undergraduate Research
Teaching and Mentoring