I am a PhD candidate in Electrical Engineering at Princeton University. I am interested in research problems at the intersection of security, privacy, and machine learning. Some topics I have worked on are adversarially robust supervised and self-supervised learning, adversarial robustness in compressed neural networks, self-supervised outlier detection, robust open-world machine learning, and privacy leakage in large-scale deep learning.
I am co-advised by Prateek Mittal and Mung Chiang. Before joining Princeton, I completed my undergraduate degree in E&ECE (with a minor in CS) at IIT Kharagpur, India. I previously had an amazing summer internship experience at Microsoft Research, Redmond. Before that, I spent a wonderful summer working with Heinz Koeppl at TU Darmstadt. I was also fortunate to receive a Qualcomm Innovation Fellowship in 2019.
Feel free to reach out if you find my work interesting or are looking for collaborative opportunities.
Work in Progress
Understanding the effect of datasets on adversarial robustness
Analyzing and Improving Self-Supervised Representations (Workshop paper)
Adversarial attacks on deepfake detectors
Adversarial attacks and defenses beyond \( \ell_p \) norms
On Analyzing and Mitigating Privacy Leakage in Large Scale Deep Learning
SSD: A Unified Framework for Self-Supervised outlier detection
Under review at ICLR 2021; short version accepted at the NeurIPS SSL workshop, 2020
Using only unlabeled data, we develop an effective framework to detect outliers, i.e., out-of-distribution samples.
RobustBench: A Standardized Adversarial Robustness Benchmark
We provide a leaderboard to track progress, along with a library for unified access to state-of-the-art defenses against adversarial examples.
Time for a Background Check! Uncovering the impact of Background Features on Deep Neural Networks
ICML workshop on Object-Oriented Learning, 2020
We investigate background invariance and influence across 32 deep neural networks on the ImageNet dataset.
On Separability of Self-Supervised Representations
ICML workshop on Uncertainty & Robustness in Deep Learning, 2020
We compare the representations learned by several self-supervised methods with those of supervised networks.
Theme: How to design robust yet compact neural networks?
HYDRA: Pruning Adversarially Robust Neural Networks
To appear in NeurIPS 2020; short paper in the ICLR workshop on Trustworthy Machine Learning, 2020
We achieve state-of-the-art accuracy and robustness for pruned networks (pruning up to 100x).
Towards Compact and Robust Deep Neural Networks
We investigate the impact of network pruning on both empirical and provable adversarial robustness.
PatchGuard: Provable Defense against Adversarial Patches Using Masks on Small Receptive Fields
A general defense framework to achieve provable robustness against adversarial patches.
Fast-Convergent Federated Learning
To appear in IEEE Journal on Selected Areas in Communications (J-SAC) - Series on Machine Learning for Communications and Networks
We propose a fast-convergent federated learning algorithm, called FOLB, which improves convergence speed through intelligent sampling of devices in each round.
Theme: Robust open-world machine learning: Making neural networks learn what they do and don't know, even in the presence of an adversary!
A Critical Evaluation of Open-World Machine Learning
ICML Workshop on Uncertainty & Robustness in Deep Learning, 2020
We discover a conflict between the objective of open-world machine learning and adversarial robustness.
Analyzing the Robustness of Open-World Machine Learning
ACM Workshop on Artificial Intelligence and Security (AISec), 2019
We demonstrate the vulnerability of open-world ML to adversarial examples and propose a defense.
Undergraduate Research
Teaching and Mentoring