About the seminar series

The seminar series aims to provide a platform for discussing and disseminating the community's progress on core challenges in building secure, private, and trustworthy machine learning. We intend to host weekly talks from leading researchers in both academia and industry. Each session consists of a 40-minute talk followed by a 20-minute Q&A and discussion.

Timing: Tuesdays at 1pm Eastern Time (virtual talks)

We recommend the following two steps to get details about talks (including Zoom links):

Upcoming talks

We schedule one break after every two talks.

Due to upcoming holidays, we'll suspend all talks from Dec 15 to Jan 5.

Previous talks

07 June 2022
Tom Goldstein (University of Maryland)
Just how private is federated learning?
Abstract & Bio

Abstract: Federated learning is often touted as a training paradigm that preserves user privacy. In this talk, I’ll discuss ways that federated protocols leak user information, and ways that malicious actors can exploit federated protocols to scrape information from users. If time permits, I’ll also discuss how recent advances in data poisoning can manipulate datasets to preserve privacy by preventing data from being used for model training.
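
As a toy illustration of the kind of leakage at stake here (a minimal sketch, not any of the specific attacks from the talk), the following snippet shows that the per-example gradient of a single linear layer already reveals the user's input exactly: the weight gradient is an outer product of the output-error vector and the input, and the bias gradient recovers that error vector.

```python
import numpy as np

# Minimal sketch (illustrative only): for one example x through a linear layer
# y = W x + b, the gradients satisfy dL/dW = (dL/dy) x^T and dL/db = dL/dy,
# so anyone who observes the per-example gradient can reconstruct x exactly.
rng = np.random.default_rng(0)
d_in, d_out = 8, 4
W, b = rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)
x = rng.normal(size=d_in)                  # the "private" user input

y = W @ x + b
dL_dy = y - rng.normal(size=d_out)         # gradient of some loss w.r.t. the output
dL_dW = np.outer(dL_dy, x)                 # what a federated client would upload
dL_db = dL_dy

i = np.argmax(np.abs(dL_db))               # any row with a nonzero bias gradient
x_recovered = dL_dW[i] / dL_db[i]
print(np.allclose(x_recovered, x))         # True: exact reconstruction
```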

Bio: Tom Goldstein is the Perotto Associate Professor of Computer Science at the University of Maryland. His research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. Before joining the faculty at Maryland, Tom completed his PhD in Mathematics at UCLA, and was a research scientist at Rice University and Stanford University. Professor Goldstein has been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, a JP Morgan Faculty award, and a Sloan Fellowship.

14 June 2022
Bo Li (University of Illinois Urbana-Champaign)
Trustworthy Machine Learning: Robustness, Privacy, Generalization, and their Interconnections
Abstract & Bio

Abstract: Advances in machine learning have led to the rapid and widespread deployment of learning based methods in safety-critical applications, such as autonomous driving and medical healthcare. Standard machine learning systems, however, assume that training and test data follow the same, or similar, distributions, without explicitly considering active adversaries manipulating either distribution. For instance, recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test-time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors during inference through poisoning attacks. Such distribution shift could also lead to other trustworthiness issues such as generalization. In this talk, I will describe different perspectives of trustworthy machine learning, such as robustness, privacy, generalization, and their underlying interconnections. I will focus on a certifiably robust learning approach based on statistical learning with logical reasoning as an example, and then discuss the principles towards designing and developing practical trustworthy machine learning systems with guarantees, by considering these trustworthiness perspectives in a holistic view.

Bio: Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois Urbana-Champaign. She is the recipient of the MIT Technology Review TR-35 Award, the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the IJCAI Computers and Thought Award, the Dean's Award for Excellence in Research, the C.W. Gear Outstanding Junior Faculty Award, the Intel Rising Star Award, the Symantec Research Labs Fellowship, the Rising Star Award, research awards from tech companies such as Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, machine learning, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and the New York Times.

21 June 2022
Ben Y. Zhao (University of Chicago)
Adversarial Robustness and Forensics in Deep Neural Networks
Abstract & Bio

Abstract: Despite their tangible impact on a wide range of real world applications, deep neural networks are known to be vulnerable to numerous attacks, including inference time attacks based on adversarial perturbations, as well as training time attacks such as backdoors. The security community has done extensive work to explore both attacks and defenses, only to produce a seemingly endless cat-and-mouse game.

In this talk, I will discuss some of our recent work on adversarial robustness for DNNs, with a focus on ML digital forensics. I start by summarizing some of our recent projects at UChicago SAND Lab covering both sides of the attack/defense struggle, including honeypot defenses (CCS 2020) and physical-domain poison attacks (CVPR 2021). Our experiences in these projects motivated us to seek a broader, more realistic view of adversarial robustness, beyond the current static, binary views of attack and defense. Like real-world security systems, we take the pragmatic view that, given sufficient incentive and resources, attackers will eventually succeed in compromising DNN systems. Just as in traditional security realms, digital forensics tools can serve dual purposes: identifying the sources of a compromise so that it can be mitigated, while also providing a strong deterrent against future attackers. I will present results from our first paper in this space (USENIX Security 2022), specifically addressing forensics for poisoning attacks against DNNs, and show how we can trace corrupted models back to the specific subsets of training data responsible for the corruption. Our approach builds on ideas from model unlearning and succeeds with high precision/recall for both dirty- and clean-label attacks.

Bio: Ben Zhao is the Neubauer Professor of Computer Science at the University of Chicago. Prior to joining UChicago, he was a Professor of Computer Science at UC Santa Barbara. He completed his Ph.D. at U.C. Berkeley (2004) and his B.S. at Yale (1997). He is an ACM Fellow and a recipient of the NSF CAREER award, MIT Technology Review's TR-35 Award (Young Innovators Under 35), ComputerWorld Magazine's Top 40 Technology Innovators award, the IEEE ITC Early Career Award, and Google Faculty awards. His work has been covered by media outlets such as the New York Times, Boston Globe, LA Times, MIT Tech Review, Wall Street Journal, Forbes, Fortune, CNBC, MSNBC, New Scientist, and Slashdot. He has published extensively in the areas of security and privacy, machine learning, networking, and HCI. He served as TPC (co-)chair for the World Wide Web conference (WWW 2016) and the ACM Internet Measurement Conference (IMC 2018). He also serves on the steering committee for HotNets and was general co-chair for HotNets 2020.

28 June 2022
Beyond Differential Privacy: Two Case Studies in Private Data Analysis
Abstract & Bio

Abstract: Differential privacy has emerged as the gold standard in private data analysis. However, there are some use-cases where it does not directly apply. In this talk, we will look at two such use-cases and the challenges that they pose. The first is privacy of language representations, where we offer sentence-level privacy and propose a new mechanism which uses public data to maintain high fidelity. The second is privacy of location traces, where we use Gaussian process priors to model correlations in location trajectory data, and offer privacy against an inferential adversary.

Joint work with Casey Meehan and Khalil Mrini

05 July 2022
Alexandre Sablayrolles (Meta AI)
Optimal Membership Inference Bounds in DP-SGD
Abstract & Bio

Abstract: Given a trained model and a data sample, membership-inference (MI) attacks predict whether the sample was in the model's training set. A common countermeasure against MI attacks is to utilize differential privacy (DP) during model training to mask the presence of individual examples. While this use of DP is a principled approach to limit the efficacy of MI attacks, there is a gap between the bounds provided by DP and the empirical performance of MI attacks. In this paper, we derive bounds for the advantage of an adversary mounting a MI attack, and demonstrate tightness for the widely-used Gaussian mechanism.
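
For context, the simplest membership-inference baseline is a loss threshold: training examples tend to have lower loss than unseen examples. The sketch below (with made-up synthetic losses, not the bound analysis from the talk) shows how the empirical membership advantage, the quantity that such bounds constrain, is measured.

```python
import numpy as np

# Hedged sketch of a loss-thresholding MI attack on synthetic per-example losses.
def mi_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership (1 = member) for each per-example loss."""
    return (losses < threshold).astype(int)

rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=1.0, scale=0.2, size=1000)      # hypothetical values
nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)   # hypothetical values

tpr = mi_attack(member_losses, threshold=0.5).mean()
fpr = mi_attack(nonmember_losses, threshold=0.5).mean()
print(f"empirical membership advantage (TPR - FPR): {tpr - fpr:.3f}")
```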

Bio: Alexandre Sablayrolles is a Research Scientist at Meta AI in Paris, working on the privacy and security of machine learning systems. He received his PhD from Université Grenoble Alpes in 2020, following a joint CIFRE program with Facebook AI. Prior to that, he completed his Master's degree in Data Science at NYU, and received a B.S. and M.S. in Applied Mathematics and Computer Science from École Polytechnique. Alexandre's research interests include privacy and security, computer vision, and applications of deep learning.

12 July 2022
Chuan Guo (Meta AI)
Bounding Training Data Reconstruction in Private (Deep) Learning
Abstract & Bio

Abstract: Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this talk, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods -- Rényi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.
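
For reference, the Rényi accounting mentioned above rests on a standard closed form for the Gaussian mechanism (a textbook fact rather than a result of this talk): releasing f(D) plus Gaussian noise N(0, σ²I), where f has ℓ2-sensitivity Δ, satisfies (α, ε(α))-Rényi DP with

```latex
\varepsilon(\alpha) \;=\; \frac{\alpha\,\Delta^2}{2\sigma^2}, \qquad \alpha > 1.
```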

Bio: Chuan Guo is a Research Scientist on the Fundamental AI Research (FAIR) team at Meta. He received his PhD from Cornell University, and his M.S. and B.S. degrees in computer science and mathematics from the University of Waterloo in Canada. His research interests lie in machine learning privacy and security, with recent works centering around the subjects of privacy-preserving machine learning, federated learning, and adversarial robustness. In particular, his work on privacy accounting using Fisher information leakage received the Best Paper Award at UAI in 2021.

26 July 2022
Soham De and Leonard Berrada (DeepMind)
Unlocking High-Accuracy Differentially Private Image Classification through Scale
Abstract & Bio

Abstract: Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points. Differentially Private Stochastic Gradient Descent (DP-SGD), the most popular DP training method, realizes this protection by injecting noise during training. However, previous works have found that DP-SGD often leads to a significant degradation in performance on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In this talk, we will describe our recent paper where we demonstrate that DP-SGD on over-parameterized models can perform significantly better than previously thought. Combining careful hyper-parameter tuning with simple techniques to ensure signal propagation and improve the convergence rate, we achieve 81.4% test accuracy on CIFAR-10 under (8, 10^(-5))-DP using a 40-layer Wide-ResNet, improving over the previous best result of 71.7%. When fine-tuning a pre-trained Normalizer-Free Network, we achieve 86.7% top-1 accuracy on ImageNet under (8, 8x10^(-7))-DP, markedly exceeding the previous best of 47.9% under a larger privacy budget of (10, 10^(-6))-DP.
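
For readers unfamiliar with DP-SGD, the sketch below shows the core of the update the abstract refers to: per-example gradients are clipped to a norm bound C and Gaussian noise of scale σC is added before averaging. This is a generic sketch with made-up hyper-parameters, not DeepMind's training setup.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0,
                rng=np.random.default_rng(0)):
    """One DP-SGD step: clip each per-example gradient, add Gaussian noise, average."""
    clipped = []
    for g in per_example_grads:
        scale = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * scale)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_mean

params = np.zeros(5)
grads = [np.random.randn(5) for _ in range(8)]   # hypothetical per-example gradients
params = dp_sgd_step(params, grads)
```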

Bio: Soham De is a Senior Research Scientist at DeepMind in London. He is interested in better understanding and improving large-scale deep learning, and currently works on optimization and initialization. Prior to joining DeepMind, he received his PhD from the Department of Computer Science at the University of Maryland, where he worked on stochastic optimization theory and game theory.

Leonard Berrada is a research scientist at DeepMind. His research interests span optimization, deep learning, verification, and privacy, and lately he has been particularly interested in making differentially private training work well with neural networks. Leonard completed his PhD in 2020 at the University of Oxford under the supervision of M. Pawan Kumar and Andrew Zisserman. He holds an M.Eng. from the University of California, Berkeley, an M.S. from Ecole Centrale-Supelec, and a B.S. from University Paris-Sud and Ecole Centrale-Supelec.

02 Aug 2022
Alfred Chen (University of California, Irvine)
On the Semantic AI Security in CPS: The Case of Autonomous Driving
Abstract & Bio

Abstract: Recent years have witnessed a global phenomenon in the real-world development, testing, deployment, and commercialization of AI-enabled Cyber-Physical Systems (CPSs) such as autonomous driving cars, drones, industrial and home robots. These systems are rapidly revolutionizing a wide range of industries today, from transportation, retail, and logistics (e.g., robo-taxi, autonomous truck, delivery drones/robots), to domotics, manufacturing, construction, and healthcare. In such systems, the AI stacks are in charge of highly safety- and mission-critical decision-making processes such as obstacle avoidance and lane-keeping, which makes their security more critical than ever. Meanwhile, since these AI algorithms are only components of the entire CPS system enclosing them, their security issues are only meaningful when studied with direct integration of the semantic CPS problem context, which forms what we call the “semantic AI security” problem space and introduces various new AI security research challenges.

In this talk, I will focus on our recent efforts on the semantic AI security in one of the most safety-critical and fastest-growing AI-enabled CPS today, Autonomous Driving (AD) systems. Specifically, we performed the first security analysis on a wide range of critical AI components in industry-grade AD systems such as 3D perception, sensor fusion, lane detection, localization, prediction, and planning, and in this talk I will describe our key findings and also how we address the corresponding semantic AI security research challenges. I will conclude with a recent systemization of knowledge (SoK) we performed for this growing research space, with a specific emphasis on the most critical scientific gap we observed and our solution proposal.

Bio: Alfred Chen is an Assistant Professor of Computer Science at the University of California, Irvine. His research interests span AI security, systems security, and network security. His most recent research focuses on AI security in autonomous driving and intelligent transportation. His work has had a high impact in both academia and industry, with 30+ research papers in top-tier venues across security, mobile systems, transportation, software engineering, and machine learning; a nationwide USDHS US-CERT alert; multiple CVEs; coverage in 50+ news articles by major media outlets such as Forbes, Fortune, and BBC; and vulnerability-report acknowledgments from USDOT, Apple, Microsoft, etc. Recently, his research prompted 30+ autonomous driving companies and the V2X standardization workgroup to start security vulnerability investigations, with some confirming that they are working on fixes. He co-founded the AutoSec workshop (co-located with NDSS) and co-created DEF CON's first AutoDriving-themed hacking competition. He has received awards including the NSF CAREER Award, the ProQuest Distinguished Dissertation Award, and the UCI Chancellor's Award for mentoring. Chen received his Ph.D. from the University of Michigan in 2018.

09 Aug 2022
Chaowei Xiao (Arizona State University and NVIDIA Research)
Towards Socially Responsible Machine Learning
Abstract & Bio

Bio: Chaowei Xiao is an assistant professor at Arizona State University and a research scientist at NVIDIA Research. Dr. Xiao received his B.E. from the School of Software at Tsinghua University in 2015 and his Ph.D. from the Computer Science Department at the University of Michigan, Ann Arbor in 2020. His research interests lie at the intersection of computer security, privacy, and machine learning. His work has been featured in multiple media outlets, including Wired, Fortune, and IEEE Spectrum. One of his research outputs was on display at the Science Museum in London. He has received best paper awards at MobiCom 2014 and ESWN 2021.

23 Aug 2022
Anastasios Angelopoulos (UC Berkeley)
A Gentle Introduction to Conformal Prediction and Conformal Risk Control
Abstract & Bio

Abstract: High-risk machine learning deployments demand rigorous uncertainty quantification certifying the safety of the prediction algorithm. Conformal prediction is a new way of constructing distribution-free "confidence intervals" for black-box algorithms like neural networks. These intervals are guaranteed to contain the ground truth with high probability regardless of the underlying algorithm or dataset. I will introduce the audience to conformal prediction and conformal risk control, an extension allowing it to apply to complex machine learning tasks. The presentation will draw from the following manuscripts: https://arxiv.org/abs/2107.07511; https://arxiv.org/abs/2208.02814
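
As background for the talk, here is a minimal sketch of split conformal prediction for classification, following the general recipe in the cited tutorial; the particular score function and names below are illustrative choices, not the speaker's exact setup.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: cal_probs is (n, K) softmax output on held-out data."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]    # score = 1 - p(true class)
    # Finite-sample-corrected quantile of the calibration scores.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # Include every class whose score would fall below the threshold.
    return test_probs >= 1.0 - q                          # boolean mask, shape (m, K)

# Marginally over exchangeable calibration and test data, the returned set
# contains the true label with probability at least 1 - alpha.
```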

Bio: Anastasios Angelopoulos is a fourth-year PhD student at UC Berkeley, advised by Michael I. Jordan and Jitendra Malik.

30 Aug 2022
Katherine Lee (Google Brain)
What does respecting privacy mean for language models?
Abstract & Bio

Abstract: Language models memorize training data. Defining and quantifying memorization is just as challenging a task as understanding human communication. Furthermore, understanding the risks of memorization, and when memorization is required, is equally complex. In this talk, I'll present several papers we have written on how memorization scales with model size, deduplicating language model training data to reduce memorization, and what it means to respect privacy in language models.
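
As a concrete (and deliberately simplified) illustration of the deduplication idea, the snippet below drops exact duplicates after whitespace and case normalization; the papers discussed in the talk use stronger suffix-array and approximate-matching methods.

```python
import hashlib

def deduplicate(docs):
    """Keep only the first occurrence of each normalized document."""
    seen, unique = set(), []
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

corpus = ["The cat sat.", "the  cat  sat.", "A different sentence."]
print(deduplicate(corpus))   # the near-identical second document is dropped
```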

Bio: Katherine Lee is a Research Engineer at Google Brain and a PhD student at Cornell advised by David Mimno. She studies security and privacy in large language models. She's broadly interested in the translation between people and the systems we build. What kinds of decisions can algorithms help with, and which should we leave algorithms out of? What kinds of objectives, political or social, can we, or can we not write down?

13 Sept 2022
Huan Zhang (Carnegie Mellon University)
Formal Verification of Deep Neural Networks: Challenges and Recent Advances
Abstract & Bio

Abstract: Neural networks have become a crucial element in modern artificial intelligence. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify their trustworthiness, such as safety and robustness. In this talk, I will first introduce the problem of neural network verification and the challenges of guaranteeing the behavior of a neural network given input specifications. Then, I will discuss bound-propagation-based algorithms (e.g., CROWN and beta-CROWN), which are efficient, scalable, and powerful techniques for the formal verification of neural networks and also generalize to computational graphs beyond neural networks. My talk will highlight state-of-the-art verification techniques used in our α,β-CROWN (alpha-beta-CROWN) verifier, which won the 2nd and 3rd International Verification of Neural Networks Competitions (VNN-COMP 2021 and 2022), as well as novel applications of neural network verification.
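
To give a flavor of bound propagation, here is the simplest variant (interval bound propagation) on a toy two-layer network; CROWN and beta-CROWN use much tighter linear relaxations, but the propagate-bounds-layer-by-layer structure is analogous. This is a hedged sketch, not the verifier's implementation.

```python
import numpy as np

def ibp_linear(lb, ub, W, b):
    """Propagate elementwise bounds [lb, ub] through y = W x + b (sound but loose)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def ibp_relu(lb, ub):
    return np.maximum(lb, 0), np.maximum(ub, 0)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)
x0, eps = rng.normal(size=4), 0.1            # verify all inputs within eps of x0

lb, ub = ibp_linear(x0 - eps, x0 + eps, W1, b1)
lb, ub = ibp_relu(lb, ub)
lb, ub = ibp_linear(lb, ub, W2, b2)
print("certified output bounds:", lb, ub)    # every output provably lies inside these
```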

Bio: Huan Zhang is a postdoctoral researcher at CMU, supervised by Prof. Zico Kolter. He received his Ph.D. degree at UCLA in 2020. Huan's research focuses on the trustworthiness of artificial intelligence, especially on developing formal verification methods to guarantee the robustness and safety of machine learning. Huan was awarded an IBM Ph.D. fellowship and he led the winning team in the 2021 International Verification of Neural Networks Competition. Huan received the 2021 AdvML Rising Star Award sponsored by MIT-IBM Watson AI Lab.

20 Sept 2022
Emily Wenger (University of Chicago)
Physical backdoor attacks: towards more realistic threat models in adversarial machine learning
Abstract & Bio

Abstract: Backdoor attacks against deep neural networks (DNN), in which hidden behaviors embedded in a DNN are activated by a certain “trigger” present on DNN inputs, have been extensively studied in the adversarial machine learning (ML) community. However, most existing research on backdoor attacks for image classification models focuses on pixel-based triggers, in which triggers are edited onto images after their creation. In this talk, I propose a more realistic threat model for image-based backdoor attacks – physical backdoor attacks – and describe our recent work demonstrating the severity of the threat posed by such attacks. I conclude by discussing future research directions arising from this more realistic backdoor attack threat model.
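
For contrast with the physical triggers discussed in the talk, the snippet below sketches the classic pixel-based, dirty-label poisoning setup that the talk argues is unrealistic: a small patch is stamped onto a fraction of training images, which are then relabeled with the attacker's target class. The patch size and poison rate are made-up illustrative values.

```python
import numpy as np

def add_trigger(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Stamp a white square trigger in the bottom-right corner of an HxWxC image."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = 1.0
    return poisoned

def poison_dataset(images, labels, target_class, rate=0.05,
                   rng=np.random.default_rng(0)):
    """Dirty-label pixel backdoor: trigger and relabel a random subset of the data."""
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels
```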

Bio: Emily Wenger is a final year computer science PhD student at the University of Chicago, advised by Ben Zhao and Heather Zheng. Her research focuses on security and privacy issues of machine learning systems. Her work has been published at top computer security (CCS, USENIX, Oakland) and machine learning (NeurIPS, CVPR) conferences and has been covered by media outlets including the New York Times, MIT Tech Review, and Nature. She is the recipient of the GFSD, Harvey, and Neubauer fellowships. Previously, she worked for the US Department of Defense and interned at Meta AI Research.

04 Oct 2022
Deep network models of the deep network mechanisms of (part of) human visual intelligence
Abstract & Bio

Abstract: We are embarked on a bold scientific quest — to understand the neural mechanisms of human intelligence. Recent progress in multiple subfields of brain research suggests that key next steps in this quest will result from building real-world capable, systems-level network models that aim to abstract, emulate and explain the mechanisms underlying natural intelligent behavior. In this talk, I will tell the story of how neuroscience, cognitive science and computer science converged to create specific, image-computable, deep neural network models intended to appropriately abstract, emulate and explain the mechanisms of primate core visual object recognition. Based on a large body of primate neurophysiological and behavioral data, some of these network models are currently the leading (i.e. most accurate) scientific theories of the internal mechanisms of the primate ventral visual stream and how those mechanisms support the ability of humans and other primates to rapidly and accurately infer latent world content (e.g. object identity, position, pose, etc.) from the set of pixels in most natural images. While still far from complete, these leading scientific models already have many uses in brain science and beyond. In this talk, I will highlight one particular use: the design of patterns of light energy on the retina (i.e. new images) that neuroscientists can use to precisely modulate neuronal activity deep in the brain. Our most recent experimental work suggests that, when targeted in this new way, the responses of individual high-level primate neurons are exquisitely sensitive to barely perceptible image modifications. While surprising to many neuroscientists — ourselves included — this result is in line with the predictions of the current leading scientific models (above), it offers guidance to contemporary computer vision research, and it suggests a currently untapped non-pharmacological avenue to approach clinical interventions.


11 Oct 2022
Suman Jana (Columbia University)
Efficient Neural Network Verification using Branch and Bound
Abstract & Bio

Abstract: In this talk, I will describe two recent Branch and Bound (BaB) verifiers developed by our group to ensure different safety properties of neural networks. The BaB verifiers involve two main steps: (1) recursively splitting the original verification problem into easier independent subproblems by splitting input or hidden neurons; and (2) for each split subproblem, using fast but incomplete bound propagation techniques to compute sound estimated bounds for the outputs of the target neural network. One of the key limitations of existing BaB verifiers is computing tight relaxations of activation-function nonlinearities (i.e., ReLU). Our recent works (α-CROWN and β-CROWN) introduce a primal-dual approach and jointly optimize the corresponding Lagrangian multipliers for each ReLU with gradient ascent. Such an approach is highly parallelizable and avoids calls to expensive LP solvers. Our verifiers not only provide tighter output estimations than existing bound propagation methods but also can fully leverage GPUs with massive parallelization. Our verifier, α,β-CROWN (alpha-beta-CROWN), won the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score.
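
Schematically, the branch-and-bound loop described above looks like the sketch below; lower_bound and split stand in for the incomplete bound-propagation step and the neuron/input splitting heuristics (both are hypothetical callbacks here, not the verifier's API).

```python
def verify(problem, lower_bound, split, threshold=0.0, budget=10_000):
    """Return True iff the network output is proven to stay above `threshold`."""
    queue = [problem]
    while queue and budget > 0:
        sub = queue.pop()
        budget -= 1
        lb = lower_bound(sub)      # fast, incomplete bound (e.g. bound propagation)
        if lb >= threshold:        # subproblem verified: discard it
            continue
        children = split(sub)      # split an unstable neuron or input dimension
        if not children:           # nothing left to split: treat as a violation
            return False
        queue.extend(children)
    return not queue               # verified only if every subproblem was closed
```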

Bio: Suman Jana is an associate professor in the Department of Computer Science and the Data Science Institute at Columbia University. His primary research interest is at the intersection of computer security and machine learning. His research has received six best paper awards, a CACM research highlight, a Google faculty fellowship, a JPMorgan Chase Faculty Research Award, an NSF CAREER award, and an ARO young investigator award.

25 Oct 2022
Enabling practical and trustworthy differential privacy for neural models
Abstract & Bio

Abstract: Differential privacy has been the gold-standard privacy guarantee, but its adoption in deep learning has been slow due to a perception that privacy and large neural models are incompatible. In this talk, we discuss two important aspects of differentially private deep learning. First, we present recent work on leveraging pre-training to enable differentially private deep neural nets that achieve stringent privacy guarantees and minimal utility losses. Second, we show that these same high-performance private models can be substantially more miscalibrated than their non-private counterparts, and that post-hoc recalibration techniques are necessary to ensure that the confidence estimates of these private models can be trusted. Together, these works demonstrate promises and challenges in developing private deep neural networks.
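
As an example of what post-hoc recalibration can look like, the sketch below fits a single temperature on held-out validation logits (temperature scaling). Whether this matches the talk's exact procedure is not assumed, and if the validation data is itself private this step may need its own DP treatment.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    """Negative log-likelihood of labels under temperature-scaled softmax."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(val_logits, val_labels):
    """Find T > 0 minimizing validation NLL; divide test logits by T afterwards."""
    res = minimize_scalar(nll, bounds=(0.05, 20.0),
                          args=(val_logits, val_labels), method="bounded")
    return res.x
```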

Bio: Tatsu is currently an assistant professor in the Computer Science Department at Stanford University. His research uses tools from statistics to make machine learning systems more robust and reliable, especially in challenging tasks involving natural language. The goal of his research is to use robustness and worst-case performance as a lens to understand and make progress on several fundamental challenges in machine learning and natural language processing.

01 Nov 2022
Sven Gowal and Olivia Wiles (DeepMind)
Specification-Driven Machine Learning: Robustness to Adversarial and Natural Shifts
Abstract & Bio

Abstract: Despite achieving super-human accuracy on benchmarks like ImageNet, machine learning models are still susceptible to a number of issues leading to poor performance in the real world. First, small adversarial perturbations that are invisible to the human eye can cause the model to predict another class. Second, models are prone to shortcut learning and using spurious correlations, leading to poor performance under distribution shifts. Using a specification-driven approach, we present three strategies to mitigate and expose these issues. First, we demonstrate how to use data augmentation to achieve SOTA performance in adversarial robustness. We then demonstrate how similar techniques can be applied to specifications encoded in a generative model and can be exploited to improve robustness against style transformations. Finally, we demonstrate how we can go beyond formal specifications to surface human-interpretable failures in vision models automatically in an open-ended manner. These techniques are steps along the path to building reliable and trustworthy AI.

Bio: Olivia Wiles is a Senior Researcher at DeepMind working on robustness in machine learning, focussing on how to detect and mitigate failures arising from spurious correlation and distribution shift. Prior to this, she was a PhD student at Oxford with Andrew Zisserman studying self-supervised representations for 3D and spent a summer at FAIR working on view synthesis with Justin Johnson, Georgia Gkioxari and Rick Szeliski.

Sven Gowal is a Staff Research Engineer at DeepMind, UK. He led numerous initiatives on "robust and certifiable machine learning" at DeepMind and has co-authored over 30 papers in the domain of Robust ML, receiving two best paper awards. Prior to DeepMind, he worked for Google Research, where he focused on video content analysis and real-time object detection. He completed his PhD at the Swiss Federal Institute of Technology (EPFL), Switzerland, in 2013, on the topic of decentralized multi-robot control. He received his MSc in 2007 from EPFL after working on the DARPA Urban Challenge with Caltech and having spent part of his undergrad at Carnegie Mellon University.

15 Nov 2022
Ross Anderson (University of Cambridge)
Adversarial Machine Learning Along the Pipeline
Abstract & Bio

Abstract: TBD

22 Nov 2022
Sravanti Addepalli (Indian Institute of Science)
Efficient and Effective Augmentation Strategy for Adversarial Training
Abstract & Bio

Abstract: Deep Neural Networks are vulnerable to crafted imperceptible perturbations, known as Adversarial Attacks, that can flip the model's predictions to unrelated classes, leading to disastrous implications. Adversarial Training has been the most successful defense strategy, where a model is explicitly trained to be robust in the presence of such attacks. However, adversarial training is much more data-hungry than standard training. Furthermore, standard data augmentations such as AutoAugment, which have led to substantial gains in standard training of image classifiers, have not been successful with Adversarial Training. In this talk, I will first discuss this contrasting behavior by viewing augmentation during training as a problem of domain generalization, and then present our recent work, Diverse Augmentation-based Joint Adversarial Training (DAJAT), which uses data augmentations effectively in adversarial training. We aim to handle the conflicting goals of enhancing the diversity of the training dataset and training with data that is close to the test distribution by using a combination of simple and complex augmentations with separate batch normalization layers during training. I will next discuss some methods for improving the computational efficiency of adversarial training, and present the two-step defense Ascending Constraint Adversarial Training (ACAT), which uses an increasing epsilon schedule and weight-space smoothing to improve the efficiency and training stability of DAJAT.
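
For readers new to adversarial training, the sketch below shows the generic PGD-based training step that DAJAT and ACAT build on; their augmentation pipeline, separate batch-norm layers, and epsilon schedule are not reproduced here, and the hyper-parameters are only common defaults.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Find an L-infinity perturbation of x (within eps) that maximizes the loss."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    """Standard adversarial training: update the model on adversarial examples."""
    model.train()
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```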

Bio: Sravanti Addepalli is a Ph.D. student at the Indian Institute of Science (IISc), Bengaluru, and a Student Researcher at Google Research India. She is pursuing research in the area of Adversarial Robustness of Deep Networks at the Video Analytics Lab (VAL) in the Department of Computational and Data Sciences (CDS). Her research interests include Adversarial Attacks and Defences, OOD generalization of Deep Networks, and Self-supervised learning. Sravanti is broadly interested in exploring the vulnerabilities of Deep Networks and building algorithms to improve their robustness and generalization. She is a recipient of the Google Ph.D. fellowship, Prime Minister’s Fellowship for Doctoral Research, and Qualcomm Innovation Fellowship.

13 Dec 2022
Kate Saenko (Boston University)
Data Shift Happens, What To Do About It?
Abstract & Bio

Abstract: In computer vision, generalization of learned representations is usually measured on i.i.d. data. This hides the fact that models often struggle to generalize to non-i.i.d data and fail to overcome the biases inherent in visual datasets. Labeling additional data in each new situation is the standard solution but is often prohibitively expensive. I will discuss some recent work in my lab addressing the core challenges in overcoming dataset bias, including adaptation to natural domain shifts, sim2real transfer, avoiding spurious correlations, and the role of pretraining in generalizability.

Bio: Kate is a Professor of Computer Science at Boston University. She leads the Computer Vision and Learning Group at BU, is the founder and co-director of the Artificial Intelligence Research (AIR) initiative, and is a member of the Image and Video Computing research group. Kate holds a PhD from MIT EECS and did her postdoctoral training at UC Berkeley and Harvard. Her research interests are in the broad area of Artificial Intelligence with a focus on dataset bias, adaptive machine learning, learning for image and language understanding, and deep learning.

Organizers: Vikash Sehwag (Princeton), Cihang Xie (UCSC), and Jamie Hayes (DeepMind)
Advisory committee: Prateek Mittal (Princeton), Reza Shokri (NUS)

You can reach out to the organizers with any questions or suggestions related to the seminar.