About the seminar series

The seminar series aims to provide a platform to discuss and disseminate the community's progress on core challenges in machine learning security, privacy, and robustness. We intend to host weekly talks from leading researchers in both academia and industry. Each session consists of a talk (40 mins) followed by a Q&A and short discussion (20 mins).

Timing: Tuesdays at 1pm Eastern Time (virtual talks)

We recommend the following two steps to get details about the talks (including Zoom links):

Upcoming talks

We schedule one break after every two talks.

27 Sept 2022 - Scheduled Break
04 Oct 2022
Deep network models of the deep network mechanisms of (part of) human visual intelligence
Abstract & Bio

Abstract: We are embarked on a bold scientific quest — to understand the neural mechanisms of human intelligence. Recent progress in multiple subfields of brain research suggests that key next steps in this quest will result from building real-world capable, systems-level network models that aim to abstract, emulate and explain the mechanisms underlying natural intelligent behavior. In this talk, I will tell the story of how neuroscience, cognitive science and computer science converged to create specific, image-computable, deep neural network models intended to appropriately abstract, emulate and explain the mechanisms of primate core visual object recognition. Based on a large body of primate neurophysiological and behavioral data, some of these network models are currently the leading (i.e. most accurate) scientific theories of the internal mechanisms of the primate ventral visual stream and how those mechanisms support the ability of humans and other primates to rapidly and accurately infer latent world content (e.g. object identity, position, pose, etc.) from the set of pixels in most natural images. While still far from complete, these leading scientific models already have many uses in brain science and beyond. In this talk, I will highlight one particular use: the design of patterns of light energy on the retina (i.e. new images) that neuroscientists can use to precisely modulate neuronal activity deep in the brain. Our most recent experimental work suggests that, when targeted in this new way, the responses of individual high-level primate neurons are exquisitely sensitive to barely perceptible image modifications. While surprising to many neuroscientists — ourselves included — this result is in line with the predictions of the current leading scientific models (above), it offers guidance to contemporary computer vision research, and it suggests a currently untapped non-pharmacological avenue to approach clinical interventions.

Bio: bio.pdf

11 Oct 2022
Suman Jana (Columbia University)
Tbd
Abstract & Bio

Abstract: Tbd

18 Oct 2022 - Scheduled Break
25 Oct 2022
Tbd
Abstract & Bio

Abstract: Tbd

1 Nov 2022
Sven Gowal and Olivia Wiles (DeepMind)
Tbd
Abstract & Bio

Abstract: Tbd

8 Nov 2022 - Scheduled Break
15 Nov 2022
Ross Anderson (University of Cambridge)
Tbd
Abstract & Bio

Abstract: Tbd

22 Nov 2022
Tbd
Abstract & Bio

Abstract: Tbd

Previous talks

07 June 2022
Tom Goldstein (University of Maryland)
Just how private is federated learning?
Abstract & Bio

Abstract: Federated learning is often touted as a training paradigm that preserves user privacy. In this talk, I’ll discuss ways that federated protocols leak user information, and ways that malicious actors can exploit federated protocols to scrape information from users. If time permits, I’ll also discuss how recent advances in data poisoning can manipulate datasets to preserve privacy by preventing data from being used for model training.
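
The analytic core of many such leakage results can be seen in a toy setting. The sketch below is our own illustration (not material from the talk): for a single fully-connected layer, a per-example gradient reveals the input exactly, which is one building block behind gradient-inversion attacks on federated learning.

```python
# Minimal sketch of one well-known leakage mechanism. For a linear layer
# y = W x + b and any scalar loss L(y), we have dL/dW = (dL/dy) x^T and
# dL/db = dL/dy, so dividing a row of dL/dW by the matching entry of dL/db
# recovers the private input x from the shared gradient alone.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)              # private training example
W = rng.normal(size=(3, 4))         # layer weights
b = rng.normal(size=3)              # layer bias
target = rng.normal(size=3)

y = W @ x + b                       # forward pass
dL_dy = y - target                  # gradient of 0.5*||y - target||^2 w.r.t. y
dL_dW = np.outer(dL_dy, x)          # gradient the client would send to the server
dL_db = dL_dy

recovered_x = dL_dW[0] / dL_db[0]   # server-side reconstruction from gradients alone
print(np.allclose(recovered_x, x))  # True
```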

Bio: Tom Goldstein is the Perotto Associate Professor of Computer Science at the University of Maryland. His research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. Before joining the faculty at Maryland, Tom completed his PhD in Mathematics at UCLA, and was a research scientist at Rice University and Stanford University. Professor Goldstein has been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, a JP Morgan Faculty award, and a Sloan Fellowship.

14 June 2022
Bo Li (University of Illinois Urbana-Champaign)
Trustworthy Machine Learning: Robustness, Privacy, Generalization, and their Interconnections
Abstract & Bio

Abstract: Advances in machine learning have led to the rapid and widespread deployment of learning based methods in safety-critical applications, such as autonomous driving and medical healthcare. Standard machine learning systems, however, assume that training and test data follow the same, or similar, distributions, without explicitly considering active adversaries manipulating either distribution. For instance, recent work has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test-time through evasion attacks, or can inject well-crafted malicious instances into training data to induce errors during inference through poisoning attacks. Such distribution shift could also lead to other trustworthiness issues such as generalization. In this talk, I will describe different perspectives of trustworthy machine learning, such as robustness, privacy, generalization, and their underlying interconnections. I will focus on a certifiably robust learning approach based on statistical learning with logical reasoning as an example, and then discuss the principles towards designing and developing practical trustworthy machine learning systems with guarantees, by considering these trustworthiness perspectives in a holistic view.

Bio: Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the MIT Technology Review TR-35 Award, Alfred P. Sloan Research Fellowship, NSF CAREER Award, IJCAI Computers and Thought Award, Dean's Award for Excellence in Research, C.W. Gear Outstanding Junior Faculty Award, Intel Rising Star award, Symantec Research Labs Fellowship, Rising Star Award, research awards from tech companies such as Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, machine learning, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and the New York Times.

21 June 2022
Ben Y. Zhao (University of Chicago)
Adversarial Robustness and Forensics in Deep Neural Networks
Abstract & Bio

Abstract: Despite their tangible impact on a wide range of real world applications, deep neural networks are known to be vulnerable to numerous attacks, including inference time attacks based on adversarial perturbations, as well as training time attacks such as backdoors. The security community has done extensive work to explore both attacks and defenses, only to produce a seemingly endless cat-and-mouse game.

In this talk, I will discuss some of our recent work on adversarial robustness for DNNs, with a focus on ML digital forensics. I start by summarizing some of our recent projects at UChicago SAND Lab covering both sides of the attack/defense struggle, including honeypot defenses (CCS 2020) and physical domain poison attacks (CVPR 2021). Our experiences in these projects motivated us to seek a broader, more realistic view towards adversarial robustness, beyond the current static, binary views of attack and defense. Like real world security systems, we take a pragmatic view that given sufficient incentive and resources, attackers will eventually succeed in compromising DNN systems. Just as in traditional security realms, digital forensics tools can serve dual purposes: identifying the sources of the compromise so that they can be mitigated, while also providing a strong deterrent against future attackers. I will present results from our first paper in this space (Usenix Security 2022), specifically addressing forensics for poisoning attacks against DNNs, and show how we can trace back corrupted models to specific subsets of training data responsible for the corruption. Our approach builds on ideas from model unlearning, and succeeds with high precision/recall for both dirty- and clean-label attacks.

Bio: Ben Zhao is the Neubauer Professor of Computer Science at the University of Chicago. Prior to joining UChicago, he held the position of Professor of Computer Science at UC Santa Barbara. He completed his Ph.D. at U.C. Berkeley (2004) and received his B.S. from Yale (1997). He is an ACM Fellow, and a recipient of the NSF CAREER award, MIT Technology Review's TR-35 Award (Young Innovators Under 35), ComputerWorld Magazine's Top 40 Technology Innovators award, IEEE ITC Early Career Award, and Google Faculty awards. His work has been covered by media outlets such as New York Times, Boston Globe, LA Times, MIT Tech Review, Wall Street Journal, Forbes, Fortune, CNBC, MSNBC, New Scientist, and Slashdot. He has published extensively in areas of security and privacy, machine learning, networking, and HCI. He served as TPC (co)chair for the World Wide Web conference (WWW 2016) and ACM Internet Measurement Conference (IMC 2018). He also serves on the steering committee for HotNets, and was general co-chair for HotNets 2020.

28 June 2022
Beyond Differential Privacy: Two Case Studies in Private Data Analysis
Abstract & Bio

Abstract: Differential privacy has emerged as the gold standard in private data analysis. However, there are some use-cases where it does not directly apply. In this talk, we will look at two such use-cases and the challenges that they pose. The first is privacy of language representations, where we offer sentence-level privacy and propose a new mechanism which uses public data to maintain high fidelity. The second is privacy of location traces, where we use Gaussian process priors to model correlations in location trajectory data, and offer privacy against an inferential adversary.

Joint work with Casey Meehan and Khalil Mrini

05 July 2022
Alexandre Sablayrolles (Meta AI)
Optimal Membership Inference Bounds in DP-SGD
Abstract & Bio

Abstract: Given a trained model and a data sample, membership-inference (MI) attacks predict whether the sample was in the model's training set. A common countermeasure against MI attacks is to utilize differential privacy (DP) during model training to mask the presence of individual examples. While this use of DP is a principled approach to limit the efficacy of MI attacks, there is a gap between the bounds provided by DP and the empirical performance of MI attacks. In this paper, we derive bounds for the advantage of an adversary mounting a MI attack, and demonstrate tightness for the widely-used Gaussian mechanism.
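
As a back-of-the-envelope companion to this abstract (our own illustration, not the paper's bound), the snippet below computes the optimal distinguishing advantage for a single release of the Gaussian mechanism: with sensitivity Delta and noise sigma, the adversary must tell N(0, sigma^2) from N(Delta, sigma^2), and the likelihood-ratio test achieves advantage 2*Phi(Delta/(2*sigma)) - 1.

```python
# Hedged illustration: optimal single-query distinguishing advantage for the
# Gaussian mechanism (a toy proxy for the membership-inference setting; the
# paper's DP-SGD analysis is more involved).
from math import erf, sqrt

def gaussian_advantage(sensitivity: float, sigma: float) -> float:
    """Max (true positive rate - false positive rate) for telling
    N(0, sigma^2) apart from N(sensitivity, sigma^2)."""
    z = sensitivity / (2.0 * sigma)
    std_normal_cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return 2.0 * std_normal_cdf - 1.0

for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma = {sigma}: optimal advantage = {gaussian_advantage(1.0, sigma):.3f}")
```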

Bio: Alexandre Sablayrolles is a Research Scientist at Meta AI in Paris, working on the privacy and security of machine learning systems. He received his PhD from Université Grenoble Alpes in 2020, following a joint CIFRE program with Facebook AI. Prior to that, he completed his Master's degree in Data Science at NYU, and received a B.S. and M.S. in Applied Mathematics and Computer Science from École Polytechnique. Alexandre's research interests include privacy and security, computer vision, and applications of deep learning.

12 July 2022
Chuan Guo (Meta AI)
Bounding Training Data Reconstruction in Private (Deep) Learning
Abstract & Bio

Abstract: Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this talk, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods -- Rényi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.

Bio: Chuan Guo is a Research Scientist on the Fundamental AI Research (FAIR) team at Meta. He received his PhD from Cornell University, and his M.S. and B.S. degrees in computer science and mathematics from the University of Waterloo in Canada. His research interests lie in machine learning privacy and security, with recent works centering around the subjects of privacy-preserving machine learning, federated learning, and adversarial robustness. In particular, his work on privacy accounting using Fisher information leakage received the Best Paper Award at UAI in 2021.

26 July 2022
Soham De and Leonard Berrada (DeepMind)
Unlocking High-Accuracy Differentially Private Image Classification through Scale
Abstract & Bio

Abstract: Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points. Differentially Private Stochastic Gradient Descent (DP-SGD), the most popular DP training method, realizes this protection by injecting noise during training. However, previous works have found that DP-SGD often leads to a significant degradation in performance on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In this talk, we will describe our recent paper where we demonstrate that DP-SGD on over-parameterized models can perform significantly better than previously thought. Combining careful hyper-parameter tuning with simple techniques to ensure signal propagation and improve the convergence rate, we achieve 81.4% test accuracy on CIFAR-10 under (8, 10^(-5))-DP using a 40-layer Wide-ResNet, improving over the previous best result of 71.7%. When fine-tuning a pre-trained Normalizer-Free Network, we achieve 86.7% top-1 accuracy on ImageNet under (8, 8x10^(-7))-DP, markedly exceeding the previous best of 47.9% under a larger privacy budget of (10, 10^(-6))-DP.
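
For readers unfamiliar with the algorithm behind these results, the following NumPy sketch shows the generic DP-SGD update (per-example gradient clipping followed by Gaussian noise). It is our own minimal illustration on logistic regression, not the paper's code, and it omits privacy accounting and all of the training tricks the abstract describes.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0,
                rng=np.random.default_rng(0)):
    """One DP-SGD step for logistic regression on a batch (X, y), y in {0, 1}."""
    preds = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X    # per-example log-loss gradients
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)  # clip to norm C
    noisy_sum = clipped.sum(axis=0) + rng.normal(scale=noise_mult * clip_norm,
                                                 size=w.shape)        # add N(0, (sigma*C)^2)
    return w - lr * noisy_sum / len(X)

rng = np.random.default_rng(1)
X, y = rng.normal(size=(64, 5)), rng.integers(0, 2, size=64)
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
```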

Bio: Soham De is a Senior Research Scientist at DeepMind in London. He is interested in better understanding and improving large-scale deep learning, and currently works on optimization and initialization. Prior to joining DeepMind, he received his PhD from the Department of Computer Science at the University of Maryland, where he worked on stochastic optimization theory and game theory.

Leonard Berrada is a research scientist at DeepMind. His research interests span optimization, deep learning, verification and privacy, and lately he has been particularly interested in making differentially private training work well with neural networks. Leonard completed his PhD in 2020 at the University of Oxford, under the supervision of M. Pawan Kumar and Andrew Zisserman. He holds an M.Eng. from the University of California, Berkeley, an M.S. from Ecole Centrale-Supelec, and a B.S. from University Paris-Sud and Ecole Centrale-Supelec.

02 Aug 2022
Alfred Chen (University of California, Irvine)
On the Semantic AI Security in CPS: The Case of Autonomous Driving
Abstract & Bio

Abstract: Recent years have witnessed a global phenomenon in the real-world development, testing, deployment, and commercialization of AI-enabled Cyber-Physical Systems (CPSs) such as autonomous driving cars, drones, industrial and home robots. These systems are rapidly revolutionizing a wide range of industries today, from transportation, retail, and logistics (e.g., robo-taxi, autonomous truck, delivery drones/robots), to domotics, manufacturing, construction, and healthcare. In such systems, the AI stacks are in charge of highly safety- and mission-critical decision-making processes such as obstacle avoidance and lane-keeping, which makes their security more critical than ever. Meanwhile, since these AI algorithms are only components of the entire CPS system enclosing them, their security issues are only meaningful when studied with direct integration of the semantic CPS problem context, which forms what we call the “semantic AI security” problem space and introduces various new AI security research challenges.

In this talk, I will focus on our recent efforts on the semantic AI security in one of the most safety-critical and fastest-growing AI-enabled CPS today, Autonomous Driving (AD) systems. Specifically, we performed the first security analysis on a wide range of critical AI components in industry-grade AD systems such as 3D perception, sensor fusion, lane detection, localization, prediction, and planning, and in this talk I will describe our key findings and also how we address the corresponding semantic AI security research challenges. I will conclude with a recent systematization of knowledge (SoK) we performed for this growing research space, with a specific emphasis on the most critical scientific gap we observed and our solution proposal.

Bio: Alfred Chen is an Assistant Professor of Computer Science at the University of California, Irvine. His research interests span AI security, systems security, and network security. His most recent research focuses on AI security in autonomous driving and intelligent transportation. His work has had a strong impact in both academia and industry, with 30+ research papers in top-tier venues across security, mobile systems, transportation, software engineering, and machine learning; a nationwide USDHS US-CERT alert and multiple CVEs; 50+ news articles by major media such as Forbes, Fortune, and BBC; and vulnerability report acknowledgments from USDOT, Apple, Microsoft, etc. Recently, his research has prompted 30+ autonomous driving companies and the V2X standardization workgroup to start security vulnerability investigations; some have confirmed that they are working on fixes. He co-founded the AutoSec workshop (co-located with NDSS) and co-created DEF CON's first AutoDriving-themed hacking competition. He has received various awards, including the NSF CAREER Award, the ProQuest Distinguished Dissertation Award, and the UCI Chancellor's Award for mentoring. Chen received his Ph.D. from the University of Michigan in 2018.

09 Aug 2022
Chaowei Xiao (Arizona State University + Nvidia Research)
Towards Socially Responsible Machine Learning
Abstract & Bio

Bio: Chaowei Xiao is an assistant professor at Arizona State University and a research scientist at NVIDIA Research. Dr. Xiao received his B.E. from the School of Software at Tsinghua University in 2015 and his Ph.D. in Computer Science from the University of Michigan, Ann Arbor, in 2020. His research interests lie at the intersection of computer security, privacy, and machine learning. His work has been featured in multiple media outlets, including Wired, Fortune, and IEEE Spectrum. One of his research outputs was on display at the Science Museum in London. He has received best paper awards at MobiCom 2014 and ESWN 2021.

23 Aug 2022
Anastasios Angelopoulos (UC Berkeley)
A Gentle Introduction to Conformal Prediction and Conformal Risk Control
Abstract & Bio

Abstract: High-risk machine learning deployments demand rigorous uncertainty quantification certifying the safety of the prediction algorithm. Conformal prediction is a new way of constructing distribution-free “confidence intervals” for black-box algorithms like neural networks. These intervals are guaranteed to contain the ground truth with high probability regardless of the underlying algorithm or dataset. I will introduce the audience to conformal prediction and conformal risk control, an extension allowing it to apply to complex machine learning tasks. The presentation will draw from the following manuscripts: https://arxiv.org/abs/2107.07511; https://arxiv.org/abs/2208.02814
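
As a concrete taste of the method (our own minimal sketch under simple assumptions, not code from the linked manuscripts), split conformal prediction for classification can be written in a few lines: score each calibration example, take a corrected quantile of the scores, and return every label whose score clears it.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets from any classifier's softmax outputs.
    Guarantees P(true label in set) >= 1 - alpha marginally."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]          # 1 - prob of true class
    level = np.ceil((n + 1) * (1 - alpha)) / n                  # finite-sample correction
    qhat = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]   # labels with small score

# Toy usage with random "softmax" outputs standing in for a real model.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)
cal_labels = rng.integers(0, 10, size=500)
test_probs = rng.dirichlet(np.ones(10), size=3)
print(conformal_sets(cal_probs, cal_labels, test_probs))
```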

Bio: Anastasios Angelopoulos is a fourth-year PhD student at UC Berkeley, advised by Michael I. Jordan and Jitendra Malik.

30 Aug 2022
Katherine Lee (Google Brain)
What does respecting privacy mean for language models?
Abstract & Bio

Abstract: Language models memorize training data. Defining and quantifying memorization is just as challenging a task as understanding human communication. Furthermore, understanding the risks of memorization and when memorization is required is equally complex. In this talk, I'll present several papers we have written on how memorization scales with model size, deduplicating language model training data to reduce memorization, and what it means to respect privacy in language models.

Bio: Katherine Lee is a Research Engineer at Google Brain and a PhD student at Cornell advised by David Mimno. She studies security and privacy in large language models. She's broadly interested in the translation between people and the systems we build. What kinds of decisions can algorithms help with, and which should we leave algorithms out of? What kinds of objectives, political or social, can we, or can we not write down?

13 Sept 2022
Huan Zhang (Carnegie Mellon University)
Formal Verification of Deep Neural Networks: Challenges and Recent Advances
Abstract & Bio

Abstract: Neural networks have become a crucial element in modern artificial intelligence. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify their trustworthiness such as safety and robustness. In this talk, I will first introduce the problem of neural network verification and the challenges of guaranteeing the behavior of a neural network given input specifications. Then, I will discuss the bound-propagation-based algorithms (e.g., CROWN and beta-CROWN), which are efficient, scalable and powerful techniques for formal verification of neural networks and can also be generalized to computational graphs beyond neural networks. My talk will highlight state-of-the-art verification techniques used in our α,β-CROWN (alpha-beta-CROWN) verifier that won the 2nd and 3rd International Verification of Neural Networks Competition (VNN-COMP 2021 and 2022), as well as novel applications of neural network verification.
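
To make the idea concrete, here is a hedged sketch of interval bound propagation, the simplest member of the bound-propagation family mentioned above (CROWN and beta-CROWN propagate much tighter linear bounds, but the flow of lower/upper bounds through the network is the same).

```python
import numpy as np

def propagate_affine(lb, ub, W, b):
    """Bound W @ x + b over the box [lb, ub] by splitting W into its signs."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def ibp_bounds(lb, ub, layers):
    """layers: list of (W, b) pairs; ReLU is applied between consecutive layers."""
    for i, (W, b) in enumerate(layers):
        lb, ub = propagate_affine(lb, ub, W, b)
        if i < len(layers) - 1:                       # no ReLU after the output layer
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
    return lb, ub

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(2, 8)), rng.normal(size=2))]
x, eps = rng.normal(size=4), 0.05
lb, ub = ibp_bounds(x - eps, x + eps, layers)  # certified output range for ||delta||_inf <= eps
print(lb, ub)
```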

Bio: Huan Zhang is a postdoctoral researcher at CMU, supervised by Prof. Zico Kolter. He received his Ph.D. degree at UCLA in 2020. Huan's research focuses on the trustworthiness of artificial intelligence, especially on developing formal verification methods to guarantee the robustness and safety of machine learning. Huan was awarded an IBM Ph.D. fellowship and he led the winning team in the 2021 International Verification of Neural Networks Competition. Huan received the 2021 AdvML Rising Star Award sponsored by MIT-IBM Watson AI Lab.

20 Sept 2022
Emily Wenger (University of Chicago)
Physical backdoor attacks: towards more realistic threat models in adversarial machine learning
Abstract & Bio

Abstract: Backdoor attacks against deep neural networks (DNN), in which hidden behaviors embedded in a DNN are activated by a certain “trigger” present on DNN inputs, have been extensively studied in the adversarial machine learning (ML) community. However, most existing research on backdoor attacks for image classification models focuses on pixel-based triggers, in which triggers are edited onto images after their creation. In this talk, I propose a more realistic threat model for image-based backdoor attacks – physical backdoor attacks – and describe our recent work demonstrating the severity of the threat posed by such attacks. I conclude by discussing future research directions arising from this more realistic backdoor attack threat model.
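
For contrast with the physical attacks in the talk, the snippet below sketches the classic pixel-based (dirty-label) backdoor baseline the abstract refers to; the toy data, patch size, and function name are our own illustrative choices.

```python
import numpy as np

def poison(images, labels, target_class, poison_frac=0.05,
           rng=np.random.default_rng(0)):
    """Stamp a small trigger patch on a random subset of images and relabel them."""
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
    images[idx, -4:, -4:, :] = 1.0      # 4x4 white square in the bottom-right corner
    labels[idx] = target_class          # dirty-label: point poisoned images at the target
    return images, labels

# Toy usage: a model trained on (poisoned_images, poisoned_labels) learns to predict
# class 0 whenever the patch is present at test time.
rng = np.random.default_rng(1)
clean_images = rng.random((1000, 32, 32, 3))   # images in [0, 1]
clean_labels = rng.integers(0, 10, size=1000)
poisoned_images, poisoned_labels = poison(clean_images, clean_labels, target_class=0)
```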

Bio: Emily Wenger is a final year computer science PhD student at the University of Chicago, advised by Ben Zhao and Heather Zheng. Her research focuses on security and privacy issues of machine learning systems. Her work has been published at top computer security (CCS, USENIX, Oakland) and machine learning (NeurIPS, CVPR) conferences and has been covered by media outlets including the New York Times, MIT Tech Review, and Nature. She is the recipient of the GFSD, Harvey, and Neubauer fellowships. Previously, she worked for the US Department of Defense and interned at Meta AI Research.

Organizers: Vikash Sehwag (Princeton), Cihang Xie (UCSC), and Jamie Hayes (DeepMind)
Advisory committee: Prateek Mittal (Princeton), Reza Shokri (NUS)

You can reach us with questions or suggestions related to the seminar.