AI/ML Seminar Series


Weekly Seminar in AI & Machine Learning
Sponsored by Cylance

Apr. 10
DBH 4011
1 pm

Durk Kingma

Research Scientist
Google Research

Some believe that maximum likelihood is incompatible with high-quality image generation. We provide counter-evidence: diffusion models with SOTA FIDs (e.g. https://arxiv.org/abs/2301.11093) are actually optimized with the ELBO, combined with very simple data augmentation (additive noise). First, we show that diffusion models in the literature are optimized with various objectives that are special cases of a weighted loss, where the weighting function specifies the weight per noise level. Uniform weighting corresponds to maximizing the ELBO, a principled approximation of maximum likelihood. In current practice, diffusion models are instead optimized with non-uniform weighting because it gives better sample quality. In this work we expose a direct relationship between the weighted loss (with any weighting) and the ELBO objective. We show that the weighted loss can be written as a weighted integral of ELBOs, with one ELBO per noise level. If the weighting function is monotonic, as in some SOTA models, then the weighted loss is a likelihood-based objective: it maximizes the ELBO under simple data augmentation, namely Gaussian noise perturbation. Our main contribution is a deeper theoretical understanding of the diffusion objective, but we also report experiments comparing monotonic with non-monotonic weightings, finding that monotonic weighting performs competitively with the best published results.
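For orientation, the key identity described in the abstract can be sketched schematically (my own simplified notation, not the paper's exact statement):

\mathcal{L}_w(x) \;=\; \mathbb{E}_{\lambda}\big[\, w(\lambda)\, \ell_\lambda(x) \,\big] \;=\; \int p_w(\lambda)\, \big(-\mathrm{ELBO}_\lambda(x)\big)\, d\lambda \;+\; \text{const},

where \ell_\lambda is the per-noise-level denoising loss, \mathrm{ELBO}_\lambda(x) is an evidence lower bound for x perturbed with Gaussian noise at level \lambda, and p_w is a weighting over noise levels induced by w. If the weighting function w is monotonic, the induced weighting p_w is non-negative, so minimizing the weighted loss maximizes an expected ELBO under Gaussian-noise data augmentation; uniform weighting recovers the plain ELBO on the original data.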

Bio: I do research on principled and scalable methods for machine learning, with a focus on generative models. My contributions include the Variational Autoencoder (VAE), the Adam optimizer, Glow, and Variational Diffusion Models; please see Google Scholar for a more complete list. I obtained a PhD (cum laude) from the University of Amsterdam in 2017, and was part of the founding team of OpenAI in 2015. Before that, I co-founded Advanza, which was acquired in 2016. My formal name is Diederik, but I go by the Frisian nickname Durk (pronounced like Dirk). I currently live in the San Francisco Bay Area.
Apr. 17
DBH 4011
1 pm

Danish Pruthi

Assistant Professor
Department of Computational and Data Sciences (CDS)
Indian Institute of Science (IISc), Bangalore

While large deep learning models have become increasingly accurate, concerns about their (lack of) interpretability have taken center stage. In response, a growing subfield on the interpretability and analysis of these models has emerged. While hundreds of techniques have been proposed to “explain” the predictions of models, what aims these explanations serve and how they ought to be evaluated are often left unstated. In this talk, I will present a framework to quantify the value of explanations, along with specific applications in a variety of contexts. I will end with some of my thoughts on evaluating large language models and the rationales they generate.

Bio: Danish Pruthi is an incoming assistant professor at the Indian Institute of Science (IISc), Bangalore. He received his Ph.D. from the School of Computer Science at Carnegie Mellon University, where he was advised by Graham Neubig and Zachary Lipton. He is broadly interested in the areas of natural language processing and deep learning, with a focus on model interpretability. He completed his bachelor's degree in computer science at BITS Pilani, Pilani. He has spent time doing research at Google AI, Facebook AI Research, Microsoft Research, Amazon AI, and IISc. He is also a recipient of the Siebel Scholarship and the CMU Presidential Fellowship. His legal name is only Danish, a cause of airport quagmires and, in equal parts, funny anecdotes.
Apr. 24
DBH 4011
1 pm

Anthony Chen

PhD Student
Department of Computer Science, UC Irvine

As the strengths of large language models (LLMs) have become prominent, so too have their weaknesses. A glaring weakness of LLMs is their penchant for generating false, biased, or misleading claims, a phenomenon broadly referred to as “hallucination”. Most LLMs also do not ground their generations in any source, exacerbating this weakness. To enable attribution while still preserving all the powerful advantages of LLMs, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically retrieves evidence to support the output of any LLM and then 2) post-edits the output to fix any information that contradicts the retrieved evidence, while preserving the original output as much as possible. When applied to the output of several state-of-the-art LLMs on a diverse set of generation tasks, we find that RARR significantly improves attribution.
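To make the two-stage design concrete, below is a minimal, hypothetical sketch of a retrieve-then-revise pipeline in the spirit of RARR. The function names and stub bodies are illustrative placeholders rather than the authors' code or API; in the actual system, query generation and revision are performed with prompted language models, and retrieval uses web search.

from typing import List, Tuple

def generate_queries(text: str) -> List[str]:
    # Placeholder: RARR prompts an LLM to ask verification questions about the text.
    return [text]

def retrieve_evidence(query: str) -> List[str]:
    # Placeholder: RARR runs a web search per query and keeps the most relevant passages.
    return []

def revise(text: str, evidence: List[str]) -> str:
    # Placeholder: RARR edits only the spans that disagree with the evidence,
    # preserving the rest of the original output as much as possible.
    return text

def retrofit_attribution(llm_output: str) -> Tuple[str, List[str]]:
    """Return a minimally edited output together with the evidence used for attribution."""
    evidence: List[str] = []
    for query in generate_queries(llm_output):
        evidence.extend(retrieve_evidence(query))
    return revise(llm_output, evidence), evidence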

Bio: Anthony Chen is a final-year doctoral student advised by Sameer Singh. He is broadly interested in how we can evaluate the limits of large language models and design efficient methods to address their deficiencies. Recently, his research has been focused on tackling the pernicious problem of attribution and hallucinations in large language models and making them more reliable to use.
May 1
DBH 4011
1 pm

Hengrui Cai

Assistant Professor of Statistics
University of California, Irvine

The causal revolution has spurred interest in understanding complex relationships in various fields. Under a general causal graph, the exposure may have a direct effect on the outcome and also an indirect effect regulated by a set of mediators. An analysis of causal effects that interprets the causal mechanism contributed through mediators is hence challenging but in demand. In this talk, we introduce a new statistical framework to comprehensively characterize causal effects with multiple mediators, namely, ANalysis Of Causal Effects (ANOCE). Building on such causal impact learning, we focus on two emerging challenges in causal relation learning: heterogeneity and spuriousness. To characterize the heterogeneity, we first conceptualize heterogeneous causal graphs (HCGs) by generalizing the causal graphical model with confounder-based interactions and multiple mediators. In practice, only a small number of variables in the graph are relevant for the outcomes of interest. As a result, causal estimation with the full causal graph, especially given limited data, could lead to many falsely discovered, spurious variables that may be highly correlated with, but have no causal impact on, the target outcome. We propose to learn a class of necessary and sufficient causal graphs (NSCG) that contain only causally relevant variables, by utilizing the probabilities of causation. Across empirical studies of simulated and real data applications, we show that the proposed algorithms outperform existing ones and can reveal true heterogeneous and non-spurious causal graphs.
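For orientation, a textbook mediation decomposition in a linear structural model (not the talk's specific ANOCE definitions): with exposure A, mediators M_1, \dots, M_K, and outcome Y, the total effect of A on Y decomposes as

\mathrm{TE} \;=\; \mathrm{DE} \;+\; \mathrm{IE}, \qquad \mathrm{IE} \;=\; \sum_{\text{directed paths } A \to \cdots \to Y \text{ through mediators}} \;\prod_{\text{edges on the path}} (\text{edge coefficient}),

i.e., a direct effect plus indirect effects accumulated along mediator paths (Wright's path-tracing rule). Frameworks such as ANOCE aim to characterize and estimate the individual mediator contributions in decompositions of this kind under a general causal graph.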

Bio: Dr. Hengrui Cai is an Assistant Professor in the Department of Statistics at the University of California, Irvine. She obtained her Ph.D. degree in Statistics at North Carolina State University in 2022. Cai has broad research interests in methodology and theory in causal inference, reinforcement learning, and graphical modeling, with the goal of establishing reliable, powerful, and interpretable solutions to real-world problems. Currently, her research focuses on causal inference and causal structure learning, as well as policy optimization and evaluation in reinforcement/deep learning. Her work has been published at conferences including ICLR, NeurIPS, ICML, and IJCAI, and in journals including the Journal of Machine Learning Research, Stat, and Statistics in Medicine.
May 8
DBH 4011
1 pm

Pierre Baldi and Alexander Shmakov

Department of Computer Science, UC Irvine

The Baldi group will present ongoing progress in the theory and applications of deep learning. On the theory side, we will discuss homogeneous activation functions and their important connections to the concept of generalized neural balance. On the application side, we will present applications of neural transformers to physics, in particular for the assignment of observed measurements to the leaves of partial Feynman diagrams in particle physics. In these applications, the permutation invariance properties of transformers are used to capture fundamental symmetries (e.g., matter vs. antimatter) in the laws of physics.
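For context on the theory portion, here is the standard rescaling symmetry that underlies notions of balance in networks with homogeneous activations (my own illustration, not necessarily the formulation used in the talk). An activation f is positively homogeneous of degree one if f(c z) = c f(z) for all c \ge 0, which holds for ReLU and leaky ReLU. Applying f elementwise, a two-layer block then satisfies

W_{\text{out}}\, f(W_{\text{in}}\, x) \;=\; \tfrac{1}{c}\, W_{\text{out}}\, f(c\, W_{\text{in}}\, x) \quad \text{for every } c > 0,

so scaling incoming weights up and outgoing weights down by the same factor leaves the network function unchanged. In the literature, "balanced" configurations are those in which each unit's incoming and outgoing scales are matched (for example, equal norms), and the talk's notion of generalized neural balance relates homogeneous activations to symmetries of this kind.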

Bio: Pierre Baldi earned M.S. degrees in mathematics and psychology from the University of Paris, France, in 1980, and a Ph.D. degree in mathematics from Caltech, CA, USA, in 1986. He is currently a Distinguished Professor in the Department of Computer Science, Director of the Institute for Genomics and Bioinformatics, and Associate Director of the Center for Machine Learning and Intelligent Systems at the University of California, Irvine, CA, USA. His research interests include understanding intelligence in brains and machines. He has made several contributions to the theory of deep learning, and has developed and applied deep learning methods to problems in the natural sciences. He has written four books and over 300 peer-reviewed articles. Dr. Baldi was the recipient of the 1993 Lew Allen Award at JPL, the 2010 E. R. Caianiello Prize for research in machine learning, and a 2014 Google Faculty Research Award. He is an elected Fellow of the AAAS, AAAI, IEEE, ACM, and ISCB.

Alexander Shmakov is a Ph.D. student in the Baldi research group who loves everything deep learning and robotics. He has published papers on applications of deep learning to planning, robotic control, high-energy physics, astronomy, chemical synthesis, and biology.
May 15
DBH 4011
1 pm

Guy Van den Broeck

Associate Professor of Computer Science
University of California, Los Angeles

Many expect that AI will go from powering chatbots to providing mental health services, and from advertising to deciding who is given bail. The expectation is that AI will solve society’s problems simply by being more intelligent than we are. Implicit in this bullish perspective is the assumption that AI will naturally learn to reason from data: that it can form trains of thought that “make sense”, similar to how a mental health professional or judge might reason about a case, or, more formally, how a mathematician might prove a theorem. This talk will investigate whether this behavior can be learned from data, and how we can design the next generation of AI techniques that can achieve such capabilities, focusing on neuro-symbolic learning and tractable deep generative models.

Bio: Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the StarAI lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His papers have been recognized with awards from key conferences such as AAAI, UAI, KR, OOPSLA, and ILP. Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.
May 22
DBH 4011
1 pm

Gabe Hope

PhD Student, Computer Science
University of California, Irvine

Variational autoencoders (VAEs) have proven to be an effective approach to modeling complex data distributions while providing compact representations that can be interpretable and useful for downstream prediction tasks. In this work we train variational autoencoders with the dual goals of good likelihood-based generative modeling and good discriminative performance in supervised and semi-supervised prediction tasks. We show that the dominant approach to training semi-supervised VAEs has key weaknesses: it is fragile as model capacity increases; it is slow due to marginalization over labels; and it incoherently decouples into separate discriminative and generative models when all data is labeled. Our novel framework for semi-supervised VAE training uses a more coherent architecture and an objective that maximizes generative likelihood subject to prediction quality constraints. To handle cases when labels are very sparse, we further enforce a consistency constraint, derived naturally from the generative model, that requires predictions on reconstructed data to match those on the original data. Our approach enables advances in generative modeling to be incorporated by semi-supervised classifiers, which we demonstrate by augmenting deep generative models with latent variables corresponding to spatial transformations and by introducing a “very deep” prediction-constrained VAE with many layers of latent variables. Our experiments show that prediction and consistency constraints improve generative samples as well as image classification performance in semi-supervised settings.
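Schematically, the training setup described above can be written as a constrained objective (my own simplified notation, not the exact formulation from the talk):

\max_{\theta, \phi}\; \mathbb{E}_x\big[\mathrm{ELBO}_{\theta,\phi}(x)\big] \quad \text{s.t.} \quad \mathbb{E}_{(x,y)\,\text{labeled}}\big[\ell\big(y, \hat{y}_\phi(x)\big)\big] \le \epsilon, \qquad \mathbb{E}_x\big[d\big(\hat{y}_\phi(x), \hat{y}_\phi(\hat{x})\big)\big] \le \delta,

where \hat{x} is the model's reconstruction of x, \ell is a prediction loss on labeled examples, and d measures disagreement between predictions on an input and on its reconstruction (the consistency constraint). In practice, constraints like these are typically folded into the training loss as weighted penalty terms.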

Bio: Gabe Hope is a final-year PhD student at UC Irvine working with Professor Erik Sudderth. His research focuses on deep generative models, interpretable machine learning, and semi-supervised learning. This fall he will join the faculty at Harvey Mudd College as a visiting assistant professor in computer science.
May 29
No Seminar (Memorial Day)
June 5
DBH 4011
1 pm

Sangeetha Abdu Jyothi

Assistant Professor of Computer Science
University of California, Irvine

Lack of explainability is a key factor limiting the practical adoption of high-performance deep reinforcement learning (DRL) controllers in systems environments. Explainable RL for networking has hitherto used salient input features to interpret a controller’s behavior. However, these feature-based solutions do not completely explain the controller’s decision-making process. Often, operators are interested in understanding the impact of a controller’s actions on performance in the future, which feature-based solutions cannot capture. In this talk, I will present CrystalBox, a framework that explains a controller’s behavior in terms of its future impact on key network performance metrics. CrystalBox employs a novel learning-based approach to generate succinct and expressive explanations. We use the reward components of the DRL network controller, which are key performance metrics meaningful to operators, as the basis for explanations. Finally, I will present three practical use cases of CrystalBox: cross-state explainability, guided reward design, and network observability.
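One concrete way to read “future impact on reward components” (a schematic interpretation on my part, not necessarily CrystalBox’s exact design): if the controller’s reward decomposes as r = \sum_c r_c over operator-meaningful components c, then an explanation for taking action a in state s can be the vector of predicted per-component future returns,

\hat{Q}_c(s, a) \;\approx\; \mathbb{E}\Big[\sum_{t \ge 0} \gamma^{t}\, r_{c,t} \,\Big|\, s_0 = s,\; a_0 = a\Big] \quad \text{for each component } c,

estimated by a learned predictor. Comparing these vectors across candidate actions tells an operator which performance metrics the controller expects its decision to help or hurt.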

Bio: Sangeetha Abdu Jyothi is an Assistant Professor in the Computer Science department at the University of California, Irvine. Her research interests lie at the intersection of computer systems, networking, and machine learning. Prior to UCI, she completed her Ph.D. at the University of Illinois, Urbana-Champaign in 2019, where she was advised by Brighten Godfrey, and then had a brief stint as a postdoc at VMware Research. She is currently an Affiliated Researcher at VMware Research. She leads the Networking, Systems, and AI Lab (NetSAIL) at UCI. Her current research revolves around Internet and cloud resilience, and systems and machine learning.
July 20
DBH 3011
11 am

Vincent Fortuin

Research group leader in Machine Learning
Helmholtz AI

Many researchers have pondered the same existential questions since the release of ChatGPT: Is scale really all you need? Will the future of machine learning rely exclusively on foundation models? Should we all drop our current research agenda and work on the next large language model instead? In this talk, I will try to make the case that the answer to all these questions should be a convinced “no”, and that now, maybe more than ever, is the time to focus on fundamental questions in machine learning again. I will provide evidence for this by presenting three modern use cases of Bayesian deep learning in the areas of self-supervised learning, interpretable additive modeling, and sequential decision making. Together, these will show that the research field of Bayesian deep learning is very much alive and thriving, and that its potential for valuable real-world impact is only just unfolding.

Bio: Vincent Fortuin is a tenure-track research group leader at Helmholtz AI in Munich, leading the group for Efficient Learning and Probabilistic Inference for Science (ELPIS). He is also a Branco Weiss Fellow. His research focuses on reliable and data-efficient AI approaches leveraging Bayesian deep learning, deep generative modeling, meta-learning, and PAC-Bayesian theory. Before that, he did his PhD in Machine Learning at ETH Zürich and was a Research Fellow at the University of Cambridge. He is a member of ELLIS, a regular reviewer for all major machine learning conferences, and a co-organizer of the Symposium on Advances in Approximate Bayesian Inference (AABI) and the ICBINB initiative.