Fall 2023

Oct. 9
Oct. 16
DBH 4011
1 pm

Marius Kloft

Professor of Computer Science
RPTU Kaiserslautern-Landau, Germany

Anomaly detection is one of the fundamental topics in machine learning and artificial intelligence. The aim is to find instances deviating from the norm – so-called ‘anomalies’. Anomalies can be observed in various scenarios, from attacks on computer or energy networks to critical faults in a chemical factory or rare tumors in cancer imaging data. In my talk, I will first introduce the field of anomaly detection, with an emphasis on ‘deep anomaly detection’ (anomaly detection based on deep learning). Then, I will present recent algorithms and theory for deep anomaly detection, with images as the primary data type. I will demonstrate how these methods can be better understood using explainable AI methods. I will show new algorithms for deep anomaly detection on other data types, such as time series, graphs, tabular data, and contaminated data. Finally, I will close my talk with an outlook on exciting future research directions in anomaly detection and beyond.
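For readers new to the area, a representative deep one-class objective (in the spirit of the Deep SVDD model from the ‘Deep One-class Classification’ paper mentioned in the bio below; notation simplified here) trains a network phi to map normal training points close to a fixed center c, so that distance from c can serve as an anomaly score:

\[ \min_{\mathcal{W}} \;\; \frac{1}{n}\sum_{i=1}^{n} \big\| \phi(x_i; \mathcal{W}) - c \big\|^2 \;+\; \frac{\lambda}{2}\sum_{\ell} \big\| W^{\ell} \big\|_F^2 \]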

Bio: Marius Kloft has worked and researched at various institutions in Germany and the US, including TU Berlin (PhD), UC Berkeley (PhD), NYU (Postdoc), Memorial Sloan-Kettering Cancer Center (Postdoc), HU Berlin (Assist. Prof.), and USC (Visiting Assoc. Prof.). Since 2017, he has been a professor of machine learning at RPTU Kaiserslautern-Landau. His research covers a broad spectrum of machine learning, from mathematical theory and fundamental algorithms to applications in medicine and chemical engineering. He received the Google Most Influential Papers 2013 Award, and he is a recipient of the German Research Foundation’s Emmy Noether Career Award. In 2022, the paper ‘Deep One-class Classification’ (ICML, 2018), main-authored by Marius Kloft, received the ANDEA Test-of-Time Award for the most influential paper in anomaly detection in the last ten years (2012-2022). The paper is highly cited, with around 500 citations per year.
Oct. 23
DBH 4011
1 pm

Sarah Wiegreffe

Postdoctoral Researcher
Allen Institute for AI and University of Washington

Recently released language models have attracted a lot of attention for their major successes and (often more subtle, but still plentiful) failures. In this talk, I will motivate why transparency into model operations is needed to rectify these failures and increase model utility in a reliable way. I will highlight how techniques must be developed in this changing NLP landscape for both open-source models and black-box models behind an API. I will provide an example of each from my recent work, demonstrating how improved transparency can improve language model performance on downstream tasks.

Bio: Sarah Wiegreffe is a young investigator (postdoc) at the Allen Institute for AI (AI2), working on the Aristo project. She also holds a courtesy appointment in the Allen School at the University of Washington. Her research focuses on language model transparency. She received her PhD from Georgia Tech in 2022, during which she interned at Google and AI2. She frequently serves on conference program committees and received an Outstanding Area Chair Award at ACL 2023.
Oct. 30
DBH 4011
1 pm

Noga Zaslavsky

Assistant Professor of Language Science
University of California, Irvine

Our world is extremely complex, and yet we are able to exchange our thoughts and beliefs about it using a relatively small number of words. What computational principles can explain this extraordinary ability? In this talk, I argue that in order to communicate and reason about meaning while operating under limited resources, both humans and machines must efficiently compress their representations of the world. In support of this claim, I present a series of studies showing that: (i) human languages evolve under pressure to efficiently compress meanings into words via the Information Bottleneck (IB) principle; (ii) the same principle can help ground meaning representations in artificial neural networks trained for vision; and (iii) these findings offer a new framework for emergent communication in artificial agents. Taken together, these results suggest that efficient compression underlies meaning in language and offer a new approach to guiding artificial agents toward human-like communication without relying on massive amounts of human-generated training data.
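For context, the Information Bottleneck principle in (i) can be stated schematically as follows (variable names here are illustrative): a lexicon q(w | m) mapping speaker meanings m to words w is efficient if it optimizes

\[ \min_{q(w \mid m)} \; I(M; W) \;-\; \beta \, I(W; U), \]

trading off the complexity of the lexicon, I(M; W), against the accuracy with which a listener can recover the intended meaning U from the word, I(W; U), at a tradeoff parameter beta.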

Bio: Noga Zaslavsky is an Assistant Professor in UCI’s Language Science department. Before joining UCI this year, she was a postdoctoral fellow at MIT. She holds a Ph.D. (2020) in Computational Neuroscience from the Hebrew University, and during her graduate studies she was also affiliated with UC Berkeley. Her research aims to understand the computational principles that underlie language and cognition by integrating methods from machine learning, information theory, and cognitive science. Her work has been recognized by several awards, including a K. Lisa Yang Integrative Computational Neuroscience Postdoctoral Fellowship, an IBM Ph.D. Fellowship Award, and a 2018 Computational Modeling Prize from the Cognitive Science Society.
Nov. 6
DBH 4011
1 pm

Mariel Werner

PhD Student
Department of Electrical Engineering and Computer Science, UC Berkeley

I will be discussing my recent work on personalization in federated learning. Federated learning is a powerful distributed optimization framework in which multiple clients collaboratively train a global model without sharing their raw data. In this work, we tackle the personalized version of the federated learning problem. In particular, we ask: throughout the training process, can clients identify a subset of similar clients and collaboratively train with just those clients? Answering in the affirmative, we propose simple clustering-based methods which are provably optimal for a broad class of loss functions (the first such guarantees), are robust to malicious attackers, and perform well in practice.
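As a rough illustration of the clustering-based idea (a minimal sketch on a toy problem under simplifying assumptions, not the authors' actual algorithm or guarantees): clients whose local updates look similar are grouped together, and each group then trains only with its own members.

```python
# Minimal sketch of clustering-based personalized federated learning on a toy
# least-squares problem. Illustrative only; not the exact method from the talk.
import numpy as np

rng = np.random.default_rng(0)

# Two latent client groups with different ground-truth models.
true_models = [np.array([1.0, -1.0]), np.array([-2.0, 0.5])]
clients = []
for i in range(6):
    X = rng.normal(size=(50, 2))
    y = X @ true_models[i % 2] + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def local_grad(w, data):
    X, y = data
    return X.T @ (X @ w - y) / len(y)  # least-squares gradient on one client

# Step 1: cluster clients by their first local update (simple k-means).
updates = np.stack([local_grad(np.zeros(2), d) for d in clients])
centers = updates[:2].copy()  # one initial center per group, for simplicity
for _ in range(20):
    labels = np.array([np.argmin([np.linalg.norm(u - c) for c in centers]) for u in updates])
    centers = np.stack([updates[labels == k].mean(axis=0) for k in (0, 1)])

# Step 2: each cluster trains collaboratively using only its own members.
models = [np.zeros(2), np.zeros(2)]
for _ in range(200):
    for k in (0, 1):
        grads = [local_grad(models[k], clients[i]) for i in np.where(labels == k)[0]]
        models[k] = models[k] - 0.1 * np.mean(grads, axis=0)

print(models)  # each cluster's model should be close to one of the true models
```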

Bio: Mariel Werner is a 5th-year PhD student in the Department of Electrical Engineering and Computer Science at UC Berkeley advised by Michael I. Jordan. Her research focus is federated learning, with a particular interest in economic applications. Currently, she is working on designing data-sharing mechanisms for firms in oligopolistic markets, motivated by ideas from federated learning. Recently, she has also been studying dynamics of privacy and reputation-building in principal-agent interactions. Mariel holds an undergraduate degree in Applied Mathematics from Harvard University.
Nov. 13
DBH 4011
1 pm

Yian Ma

Assistant Professor, Halıcıoğlu Data Science Institute
University of California, San Diego

I will introduce some recent progress towards understanding the scalability of Markov chain Monte Carlo (MCMC) methods and their comparative advantage with respect to variational inference. I will fact-check the folklore that “variational inference is fast but biased, MCMC is unbiased but slow”. I will then discuss a combination of the two via reverse diffusion, which holds promise for solving some multi-modal problems. This talk will be motivated by the need for Bayesian computation in reinforcement learning problems as well as the differential privacy requirements that we face.
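As standard background for that comparison (generic textbook material, not necessarily the algorithms analyzed in the talk), a canonical scalable MCMC method is the Langevin update

\[ \theta_{t+1} \;=\; \theta_t + \eta \, \nabla_\theta \log \pi(\theta_t) + \sqrt{2\eta}\, \xi_t, \qquad \xi_t \sim \mathcal{N}(0, I), \]

which only needs gradients of the log-target pi, whereas variational inference fits a parametric approximation to pi and is typically faster but biased.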

Bio: Yian Ma is an assistant professor at the Halıcıoğlu Data Science Institute and an affiliated faculty member at the Computer Science and Engineering Department of UC San Diego. Prior to UCSD, he spent a year as a visiting faculty at Google Research. Before that, he was a post-doctoral fellow at UC Berkeley, hosted by Mike Jordan. Yian completed his Ph.D. at the University of Washington. His current research primarily revolves around scalable inference methods for credible machine learning, with applications to time series data and sequential decision making tasks. He has received a Facebook Research Award and the Best Paper Award at the NeurIPS AABI symposium.
Nov. 20
DBH 4011
1 pm

Yuhua Zhu

Assistant Professor, Halıcıoğlu Data Science Institute and Dept. of Mathematics
University of California, San Diego

In this talk, I will build the connection between Hamilton-Jacobi-Bellman equations (HJB) and the multi-armed bandit (MAB) problems. HJB is an important equation in solving stochastic optimal control problems. MAB is a widely used paradigm for studying the exploration-exploitation trade-off in sequential decision making under uncertainty. This is the first work that establishes this connection in a general setting. I will present an efficient algorithm for solving MAB problems based on this connection and demonstrate its practical applications. This is a joint work with Lexing Ying and Zach Izzo from Stanford University.
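For readers unfamiliar with HJB equations, one standard finite-horizon form is (conventions vary; this is generic background rather than the specific equation derived in the talk)

\[ -\partial_t V(t,x) \;=\; \min_{a} \Big\{ c(x,a) \;+\; b(x,a)^\top \nabla_x V(t,x) \;+\; \tfrac{1}{2} \mathrm{tr}\big( \sigma \sigma^\top(x,a) \, \nabla_x^2 V(t,x) \big) \Big\}, \]

where V is the optimal cost-to-go of a controlled diffusion dX_t = b dt + sigma dW_t with running cost c; the talk's contribution is a connection between equations of this type and the exploration-exploitation structure of MAB problems.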

Bio: Yuhua Zhu is an assistant professor at UC San Diego, where she holds a joint appointment in the Halıcıoğlu Data Science Institute (HDSI) and the Department of Mathematics. Previously, she was a Postdoctoral Fellow at Stanford University mentored by Lexing Ying. She earned her Ph.D. from UW-Madison in 2019 advised by Shi Jin, and she obtained her BS in Mathematics from SJTU in 2014. Her work builds a bridge between differential equations and machine learning, spanning the areas of reinforcement learning, stochastic optimization, sequential decision-making, and uncertainty quantification.
Nov. 21
DBH 4011
11 am

Yejin Choi

Wissner-Slivka Professor of Computer Science & Engineering
University of Washington and Allen Institute for Artificial Intelligence

In this talk, I will question if there can be possible impossibilities of large language models (i.e., the fundamental limits of transformers, if any) and the impossible possibilities of language models (i.e., seemingly impossible alternative paths beyond scale, if at all).

Bio: Yejin Choi is Wissner-Slivka Professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a senior director at AI2 overseeing the project Mosaic and a Distinguished Research Fellow at the Institute for Ethics in AI at the University of Oxford. Her research investigates if (and how) AI systems can learn commonsense knowledge and reasoning, if machines can (and should) learn moral reasoning, and various other problems in NLP, AI, and Vision including neuro-symbolic integration, language grounding with vision and interactions, and AI for social good. She is a co-recipient of 2 Test of Time Awards (at ACL 2021 and ICCV 2021), 7 Best/Outstanding Paper Awards (at ACL 2023, NAACL 2022, ICML 2022, NeurIPS 2021, AAAI 2019, and ICCV 2013), the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI’s 10 to Watch in 2016.
Nov. 27
DBH 4011
1 pm

Tryphon Georgiou

Distinguished Professor of Mechanical and Aerospace Engineering
University of California, Irvine

The energetic cost of information erasure and of energy transduction can be cast as the stochastic problem of minimizing entropy production during thermodynamic transitions. This formalism of Stochastic Thermodynamics allows quantitative assessment of work exchange and entropy production for systems that are far from equilibrium. In the talk we will highlight the cost of Landauer’s bit-erasure in finite time and explain how to obtain bounds on the performance of Carnot-like thermodynamic engines and of processes that are powered by thermal anisotropy. The talk will be largely based on joint work with Olga Movilla Miangolarra, Amir Taghvaei, Rui Fu, and Yongxin Chen.
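As a point of reference (standard background rather than a result of the talk), Landauer's principle states that erasing one bit of information requires dissipating at least

\[ W \;\ge\; k_B T \ln 2 \]

of work, with the bound approached only in the quasi-static limit; erasing in finite time necessarily produces extra entropy, and quantifying that finite-time excess is part of what the talk addresses.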

Bio: Tryphon T. Georgiou was educated at the National Technical University of Athens, Greece (1979) and the University of Florida, Gainesville (PhD 1983). He is currently a Distinguished Professor at the Department of Mechanical and Aerospace Engineering, University of California, Irvine. He is a Fellow of IEEE, SIAM, IFAC, AAAS and a Foreign Member of the Royal Swedish Academy of Engineering Sciences (IVA).
Dec. 4
DBH 4011
1 pm

Deying Kong

Software Engineer, Google

Despite its extensive range of potential applications in virtual reality and augmented reality, 3D interacting hand pose estimation from RGB images remains a very challenging problem, due to appearance confusions between keypoints of the two hands and severe hand-hand occlusion. Due to their ability to capture long-range relationships between keypoints, transformer-based methods have gained popularity in the research community. However, existing methods usually deploy tokens at the keypoint level, which inevitably results in high computational and memory complexity. In this talk, we will propose a simple yet novel mechanism, i.e., hand-level tokenization, in our transformer-based model, where we deploy only one token for each hand. With this novel design, we will also propose a pose query enhancer module, which can refine the pose prediction iteratively, by focusing on features guided by previous coarse pose predictions. As a result, our proposed model, Handformer2T, can achieve high performance while remaining lightweight.

Bio: Deying Kong is currently a software engineer at Google. He earned his PhD in Computer Science from the University of California, Irvine in 2022, under the supervision of Professor Xiaohui Xie. His research interests mainly focus on computer vision, especially hand/human pose estimation.
Dec. 11
No Seminar (Finals Week and NeurIPS Conference)

Spring 2023

Apr. 10
DBH 4011
1 pm

Durk Kingma

Research Scientist
Google Research

Some believe that maximum likelihood is incompatible with high-quality image generation. We provide counter-evidence: diffusion models with SOTA FIDs (e.g. https://arxiv.org/abs/2301.11093) are actually optimized with the ELBO, with very simple data augmentation (additive noise). First, we show that diffusion models in the literature are optimized with various objectives that are special cases of a weighted loss, where the weighting function specifies the weight per noise level. Uniform weighting corresponds to maximizing the ELBO, a principled approximation of maximum likelihood. In current practice diffusion models are optimized with non-uniform weighting due to better results in terms of sample quality. In this work we expose a direct relationship between the weighted loss (with any weighting) and the ELBO objective. We show that the weighted loss can be written as a weighted integral of ELBOs, with one ELBO per noise level. If the weighting function is monotonic, as in some SOTA models, then the weighted loss is a likelihood-based objective: it maximizes the ELBO under simple data augmentation, namely Gaussian noise perturbation. Our main contribution is a deeper theoretical understanding of the diffusion objective, but we also performed some experiments comparing monotonic with non-monotonic weightings, finding that monotonic weighting performs competitively with the best published results.
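Schematically (notation simplified; details and normalizations are in the paper linked above), the weighted objective discussed here has the form

\[ \mathcal{L}_w(x) \;=\; \mathbb{E}_{\lambda,\, \epsilon} \Big[ \, w(\lambda) \, \big\| \epsilon - \hat{\epsilon}_\theta(z_\lambda; \lambda) \big\|^2 \, \Big], \]

a weighted average over noise levels lambda of per-level denoising errors; the result described above is that any such weighted loss can be rewritten as a weighted integral of per-noise-level ELBOs, with uniform weighting recovering the standard ELBO and monotonic weightings corresponding to the ELBO under Gaussian-noise data augmentation.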

Bio: I do research on principled and scalable methods for machine learning, with a focus on generative models. My contributions include the Variational Autoencoder (VAE), the Adam optimizer, Glow, and Variational Diffusion Models, but please see Scholar for a more complete list. I obtained a PhD (cum laude) from the University of Amsterdam in 2017, and was part of the founding team of OpenAI in 2015. Before that, I co-founded Advanza, which was acquired in 2016. My formal name is Diederik, but I have the Frisian nickname Durk (pronounced like Dirk). I currently live in the San Francisco Bay Area.
Apr. 17
DBH 4011
1 pm

Danish Pruthi

Assistant Professor
Department of Computational and Data Sciences (CDS)
Indian Institute of Science (IISc), Bangalore

While large deep learning models have become increasingly accurate, concerns about their (lack of) interpretability have taken center stage. In response, a growing subfield on interpretability and analysis of these models has emerged. While hundreds of techniques have been proposed to “explain” predictions of models, what aims these explanations serve and how they ought to be evaluated are often unstated. In this talk, I will present a framework to quantify the value of explanations, along with specific applications in a variety of contexts. I will end with some of my thoughts on evaluating large language models and the rationales they generate.

Bio: Danish Pruthi is an incoming assistant professor at the Indian Institute of Science (IISc), Bangalore. He received his Ph.D. from the School of Computer Science at Carnegie Mellon University, where he was advised by Graham Neubig and Zachary Lipton. He is broadly interested in the areas of natural language processing and deep learning, with a focus on model interpretability. He completed his bachelor’s degree in computer science at BITS Pilani, Pilani. He has spent time doing research at Google AI, Facebook AI Research, Microsoft Research, Amazon AI and IISc. He is also a recipient of the Siebel Scholarship and the CMU Presidential Fellowship. His legal name is only Danish—a cause of airport quagmires and, in equal parts, funny anecdotes.
Apr. 24
DBH 4011
1 pm

Anthony Chen

PhD Student
Department of Computer Science, UC Irvine

As the strengths of large language models (LLMs) have become prominent, so too have their weaknesses. A glaring weakness of LLMs is their penchant for generating false, biased, or misleading claims, a phenomenon broadly referred to as “hallucinations”. Most LLMs also do not ground their generations in any source, exacerbating this weakness. To enable attribution while still preserving all the powerful advantages of LLMs, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically retrieves evidence to support the output of any LLM and then 2) post-edits the output to fix any information that contradicts the retrieved evidence while preserving the original output as much as possible. When applied to the output of several state-of-the-art LLMs on a diverse set of generation tasks, we find that RARR significantly improves attribution.
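A rough sketch of the research-and-revision loop described above (the helper functions here are hypothetical placeholders, not the actual RARR components):

```python
# Sketch of a retrieve-then-revise pipeline in the spirit of RARR.
# retrieve / supports / edit are hypothetical placeholders for the system's
# actual evidence retrieval, agreement checking, and editing components.
def attribute_and_revise(output_text, retrieve, supports, edit):
    sentences = output_text.split(". ")
    evidence_report, revised = [], []
    for sentence in sentences:
        evidence = retrieve(sentence)          # 1) research: gather supporting evidence
        evidence_report.append((sentence, evidence))
        if all(supports(sentence, e) for e in evidence):
            revised.append(sentence)           # supported: keep the sentence unchanged
        else:
            # 2) revision: fix only the contradicted content, preserving the
            #    original output as much as possible
            revised.append(edit(sentence, evidence))
    return ". ".join(revised), evidence_report  # revised text plus attribution report
```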

Bio: Anthony Chen is a final-year doctoral student advised by Sameer Singh. He is broadly interested in how we can evaluate the limits of large language models and design efficient methods to address their deficiencies. Recently, his research has been focused on tackling the pernicious problem of attribution and hallucinations in large language models and making them more reliable to use.
May 1
DBH 4011
1 pm

Hengrui Cai

Assistant Professor of Statistics
University of California, Irvine

The causal revolution has spurred interest in understanding complex relationships in various fields. Under a general causal graph, the exposure may have a direct effect on the outcome and also an indirect effect regulated by a set of mediators. An analysis of causal effects that interprets the causal mechanism contributed through mediators is hence challenging but in demand. In this talk, we introduce a new statistical framework to comprehensively characterize causal effects with multiple mediators, namely, ANalysis Of Causal Effects (ANOCE). Building on such causal impact learning, we focus on two emerging challenges in causal relation learning: heterogeneity and spuriousness. To characterize the heterogeneity, we first conceptualize heterogeneous causal graphs (HCGs) by generalizing the causal graphical model with confounder-based interactions and multiple mediators. In practice, only a small number of variables in the graph are relevant for the outcomes of interest. As a result, causal estimation with the full causal graph — especially given limited data — could lead to many falsely discovered, spurious variables that may be highly correlated with but have no causal impact on the target outcome. We propose to learn a class of necessary and sufficient causal graphs (NSCG) that only contain causally relevant variables by utilizing the probabilities of causation. Across empirical studies of simulated and real data applications, we show that the proposed algorithms outperform existing ones and can reveal true heterogeneous and non-spurious causal graphs.

Bio: Dr. Hengrui Cai is an Assistant Professor in the Department of Statistics at the University of California Irvine. She obtained her Ph.D. degree in Statistics at North Carolina State University in 2022. Cai has broad research interests in methodology and theory in causal inference, reinforcement learning, and graphical modeling, to establish reliable, powerful, and interpretable solutions to real-world problems. Currently, her research focuses on causal inference and causal structure learning, and policy optimization and evaluation in reinforcement/deep learning. Her work has been published in conferences including ICLR, NeurIPS, ICML, and IJCAI, as well as journals including the Journal of Machine Learning Research, Stat, and Statistics in Medicine.
May 8
DBH 4011
1 pm

Pierre Baldi and Alexander Shmakov

Department of Computer Science, UC Irvine

The Baldi group will present ongoing progress in the theory and applications of deep learning. On the theory side, we will discuss homogeneous activation functions and their important connections to the concept of generalized neural balance. On the application side, we will present applications of neural transformers to physics, in particular for the assignment of observation measurements to the leaves of partial Feynman diagrams in particle physics. In these applications, the permutation invariance properties of transformers are used to capture fundamental symmetries (e.g. matter vs antimatter) in the laws of physics.

Bio: Pierre Baldi earned M.S. degrees in mathematics and psychology from the University of Paris, France, in 1980, and a Ph.D. in mathematics from Caltech, CA, USA, in 1986. He is currently a Distinguished Professor in the Department of Computer Science, Director of the Institute for Genomics and Bioinformatics, and Associate Director of the Center for Machine Learning and Intelligent Systems at the University of California, Irvine, CA, USA. His research interests include understanding intelligence in brains and machines. He has made several contributions to the theory of deep learning, and developed and applied deep learning methods for problems in the natural sciences. He has written 4 books and over 300 peer-reviewed articles. Dr. Baldi was the recipient of the 1993 Lew Allen Award at JPL, the 2010 E. R. Caianiello Prize for research in machine learning, and a 2014 Google Faculty Research Award. He is an Elected Fellow of the AAAS, AAAI, IEEE, ACM, and ISCB. Alexander Shmakov is a Ph.D. student in the Baldi research group who loves everything deep learning and robotics. He has published papers on applications of deep learning to planning, robotic control, high energy physics, astronomy, chemical synthesis, and biology.
May 15
DBH 4011
1 pm

Guy Van den Broeck

Associate Professor of Computer Science
University of California, Los Angeles

Many expect that AI will go from powering chatbots to providing mental health services. That it will go from advertisement to deciding who is given bail. The expectation is that AI will solve society’s problems by simply being more intelligent than we are. Implicit in this bullish perspective is the assumption that AI will naturally learn to reason from data: that it can form trains of thought that “make sense”, similar to how a mental health professional or judge might reason about a case, or more formally, how a mathematician might prove a theorem. This talk will investigate the question whether this behavior can be learned from data, and how we can design the next generation of AI techniques that can achieve such capabilities, focusing on neuro-symbolic learning and tractable deep generative models.

Bio: Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the StarAI lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His papers have been recognized with awards from key conferences such as AAAI, UAI, KR, OOPSLA, and ILP. Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.
May 22
DBH 4011
1 pm

Gabe Hope

PhD Student, Computer Science
University of California, Irvine

Variational autoencoders (VAEs) have proven to be an effective approach to modeling complex data distributions while providing compact representations that can be interpretable and useful for downstream prediction tasks. In this work we train variational autoencoders with the dual goals of good likelihood-based generative modeling and good discriminative performance in supervised and semi-supervised prediction tasks. We show that the dominant approach to training semi-supervised VAEs has key weaknesses: it is fragile as model capacity increases; it is slow due to marginalization over labels; and it incoherently decouples into separate discriminative and generative models when all data is labeled. Our novel framework for semi-supervised VAE training uses a more coherent architecture and an objective that maximizes generative likelihood subject to prediction quality constraints. To handle cases when labels are very sparse, we further enforce a consistency constraint, derived naturally from the generative model, that requires predictions on reconstructed data to match those on the original data. Our approach enables advances in generative modeling to be incorporated by semi-supervised classifiers, which we demonstrate by augmenting deep generative models with latent variables corresponding to spatial transformations and by introducing a “very deep” prediction-constrained VAE with many layers of latent variables. Our experiments show that prediction and consistency constraints improve generative samples as well as image classification performance in semi-supervised settings.
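Schematically (notation simplified), the prediction-constrained objective described above is

\[ \max_{\theta, \phi} \; \sum_{i} \mathrm{ELBO}(x_i; \theta, \phi) \quad \text{s.t.} \quad \frac{1}{|\mathcal{L}|} \sum_{i \in \mathcal{L}} \ell\big( y_i, \hat{y}_\phi(x_i) \big) \;\le\; \epsilon, \]

maximizing generative likelihood over all data while constraining prediction loss on the labeled subset L; the consistency constraint additionally asks that predictions on a reconstruction of x match the predictions on x itself.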

Bio: Gabe Hope is a final-year PhD student at UC Irvine working with professor Erik Sudderth. His research focuses on deep generative models, interpretable machine learning and semi-supervised learning. This fall he will join the faculty at Harvey Mudd College as a visiting assistant professor in computer science.
May 29
No Seminar (Memorial Day)
June 5
DBH 4011
1 pm

Sangeetha Abdu Jyothi

Assistant Professor of Computer Science
University of California, Irvine

Lack of explainability is a key factor limiting the practical adoption of high-performing Deep Reinforcement Learning (DRL) controllers in systems environments. Explainable RL for networking has hitherto relied on salient input features to interpret a controller’s behavior. However, these feature-based solutions do not completely explain the controller’s decision-making process. Often, operators are interested in understanding the impact of a controller’s actions on performance in the future, which feature-based solutions cannot capture. In this talk, I will present CrystalBox, a framework that explains a controller’s behavior in terms of its future impact on key network performance metrics. CrystalBox employs a novel learning-based approach to generate succinct and expressive explanations. We use reward components of the DRL network controller, which are key performance metrics meaningful to operators, as the basis for explanations. I will finally present three practical use cases of CrystalBox: cross-state explainability, guided reward design, and network observability.

Bio: Sangeetha Abdu Jyothi is an Assistant Professor in the Computer Science department at the University of California, Irvine. Her research interests lie at the intersection of computer systems, networking, and machine learning. Prior to UCI, she completed her Ph.D. at the University of Illinois, Urbana-Champaign in 2019 where she was advised by Brighten Godfrey and had a brief stint as a postdoc at VMware Research. She is currently an Affiliated Researcher at VMware Research. She leads the Networking, Systems, and AI Lab (NetSAIL) at UCI. Her current research focus revolves around: Internet and Cloud Resilience, and Systems and Machine Learning.
July 20
DBH 3011
11 am

Vincent Fortuin

Research group leader in Machine Learning
Helmholtz AI

Many researchers have pondered the same existential questions since the release of ChatGPT: Is scale really all you need? Will the future of machine learning rely exclusively on foundation models? Should we all drop our current research agenda and work on the next large language model instead? In this talk, I will try to make the case that the answer to all these questions should be a convinced “no” and that now, maybe more than ever, should be the time to focus on fundamental questions in machine learning again. I will provide evidence for this by presenting three modern use cases of Bayesian deep learning in the areas of self-supervised learning, interpretable additive modeling, and sequential decision making. Together, these will show that the research field of Bayesian deep learning is very much alive and thriving and that its potential for valuable real-world impact is only just unfolding.

Bio: Vincent Fortuin is a tenure-track research group leader at Helmholtz AI in Munich, leading the group for Efficient Learning and Probabilistic Inference for Science (ELPIS). He is also a Branco Weiss Fellow. His research focuses on reliable and data-efficient AI approaches leveraging Bayesian deep learning, deep generative modeling, meta-learning, and PAC-Bayesian theory. Before that, he did his PhD in Machine Learning at ETH Zürich and was a Research Fellow at the University of Cambridge. He is a member of ELLIS, a regular reviewer for all major machine learning conferences, and a co-organizer of the Symposium on Advances in Approximate Bayesian Inference (AABI) and the ICBINB initiative.

Winter 2023

Jan. 30
DBH 4011
1 pm

Maarten Bos

Lead Research Scientist
Snap Research

Corporate research labs aim to push the scientific and technological forefront of innovation outside traditional academia. Snap Inc. combines academia and industry by hiring academic researchers and doing application-driven research. In this talk I will give examples of research projects from my corporate research experience. My goal is to showcase the value of – and hurdles for – working both with and within corporate research labs, and how some of these values and hurdles are different from working in traditional academia.

Bio: Maarten Bos is a Lead Research Scientist at Snap Inc. After receiving his PhD in The Netherlands and postdoctoral training at Harvard University, he led a behavioral science group at Disney Research before joining Snap in 2018. His research interests range from decision science to persuasion and human-technology interaction. His work has been published in journals such as Science, Psychological Science, and the Journal of Marketing Research, and has been covered by the Wall Street Journal, Harvard Business Review, and The New York Times.
Feb. 6
DBH 4011
1 pm

Kolby Nottingham

PhD Student, Department of Computer Science
University of California, Irvine

While it’s common for other machine learning modalities to benefit from model pretraining, reinforcement learning (RL) agents still typically learn tabula rasa. Large language models (LLMs), trained on internet text, have been used as external knowledge sources for RL, but, on their own, they are noisy and lack the grounding necessary to reason in interactive environments. In this talk, we will cover methods for grounding LLMs in environment dynamics and applying extracted knowledge to training RL agents. Finally, we will demonstrate our newly proposed method for applying LLMs to improving RL sample efficiency through guided exploration. By applying LLMs to guiding exploration rather than using them as planners at execution time, our method remains robust to errors in LLM output while also grounding LLM knowledge in environment dynamics.

Bio: Kolby Nottingham is a PhD student at the University of California, Irvine where he is co-advised by Professors Roy Fox and Sameer Singh. Kolby’s research interests lie at the intersection of reinforcement learning and natural language processing. His research applies recent advances in large language models to improving sequential decision making techniques.
Feb. 13
DBH 4011
1 pm

Noble Kennamer

PhD Student, Department of Computer Science
University of California, Irvine

Bayesian optimal experimental design is a sub-field of statistics focused on developing methods to make efficient use of experimental resources. Any potential design is evaluated in terms of a utility function, such as the (theoretically well-justified) expected information gain (EIG); unfortunately, however, under most circumstances the EIG is intractable to evaluate. In this talk we build off of successful variational approaches, which optimize a parameterized variational model with respect to bounds on the EIG. Past work focused on learning a new variational model from scratch for each new design considered. Here we present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs. To further improve computational efficiency, we also propose to train the variational model on a significantly cheaper-to-evaluate lower bound, and show empirically that the resulting model provides an excellent guide for more accurate, but expensive-to-evaluate, bounds on the EIG. We demonstrate the effectiveness of our technique on generalized linear models, a class of statistical models that is widely used in the analysis of controlled experiments. Experiments show that our method is able to greatly improve accuracy over existing approximation strategies, and achieve these results with far better sample efficiency.
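For reference, the expected information gain of a design d is

\[ \mathrm{EIG}(d) \;=\; \mathbb{E}_{p(\theta)\, p(y \mid \theta, d)} \big[ \log p(y \mid \theta, d) - \log p(y \mid d) \big], \qquad p(y \mid d) = \int p(y \mid \theta, d)\, p(\theta)\, d\theta, \]

and it is the intractable marginal p(y | d) that the variational bounds discussed in this talk approximate.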

Bio: Noble Kennamer recently completed his PhD at UC Irvine under Alexander Ihler, where he worked on variational methods for optimal experimental design and applications of machine learning to the physical sciences. In March he will be starting as a Research Scientist at Netflix.
Feb. 20
No Seminar (Presidents’ Day)
Feb. 27
Seminar Canceled
Mar. 6
DBH 4011
1 pm

Shlomo Zilberstein

Professor of Computer Science
University of Massachusetts, Amherst

Competence is the ability to do something well. Competence awareness is the ability to represent and learn a model of self-competence and use it to decide how to best use the agent’s own abilities as well as any available human assistance. This capability is critical for the success and safety of autonomous systems that operate in the open world. In this talk, I introduce two types of competence-aware systems (CAS), namely Type I and Type II CAS. The former refers to a stand-alone system that can learn its own competence and use it to fine-tune itself to the characteristics of the problem instance at hand, without human assistance. The latter is a human-aware system that uses a self-competence model to optimize the utilization of costly human assistive actions. I describe recent results that demonstrate the benefits of the two types of competence awareness in different contexts, including autonomous vehicle decision making.

Bio: Shlomo Zilberstein is Professor of Computer Science and Associate Dean for Research and Engagement in the Manning College of Information and Computer Sciences at the University of Massachusetts, Amherst. He received a B.A. in Computer Science from the Technion, and a Ph.D. in Computer Science from UC Berkeley. Zilberstein’s research focuses on the foundations and applications of resource-bounded reasoning techniques, which allow complex systems to make decisions while coping with uncertainty, missing information, and limited computational resources. His research interests include decision theory, reasoning under uncertainty, Markov decision processes, design of autonomous agents, heuristic search, real-time problem solving, principles of meta-reasoning, planning and scheduling, multi-agent systems, and reinforcement learning. Zilberstein is a Fellow of AAAI and the ACM. He is recipient of the University of Massachusetts Chancellor’s Medal (2019), the IFAAMAS Influential Paper Award (2019), the AAAI Distinguished Service Award (2019), a National Science Foundation CAREER Award (1996), and the Israel Defense Prize (1992). He received numerous Paper Awards from AAAI (2017, 2021), IJCAI (2020), AAMAS (2003), ECAI (1998), ICAPS (2010), and SoCS (2022), among others. He is the past Editor-in-Chief of the Journal of Artificial Intelligence Research, former Chair of the AAAI Conference Committee, former President of ICAPS, a former Councilor of AAAI, and the Chairman of the AI Access Foundation.

Fall 2022

Oct. 10
DBH 4011
1 pm

Furong Huang

Assistant Professor of Computer Science
University of Maryland

With the burgeoning use of machine learning models in an assortment of applications, there is a need to rapidly and reliably deploy models in a variety of environments. These trustworthy machine learning models must satisfy certain criteria, namely the ability to: (i) adapt and generalize to previously unseen worlds although trained on data that only represent a subset of the world, (ii) allow for non-iid data, (iii) be resilient to (adversarial) perturbations, and (iv) conform to social norms and make ethical decisions. In this talk, towards trustworthy and generally applicable intelligent systems, I will cover some reinforcement learning algorithms that achieve fast adaptation by guaranteed knowledge transfer, principled methods that measure the vulnerability and improve the robustness of reinforcement learning agents, and ethical models that make fair decisions under distribution shifts.

Bio: Furong Huang is an Assistant Professor of the Department of Computer Science at University of Maryland. She works on statistical and trustworthy machine learning, reinforcement learning, graph neural networks, deep learning theory and federated learning with specialization in domain adaptation, algorithmic robustness and fairness. Furong is a recipient of the NSF CRII Award, the MLconf Industry Impact Research Award, the Adobe Faculty Research Award, and three JP Morgan Faculty Research Awards. She is a Finalist of AI in Research – AI researcher of the year for Women in AI Awards North America 2022. She received her Ph.D. in electrical engineering and computer science from UC Irvine in 2016, after which she completed postdoctoral positions at Microsoft Research NYC.
Oct. 17
DBH 4011
1 pm

Bodhi Majumder

PhD Student, Department of Computer Science and Engineering
University of California, San Diego

The use of artificial intelligence in knowledge-seeking applications (e.g., for recommendations and explanations) has shown remarkable effectiveness. However, the increasing demand for more interaction, accessibility, and user-friendliness in these systems requires the underlying components (dialog models, LLMs) to be adequately grounded in up-to-date real-world context. In reality, even powerful generative models often lack commonsense, explanations, and subjectivity — capabilities that remain a long-standing goal of artificial general intelligence. In this talk, I will partly address these problems in three parts and hint at future possibilities and social impacts. Mainly, I will discuss: 1) methods to effectively inject up-to-date knowledge into an existing dialog model without any additional training, 2) the role of background knowledge in generating faithful natural language explanations, and 3) a conversational framework to address subjectivity—balancing task performance and bias mitigation for fair interpretable predictions.

Bio: Bodhisattwa Prasad Majumder is a final-year PhD student at CSE, UC San Diego, advised by Prof. Julian McAuley. His research goal is to build interactive machines capable of producing knowledge-grounded explanations. He previously interned at the Allen Institute for AI, Google AI, Microsoft Research, and FAIR (Meta AI), and has collaborated with the University of Oxford, the University of British Columbia, and the Alan Turing Institute. He is a recipient of the UCSD CSE Doctoral Award for Research (2022), the Adobe Research Fellowship (2022), the UCSD Friends Fellowship (2022), and the Qualcomm Innovation Fellowship (2020). In 2019, Bodhi led UCSD in the finals of the Amazon Alexa Prize. He also co-authored a best-selling NLP book with O’Reilly Media that is being adopted in universities internationally. Website: http://www.majumderb.com/.
Oct. 24
DBH 4011
1 pm

Mark Steyvers

Professor of Cognitive Sciences
University of California, Irvine

Artificial intelligence (AI) and machine learning models are being increasingly deployed in real-world applications. In many of these applications, there is strong motivation to develop hybrid systems in which humans and AI algorithms can work together, leveraging their complementary strengths and weaknesses. In the first part of the presentation, I will discuss results from a Bayesian framework where we statistically combine the predictions from humans and machines while taking into account the unique ways human and algorithmic confidence is expressed. The framework allows us to investigate the factors that influence complementarity, where a hybrid combination of human and machine predictions leads to better performance than combinations of human or machine predictions alone. In the second part of the presentation, I will discuss some recent work on AI-assisted decision making where individuals are presented with recommended predictions from classifiers. Using a cognitive modeling approach, we can estimate the AI reliance policy used by individual participants. The results show that AI advice is more readily adopted if the individual is in a low confidence state, receives high-confidence advice from the AI and when the AI is generally more accurate. In the final part of the presentation, I will discuss the question of “machine theory of mind” and “theory of machine”, how humans and machines can efficiently form mental models of each other. I will show some recent results on theory-of-mind experiments where the goal is for individuals and machine algorithms to predict the performance of other individuals in image classification tasks. The results show performance gaps where human individuals outperform algorithms in mindreading tasks. I will discuss several research directions designed to close the gap.

Bio: Mark Steyvers is a Professor of Cognitive Science at UC Irvine and Chancellor’s Fellow. He has a joint appointment with the Computer Science department and is affiliated with the Center for Machine Learning and Intelligent Systems. His publications span work in cognitive science as well as machine learning, and his research has been funded by NSF, NIH, IARPA, NAVY, and AFOSR. He received his PhD from Indiana University and was a Postdoctoral Fellow at Stanford University. He is currently serving as Associate Editor of Computational Brain and Behavior and Consulting Editor for Psychological Review, and has previously served as the President of the Society for Mathematical Psychology, Associate Editor for Psychonomic Bulletin & Review and the Journal of Mathematical Psychology. In addition, he has served as a consultant for a variety of companies such as eBay, Yahoo, Netflix, Merriam-Webster, Rubicon and Gimbal on machine learning problems. Dr. Steyvers received New Investigator Awards from the American Psychological Association as well as the Society of Experimental Psychologists. He also received an award from the Future of Privacy Forum and Alfred P. Sloan Foundation for his collaborative work with Lumosity.
Oct. 31
DBH 4011
1 pm

Alex Boyd

PhD Student, Department of Statistics
University of California, Irvine

In reasoning about sequential events it is natural to pose probabilistic queries such as “when will event A occur next” or “what is the probability of A occurring before B”, with applications in areas such as user modeling, medicine, and finance. However, with machine learning shifting towards neural autoregressive models such as RNNs and transformers, probabilistic querying has been largely restricted to simple cases such as next-event prediction. This is in part due to the fact that future querying involves marginalization over large path spaces, which is not straightforward to do efficiently in such models. In this talk, we will describe a novel representation of querying for these discrete sequential models, as well as discuss various approximation and search techniques that can be utilized to help estimate these probabilistic queries. Lastly, we will briefly touch on ongoing work that has extended these techniques into sequential models for continuous time events.
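As a simple illustration of the kind of query involved, here is a naive Monte Carlo baseline under a toy stand-in model (the talk is precisely about doing better than brute-force sampling like this):

```python
# Naive Monte Carlo estimate of "what is the probability event A occurs before
# event B?" under an autoregressive event model. Illustrative baseline only.
import numpy as np

rng = np.random.default_rng(0)

def next_event_probs(history):
    """Stand-in for a learned autoregressive model p(next event | history).
    Here: a fixed 3-event categorical that ignores the history."""
    return np.array([0.2, 0.3, 0.5])  # events: 0 = A, 1 = B, 2 = other

def prob_A_before_B(num_samples=10_000, horizon=50):
    hits = 0
    for _ in range(num_samples):
        history = []
        for _ in range(horizon):
            e = rng.choice(3, p=next_event_probs(history))
            history.append(e)
            if e == 0:   # A occurred first
                hits += 1
                break
            if e == 1:   # B occurred first
                break
    return hits / num_samples

# For this toy model the answer is about 0.2 / (0.2 + 0.3) = 0.4
# (ignoring the negligible chance that neither occurs within the horizon).
print(prob_A_before_B())
```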

Bio: Alex Boyd is a Statistics PhD candidate at UC Irvine, co-advised by Padhraic Smyth and Stephan Mandt. His work focuses on improving probabilistic methods, primarily for deep sequential models. He was selected in 2020 as a National Science Foundation Graduate Fellow.
Nov. 7
DBH 4011
1 pm

Yanning Shen

Assistant Professor of Electrical Engineering and Computer Science
University of California, Irvine

We live in an era of data deluge, where pervasive media collect massive amounts of data, often in a streaming fashion. Learning from these dynamic and large volumes of data is hence expected to bring significant science and engineering advances along with consequent improvements in quality of life. However, with the blessings come big challenges. The sheer volume of data makes it impossible to run analytics in batch form. Large-scale datasets are noisy, incomplete, and prone to outliers. As many sources continuously generate data in real-time, it is often impossible to store all of it. Thus, analytics must often be performed in real-time, without a chance to revisit past entries. In response to these challenges, this talk will first introduce an online scalable function approximation scheme that is suitable for various machine learning tasks. The novel approach adaptively learns and tracks the sought nonlinear function ‘on the fly’ with quantifiable performance guarantees, even in adversarial environments with unknown dynamics. Building on this robust and scalable function approximation framework, a scalable online learning approach with graph feedback will be outlined next for online learning with possibly related models. The effectiveness of the novel algorithms will be showcased in several real-world datasets.

Bio: Yanning Shen is an assistant professor with the EECS department at the University of California, Irvine. She received her Ph.D. degree from the University of Minnesota (UMN) in 2019. She was a finalist for the Best Student Paper Award at the 2017 IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, and the 2017 Asilomar Conference on Signals, Systems, and Computers. She was selected as a Rising Star in EECS by Stanford University in 2017. She received the Microsoft Academic Grant Award for AI Research in 2021, the Google Research Scholar Award in 2022, and the Hellman Fellowship in 2022. Her research interests span the areas of machine learning, network science, data science, and signal processing.
Nov. 14
DBH 4011
1 pm

Muhao Chen

Assistant Research Professor of Computer Science
University of Southern California

Information extraction (IE) is the process of automatically inducing structures of concepts and relations described in natural language text. It is the fundamental task to assess the machine’s ability for natural language understanding, as well as the essential step for acquiring structural knowledge representation that is integral to any knowledge-driven AI systems. Despite the importance, obtaining direct supervision for IE tasks is always very difficult, as it requires expert annotators to read through long documents and identify complex structures. Therefore, a robust and accountable IE model has to be achievable with minimal and imperfect supervision. Towards this mission, this talk covers recent advances of machine learning and inference technologies that (i) grant robustness against noise and perturbation, (ii) prevent systematic errors caused by spurious correlations, and (iii) provide indirect supervision for label-efficient and logically consistent IE.

Bio: Muhao Chen is an Assistant Research Professor of Computer Science at USC, and the director of the USC Language Understanding and Knowledge Acquisition (LUKA) Lab. His research focuses on robust and minimally supervised machine learning for natural language understanding, structured data processing, and knowledge acquisition from unstructured data. His work has been recognized with an NSF CRII Award, faculty research awards from Cisco and Amazon, an ACM SIGBio Best Student Paper Award and a best paper nomination at CoNLL. Dr. Chen obtained his Ph.D. degree from UCLA Department of Computer Science in 2019, and was a postdoctoral researcher at UPenn prior to joining USC.
Nov. 21
DBH 4011
1 pm

Peter Orbanz

Professor of Machine Learning
Gatsby Computational Neuroscience Unit, University College London

Consider a large random structure — a random graph, a stochastic process on the line, a random field on the grid — and a function that depends only on a small part of the structure. Now use a family of transformations to ‘move’ the domain of the function over the structure, collect each function value, and average. Under suitable conditions, the law of large numbers generalizes to such averages; that is one of the deep insights of modern ergodic theory. My own recent work with Morgane Austern (Harvard) shows that central limit theorems and other higher-order properties also hold. Loosely speaking, if the i.i.d. assumption of classical statistics is substituted by suitable properties formulated in terms of groups, the fundamental theorems of inference still hold.
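In symbols, the averages in question look roughly like

\[ \frac{1}{|A_n|} \sum_{g \in A_n} f(g \cdot X) \;\longrightarrow\; \mathbb{E}\big[ f(X) \mid \mathcal{I} \big] \quad \text{as } n \to \infty, \]

where A_n is a growing family of transformations, g · X is the transformed structure, and I is the sigma-field of invariant events (this is a schematic statement; the precise conditions are part of the theory). The work with Austern establishes central-limit-type refinements of such laws of large numbers.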

Bio: Peter Orbanz is a Professor of Machine Learning in the Gatsby Computational Neuroscience Unit at University College London. He studies large systems of dependent variables in machine learning and inference problems. That involves symmetry and group invariance properties, such as exchangeability and stationarity, random graphs and random structures, hierarchies of latent variables, and the intersection of ergodic theory and statistical physics with statistics and machine learning. In the past, Peter was a PhD student of Joachim M. Buhmann at ETH Zurich, a postdoc with Zoubin Ghahramani at the University of Cambridge, and Assistant and Associate Professor in the Department of Statistics at Columbia University.
Nov. 28
No Seminar (NeurIPS Conference)

Spring 2022


Live Stream for all Spring 2022 CML Seminars

May 2
DBH 4011 &
Live Stream
1 pm

Maurizio Filippone

Associate Professor, EURECOM
and
Ba-Hien Tran
PhD Student, EURECOM

YouTube Stream: https://youtu.be/oZAuh686ipw

The Bayesian treatment of neural networks dictates that a prior distribution is specified over their weight and bias parameters. This poses a challenge because modern neural networks are characterized by a huge number of parameters and non-linearities. The choice of these priors has an unpredictable effect on the distribution of the functional output, which can be a hugely limiting aspect of Bayesian deep learning models. In contrast, Gaussian processes offer a rigorous non-parametric framework to define prior distributions over the space of functions. In this talk, we aim to introduce a novel and robust framework to impose such functional priors on modern neural networks for supervised learning tasks by minimizing the Wasserstein distance between samples of stochastic processes. In addition, we extend this framework to carry out model selection for Bayesian autoencoders for unsupervised learning tasks. We provide extensive experimental evidence that coupling these priors with scalable Markov chain Monte Carlo sampling offers systematically large performance improvements over alternative choices of priors and state-of-the-art approximate Bayesian deep learning approaches.
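Schematically, the prior-matching idea described above chooses the hyperparameters psi of the prior over network parameters by minimizing a Wasserstein distance between the functional prior the network induces and a target Gaussian-process prior, estimated from samples of function values at finite sets of inputs X:

\[ \min_{\psi} \; W\Big( p_{\mathrm{NN}}\big( f(X) \mid \psi \big), \; p_{\mathrm{GP}}\big( f(X) \big) \Big). \]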

Bio: Maurizio Filippone received a Master’s degree in Physics and a Ph.D. in Computer Science from the University of Genova, Italy, in 2004 and 2008, respectively. In 2007, he was a Research Scholar with George Mason University, Fairfax, VA. From 2008 to 2011, he was a Research Associate with the University of Sheffield, U.K. (2008-2009), with the University of Glasgow, U.K. (2010), and with University College London, U.K. (2011). From 2011 to 2015 he was a Lecturer at the University of Glasgow, U.K., and he is currently AXA Chair of Computational Statistics and Associate Professor at EURECOM, Sophia Antipolis, France. His current research interests include the development of tractable and scalable Bayesian inference techniques for Gaussian processes and Deep/Conv Nets with applications in life and environmental sciences.
Bio: Ba-Hien Tran is currently a PhD student within the Data Science department of EURECOM, under the supervision of Professor Maurizio Filippone. His research focuses on Accelerating Inference for Deep Probabilistic Modeling. In 2016, he received a Bachelor of Science degree with honors in Computer Science from Vietnam National University, HCMC. His thesis investigated Deep Learning approaches for data-driven image captioning. In 2020, he received a Master of Science in Engineering degree in Data Science from Télécom Paris. His thesis focused on Bayesian Inference for Deep Neural Networks.
May 9
DBH 4011 &
Live Stream
1 pm

Ties van Rozendaal

Senior Machine Learning Researcher
Qualcomm AI Research

YouTube Stream: https://youtu.be/LQu-kwpfFg4

Neural data compression has been shown to outperform classical methods in terms of rate-distortion performance, with results still improving rapidly. These models are fitted to a training dataset and cannot be expected to optimally compress test data in general due to limitations on model capacity, distribution shifts, and imperfect optimization. If the test-time data distribution is known and has relatively low entropy, the model can easily be finetuned or adapted to this distribution. Instance-adaptive methods take this approach to the extreme, adapting the model to a single test instance, and signaling the updated model along in the bitstream. In this talk, we will show the potential of different types of instance-adaptive methods and discuss the tradeoffs that these methods pose.
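Schematically (notation simplified), instance-adaptive compression as described above finetunes the model on the single test instance x and pays for that update in the bitstream, trading off three terms:

\[ \min_{\delta} \; R\big(z \mid \theta + \delta\big) \;+\; R(\delta) \;+\; \beta \, D\big(x, \hat{x}_{\theta+\delta}(z)\big), \]

the bits for the latent code, the bits needed to signal the model update delta, and the reconstruction distortion.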

Bio: Ties is a senior machine learning researcher at Qualcomm AI Research. He obtained his master’s degree at the University of Amsterdam with a thesis on personalizing automatic speech recognition systems using unsupervised methods. At Qualcomm AI Research he has been working on neural compression, with a focus on using generative models to compress image and video data. His research includes work on semantic compression and constrained optimization as well as instance-adaptive and neural-implicit compression.
May 16
DBH 4011 &
Live Stream
1 pm

Robin Jia

Assistant Professor of Computer Science
University of Southern California

YouTube Stream: https://youtu.be/ALqqlgbzAB0

Natural language processing (NLP) models have achieved impressive accuracies on in-distribution benchmarks, but they are unreliable in out-of-distribution (OOD) settings. In this talk, I will give an exclusive preview of my group’s ongoing work on evaluating and improving model performance in OOD settings. First, I will propose likelihood splits, a general-purpose way to create challenging non-i.i.d. benchmarks by measuring generalization to the tail of the data distribution, as identified by a language model. Second, I will describe the advantages of neurosymbolic approaches over end-to-end pretrained models for OOD generalization in visual question answering; these results highlight the importance of measuring OOD generalization when comparing modeling approaches. Finally, I will show how synthesized examples can improve open-set recognition, the task of abstaining on OOD examples that come from classes never seen at training time.

Bio: Robin Jia is an Assistant Professor of Computer Science at the University of Southern California. He received his Ph.D. in Computer Science from Stanford University, where he was advised by Percy Liang. He has also spent time as a visiting researcher at Facebook AI Research, working with Luke Zettlemoyer and Douwe Kiela. He is interested broadly in natural language processing and machine learning, with a particular focus on building NLP systems that are robust to distribution shift. Robin’s work has received best paper awards at ACL and EMNLP.
May 23
No Seminar
May 30
No Seminar (Memorial Day Holiday)
June 6
DBH 4011 &
Live Stream
1 pm

Bobak Pezeshki

PhD Student, Department of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/Yl_aCTieVqc

Computational protein design (CPD) is the task of creating new proteins to fulfill a desired function. In this talk, I will share work recently accepted at UAI 2022 based on a new formulation of CPD as a graphical model designed for optimizing subunit binding affinity. These new methods showed promising results when compared with BBK*, a state-of-the-art algorithm from a long-developed software package dedicated to CPD. I will first describe CPD in general and the task of optimizing a quantity called K* (which approximates binding affinity). I will relate this to the well-known MMAP task, for which many powerful algorithms have recently been developed and by which our methods are inspired. Next, I will give a preview of the promising results of our new framework. I will then describe the framework itself, presenting the formulation of the problem as a graphical model for K* optimization and introducing a weighted mini-bucket heuristic for bounding K* and guiding search. Finally, I will share our algorithm AOBB-K* and modifications that can enhance it, describing some of the empirical benefits and limitations of our scheme. To conclude, I will outline some future directions for advancing the use of this framework.
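For readers unfamiliar with the K* quantity mentioned above, here is how it is usually defined in the CPD literature (my paraphrase; the notation and constants are assumptions, not taken from the talk):

```latex
\[
  K^{*} \;=\; \frac{Z_{PL}}{Z_{P}\, Z_{L}}, \qquad
  Z_{X} \;=\; \sum_{c \,\in\, \mathrm{conf}(X)} e^{-E(c)/RT},
\]
```

i.e., a ratio of conformational partition functions for the bound complex (PL) and the unbound protein (P) and ligand (L). Summing Boltzmann weights over conformations is what links K* optimization to MMAP-style inference over a graphical model.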

Bio: Bobak Pezeshki is a fifth-year PhD student in Computer Science at the University of California, Irvine, advised by Professor Rina Dechter. His research focuses on automated reasoning over graphical models, with an emphasis on Abstraction Sampling and on applying such reasoning to computational protein design. He completed his undergraduate studies at UC Berkeley, majoring in Molecular and Cell Biology (with an emphasis in Biochemistry) and Integrative Biology. Before pursuing his PhD at UCI, he was involved in protein biochemistry research at the Stroud Lab, UCSF, and at Novartis Vaccines and Diagnostics.

Winter 2022

Standard

Live Stream for all Winter 2022 CML Seminars

January 3
No Seminar
January 10
Live Stream
1 pm

Roy Fox

Assistant Professor
Department of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/ImvsK5CFp0w

Ensemble methods for reinforcement learning have gained attention in recent years, due to their ability to represent model uncertainty and use it to guide exploration and to reduce value estimation bias. We present MeanQ, a very simple ensemble method with improved performance, and show how it reduces estimation variance enough to operate without a stabilizing target network. Curiously, MeanQ is theoretically *almost* equivalent to a non-ensemble state-of-the-art method that it significantly outperforms, raising questions about the interaction between uncertainty estimation, representation, and resampling.
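As a rough sketch of the MeanQ idea (my paraphrase, not the authors' code), the bootstrap target averages the ensemble's next-state values instead of querying a separate target network:

```python
import torch

def meanq_targets(q_ensemble, rewards, next_states, dones, gamma=0.99):
    """Sketch of a MeanQ-style bootstrap target: average the Q-values of all K
    ensemble members at the next state, then bootstrap from that mean rather
    than from a stabilizing target network."""
    with torch.no_grad():
        next_q = torch.stack([q(next_states) for q in q_ensemble])  # (K, B, A)
        mean_q = next_q.mean(dim=0)                                  # (B, A)
        bootstrap = mean_q.max(dim=1).values                         # greedy value
        return rewards + gamma * (1.0 - dones) * bootstrap
```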
In adversarial environments, where a second agent attempts to minimize the first’s rewards, double-oracle (DO) methods grow a population of policies for both agents by iteratively adding the best response to the current population. DO algorithms are guaranteed to converge when they exhaust all policies, but are only effective when they find a small population sufficient to induce a good agent. We present XDO, a DO algorithm that exploits the game’s sequential structure to exponentially reduce the worst-case population size. Curiously, the small population size that XDO needs to find good agents more than compensates for the increased difficulty of iterating with a given population size.

Bio: Roy Fox is an Assistant Professor and director of the Intelligent Dynamics Lab at the Department of Computer Science at UCI. He was previously a postdoc in UC Berkeley’s BAIR, RISELab, and AUTOLAB, where he developed algorithms and systems that interact with humans to learn structured control policies for robotics and program synthesis. His research interests include theory and applications of reinforcement learning, algorithmic game theory, information theory, and robotics. His current research focuses on structure, exploration, and optimization in deep reinforcement learning and imitation learning of virtual and physical agents and multi-agent systems.
January 17
No Seminar (Martin Luther King, Jr. Day)
January 24
Live Stream
1 pm

Ransalu Senanayake

Postdoctoral Scholar
Department of Computer Science
Stanford University

YouTube Stream: https://youtu.be/3yR8BqBElXw

Autonomous agents such as self-driving cars have already gained the capability to perform individual tasks such as object detection and lane following, especially in simple, static environments. While advancing robots towards full autonomy, it is important to minimize deleterious effects on humans and infrastructure to ensure the trustworthiness of such systems. However, for robots to safely operate in the real world, it is vital for them to quantify the multimodal aleatoric and epistemic uncertainty around them and use that uncertainty for decision-making. In this talk, I will discuss how we can leverage tools from approximate Bayesian inference, kernel methods, and deep neural networks to develop interpretable autonomous systems for high-stakes applications.

Bio: Ransalu Senanayake is a postdoctoral scholar in the Statistical Machine Learning Group at the Department of Computer Science, Stanford University. He focuses on making downstream applications of machine learning trustworthy by quantifying uncertainty and explaining the decisions of such systems. Currently, he works with Prof. Emily Fox and Prof. Carlos Guestrin. He also worked on decision-making under uncertainty with Prof. Mykel Kochenderfer. Prior to joining Stanford, Ransalu obtained a PhD in Computer Science from the University of Sydney, Australia, and an MPhil in Industrial Engineering and Decision Analytics from the Hong Kong University of Science and Technology, Hong Kong.
January 31
Live Stream
1 pm

Dylan Slack

PhD Student
Department of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/71RJvjPhk3U

For domain experts to adopt machine learning (ML) models in high-stakes settings such as health care and law, they must understand and trust model predictions. As a result, researchers have proposed numerous ways to explain the predictions of complex ML models. However, these approaches suffer from several critical drawbacks, such as vulnerability to adversarial attacks, instability, inconsistency, and lack of guidance about accuracy and correctness. For practitioners to safely use explanations in the real world, it is vital to properly characterize the limitations of current techniques and develop improved explainability methods. This talk will describe the shortcomings of explanations and introduce current research demonstrating how they are vulnerable to adversarial attacks. I will also discuss promising solutions and present recent work on explanations that leverage uncertainty estimates to overcome several critical explanation shortcomings.

Bio: Dylan Slack is a Ph.D. candidate at UC Irvine, advised by Sameer Singh and Hima Lakkaraju, and is associated with UCI NLP, CREATE, and the HPI Research Center. His research focuses on developing techniques that help researchers and practitioners build more robust, reliable, and trustworthy machine learning models. He has held research internships at Google AI and Amazon AWS, and was previously an undergraduate at Haverford College, advised by Sorelle Friedler, where he researched fairness in machine learning.
February 7
Live Stream
1 pm

Maja Rudolph

Senior Research Scientist
Bosch Center for AI

YouTube Stream: https://youtu.be/9fRw74WhRdE

Recurrent neural networks (RNNs) are a popular choice for modeling sequential data. Standard RNNs assume constant time-intervals between observations. However, in many datasets (e.g. medical records) observation times are irregular and can carry important information. To address this challenge, we propose continuous recurrent units (CRUs) – a neural architecture that can naturally handle irregular intervals between observations. The CRU assumes a hidden state which evolves according to a linear stochastic differential equation and is integrated into an encoder-decoder framework. The recursive computations of the CRU can be derived using the continuous-discrete Kalman filter and are in closed form. The resulting recurrent architecture has temporal continuity between hidden states and a gating mechanism that can optimally integrate noisy observations. We derive an efficient parametrization scheme for the CRU that leads to a fast implementation (f-CRU). We empirically study the CRU on a number of challenging datasets and find that it can interpolate irregular time series better than methods based on neural ordinary differential equations.
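A minimal sketch of the continuous-discrete Kalman prediction step that underlies this kind of model, assuming a latent linear SDE with drift matrix A and diffusion covariance Q per unit time (illustrative only; the CRU's encoder-decoder and gating mechanism are not shown):

```python
import numpy as np
from scipy.linalg import expm

def cd_kalman_predict(mu, Sigma, A, Q, dt):
    """Propagate the latent Gaussian belief over an irregular gap dt.
    For dz = A z dt + dW, the predicted mean/covariance follow from the
    matrix exponential; Q * dt is a simple first-order approximation of the
    integrated process noise."""
    F = expm(A * dt)                   # state transition over the gap
    mu_pred = F @ mu
    Sigma_pred = F @ Sigma @ F.T + Q * dt
    return mu_pred, Sigma_pred
```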

Bio: Maja Rudolph is a Senior Research Scientist at the Bosch Center for AI, where she works on machine learning research questions derived from engineering problems: for example, how to model driving behavior, how to forecast the operating conditions of a device, or how to find anomalies in the sensor data of an assembly line. In 2018, Maja completed her Ph.D. in Computer Science at Columbia University, advised by David Blei. She holds an MS in Electrical Engineering from Columbia University and a BS in Mathematics from MIT.
February 14
Live Stream
1 pm

Ruiqi Gao

Research Scientist
Google Brain

YouTube Stream: https://youtu.be/eAozs_JKp4o

Energy-based models (EBMs) are an appealing class of probabilistic models, which can be viewed as generative versions of discriminators, yet can be learned from unlabeled data. Despite a number of desirable properties, two challenges remain for training EBMs on high-dimensional datasets. First, learning EBMs by maximum likelihood requires Markov Chain Monte Carlo (MCMC) to generate samples from the model, which can be extremely expensive. Second, the energy potentials learned with non-convergent MCMC can be highly biased, making it difficult to evaluate the learned energy potentials or apply the learned models to downstream tasks.
In this talk, I will present two algorithms to tackle the challenges of training EBMs. (1) Diffusion Recovery Likelihood, where we tractably learn and sample from a sequence of EBMs trained on increasingly noisy versions of a dataset. Each EBM is trained with recovery likelihood, which maximizes the conditional probability of the data at a certain noise level given their noisy versions at a higher noise level. (2) Flow Contrastive Estimation, where we jointly estimate an EBM and a flow-based model, in which the two models are iteratively updated based on a shared adversarial value function. We demonstrate that EBMs can be trained with a small budget of MCMC or completely without MCMC. The learned energy potentials are faithful and can be applied to likelihood evaluation and downstream tasks, such as feature learning and semi-supervised learning.
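For orientation, the recovery-likelihood objective can be summarized as follows (my paraphrase of the construction; the notation is assumed): with an EBM p_theta(x) proportional to exp(f_theta(x)) and a noisy copy x_tilde = x + noise, one maximizes the conditional

```latex
\[
  p_\theta(x \mid \tilde{x}) \;\propto\; \exp\!\Big( f_\theta(x) \;-\; \tfrac{1}{2\sigma^2}\,\lVert \tilde{x} - x \rVert^2 \Big),
  \qquad \tilde{x} = x + \epsilon, \;\; \epsilon \sim \mathcal{N}(0, \sigma^2 I),
\]
```

whose quadratic term makes the conditional far easier to sample than the marginal, which is why a small MCMC budget suffices at each noise level.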

Bio: Ruiqi Gao is a research scientist on the Google Brain team. Her research interests are in statistical modeling and learning, with a focus on generative models and representation learning. She received her Ph.D. degree in statistics from the University of California, Los Angeles (UCLA) in 2021, advised by Song-Chun Zhu and Ying Nian Wu. Prior to that, she received her bachelor’s degree from Peking University. Her recent research themes include scalable training algorithms for deep generative models, variational inference, and representational models with implications for neuroscience.
February 21
No Seminar (Presidents’ Day)
February 28
DBH 4011 &
Live Stream
1 pm

Sunipa Dev

Research Scientist
Ethical AI Team, Google AI

YouTube Stream: https://youtu.be/V93uXTBnpFw

Large language models are commonly used in different paradigms of natural language processing and machine learning, and are known for their efficiency as well as their overall lack of interpretability. Their data driven approach for emulating human language often results in human biases being encoded and even amplified, potentially leading to cyclic propagation of representational and allocational harm. We discuss in this talk some aspects of detecting, evaluating, and mitigating biases and associated harms in a holistic, inclusive, and culturally-aware manner. In particular, we discuss the disparate impact on society of common language tools that are not inclusive of all gender identities.

Bio: Sunipa Dev is a Research Scientist on the Ethical AI team at Google AI. Previously, she was an NSF Computing Innovation Fellow at UCLA, before which she completed her PhD at the University of Utah. Her ongoing research focuses on various facets of fairness and interpretability in NLP, including robust measurements of bias, cross-cultural understanding of concepts in NLP, and inclusive language representations.
March 7
Zoom
1 pm

Mukund Sundararajan

Principal Research Scientist
Google

YouTube Stream unavailable, please join via Zoom

Predicting cancer from XRays seemed great
Until we discovered the true reason.
The model, in its glory, did fixate
On radiologist markings – treason!

We found the issue with attribution:
By blaming pixels for the prediction (1,2,3,4,5,6).
A complement’ry way to attribute,
is to pay training data, a tribute (1).

If you are int’rested in FTC,
counterfactual theory, SGD
Or Shapley values and fine kernel tricks,
Please come attend, unless you have conflicts

Should you build deep models down the road,
Use attributions. Takes ten lines of code!

Bio:
There once was an RS called MS,
The models he studies are a mess,
A director at Google.
Accurate and frugal,
Explanations are what he likes best.
March 14
No Seminar (Finals Week)

Spring 2021

Standard

Live Stream for all Spring 2021 CML Seminars

March 29
No Seminar
April 5th
No Seminar
April 12th
Live Stream
1 pm

Sanmi Koyejo

Assistant Professor
Department of Computer Science
University of Illinois at Urbana-Champaign

YouTube Stream: https://youtu.be/Ehqsp8vRLis

Across healthcare, science, and engineering, we increasingly employ machine learning (ML) to automate decision-making that, in turn, affects our lives in profound ways. However, ML can fail, with significant and long-lasting consequences. Reliably measuring such failures is the first step towards building robust and trustworthy learning machines. Consider algorithmic fairness, where widely-deployed fairness metrics can exacerbate group disparities and result in discriminatory outcomes. Moreover, existing metrics are often incompatible. Hence, selecting fairness metrics is an open problem. Measurement is also crucial for robustness, particularly in federated learning with error-prone devices. Here, once again, models constructed using well-accepted robustness metrics can fail. Across ML applications, the dire consequences of mismeasurement are a recurring theme. This talk will outline emerging strategies for addressing the measurement gap in ML and how this impacts trustworthiness.

Bio: Sanmi (Oluwasanmi) Koyejo is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Koyejo’s research interests are in developing the principles and practice of trustworthy machine learning. Additionally, Koyejo focuses on applications to neuroscience and healthcare. Koyejo completed his Ph.D. in Electrical Engineering at the University of Texas at Austin, advised by Joydeep Ghosh, and completed postdoctoral research at Stanford University. His postdoctoral research was primarily with Russell A. Poldrack and Pradeep Ravikumar. Koyejo has been the recipient of several awards, including a best paper award from the conference on uncertainty in artificial intelligence (UAI), a Sloan Fellowship, a Kavli Fellowship, an IJCAI early career spotlight, and a trainee award from the Organization for Human Brain Mapping (OHBM). Koyejo serves on the board of the Black in AI organization.
April 19th
Sponsored by the Steckler Center for Responsible, Ethical, and Accessible Technology (CREATE)
4 pm
(Note change in time)

Kate Crawford

Senior Principal Researcher, Microsoft Research, New York
Distinguished Visiting Fellow at the University of Melbourne

Where do the motivating ideas behind Artificial Intelligence come from and what do they imply? What claims to universality or particularity are made by AI systems? How do the movements of ideas, data, and materials shape the present and likely futures of AI development? Join us for a conversation with social scientist and AI scholar Kate Crawford about the intellectual history and geopolitical contexts of contemporary AI research and practice.

Bio: Kate Crawford is a leading scholar of the social and political implications of artificial intelligence. Over her 20-year career, her work has focused on understanding large-scale data systems, machine learning and AI in the wider contexts of history, politics, labor, and the environment. She is a Research Professor of Communication and STS at USC Annenberg, a Senior Principal Researcher at MSR-NYC, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris. In 2021, she will be the Miegunyah Distinguished Visiting Fellow at the University of Melbourne, and has been appointed an Honorary Professor at the University of Sydney. She previously co-founded the AI Now Institute at New York University. Kate has advised policy makers in the United Nations, the Federal Trade Commission, the European Parliament, and the White House. Her academic research has been published in journals such as Nature, New Media & Society, Science, Technology & Human Values, and Information, Communication & Society. Beyond academic journals, Kate has also written for The New York Times, The Atlantic, and Harper’s Magazine, among others.
April 26th
Live Stream
1 pm

Yibo Yang

PhD Student
Department of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/1lXKUhBTHWc

Probabilistic machine learning, particularly deep learning, is reshaping the field of data compression. Recent work has established a close connection between lossy data compression and latent variable models such as variational autoencoders (VAEs), and VAEs are now the building blocks of many learning-based lossy compression algorithms that are trained on massive amounts of unlabeled data. In this talk, I give a brief overview of learned data compression, including the current paradigm of end-to-end lossy compression with VAEs, and present my research that addresses some of its limitations and explores other possibilities of learned data compression. First, I present algorithmic improvements inspired by variational inference that push the performance limits of VAE-based lossy compression, resulting in a new state-of-the-art performance on image compression. Then, I introduce a new algorithm that compresses the variational posteriors of pre-trained latent variable models, and allows for variable-bitrate lossy compression with a vanilla VAE. Lastly, I discuss ongoing work that explores fundamental bounds on the theoretical performance of lossy compression algorithms, using the tools of stochastic approximation and deep learning.
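The "end-to-end lossy compression with VAEs" paradigm referred to above is typically trained with a rate-distortion Lagrangian of the following form (standard formulation; the notation is mine, not from the talk):

```latex
\[
  \mathcal{L} \;=\; \mathbb{E}_{x}\Big[ -\log_2 p_\theta(\hat{y}) \;+\; \lambda\, d\big(x,\ \mathrm{dec}_\theta(\hat{y})\big) \Big],
  \qquad \hat{y} = \big\lfloor \mathrm{enc}_\phi(x) \big\rceil,
\]
```

where the first term is the rate (bits assigned to the quantized latent by the learned entropy model), the second is the distortion of the reconstruction, and lambda sets the rate-distortion trade-off.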

Bio: Yibo Yang is a PhD student advised by Stephan Mandt in the Computer Science department at UC Irvine. His research interests include probability theory, information theory, and their applications in statistical machine learning.
May 3rd
Live Stream
1 pm

Levi Lelis

Assistant Professor
Department of Computer Science
University of Alberta

YouTube Stream: https://youtu.be/76NFMs9pHEE

In this talk I will describe two tree search algorithms that use a policy to guide the search. I will start with Levin tree search (LTS), a best-first search algorithm that has guarantees on the number of nodes it needs to expand to solve state-space search problems. These guarantees are based on the quality of the policy it employs. I will then describe Policy-Guided Heuristic Search (PHS), another best-first search algorithm that uses both a policy and a heuristic function to guide the search. PHS also has guarantees on the number of nodes it expands, which are based on the quality of the policy and of the heuristic function employed. I will then present empirical results showing that LTS and PHS compare favorably with A*, Weighted A*, Greedy Best-First Search, and PUCT on a set of single-agent shortest-path problems.
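A compact sketch of the kind of policy-guided best-first search described above (assumptions flagged in the comments; the exact cost functions used by LTS and PHS differ in details that are not reproduced here):

```python
import heapq
import itertools
import math

def policy_guided_search(start, expand, policy_logprob, is_goal):
    """Best-first search ordered by a Levin-style cost d(n) / pi(n), where d(n)
    is the depth of node n and pi(n) the product of policy probabilities along
    the path to n.  Working in log space: log cost = log d(n) - log pi(n).
    Sketch only; LTS/PHS refine this cost and PHS adds a heuristic term."""
    tie = itertools.count()                      # tie-breaker for the heap
    frontier = [(0.0, next(tie), 0, 0.0, start, [])]
    seen = set()
    while frontier:
        _, _, depth, logpi, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        if state in seen:
            continue
        seen.add(state)
        for action, child in expand(state):
            child_logpi = logpi + policy_logprob(state, action)
            log_cost = math.log(depth + 1) - child_logpi
            heapq.heappush(frontier, (log_cost, next(tie), depth + 1,
                                      child_logpi, child, path + [action]))
    return None
```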

Bio: Levi Lelis is an Assistant Professor at the University of Alberta, Canada, and a Professor on leave from Universidade Federal de Viçosa, Brazil. Levi is interested in heuristic search, machine learning, and program synthesis.
May 10th
Live Stream
1 pm

David Alvarez-Melis

Postdoctoral Researcher
Microsoft Research New England

YouTube Stream: https://youtu.be/52bQ_XUY2DQ

Abstract: Success stories in machine learning seem to be ubiquitous, but they tend to be concentrated on ‘ideal’ scenarios where clean labeled data are abundant, evaluation metrics are unambiguous, and operational constraints are rare — if at all existent. But machine learning in practice is rarely so ‘pristine’; clean data is often scarce, resources are limited, and constraints (e.g., privacy, transparency) abound in most real-life applications. In this talk, we will explore how to reconcile these paradigms along two main axes: (i) learning with scarce or heterogeneous data, and (ii) making complex models, such as neural networks, interpretable. First, I will present various approaches that I have developed for ‘amplifying’ (e.g., merging, transforming, interpolating) datasets based on the theory of Optimal Transport. Through applications in machine translation, transfer learning, and dataset shaping, I will show that besides enjoying sound theoretical footing, these approaches yield efficient and high-performing algorithms. In the second part of the talk, I will present some of my work on designing methods to extract ‘explanations’ from complex models and on imposing on them some basic formal notions that I argue any interpretability method should satisfy, but which most lack. Finally, I will present a novel framework for interpretable machine learning that takes inspiration from the study of (human) explanation in the social sciences, and whose evaluation through user studies yields insights about the promise (and limitations) of interpretable AI tools.

Bio: David Alvarez-Melis is a postdoctoral researcher in the Machine Learning and Statistics Group at Microsoft Research, New England. He recently obtained a Ph.D. in computer science from MIT advised by Tommi Jaakkola, and holds B.Sc. and M.S. degrees in mathematics from ITAM and Courant Institute (NYU), respectively. He has previously spent time at IBM Research and is a recipient of CONACYT, Hewlett Packard, and AI2 awards.
May 17th
Live Stream
1 pm

Megan Peters

Assistant Professor
Department of Cognitive Sciences
UC Irvine

YouTube Stream: https://youtu.be/i9Cenn0stxE

Abstract: TBA

Bio: In March 2020 I joined the UCI Department of Cognitive Sciences. I’m also a Cooperating Researcher in the Department of Decoded Neurofeedback at Advanced Telecommunications Research Institute International in Kyoto, Japan. Prior to that, from 2017 I was on the faculty at UC Riverside in the Department of Bioengineering. I received my Ph.D. in computational cognitive neuroscience (psychology) from UCLA, and then was a postdoc there as well. My research aims to reveal how the brain represents and uses uncertainty, and performs adaptive computations based on noisy, incomplete information. I specifically focus on how these abilities support metacognitive evaluations of the quality of (mostly perceptual) decisions, and how these processes might relate to phenomenology and conscious awareness. I use neuroimaging, computational modeling, machine learning and neural stimulation techniques to study these topics.
May 24th
Live Stream
1 pm

Jing Zhang

Assistant Professor
Department of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/HPPq5Xvlr9c

The recent advances in sequencing technologies provide unprecedented opportunities to decipher the multi-scale gene regulatory grammars at diverse cellular states. Here, we will introduce our computational efforts on cell/gene representation learning to extract biologically meaningful information from high-dimensional, sparse, and noisy genomic data. First, we proposed a deep generative model, named SAILER, to learn the low-dimensional latent cell representations from single-cell epigenetic data for accurate cell state characterization. SAILER adopted the conventional encoder-decoder framework and imposed additional constraints for biologically robust cell embeddings invariant to confounding factors. Then at the network level, we developed TopicNet using latent Dirichlet allocation (LDA) to extract latent gene communities and quantify regulatory network connectivity changes (network “rewiring”) between diverse cell states. We applied our TopicNet model on 13 different cancer types and highlighted gene communities that impact patient prognosis in multiple cancer types.

Bio: Dr. Zhang is an Assistant Professor at UCI. Her research interests are in the areas of bioinformatics and computational biology. She graduated from USC in Electrical Engineering under the supervision of Dr. Liang Chen and Dr. C.-C. Jay Kuo. She completed her postdoc training at Yale University in Dr. Mark Gerstein’s lab. During her postdoc, she developed several computational methods that integrate novel high-throughput sequencing assays to decipher the gene regulation “grammar”. Her current research focuses on developing computational methods to predict the impact of genomic variations on genome function and phenotype at single-cell resolution.
May 31
No Seminar (Memorial Day)
June 7th
No Seminar (Finals Week)

Winter 2021

Standard

Live Stream for all Winter 2021 CML Seminars

Jan. 4
No Seminar
Jan. 11
Live Stream
1 pm

Florian Wenzel

Postdoctoral Researcher
Google Brain Berlin

YouTube Stream: https://youtu.be/9n8_5tjt_Lw

Deep learning models are bad at detecting their own failures. They tend to make over-confident mistakes, especially under distribution shift. Making deep learning more reliable is important in safety-critical applications including health care, self-driving cars, and recommender systems. We discuss two approaches to reliable deep learning. First, we will focus on Bayesian neural networks, which come with many promises of improved uncertainty estimation. However, why are they rarely used in industrial practice? In this talk, we will cast doubt on the current understanding of Bayes posteriors in deep networks. We show that Bayesian neural networks can be improved significantly through the use of a “cold posterior” that overcounts evidence and hence sharply deviates from the Bayesian paradigm. We will discuss several hypotheses that could explain cold posteriors. In the second part, we will discuss a classical approach to more robust predictions: ensembles. Deep ensembles combine the predictions of models trained from different initializations. We will show that the diversity of predictions can be improved by considering models with different hyperparameters. Finally, we present an efficient method that leverages hyperparameter diversity within a single model.
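For reference, the "cold posterior" discussed in the first part can be stated in one line (standard tempered-posterior notation; this is background, not a result from the talk):

```latex
\[
  p_T(\theta \mid \mathcal{D}) \;\propto\; \exp\!\big(-U(\theta)/T\big),
  \qquad
  U(\theta) \;=\; -\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) \;-\; \log p(\theta),
\]
```

where T = 1 recovers the Bayes posterior and temperatures T < 1 ("cold") sharpen it by overcounting the evidence; the empirical puzzle is that such cold posteriors often predict better than the T = 1 posterior.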

Bio: Florian Wenzel is a machine learning researcher who is currently on the job market. His research has focused on probabilistic deep learning, uncertainty estimation, and scalable inference methods. From October 2019 to October 2020 he was a postdoctoral researcher at Google Brain. He received his PhD from Humboldt University in Berlin and worked with Marius Kloft, Stephan Mandt, and Manfred Opper.
Jan. 18
No Seminar (Martin Luther King, Jr. Holiday)
Jan. 25
Live Stream
1 pm

Yezhou Yang

Assistant Professor
School of Computing, Informatics, and Decision Systems Engineering
Arizona State University

YouTube Stream: https://youtu.be/IcSUBZraB3s

The goal of Computer Vision, as coined by Marr, is to develop algorithms to answer What are Where at When from visual appearance. The speaker, among others, recognizes the importance of studying underlying entities and relations beyond visual appearance, following an Active Perception paradigm. This talk will present the speaker’s efforts over the last decade, ranging from 1) reasoning beyond appearance for visual question answering, image understanding, and video captioning tasks, through 2) temporal knowledge distillation with incremental knowledge transfer, to 3) their roles in a robotic visual learning framework via a Robotic Indoor Object Search task. The talk will also feature the Active Perception Group (APG)’s ongoing projects (NSF RI, NRI and CPS, DARPA KAIROS, and Arizona IAM) addressing emerging challenges of the nation in autonomous driving, AI security and healthcare domains, at the ASU School of Computing, Informatics, and Decision Systems Engineering (CIDSE).

Bio: Yezhou Yang is an Assistant Professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University, where he directs the ASU Active Perception Group. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and high-level reasoning over the primitives for intelligent robots. Before joining ASU, Dr. Yang was a Postdoctoral Research Associate at the Computer Vision Lab and the Perception and Robotics Lab of the University of Maryland Institute for Advanced Computer Studies. He is a recipient of the Qualcomm Innovation Fellowship 2011, the NSF CAREER award 2018, and the Amazon AWS Machine Learning Research Award 2019. He received his Ph.D. from the University of Maryland, College Park, and his B.E. from Zhejiang University, China.
Feb. 1
Live Stream
1 pm

Joe Marino

PhD Student
Computation and Neural Systems
California Institute of Technology

YouTube Stream: https://youtu.be/iVz6uwD7i6A

Unsupervised machine learning has recently dramatically improved our ability to model and extract structure from data. One such approach is deep latent variable models, which includes variational autoencoders (VAEs) [Kingma & Welling, 2014; Rezende et al., 2014]. These models can be traced back to the Helmholtz machine [Dayan et al., 1995], which, in turn, was inspired by ideas from theoretical neuroscience [Mumford, 1992]. In the intervening years, neuroscientists have further developed these ideas into a popular theory: predictive coding [Rao & Ballard, 1999; Friston, 2005]. Yet, the machine learning community remains largely unaware of these connections. In this talk, I discuss the links between modern deep latent variable models and predictive coding, yielding several striking implications for the correspondences between machine learning and neuroscience. This motivates a more nuanced view in connecting these fields, including the search for backpropagation in the brain.

Bio: Joe Marino is a PhD candidate in the Computation & Neural Systems program at Caltech, advised by Yisong Yue. His work focuses on improving probabilistic models and inference techniques, using neuroscience-inspired ideas, within the areas of generative modeling and reinforcement learning.
Feb. 8
Live Stream
1 pm

Junkyu Lee

AI Planning Group
IBM Research

YouTube Stream: https://youtu.be/p7X-L1T9ULk

Influence diagrams (IDs) extend Bayesian networks with decision variables and utility functions to model the interaction between an agent and a system and to capture the agent’s preferences. The standard task in IDs is to compute the maximum expected utility (MEU) over the influence diagram and the corresponding optimal policies. However, it is the most challenging task in graphical models. Therefore, computing upper bounds on the MEU is desirable because such bounds can facilitate anytime solutions by acting as heuristics to guide search or sampling-based methods. In this talk, I will present bounding schemes for solving IDs. The first approach builds on top of the tree decomposition scheme in probabilistic graphical models and extends variational decomposition bounds for marginal MAP. The second approach is a new tree decomposition method called submodel tree decomposition. The empirical evaluation shows that the presented bounding schemes generate upper bounds that are orders of magnitude tighter than those of previous methods. Finally, I will conclude the talk with future directions.
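Stated compactly, the MEU task referred to above is (standard definition; the notation is mine):

```latex
\[
  \mathrm{MEU} \;=\; \max_{\Delta}\; \mathbb{E}_{P,\,\Delta}\Big[ \sum_{k} U_k \Big],
\]
```

where the expectation is over the chance variables' conditional distributions P and the policy Delta maps each decision variable's observed parents to an action; upper bounds on this quantity can act as admissible heuristics for search or sampling.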

Bio: Junkyu Lee received his Ph.D. from the CS department at UC Irvine, where Rina Dechter supervised him. Currently, he is a resident at the IBM Research AI planning group. His research focuses on graphical model inference and heuristic search for sequential decision making under uncertainty. He is also broadly interested in related areas such as planning and reinforcement learning.
Feb. 15
No Seminar (Presidents’ Holiday)
Feb. 22
No Seminar
March 1
Live Stream
1 pm

Robert Logan

PhD Student
Department of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/Mim1pmEn1UU

Recent progress in natural language processing (NLP) has been predominantly driven by the advent of large neural language models (e.g., GPT-2 and BERT) that are “pretrained” using a self-supervised learning objective on billions of tokens of text before being “finetuned” (i.e., transferred) to downstream tasks. The exceptional success of these models has motivated many NLP researchers to study what exactly these models are learning during pretraining that causes them to be more successful than their non-self-supervised counterparts. In this talk, we will describe the technique of prompting, an approach that answers this question by reformulating tasks as fill-in-the-blanks questions. We will begin by showing how prompts can be used to measure the amount of factual, linguistic, and task-specific knowledge contained in language models. We will then introduce an approach for automatically constructing prompts based on gradient-guided search that provides a scalable alternative to manually writing prompts by hand. Lastly, we will cover our ongoing work investigating whether prompting can be used as a replacement for finetuning of language models, describing some early results that demonstrate that prompting can indeed be more effective in few-shot learning scenarios while being substantially more parameter efficient.
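As a small, concrete illustration of the fill-in-the-blank idea (using the Hugging Face transformers library as an example; the talk's own probing setup and automatic prompt search are more involved):

```python
from transformers import pipeline

# Probe a masked language model with a cloze-style prompt.
fill = pipeline("fill-mask", model="bert-base-uncased")

for pred in fill("The capital of France is [MASK].", top_k=3):
    print(f"{pred['token_str']:>10s}  score={pred['score']:.3f}")
```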

Bio: Robert L. Logan IV is a 4th year PhD Candidate at UC Irvine, co-advised by Sameer Singh and Padhraic Smyth. His research focuses on leveraging external knowledge sources to measure and improve NLP models’ ability to reason with factual and common sense knowledge. He was selected as a Noyce Fellow and has been awarded the 2020 Rose Hills Foundation Scholarship. Robert received his B.A. in mathematics at the University of California, Santa Cruz, and has held research positions at Google and Diffbot.
March 8
No Seminar
March 15
Finals Week

Fall 2020

Standard

Live Stream for all Fall 2020 CML Seminars

Oct 5
No Seminar
Oct 12
Live Stream
1 pm

Forest Agostinelli

Assistant Professor
Computer Science and Engineering
University of South Carolina

YouTube Stream: https://youtu.be/shwYW9yEAIQ

Combination puzzles, such as the Rubik’s cube, pose unique challenges for artificial intelligence. Furthermore, solutions to such puzzles are directly linked to problems in the natural sciences. In this talk, I will present DeepCubeA, a deep reinforcement learning and search algorithm that can solve the Rubik’s cube, and six other puzzles, without domain specific knowledge. Next, I will discuss how solving combination puzzles opens up new possibilities for solving problems in the natural sciences. Finally, I will show how problems we encounter in the natural sciences motivate future research directions in areas such as theorem proving and education. A demonstration of our work can be seen at http://deepcube.igb.uci.edu/.
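A rough sketch of the value-iteration-style target behind this kind of solver (my paraphrase, with unit move costs assumed; `neighbors`, `is_goal`, and `j_hat` are placeholders for the puzzle's move generator, goal test, and current cost-to-go network):

```python
def davi_targets(states, neighbors, is_goal, j_hat):
    """For each scrambled state s, the regression target for the cost-to-go
    network is min over moves of 1 + J(s'), with J fixed to 0 at the goal."""
    targets = []
    for s in states:
        if is_goal(s):
            targets.append(0.0)
            continue
        targets.append(min(1.0 + (0.0 if is_goal(s_next) else j_hat(s_next))
                           for s_next in neighbors(s)))
    return targets
```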

Bio: Forest Agostinelli is an assistant professor at the University of South Carolina. He received his B.S. from the Ohio State University, his M.S. from the University of Michigan, and his Ph.D. from UC, Irvine under Professor Pierre Baldi. His research interests include deep learning, reinforcement learning, search, bioinformatics, neuroscience, and chemistry.
Oct 19
Live Stream
1 pm

Stephan Mandt

Assistant Professor
Dept. of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/Z8juQKrCkmk

Neural image compression algorithms have recently outperformed their classical counterparts in rate-distortion performance and show great potential to also revolutionize video coding. In this talk, I will show how innovations from Bayesian machine learning and generative modeling can lead to dramatic performance improvements in compression. In particular, I will explain how sequential variational autoencoders can be converted into video codecs, how deep latent variable models can be compressed in post-processing with variable bitrates, and how iterative amortized inference can be used to achieve the world record in image compression performance.

Bio: Stephan Mandt is an Assistant Professor of Computer Science at the University of California, Irvine. From 2016 until 2018, he was a Senior Researcher and Head of the statistical machine learning group at Disney Research, first in Pittsburgh and later in Los Angeles. He held previous postdoctoral positions at Columbia University and Princeton University. Stephan holds a Ph.D. in Theoretical Physics from the University of Cologne. He is a Fellow of the German National Merit Foundation, a Kavli Fellow of the U.S. National Academy of Sciences, and was a visiting researcher at Google Brain. Stephan regularly serves as an Area Chair for NeurIPS, ICML, AAAI, and ICLR, and is a member of the Editorial Board of JMLR. His research is currently supported by NSF, DARPA, Intel, and Qualcomm.
Oct 26
Live Stream
1 pm

Christoph Lippert

Professor
Hasso Plattner Institute
University of Potsdam

YouTube Stream: https://youtu.be/zElgAKf4AhE

At the Chair of Digital Health & Machine Learning, we are developing methods for the statistical analysis of large biomedical data. In particular, imaging provides a powerful means for measuring phenotypic information at scale. While images are abundantly available in large repositories such as the UK Biobank, the analysis of imaging data poses new challenges for statistical methods development. In this talk, I will give an overview of some of our current efforts in using deep representation learning as a non-parametric way to model imaging phenotypes and to associate images with the genome.

References:
Kirchler, M., Khorasani, S., Kloft, M., & Lippert, C. (2020, June). Two-sample testing using deep learning. In International Conference on Artificial Intelligence and Statistics (pp. 1387-1398). PMLR.
Kirchler, M., Konigroski, S., Schurmann, C., Norden, M., Meltendorf, C., Kloft, M., Lippert, C. transferGWAS: GWAS of images using deep transfer learning. Manuscript in preparation.
Bio: Lippert studied bioinformatics from 2001–2008 in Munich and went on to earn his doctorate at the Max Planck Institutes for Intelligent Systems and for Developmental Biology in Tübingen, working on machine learning for bioinformatics with an emphasis on methods for genome-wide association studies. In 2012, he accepted a researcher position at Microsoft Research in Los Angeles and subsequently carried out work at Human Longevity, Inc. in Mountain View. In 2017, Lippert returned to Germany to head the research group “Statistical Genomics” at the Max Delbrück Center for Molecular Medicine in Berlin. In 2018, Lippert was appointed Full Professor of “Digital Health & Machine Learning” in the joint Digital Engineering Faculty of the Hasso Plattner Institute and the University of Potsdam.
Nov 2
Live Stream
1 pm

Cory Scott

PhD Student
Dept. of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/CpGfCA92rMw

Microtubules are a primary constituent of the dynamic cytoskeleton in living cells, involved in many cellular processes whose study would benefit from scalable dynamic computational models. We define a novel machine learning model which aggregates information across multiple spatial scales to predict energy potentials measured from a simulation of a section of microtubule. Using projection operators which optimize an objective function related to the diffusion kernel of a graph, we sum information from local neighborhoods. This process is repeated recursively until the coarsest scale, and all scales are separately used as the input to a Graph Convolutional Network, forming our novel architecture: the Graph Prolongation Convolutional Network (GPCN). The GPCN outputs a prediction for each spatial scale, and these are combined using the inverse of the optimized projections. This fine-to-coarse mapping, and its inverse, create a model which is able to learn to predict energetic potentials more efficiently than other GCN ensembles which do not leverage multiscale information. We also compare the effect of training this ensemble in a coarse-to-fine fashion, and find that schedules adapted from the Algebraic Multigrid (AMG) literature further increase this efficiency. Since forces are derivatives of energies, we discuss the implications of this type of model for machine learning of multiscale molecular dynamics.

Reference: C.B. Scott and Eric Mjolsness. “Graph Prolongation Convolutional Networks: Explicitly Multiscale Machine Learning on Graphs with Applications to Modeling of Cytoskeleton”. In: Machine Learning: Science and Technology (2020). DOI: https://iopscience.iop.org/article/10.1088/2632-2153/abb6d2
Nov 9
Live Stream
1 pm

Lukas Ruff

PhD Student
Electrical Engineering and Computer Science
TU Berlin

YouTube Stream: https://youtu.be/Uncc5y7g8Is

Anomaly detection is the problem of identifying unusual observations in data. This problem is usually unsupervised and occurs in numerous applications such as industrial fault and damage detection, fraud detection in finance and insurance, intrusion detection in cybersecurity, scientific discovery, or medical diagnosis and disease detection. Many of these applications involve complex data such as images, text, graphs, or biological sequences, that is continually growing in size. This has sparked a great interest in developing deep learning approaches to anomaly detection.
In this talk, my aim is to provide a systematic and unifying overview of deep anomaly detection methods. We will discuss methods based on reconstruction, generative modeling, and one-class classification, where we identify common underlying principles and draw connections between traditional ‘shallow’ and novel deep methods. Furthermore, we will cover recent developments, including weakly and self-supervised approaches as well as model-explanation techniques that make it possible to reveal ‘Clever Hans’ detectors. Finally, I will conclude the talk by highlighting some open challenges and potential paths for future research.
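As one concrete instance of the one-class classification family mentioned above, here is a minimal PyTorch-style sketch in the spirit of Deep SVDD (Ruff et al., 2018); the encoder and center are assumptions of the sketch, not a full implementation:

```python
import torch

def deep_svdd_loss(encoder, x, center, weight_decay=1e-6):
    """Pull normal training points toward a fixed center c in feature space;
    at test time, the anomaly score is the distance to c (simplified)."""
    z = encoder(x)                                   # (B, d) embeddings
    dist = ((z - center) ** 2).sum(dim=1)            # squared distance to c
    reg = sum((p ** 2).sum() for p in encoder.parameters())
    return dist.mean() + weight_decay * reg

def anomaly_score(encoder, x, center):
    with torch.no_grad():
        return ((encoder(x) - center) ** 2).sum(dim=1)
```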

Bio: Lukas Ruff is a third year PhD student in the Machine Learning Group headed by Klaus-Robert Müller at TU Berlin. His research covers robust and trustworthy machine learning, with a specific focus on deep anomaly detection. Lukas received a B.Sc. degree in Mathematical Finance from the University of Konstanz in 2015 and a joint M.Sc. degree in Statistics from HU, TU and FU Berlin in 2017.
Nov 16
Live Stream
1 pm

Karem Sakallah

Professor
Electrical Engineering and Computer Science
University of Michigan

YouTube Stream: https://youtu.be/5A5dTRo50EQ

Accidental research is when you’re an expert in some domain and seek to solve problem A in that domain. You soon discover that to solve A you need to also solve B which, however, comes from a domain in which you have little, or even no, expertise. You, thus, explore existing solutions to B but are disappointed to find that they just aren’t up to the task of solving A. Your options at this point are a) to abandon this futile project, or b) to try and find a solution to B that will help you solve A. While this might seem like a fool’s errand, you have the advantage over B experts of being unencumbered by their experience. You are a novice who does not, yet, appreciate the complexity of B, but are able to explore it from a fresh perspective. You also bring along expertise from your own domain to connect what you know with what you hope to learn. If you’re lucky, you may succeed in finding a solution to B that helps you solve A.
I will relate two cases in which this scenario played out: developing the GRASP conflict-driven clause-learning SAT solver in the context of performing timing analysis of very large scale integrated circuits, and developing the saucy graph automorphism program to find and break symmetries in large SAT problems. Ironically, in both cases solving problem B (GRASP, saucy) turned out to be much more impactful than solving problem A (timing analysis, breaking symmetries). Without the trigger of problem A, however, neither GRASP nor saucy would have been conceived.

Bio: Karem A. Sakallah is a Professor of Electrical Engineering and Computer Science at the University of Michigan. He received the B.E. degree in electrical engineering from the American University of Beirut and the M.S. and Ph.D. degrees in electrical and computer engineering from Carnegie Mellon University. Prior to joining the University of Michigan, he headed the Analysis and Simulation Advanced Development Team at Digital Equipment Corporation. Besides his academic duties, he has served in a variety of professional roles including the establishment of a computing research institute in Qatar for which he took a leave to serve a term of three years as the Chief Scientist. His current research is focused on automating the formal verification of hardware, software, and distributed protocols. He is a fellow of the IEEE and the ACM and a co-recipient of the prestigious Computer-Aided Verification Award for “Fundamental contributions to the development of high-performance Boolean satisfiability solvers.”
Nov 23
Live Stream
1 pm

Ioannis Panageas

Assistant Professor
Dept. of Computer Science
University of California, Irvine

YouTube Stream: https://youtu.be/4cepfWDiL3A

In this talk we will give an overview of some results on the limiting behavior of first-order methods. In particular we will show that typical instantiations of first-order methods like gradient descent, coordinate descent, etc. avoid saddle points for almost all initializations. Moreover, we will provide applications of these results on Non-negative Matrix Factorization. The takeaway message is that such algorithms can be studied from a dynamical systems perspective in which appropriate instantiations of the Stable Manifold Theorem allow for a global stability analysis.
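A toy numerical illustration of the claim (mine, not from the talk): on f(x, y) = (x² − y²)/2 the origin is a saddle; gradient descent started exactly on the stable manifold {y = 0} converges to it, while any perturbation off that measure-zero set escapes.

```python
import numpy as np

def grad(p):
    x, y = p
    return np.array([x, -y])            # gradient of f(x, y) = (x**2 - y**2) / 2

for y0 in (0.0, 1e-8):
    p = np.array([1.0, y0])
    for _ in range(200):
        p = p - 0.1 * grad(p)           # plain gradient descent, step size 0.1
    print(f"y0 = {y0:.0e}  ->  final iterate {np.round(p, 4)}")
```

With y0 = 0 the iterates collapse onto the saddle; with y0 = 1e-8 the y-coordinate grows geometrically and the trajectory leaves the saddle's neighborhood.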

Bio: Ioannis is an Assistant Professor of Computer Science at UCI. He is interested in the theory of computation, machine learning and its interface with non-convex optimization, dynamical systems, probability and statistics. Before joining UCI, he was an Assistant Professor at the Singapore University of Technology and Design. Prior to that, he was an MIT postdoctoral fellow working with Constantinos Daskalakis. He received his PhD in Algorithms, Combinatorics and Optimization from Georgia Tech in 2016, a Diploma in EECS from the National Technical University of Athens, and an M.Sc. in Mathematics from Georgia Tech. He is the recipient of the 2019 NRF fellowship for AI.
Nov 30
Live Stream
1 pm

Deqing Sun

Senior Research Scientist
Google

YouTube Stream: https://youtu.be/N3y_K1ewkL0

Optical flow provides important motion information about the dynamic world and is of fundamental importance to many tasks. As with other visual inference problems, it is critical to choose a representation that encodes both the forward formation process and prior knowledge of optical flow. In this talk, I will present my work on two different optical flow representations over the past decade. First, I will describe learning Markov random field (MRF) models and defining non-local conditional random field (CRF) models to recover motion boundaries. Second, I will talk about combining domain knowledge of optical flow with convolutional neural networks (CNNs) to develop a compact and effective model, and I will discuss some recent developments.
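For context, the classical energy that such MRF/CRF formulations build on is the standard brightness-constancy-plus-smoothness objective (textbook form; the notation is mine):

```latex
\[
  E(u, v) \;=\; \sum_{\mathbf{x}} \rho_D\!\big( I_1(\mathbf{x}) - I_2(\mathbf{x} + \mathbf{w}(\mathbf{x})) \big)
  \;+\; \lambda \sum_{\mathbf{x}} \rho_S\!\big( \nabla u(\mathbf{x}), \nabla v(\mathbf{x}) \big),
\]
```

where w = (u, v) is the flow field, rho_D and rho_S are robust penalty functions, and lambda weights the smoothness prior; non-local CRF terms extend the second sum beyond immediate neighbors.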

Bio: Deqing Sun is a senior research scientist at Google working on computer vision and machine learning. He received a Ph.D. degree in Computer Science from Brown University. He is a recipient of the PAMI Young Researcher award in 2020, the Longuet-Higgins prize at CVPR 2020, the best paper honorable mention award at CVPR 2018, and the first prize in the robust optical flow competition at CVPR 2018 and ECCV 2020. He served as an area chair for CVPR/ECCV/BMVC, and co-organized several workshops/tutorials at CVPR, ECCV, and SIGGRAPH.
Dec 7
No Seminar (NeurIPS Conference)
Dec 14
Finals week

Winter 2020

Standard

Spring 2020 Seminars Delayed

Following UCI guidance to limit social interactions during the COVID-19 outbreak, our CML seminar series is cancelled for the start of spring quarter. We hope to rejoin you later this year.


Jan. 6
No Seminar
Jan. 13
4011
Bren Hall
1 pm

Michael Campbell
Eureka (SAP)

We develop the rational dynamics for the long-term investor among boundedly rational speculators in the Carfì-Musolino speculative and hedging model. Numerical evidence is given that indicates there are various phases determined by the degree of non-rational behavior of speculators. The dynamics are shown to be influenced by speculator “noise”. This model has two types of operators: a real economic subject (Air, a long-term trader) and one or more investment banks (Bank, short-term speculators). It also has two markets: an oil spot market and U.S. dollar futures. Bank agents react to Air and equilibrate much more quickly than Air; thus, we consider rational, best-local-response dynamics for Air based on averaged values of equilibrated Bank variables. The averaged Bank variables are effectively parameters for Air dynamics that depend on deviations-from-rationality (temperature) and Air investment (external field). At zero field, below a critical temperature, there is a phase transition in the speculator system which creates two equilibria for Bank variables; hence, in this regime the parameters for the dynamics of the long-term investor Air can undergo a rapid change, which is exactly what happens in the study of quenched dynamics for physical systems. It is also shown that large changes in strategy by the long-term Air investor are always preceded by diverging spatial volatility of Bank speculators. The phases resemble those for unemployment in the “Mark 0” macroeconomic model.
Jan. 20
Martin Luther King Junior Day
Jan. 27
No Seminar
Feb. 3
4011
Bren Hall
1 pm

Phanwadee Sinthong

Computer Science
University of California, Irvine

Analyzing the increasingly large volumes of data that are available today, possibly including the application of custom machine learning models, requires the use of distributed frameworks. This can result in serious productivity issues for “normal” data scientists. We introduce AFrame, a new scalable data analysis package powered by a Big Data management system that extends the data scientists’ familiar DataFrame operations to efficiently operate on managed data at scale. AFrame is implemented as a layer on top of Apache AsterixDB, transparently scaling out the execution of DataFrame operations and machine learning model invocation through a parallel, shared-nothing big data management system. AFrame allows users to interact with a very large volume of semi-structured data in the same way that Pandas DataFrames work against locally stored tabular data. Our AFrame prototype leverages lazy evaluation: AFrame operations are incrementally translated into AsterixDB SQL++ queries that are executed only when final results are called for. To evaluate our proposed approach, we also introduce an extensible micro-benchmark for assessing DataFrame performance in both single-node and distributed settings via a collection of representative analytic operations.

Bio: Phanwadee (Gift) Sinthong is a fourth-year Ph.D. student in the CS Department at UC Irvine, advised by Professor Michael Carey. Her research interests are broadly in data management and distributed computation. Her current project is to deliver a scale-independent data science platform by incorporating database management capabilities with existing data science technologies to help support and enhance big data analysis.
Feb. 10
4011
Bren Hall
1 pm

Mingzhang Yin

Statistics and Data Sciences
University of Texas, Austin

Uncertainty estimation is one of the most distinctive features of biological systems, as we have to sense and act in noisy environments. In this talk, I will introduce semi-implicit variational inference (SIVI) as a new machine-learning framework to achieve accurate uncertainty estimation in general latent variable models. A semi-implicit distribution is introduced to expand the commonly used analytic variational family by mixing the variational parameters with a highly flexible distribution. To cope with this new distribution family, a novel evidence lower bound is derived to achieve accurate statistical inference. The theoretical properties of the proposed methods will be introduced from an information-theoretic perspective. With a substantially expanded variational family and a novel optimization algorithm, SIVI is shown to closely match the accuracy of MCMC in inferring the posterior while maintaining the merits of variational methods in a variety of Bayesian inference tasks.
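A minimal sketch of what a semi-implicit variational family looks like (illustrative PyTorch code under my own assumptions; SIVI's surrogate lower bound is not shown):

```python
import torch
import torch.nn as nn

class SemiImplicitQ(nn.Module):
    """The Gaussian layer q(z | psi) = N(z; psi, sigma^2 I) is explicit, but its
    mean psi is drawn from an implicit distribution defined by pushing noise
    through a neural network (the 'mixing' distribution)."""

    def __init__(self, noise_dim, z_dim, hidden=64):
        super().__init__()
        self.mixer = nn.Sequential(nn.Linear(noise_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, z_dim))
        self.log_sigma = nn.Parameter(torch.zeros(z_dim))
        self.noise_dim = noise_dim

    def sample(self, n):
        eps = torch.randn(n, self.noise_dim)
        psi = self.mixer(eps)                                  # implicit mixing draw
        return psi + self.log_sigma.exp() * torch.randn_like(psi)  # explicit layer
```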

Bio: Mingzhang Yin is a fifth year Ph.D. student in statistics at UT Austin. His research centers around Bayesian methods and machine learning, with a focus on approximate inference and structured data modeling.
Feb. 17
Presidents’ Day
Feb. 24
4011
Bren Hall
1 pm

Jaan Altosaar

Physics Department
Princeton University

Applied machine learning relies on translating the structure of a problem into a computational model. This arises in applications as diverse as statistical physics and food recommendation. The pattern of connectivity in an undirected graphical model or the fact that datapoints in food recommendation are unordered collections of features can inform the structure of a model. First, consider undirected graphical models from statistical physics like the ubiquitous Ising model. Basic research in statistical physics requires accurate and scalable simulations for comparing the behavior of these models to their experimental counterparts. The Ising model consists of binary random variables with local connectivity; interactions between neighboring nodes can lead to long-range correlations. Modeling these correlations is necessary to capture physical phenomena such as phase transitions. To mirror the local structure of these models, we use flow-based convolutional generative models that can capture long-range correlations. Combining flow-based models designed for continuous variables with recent work on hierarchical variational approximations enables the modeling of discrete random variables. Compared to existing variational inference methods, this approach scales to statistical physics models with tens of thousands of correlated random variables and uses fewer parameters. Just as computational choices can be made by considering the structure of an undirected graphical model, model construction itself can be guided by the structure of individual datapoints. Consider a recommendation task where datapoints consist of unordered sets, and the objective is to maximize top-K recall, a common recommendation metric. Simple results show that a classifier with zero worst-case error achieves maximum top-K recall. Further, the unordered structure of the data suggests the use of a permutation-invariant classifier for statistical and computational efficiency. We evaluate this recommendation model on a dataset of 55k users logging 16M meals on a food tracking app, where every meal is an unordered collection of ingredients. On this data, permutation-invariant classifiers outperform probabilistic matrix factorization methods.
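For the recommendation part, here is a minimal sketch of a permutation-invariant classifier over unordered ingredient sets (an illustrative architecture consistent with the description above, not the paper's exact model): embed each ingredient, pool by summation so the order of ingredients cannot matter, then score candidate items; top-K recall is read off the K highest logits.

```python
import torch
import torch.nn as nn

class MealClassifier(nn.Module):
    def __init__(self, n_ingredients, n_items, dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_ingredients, dim)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, n_items))

    def forward(self, ingredient_ids):                    # (B, set_size) integer ids
        pooled = self.embed(ingredient_ids).sum(dim=1)    # sum-pooling => invariance
        return self.head(pooled)                          # logits over candidate items
```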

Bio: Jaan Altosaar is a PhD Candidate in the Physics department at Princeton University where he is advised by David Blei and Shivaji Sondhi. He is a visiting academic at the Center for Data Science at New York University, where he works with Kyle Cranmer. His research focuses on machine learning methodology such as developing Bayesian deep learning techniques or variational inference methods for statistical physics. Prior to Princeton, Jaan earned his BSc in Mathematics and Physics from McGill University. He has interned at Google Brain and DeepMind, and his work has been supported by fellowships from the Natural Sciences and Engineering Research Council of Canada.
Mar. 2
6011
Bren Hall
1 pm

Oren Etzioni

CEO, Allen Institute for Artificial Intelligence (AI2)

Could we wake up one morning to find that AI is poised to take over the world? Is AI the technology of unfairness and bias? My talk will assess these concerns, and sketch a more optimistic view. We will have ample warning before the emergence of superintelligence, and in the meantime we have the opportunity to create Beneficial AI:
(1) AI that mitigates bias rather than amplifying it.
(2) AI that saves lives rather than taking them.
(3) AI that helps us to solve humanity’s thorniest problems.
My talk builds on work at the Allen Institute for AI, a non-profit research institute based in Seattle.

Bio: Oren Etzioni launched the Allen Institute for AI and has served as its CEO since 2014. He has been a Professor in the University of Washington's Computer Science Department since 1991, publishing papers that have garnered over 2,300 highly influential citations on Semantic Scholar. He is also the founder of several startups, including Farecast (acquired by Microsoft in 2008).
Mar. 9
4011
Bren Hall
12 pm

Ioannis Panageas

Singapore University of Technology and Design

Understanding the representational power of Deep Neural Networks (DNNs), and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute, has been an important yet challenging question in deep learning and approximation theory. In a seminal paper, Telgarsky highlighted the benefits of depth by presenting a family of functions (based on simple triangular waves) for which DNNs achieve zero classification error, whereas shallow networks with fewer than exponentially many nodes incur constant error. Even though Telgarsky's work reveals the limitations of shallow neural networks, it does not explain why these functions are difficult to represent; in fact, he states it as a tantalizing open question to characterize those functions that cannot be well approximated by smaller depths. In this talk, we will point to a new connection between DNN expressivity and Sharkovsky's Theorem from dynamical systems, which enables us to characterize the depth-width trade-offs of ReLU networks for representing functions based on the presence of a generalized notion of fixed points, called periodic points (a fixed point is a point of period 1). Motivated by our observation that the triangle waves used in Telgarsky's work contain points of period 3 – a period that is special in that it implies chaotic behavior, by the celebrated result of Li and Yorke – we will give general lower bounds on the width needed to represent periodic functions as a function of the depth. Technically, the crux of our approach is an eigenvalue analysis of the dynamical system associated with such functions.
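The period-3 phenomenon the abstract alludes to can be checked directly on the triangle wave itself. The snippet below is my own illustration (not material from the talk): it verifies that the tent map has the period-3 orbit 2/7 -> 4/7 -> 6/7, precisely the property that, by the Li-Yorke result and Sharkovsky's theorem, forces points of every period.

from fractions import Fraction

def tent(x):
    """Tent map t(x) = 2x for x <= 1/2 and 2(1 - x) otherwise: one triangular wave."""
    return 2 * x if x <= Fraction(1, 2) else 2 * (1 - x)

x0 = Fraction(2, 7)
orbit = [x0]
for _ in range(2):
    orbit.append(tent(orbit[-1]))

print(orbit)                  # [Fraction(2, 7), Fraction(4, 7), Fraction(6, 7)]
print(tent(orbit[-1]) == x0)  # True: the point returns to itself after exactly 3 steps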

Bio: Ioannis Panageas has been an Assistant Professor in the Information Systems Department at SUTD since September 2018. Prior to that, he was an MIT postdoctoral fellow working with Constantinos Daskalakis. He received his PhD in Algorithms, Combinatorics and Optimization from the Georgia Institute of Technology in 2016, a Diploma in EECS from the National Technical University of Athens (summa cum laude), and an M.Sc. in Mathematics from the Georgia Institute of Technology. His work lies at the intersection of optimization, probability, learning theory, dynamical systems, and algorithms. He is the recipient of the 2019 NRF Fellowship for AI (the analogue of the NSF CAREER award).
Mar. 16
Finals Week
Mar. 23
Spring Break
TBD
4011
Bren Hall

Qiang Ning

Allen Institute for AI

The era of information explosion has opened up an unprecedented opportunity to study the social, political, financial, and medical events described in natural language text. While the past decades have seen significant progress in deep learning and natural language processing (NLP), it is still extremely difficult to analyze textual data at the event level, e.g., to understand what is going on, what the causes and impacts are, and how things will unfold over time.
In this talk, I will mainly focus on a key component of event understanding: temporal relations. Understanding temporal relations is challenging due to the lack of explicit timestamps in natural language text, the strong dependence on background knowledge, and the difficulty of collecting high-quality annotations to train models. I will present a series of results addressing these problems from the perspectives of structured learning, commonsense knowledge acquisition, and data annotation. These efforts culminated in improving the state of the art by approximately 20% in absolute F1. I will also discuss recent results on other aspects of event understanding and the incidental supervision paradigm. I will conclude my talk by describing my vision for future directions towards building next-generation event-based NLP techniques.
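To make the structured-learning ingredient concrete, here is a toy sketch (hypothetical events and scores of my own invention, not the speaker's system): pairwise before/after predictions are kept globally consistent by scoring whole timelines, so transitivity holds by construction rather than being checked after the fact.

from itertools import permutations

events = ["explosion", "evacuation", "investigation"]

# Hypothetical local model scores for each ordered pair: confidence that the
# first event is BEFORE or AFTER the second. The pairwise argmaxes alone are
# inconsistent here (they cannot all hold on a single timeline).
scores = {
    ("explosion", "evacuation"):     {"BEFORE": 0.9, "AFTER": 0.1},
    ("explosion", "investigation"):  {"BEFORE": 0.4, "AFTER": 0.6},
    ("evacuation", "investigation"): {"BEFORE": 0.8, "AFTER": 0.2},
}

def timeline_score(order):
    """Total score of the pairwise labels induced by a global ordering of events."""
    rank = {event: position for position, event in enumerate(order)}
    return sum(s["BEFORE"] if rank[a] < rank[b] else s["AFTER"]
               for (a, b), s in scores.items())

# Enumerating global orderings guarantees transitivity by construction; real
# systems replace this brute force with ILP or other structured inference.
best = max(permutations(events), key=timeline_score)
print(best)  # ('explosion', 'evacuation', 'investigation')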

Bio: Qiang Ning is a research scientist on the AllenNLP team at the Allen Institute for AI (AI2). Qiang received his Ph.D. in Dec. 2019 from the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign (UIUC). He obtained his master's degree in biomedical imaging from the same department in May 2016. Before coming to the United States, Qiang obtained two bachelor's degrees from Tsinghua University in 2013, in Electronic Engineering and in Economics, respectively. He was recognized as an "Excellent Teacher Ranked by Their Students" across the university at UIUC in 2017, received the YEE Fellowship in 2015, was a finalist for the best paper award at IEEE ISBI'15, and won the National Scholarship at Tsinghua University in 2012.