AI/ML Seminar Series


Weekly Seminar in AI & Machine Learning
Sponsored by Cylance

Jan. 30
DBH 4011
1 pm

Maarten Bos

Lead Research Scientist
Snap Research

Corporate research labs aim to push the forefront of scientific and technological innovation outside traditional academia. Snap Inc. bridges academia and industry by hiring academic researchers and conducting application-driven research. In this talk I will give examples of research projects from my corporate research experience. My goal is to showcase the value of, and hurdles to, working both with and within corporate research labs, and how some of these values and hurdles differ from those of traditional academia.

Bio: Maarten Bos is a Lead Research Scientist at Snap Inc. After receiving his PhD in The Netherlands and postdoctoral training at Harvard University, he led a behavioral science group at Disney Research before joining Snap in 2018. His research interests span decision science, persuasion, and human-technology interaction. His work has been published in journals such as Science, Psychological Science, and the Journal of Marketing Research, and has been covered by the Wall Street Journal, Harvard Business Review, and The New York Times.

Feb. 6
DBH 4011
1 pm

Kolby Nottingham

PhD Student, Department of Computer Science
University of California, Irvine

While it’s common for other machine learning modalities to benefit from model pretraining, reinforcement learning (RL) agents still typically learn tabula rasa. Large language models (LLMs), trained on internet text, have been used as external knowledge sources for RL, but, on their own, they are noisy and lack the grounding necessary to reason in interactive environments. In this talk, we will cover methods for grounding LLMs in environment dynamics and applying the extracted knowledge to training RL agents. Finally, we will demonstrate our newly proposed method for using LLMs to improve RL sample efficiency through guided exploration. By using LLMs to guide exploration, rather than as planners at execution time, our method remains robust to errors in LLM output while also grounding LLM knowledge in environment dynamics.
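The general pattern of guiding exploration with an LLM can be sketched as follows. This is a minimal illustration of the idea, not the speaker's implementation; the function names, the canned subgoal table standing in for a real LLM query, and the bonus value are all assumptions:

```python
# Hypothetical sketch of LLM-guided exploration: an LLM proposes subgoals
# for a task, and the agent's reward is shaped with a bonus for reaching a
# suggested subgoal. Because the bonus only steers exploration, errors in
# the LLM's output degrade sample efficiency rather than execution-time
# behavior. All names and values here are illustrative assumptions.

def suggest_subgoals(task_description):
    """Placeholder for an LLM query mapping a task to candidate subgoals."""
    canned = {
        "craft a wooden pickaxe": ["collect wood", "make planks", "make sticks"],
    }
    return canned.get(task_description, [])

def shaped_reward(env_reward, achieved_subgoal, suggested, bonus=0.5):
    """Environment reward plus an exploration bonus for suggested subgoals."""
    return env_reward + (bonus if achieved_subgoal in suggested else 0.0)

subgoals = suggest_subgoals("craft a wooden pickaxe")
print(shaped_reward(0.0, "collect wood", subgoals))  # bonus: suggested subgoal
print(shaped_reward(1.0, "mine stone", subgoals))    # no bonus: not suggested
```

In a full system the shaped reward would feed a standard RL update, so a wrong LLM suggestion merely wastes some exploration rather than forcing a bad plan.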

Bio: Kolby Nottingham is a PhD student at the University of California, Irvine, where he is co-advised by Professors Roy Fox and Sameer Singh. Kolby’s research interests lie at the intersection of reinforcement learning and natural language processing. His research applies recent advances in large language models to improve sequential decision-making techniques.

Feb. 13
DBH 4011
1 pm

Noble Kennamer

PhD Student, Department of Computer Science
University of California, Irvine

Bayesian optimal experimental design is a sub-field of statistics focused on developing methods to make efficient use of experimental resources. Any potential design is evaluated in terms of a utility function, such as the (theoretically well-justified) expected information gain (EIG); unfortunately, under most circumstances the EIG is intractable to evaluate. In this talk we build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the EIG. Past work focused on learning a new variational model from scratch for each new design considered. Here we present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs. To further improve computational efficiency, we also propose to train the variational model on a significantly cheaper-to-evaluate lower bound, and show empirically that the resulting model provides an excellent guide for more accurate, but expensive-to-evaluate, bounds on the EIG. We demonstrate the effectiveness of our technique on generalized linear models, a class of statistical models that is widely used in the analysis of controlled experiments. Experiments show that our method greatly improves accuracy over existing approximation strategies, and achieves these results with far better sample efficiency.
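For reference, the expected information gain mentioned above has a standard definition (textbook form, not specific to this talk): for a design $d$, prior $p(\theta)$, and likelihood $p(y \mid \theta, d)$,

```latex
\mathrm{EIG}(d)
  = \mathbb{E}_{p(y \mid d)}\Big[ H\big[p(\theta)\big] - H\big[p(\theta \mid y, d)\big] \Big]
  = \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}\left[ \log \frac{p(y \mid \theta, d)}{p(y \mid d)} \right],
```

where $H[\cdot]$ denotes entropy. The intractability arises because the marginal likelihood $p(y \mid d) = \int p(y \mid \theta, d)\, p(\theta)\, d\theta$ generally has no closed form, which is what motivates variational bounds on the EIG.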

Bio: Noble Kennamer recently completed his PhD at UC Irvine under Alexander Ihler, where he worked on variational methods for optimal experimental design and applications of machine learning to the physical sciences. In March he will be starting as a Research Scientist at Netflix.

Feb. 20
No Seminar (Presidents’ Day)

Feb. 27
Seminar Canceled

Mar. 6
DBH 4011
1 pm

Shlomo Zilberstein

Professor of Computer Science
University of Massachusetts, Amherst

Competence is the ability to do something well. Competence awareness is the ability to represent and learn a model of self-competence and use it to decide how best to use the agent’s own abilities as well as any available human assistance. This capability is critical for the success and safety of autonomous systems that operate in the open world. In this talk, I introduce two types of competence-aware systems (CAS), namely Type I and Type II CAS. The former refers to a stand-alone system that can learn its own competence and use it to fine-tune itself to the characteristics of the problem instance at hand, without human assistance. The latter is a human-aware system that uses a self-competence model to optimize the utilization of costly human assistive actions. I describe recent results that demonstrate the benefits of the two types of competence awareness in different contexts, including autonomous vehicle decision making.
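The Type II trade-off between acting autonomously and requesting costly human assistance can be illustrated with a simple expected-value comparison. This is a hypothetical sketch, not the speaker's actual model; the function name, the assumption that the human always succeeds, and all numeric values are invented for illustration:

```python
# Illustrative decision rule for a Type II competence-aware system
# (hypothetical; not the speaker's model): the agent acts autonomously
# when the expected value of doing so exceeds the expected value of
# requesting costly human assistance.

def choose(estimated_competence, human_cost, success_value=1.0, failure_cost=2.0):
    """Return 'act' or 'ask_human' by comparing expected values."""
    # Expected value of acting alone: succeed with probability equal to
    # the agent's estimated competence, otherwise pay a failure cost.
    autonomous = (estimated_competence * success_value
                  - (1.0 - estimated_competence) * failure_cost)
    # Simplifying assumption: the human always succeeds, at a fixed cost.
    assisted = success_value - human_cost
    return "act" if autonomous >= assisted else "ask_human"

print(choose(0.95, human_cost=0.5))  # high competence: act autonomously
print(choose(0.30, human_cost=0.2))  # low competence, cheap help: ask_human
```

A learned self-competence model would supply `estimated_competence` per situation, which is what lets the system reserve expensive human attention for the cases where it is actually needed.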

Bio: Shlomo Zilberstein is Professor of Computer Science and Associate Dean for Research and Engagement in the Manning College of Information and Computer Sciences at the University of Massachusetts, Amherst. He received a B.A. in Computer Science from the Technion and a Ph.D. in Computer Science from UC Berkeley. Zilberstein’s research focuses on the foundations and applications of resource-bounded reasoning techniques, which allow complex systems to make decisions while coping with uncertainty, missing information, and limited computational resources. His research interests include decision theory, reasoning under uncertainty, Markov decision processes, design of autonomous agents, heuristic search, real-time problem solving, principles of meta-reasoning, planning and scheduling, multi-agent systems, and reinforcement learning. Zilberstein is a Fellow of AAAI and the ACM. He is a recipient of the University of Massachusetts Chancellor’s Medal (2019), the IFAAMAS Influential Paper Award (2019), the AAAI Distinguished Service Award (2019), a National Science Foundation CAREER Award (1996), and the Israel Defense Prize (1992). He has received numerous paper awards from AAAI (2017, 2021), IJCAI (2020), AAMAS (2003), ECAI (1998), ICAPS (2010), and SoCS (2022), among others. He is the past Editor-in-Chief of the Journal of Artificial Intelligence Research, former Chair of the AAAI Conference Committee, former President of ICAPS, a former Councilor of AAAI, and the Chairman of the AI Access Foundation.