AI/ML Seminar Series


Weekly Seminar in AI & Machine Learning
Sponsored by Cylance


Apr 10
Bren Hall 4011
1 pm
Mike Izbicki
PhD Candidate
Department of Computer Science
University of California, Riverside

I’ll present two algorithms that use divide-and-conquer techniques to speed up learning. The first algorithm (called OWA) is a communication-efficient distributed learner. OWA uses only two rounds of communication, which is sufficient to achieve optimal learning rates. The second algorithm is a meta-algorithm for fast cross validation. I’ll show that for any divide-and-conquer learning algorithm, there exists a fast cross validation procedure whose run time is asymptotically independent of the number of cross validation folds.
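
A minimal sketch of the fast cross-validation idea for divide-and-conquer learners, assuming, purely for illustration, that sub-models are merged by parameter averaging; the ridge base learner and the name fast_cv_mse are illustrative choices, not part of OWA or of the talk:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

def fast_cv_mse(X, y, n_folds=10, alpha=1.0):
    """Cross-validated MSE for a divide-and-conquer learner.

    Each fold's sub-model is trained exactly once; every held-out fold then
    reuses the cached sub-models, so training amounts to a single pass over
    the data regardless of the number of folds.
    """
    folds = list(KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X))

    # Step 1: train one sub-model per fold, on that fold's own data.
    coefs, intercepts = [], []
    for _, fold_idx in folds:
        m = Ridge(alpha=alpha).fit(X[fold_idx], y[fold_idx])
        coefs.append(m.coef_)
        intercepts.append(m.intercept_)

    # Step 2: for each held-out fold, merge the remaining sub-models by
    # averaging their parameters instead of retraining on the other folds.
    errors = []
    for i, (_, test_idx) in enumerate(folds):
        coef = np.mean([c for j, c in enumerate(coefs) if j != i], axis=0)
        intercept = np.mean([b for j, b in enumerate(intercepts) if j != i])
        y_hat = X[test_idx] @ coef + intercept
        errors.append(mean_squared_error(y[test_idx], y_hat))
    return float(np.mean(errors))

The same pattern works for any merge operator in place of plain averaging; only Step 1 touches the training data, which is why the cost does not grow with the number of folds.
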
Apr 17
Bren Hall 4011
1 pm
James Supancic
PhD Candidate
Department of Computer Science
University of California, Irvine

Cameras naturally capture sequences of images, or videos, and understanding videos requires connecting the past with the present through tracking. Sometimes tracking is easy. We focus on two challenges that make tracking harder: long-term occlusions and appearance variations. To handle total occlusion, a tracker must know when it has lost track and how to reinitialize tracking when the target reappears. Reinitialization requires good appearance models. We build appearance models for humans and hands, with a particular emphasis on robustness to occlusion.

For the second challenge, appearance variation, the tracker must know when and how to re-learn (or update) an appearance model. This challenge leads to the classic problem of drift: aggressively learning appearance changes allows small errors to compound as elements of the background environment pollute the appearance model. We propose two solutions. First, we consider self-paced learning, in which a tracker begins by learning from frames it finds easy; as the tracker becomes better at recognizing the target, it begins to learn from harder frames. Second, we develop a data-driven approach: we train a tracking policy to decide when and how to update an appearance model. To take this direct approach to “learning when to learn”, we exploit large-scale Internet data through reinforcement learning. We interpret the resulting policy and conclude with a generalization to tracking multiple objects.
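
A minimal sketch of the self-paced idea, assuming a hypothetical appearance-model object with locate(frame, box) -> (box, confidence) and update(frame, box) methods; the confidence-based loss and the geometric threshold schedule are illustrative choices, not the speaker's method:

def self_paced_track(model, frames, init_box, threshold=0.2, growth=1.05):
    """Track a target while updating the appearance model in a self-paced way:
    learn only from frames the tracker currently finds easy, and relax the
    admission threshold over time so harder frames are used later."""
    # Sketch only: `model` is a hypothetical appearance model exposing
    # locate() and update(); it is not an API from the talk.
    box = init_box
    for frame in frames:
        box, confidence = model.locate(frame, box)  # localize the target
        loss = 1.0 - confidence                     # low loss == "easy" frame
        if loss < threshold:
            # Easy frame: adapting here is unlikely to pollute the model
            # with background appearance (the classic cause of drift).
            model.update(frame, box)
        threshold *= growth  # gradually admit harder frames
    return box

The data-driven alternative described in the abstract replaces this fixed threshold schedule with a learned policy that decides when and how to call the update step.
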
Apr 24
Bren Hall 4011
1 pm
Dr. David R. Thompson
Researcher and Technical Group Lead, Imaging Spectroscopy Group
Jet Propulsion Laboratory
California Institute of Technology

Imaging spectrometers enable quantitative maps of physical and chemical properties at high spatial resolution. They have a long history of deployments for mapping terrestrial and coastal aquatic ecosystems, geology, and atmospheric properties. They are also critical tools for exploring other planetary bodies. These high-dimensional spatio-spectral datasets pose a rich challenge for computer scientists and algorithm designers. This talk will provide an introduction to remote imaging spectroscopy in the Visible and Shortwave Infrared, describing the measurement strategy and data analysis considerations including atmospheric correction. We will describe historical and current instruments, software, and public datasets.
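
For context, a common starting point for atmospheric correction is the simplified Lambertian, uniform-surface model relating the apparent top-of-atmosphere reflectance to the surface reflectance; this is textbook background for readers new to the topic, not necessarily the formulation used in the talk:

\[
\rho_{\mathrm{obs}}(\lambda) \;=\; \rho_{a}(\lambda)
  \;+\; \frac{T_{\downarrow}(\lambda)\,T_{\uparrow}(\lambda)\,\rho_{s}(\lambda)}
             {1 - S(\lambda)\,\rho_{s}(\lambda)},
\qquad
\rho_{s}(\lambda) \;=\; \frac{\rho_{\mathrm{obs}}(\lambda)-\rho_{a}(\lambda)}
                             {T_{\downarrow}(\lambda)\,T_{\uparrow}(\lambda)
                              + S(\lambda)\,\bigl(\rho_{\mathrm{obs}}(\lambda)-\rho_{a}(\lambda)\bigr)},
\]

where \(\rho_{a}\) is the atmospheric path reflectance, \(T_{\downarrow}\) and \(T_{\uparrow}\) are the downward and upward transmittances, and \(S\) is the spherical albedo of the atmosphere; atmospheric correction amounts to solving for \(\rho_{s}\) at every wavelength and pixel.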

Bio: David R. Thompson is a researcher and Technical Group Lead in the Imaging Spectroscopy group at the NASA Jet Propulsion Laboratory. He is Investigation Scientist for the AVIRIS imaging spectrometer project. Other roles include software lead for the NEAScout mission, autonomy software lead for the PIXL instrument, and algorithm development for diverse JPL airborne imaging spectrometer campaigns. He is a recipient of the NASA Early Career Achievement Medal and the JPL Lew Allen Award.

May 1
Bren Hall 4011
1 pm
Weining Shen
Assistant Professor
Department of Statistics
University of California, Irvine

Bayesian nonparametric (BNP) models have been widely used in modern applications. In this talk, I will discuss some recent theoretical results for commonly used BNP methods from a frequentist asymptotic perspective. I will cover a set of function estimation and testing problems such as density estimation, high-dimensional partial linear regression, independence testing, and independent component analysis. Minimax-optimal convergence rates, adaptation, and the Bernstein-von Mises theorem will be discussed.
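
As a canonical textbook illustration of the kind of frequentist guarantee involved (not a result specific to this talk): for estimating a density \(f_0\) that is \(\beta\)-Hölder smooth on \([0,1]^d\), the minimax \(L_2\) rate is \(n^{-\beta/(2\beta+d)}\), and a BNP posterior \(\Pi(\cdot \mid X_{1:n})\) is called rate-optimal (and adaptive, if this holds without knowledge of \(\beta\)) when it contracts around \(f_0\) at that rate up to logarithmic factors:

\[
\inf_{\hat f}\,\sup_{f_0\in\mathcal{H}^{\beta}}\;\mathbb{E}_{f_0}\lVert \hat f-f_0\rVert_2 \;\asymp\; n^{-\beta/(2\beta+d)},
\qquad
\Pi\bigl(\lVert f-f_0\rVert_2 > M\varepsilon_n \;\big|\; X_1,\dots,X_n\bigr) \to 0
\ \text{ in } P_{f_0}\text{-probability},
\quad \varepsilon_n = n^{-\beta/(2\beta+d)}(\log n)^{t},
\]

for some constants \(M, t > 0\).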