PhD Students win Best Poster Awards

Congratulations to CML graduate students on their recent poster awards at the 2017 Southern California Machine Learning Symposium, held at USC. Zhengli Zhao and Dheeru Dua (with advisor Sameer Singh) won the Best Poster Award for their work on generating natural adversarial examples, and Eric Nalisnick (with advisor Padhraic Smyth) won an Honorable Mention for his work on boosting variational inference. About 50 student posters were presented and over 250 machine learning researchers attended the event. The next SoCal ML Symposium is scheduled for Fall 2018 and will be hosted by UCLA.

New Faculty Member: Erik Sudderth

We are delighted to welcome new faculty member Erik Sudderth to the Center. Erik recently joined the Department of Computer Science at UCI as an Associate Professor. He is well-known for his research in machine learning, with interests in topics such as graphical models and Bayesian nonparametric methods. Erik’s research group is also active in the application of these ideas to artificial intelligence, vision, and the natural and social sciences. More information about Erik and his research group is available at Erik’s Webpage.

Fall 2017

Oct 9
No Seminar (Columbus Day)

Oct 16
Bren Hall 3011
1 pm
Bailey Kong
PhD Candidate
Department of Computer Science
University of California, Irvine

We investigate the problem of automatically determining what type of shoe left an impression found at a crime scene. This recognition problem is made difficult by the variability in types of crime scene evidence (ranging from traces of dust or oil on hard surfaces to impressions made in soil) and the lack of comprehensive databases of shoe outsole tread patterns. We find that mid-level features extracted by pre-trained convolutional neural nets are surprisingly effective descriptors for these specialized domains. However, the choice of similarity measure for matching exemplars to a query image is essential to good performance. For matching multi-channel deep features, we propose the use of multi-channel normalized cross-correlation and analyze its effectiveness. Finally, we introduce a discriminatively trained variant and fine-tune our system end-to-end, obtaining state-of-the-art performance.
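For readers unfamiliar with the similarity measure named in the abstract, the sketch below illustrates one plausible form of multi-channel normalized cross-correlation between two deep feature maps: standardize each channel, then average the per-channel correlations. This is an illustrative NumPy toy under assumed names and shapes, not the paper's exact formulation (which also covers sliding-window matching and a discriminatively trained variant).

```python
import numpy as np

def mcncc(query_feat, exemplar_feat, eps=1e-8):
    """Multi-channel normalized cross-correlation between two equal-size
    feature maps of shape (C, H, W). A minimal sketch: each channel is
    standardized independently, then per-channel correlations are averaged."""
    assert query_feat.shape == exemplar_feat.shape
    C = query_feat.shape[0]
    score = 0.0
    for q, e in zip(query_feat, exemplar_feat):
        q = (q - q.mean()) / (q.std() + eps)   # zero mean, unit variance
        e = (e - e.mean()) / (e.std() + eps)
        score += np.mean(q * e)                # Pearson correlation of this channel
    return score / C

# Toy usage with random "deep features"
rng = np.random.default_rng(0)
f1 = rng.standard_normal((64, 16, 16))
print(mcncc(f1, f1))                                     # ~1.0 for identical maps
print(mcncc(f1, rng.standard_normal((64, 16, 16))))      # ~0.0 for unrelated maps
```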
Oct 23
Bren Hall 3011
1 pm
Geng Ji
PhD Candidate
Department of Computer Science
University of California, Irvine

We propose a hierarchical generative model that captures the self-similar structure of image regions as well as how this structure is shared across image collections. Our model is based on a novel, variational interpretation of the popular expected patch log-likelihood (EPLL) method as a model for randomly positioned grids of image patches. While previous EPLL methods modeled image patches with finite Gaussian mixtures, we use nonparametric Dirichlet process (DP) mixtures to create models whose complexity grows as additional images are observed. An extension based on the hierarchical DP then captures repetitive and self-similar structure via image-specific variations in cluster frequencies. We derive a structured variational inference algorithm that adaptively creates new patch clusters to more accurately model novel image textures. Our denoising performance on standard benchmarks is superior to EPLL and comparable to the state-of-the-art, and we provide novel statistical justifications for common image processing heuristics. We also show accurate image inpainting results.
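The phrase "randomly positioned grids of image patches" is the key modeling idea; the toy sketch below (hypothetical names, NumPy only) shows what one such randomly offset, non-overlapping patch grid looks like. The actual model places DP mixture priors over these patches, which this sketch does not attempt.

```python
import numpy as np

def random_grid_patches(img, p, rng):
    """Extract a non-overlapping p x p patch grid with a random offset,
    illustrating the 'randomly positioned grids of patches' view of EPLL
    described in the abstract (sketch; boundary remainders are dropped)."""
    dy, dx = rng.integers(0, p, size=2)
    H, W = img.shape
    patches, coords = [], []
    for y in range(dy, H - p + 1, p):
        for x in range(dx, W - p + 1, p):
            patches.append(img[y:y+p, x:x+p])
            coords.append((y, x))
    return np.stack(patches), coords

rng = np.random.default_rng(0)
img = rng.random((32, 32))
patches, coords = random_grid_patches(img, p=8, rng=rng)
print(patches.shape)  # (num_patches, 8, 8)
```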
Oct 30
Bren Hall 4011
1 pm
Qi Lou
PhD Candidate
Department of Computer Science
University of California, Irvine

Computing the partition function is a key inference task in many graphical models. In this paper, we propose a dynamic importance sampling scheme that provides anytime finite-sample bounds for the partition function. Our algorithm balances the advantages of the three major inference strategies: heuristic search, variational bounds, and Monte Carlo methods, blending sampling with search to refine a variationally defined proposal. Our algorithm combines and generalizes recent work on anytime search and probabilistic bounds of the partition function. By using an intelligently chosen weighted average over the samples, we construct an unbiased estimator of the partition function with strong finite-sample confidence intervals that combine the rapid early improvement rate of sampling with the long-term benefits of an improved proposal from search. This gives significantly improved anytime behavior and more flexible trade-offs between memory, time, and solution quality. We demonstrate the effectiveness of our approach empirically on real-world problem instances taken from recent UAI competitions.
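As background for the abstract, the snippet below sketches the plain importance-sampling estimator of a partition function that the proposed method refines with search and variational proposals; the toy model and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Hedged sketch: an unbiased importance-sampling estimate of a partition
# function Z = sum_x f(x), for a toy fully factorized model over binary
# vectors. The paper additionally blends in search and a variationally
# defined proposal; this toy uses a fixed uniform proposal.
rng = np.random.default_rng(0)
n, num_samples = 10, 100_000
theta = rng.standard_normal(n)

def f(x):
    """Unnormalized probability of each row of x (a toy log-linear model)."""
    return np.exp(x @ theta)

# Proposal q: independent fair coins, so q(x) = 2**-n for every x.
x = (rng.random((num_samples, n)) < 0.5).astype(float)
z_hat = np.mean(f(x) / 0.5**n)      # E_q[f(X)/q(X)] = Z, so z_hat is unbiased

# Exact Z for this factorized toy model, for comparison.
z_true = np.prod(1.0 + np.exp(theta))
print(z_hat, z_true)
```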
Nov 6
Bren Hall 3011
1 pm
Vladimir Minin
Professor
Department of Statistics
University of California, Irvine

Estimating evolutionary trees, called phylogenies or genealogies, is a fundamental task in modern biology. Once phylogenetic reconstruction is accomplished, scientists are faced with the challenging problem of interpreting phylogenetic trees. In certain situations, a coalescent process, a stochastic model that randomly generates evolutionary trees, comes to the rescue by probabilistically connecting phylogenetic reconstruction with the demographic history of the population under study. An important application of the coalescent is phylodynamics, an area that aims at reconstructing past population dynamics from genomic data. Phylodynamic methods have been especially successful in analyses of genetic sequences from viruses circulating in human populations. From a Bayesian hierarchical modeling perspective, the coalescent process can be viewed as a prior for evolutionary trees, parameterized in terms of unknown demographic parameters, such as the population size trajectory. I will review Bayesian nonparametric techniques that can accomplish phylodynamic reconstruction, with particular attention to the analysis of genetic data sampled serially through time.
Nov 20
No Seminar (Thanksgiving Week)

Dec 4
No Seminar (NIPS Conference)

Dec 13
Bren Hall 4011
1 pm
Yutian Chen
Research Scientist
Google DeepMind

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.

Singh talk, OC ACM Chapter

Center member Prof. Sameer Singh will discuss his research on “Explaining Black-Box Machine Learning Predictions,” which addresses the important and challenging problem of enabling people to understand, predict and trust the behavior of machine learning models and algorithms. More information and online registration is available on the Orange County ACM Chapter Meetup Event page.

Spring 2017

Apr 10
Bren Hall 4011
1 pm
Mike Izbicki
PhD Candidate
Department of Computer Science
University of California, Riverside

I’ll present two algorithms that use divide and conquer techniques to speed up learning. The first algorithm (called OWA) is a communication efficient distributed learner. OWA uses only two rounds of communication, which is sufficient to achieve optimal learning rates. The second algorithm is a meta-algorithm for fast cross validation. I’ll show that for any divide and conquer learning algorithm, there exists a fast cross validation procedure whose run time is asymptotically independent of the number of cross validation folds.
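A hedged sketch of the two-round pattern the abstract describes for OWA: each machine fits a local model on its shard, then a small second-stage regression learns how to weight the local models. The data, losses, and names below are toy assumptions; the paper's estimator and guarantees differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_per_machine, m = 20, 500, 8
w_true = rng.standard_normal(d)

def make_data(n):
    X = rng.standard_normal((n, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n)
    return X, y

# Round 1: each machine solves its own least-squares problem locally.
local_models = []
for _ in range(m):
    X, y = make_data(n_per_machine)
    local_models.append(np.linalg.lstsq(X, y, rcond=None)[0])
W = np.stack(local_models, axis=1)        # d x m matrix of local solutions

# Round 2: on a small fresh sample, learn mixing weights a so that the
# combined model W @ a fits well, a regression in only m dimensions.
Xv, yv = make_data(200)
a = np.linalg.lstsq(Xv @ W, yv, rcond=None)[0]
w_combined = W @ a

# Compare the learned weighted average to naive uniform averaging.
print(np.linalg.norm(w_combined - w_true), np.linalg.norm(W.mean(axis=1) - w_true))
```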
Apr 17
Bren Hall 4011
1 pm
James Supancic
PhD Candidate
Department of Computer Science
University of California, Irvine

Cameras naturally capture sequences of images, or videos, and when understanding videos, connecting the past with the present requires tracking. Sometimes tracking is easy. We focus on two challenges which make tracking harder: long-term occlusions and appearance variations. To handle total occlusion, a tracker must know when it has lost track and how to reinitialize tracking when the target reappears. Reinitialization requires good appearance models. We build appearance models for humans and hands, with a particular emphasis on robustness and occlusion. For the second challenge, appearance variation, the tracker must know when and how to re-learn (or update) an appearance model. This challenge leads to the classic problem of drift: aggressively learning appearance changes allows small errors to compound, as elements of the background environment pollute the appearance model. We propose two solutions. First, we consider self-paced learning, wherein a tracker begins by learning from frames it finds easy. As the tracker becomes better at recognizing the target, it begins to learn from harder frames. We also develop a data-driven approach: train a tracking policy to decide when and how to update an appearance model. To take this direct approach to “learning when to learn”, we exploit large-scale Internet data through reinforcement learning. We interpret the resulting policy and conclude with a generalization for tracking multiple objects.
Apr 24
Bren Hall 4011
1 pm
David R Thompson

Jet Propulsion Laboratory
California Institute of Technology

Imaging spectrometers enable quantitative maps of physical and chemical properties at high spatial resolution. They have a long history of deployments for mapping terrestrial and coastal aquatic ecosystems, geology, and atmospheric properties. They are also critical tools for exploring other planetary bodies. These high-dimensional spatio-spectral datasets pose a rich challenge for computer scientists and algorithm designers. This talk will provide an introduction to remote imaging spectroscopy in the Visible and Shortwave Infrared, describing the measurement strategy and data analysis considerations including atmospheric correction. We will describe historical and current instruments, software, and public datasets.

Bio: David R. Thompson is a researcher and Technical Group Lead in the Imaging Spectroscopy group at the NASA Jet Propulsion Laboratory. He is Investigation Scientist for the AVIRIS imaging spectrometer project. Other roles include software lead for the NEAScout mission, autonomy software lead for the PIXL instrument, and algorithm development for diverse JPL airborne imaging spectrometer campaigns. He is recipient of the NASA Early Career Achievement Medal and the JPL Lew Allen Award.

May 1
Bren Hall 4011
1 pm
Weining Shen
Assistant Professor
Department of Statistics
University of California, Irvine

Bayesian nonparametric (BNP) models have been widely used in modern applications. In this talk, I will discuss some recent theoretical results for commonly used BNP methods from a frequentist asymptotic perspective. I will cover a set of function estimation and testing problems such as density estimation, high-dimensional partial linear regression, independence testing, and independent component analysis. Minimax optimal convergence rates, adaptation, and the Bernstein-von Mises theorem will be discussed.
May 8
Bren Hall 4011
1 pm
P. Anandan
VP for Research
Adobe Systems

During the last two decades, the experience of consumers has been undergoing a fundamental and dramatic transformation – offering a rich variety of informed choices, online shopping, consumption of news and entertainment on the go, and personalized shopping experiences. All of this has been powered by the massive amounts of data that are continuously being collected and by the application of machine learning, data science, and AI techniques to that data.

Adobe is a leader in Digital Marketing and the leading provider of solutions to enterprises that serve customers in both the B2B and B2C spaces. In this talk, we will outline the current state of the industry and the technology behind it, and how Data Science and Machine Learning are gradually beginning to transform the experiences of the consumer as well as the marketer. We will also speculate on how recent developments in Artificial Intelligence will lead to deep personalization and richer experiences for the consumer, as well as more powerful and tailored end-to-end capabilities for the marketer.

Bio: Dr. P. Anandan is Vice President at Adobe Research, responsible for developing research strategy for Adobe, especially in Digital Marketing, and for leading the Adobe India Research lab. An emphasis of this lab is on Big Data Experience and Intelligence. At Adobe, he is also leading efforts in applying A.I. to Big Data. Dr. Anandan is an expert in Computer Vision with more than 60 publications that have earned 14,500 citations in Google Scholar. His research areas include visual motion analysis, video surveillance, and 3D scene modeling from images and video. His papers have won multiple awards, including the Helmholtz Prize for long-term fundamental contributions to computer vision research. Prior to joining Adobe, Dr. Anandan had a long tenure with Microsoft Research in Redmond, WA, where he became a Distinguished Scientist. He was the founding Managing Director of Microsoft Research India, and most recently was the Managing Director of Microsoft Research’s Worldwide Outreach. He earned a PhD from the University of Massachusetts specializing in Computer Vision and Artificial Intelligence. He started as an assistant professor at Yale University before moving on to work in Video Information Processing at the David Sarnoff Research Center. His research has been used in DARPA’s Video Surveillance and Monitoring program as well as in creating special effects in the movies “What Dreams May Come”, “Prince of Egypt,” and “The Matrix.” Dr. Anandan is the recipient of Distinguished Alumnus awards from both the University of Massachusetts and the Indian Institute of Technology Madras, where he earned a B. Tech. in Electrical Engineering. He was inducted into the Nebraska Hall of Computing by the University of Nebraska, from where he obtained an MS in Computer Science. He is currently a member of the Board of Governors of IIT Madras.

May 15
Bren Hall 4011
1 pm
Ndapa Nakashole
Assistant Professor
Computer Science and Engineering
University of California, San Diego

Zero-shot learning is used in computer vision, natural language processing, and other domains to induce mapping functions that project vectors from one vector space to another. This is a promising approach to learning when we do not have labeled data for every possible label we want a system to recognize. This setting is common when doing NLP for low-resource languages, where labeled data is very scarce. In this talk, I will present our work on improving zero-shot learning methods for the task of word-level translation.
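As context, the sketch below shows the classic linear-mapping baseline for word-level translation (Mikolov et al., 2013): learn a matrix mapping source embeddings to target embeddings from a seed dictionary, then translate unseen words by nearest neighbor. The talk's methods improve on this family of approaches; everything in the snippet (data, dimensions, names) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d_src, d_tgt, n_seed = 50, 40, 300

# Synthetic stand-ins for seed-dictionary embedding pairs (X_i, Y_i).
X = rng.standard_normal((n_seed, d_src))          # source-language vectors
W_true = rng.standard_normal((d_src, d_tgt))
Y = X @ W_true + 0.01 * rng.standard_normal((n_seed, d_tgt))

# Learn W minimizing ||X W - Y||^2 in closed form.
W = np.linalg.lstsq(X, Y, rcond=None)[0]

def translate(x_vec, target_vocab_vectors):
    """Map a source vector through W and return the index of the nearest
    target-language vector by cosine similarity."""
    z = x_vec @ W
    sims = target_vocab_vectors @ z / (
        np.linalg.norm(target_vocab_vectors, axis=1) * np.linalg.norm(z))
    return int(np.argmax(sims))

print(translate(X[0], Y))  # recovers the paired word (index 0) in this toy
```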

Bio: Ndapa Nakashole is an Assistant Professor in the Department of Computer Science and Engineering at the University of California, San Diego. Prior to UCSD, she was a Postdoctoral Fellow in the Machine Learning Department at Carnegie Mellon University. She obtained her PhD from Saarland University, Germany, for work done at the Max Planck Institute for Informatics at Saarbrücken.

May 22
Bren Hall 4011
1 pm
Batya Kenig
Postdoctoral Scholar
Department of Information Systems Engineering
Technion – Israel Institute of Technology

We propose a novel framework wherein probabilistic preferences can be naturally represented and analyzed in a probabilistic relational database. The framework augments the relational schema with a special type of a relation symbol, a preference symbol. A deterministic instance of this symbol holds a collection of binary relations. Abstractly, the probabilistic variant is a probability space over databases of the augmented form (i.e., probabilistic database). Effectively, each instance of a preference symbol can be represented as a collection of parametric preference distributions such as Mallows. We establish positive and negative complexity results for evaluating Conjunctive Queries (CQs) over databases where preferences are represented in the Repeated Insertion Model (RIM), Mallows being a special case. We show how CQ evaluation reduces to a novel inference problem (of independent interest) over RIM, and devise a solver with polynomial data complexity.
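Since RIM is central to the abstract, here is a hedged sketch of sampling from the Repeated Insertion Model, with insertion probabilities chosen so that the result is a Mallows distribution; the function and parameter names are hypothetical.

```python
import numpy as np

# Repeated Insertion Model (RIM): items are taken in a reference order and
# item i is inserted into position j of the partial ranking with a given
# probability. Choosing the probability of position j proportional to
# phi**(i - j) yields the Mallows distribution with dispersion phi
# centered on the reference ranking (1, 2, ..., m).
rng = np.random.default_rng(0)

def sample_mallows(m, phi):
    ranking = []
    for i in range(1, m + 1):
        # insertion weights for positions j = 1..i (1-indexed)
        weights = np.array([phi ** (i - j) for j in range(1, i + 1)])
        j = rng.choice(i, p=weights / weights.sum())   # 0-indexed position
        ranking.insert(j, i)
    return ranking

# Small phi concentrates mass near the reference ranking [1, 2, 3, 4, 5].
print(sample_mallows(5, phi=0.3))
```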
May 29
No Seminar (Memorial Day)

Jun 5
Bren Hall 4011
1 pm
Yonatan Bisk
Postdoctoral Scholar
Information Sciences Institute
University of Southern California

The future of self-driving cars, personal robots, smart homes, and intelligent assistants hinges on our ability to communicate with computers. The failures and miscommunications of Siri-style systems are untenable and become more problematic as machines become more pervasive and are given more control over our lives. Despite the creation of massive proprietary datasets to train dialogue systems, these systems still fail at the most basic tasks. Further, their reliance on big data is problematic. First, successes in English cannot be replicated in most of the 6,000+ languages of the world. Second, while big data has been a boon for supervised training methods, many of the most interesting tasks will never have enough labeled data to actually achieve our goals. It is therefore important that we build systems which can learn from naturally occurring data and grounded situated interactions.

In this talk, I will discuss work from my thesis on the unsupervised acquisition of syntax which harnesses unlabeled text in over a dozen languages. This exploration leads us to novel insights into the limits of semantics-free language learning. Having isolated these stumbling blocks, I’ll then present my recent work on language grounding where we attempt to learn the meaning of several linguistic constructions via interaction with the world.

Bio: Yonatan Bisk’s research focuses on Natural Language Processing from naturally occurring data (unsupervised and weakly supervised data). He is a postdoc researcher with Daniel Marcu at USC’s Information Sciences Institute. Previously, he received his Ph.D. from the University of Illinois at Urbana-Champaign under Julia Hockenmaier and his BS from the University of Texas at Austin.

Winter 2017

Jan 16
No Seminar (MLK Day)

Jan 23
Bren Hall 4011
1 pm
Mohammad Ghavamzadeh
Senior Analytics Researcher
Adobe Research

In online advertisement, as well as many other fields such as health informatics and computational finance, we often have to deal with the situation in which we are given a batch of data generated by the current strategy(ies) of the company (hospital, investor) and are asked to generate a good or an optimal strategy. Although there are many techniques to find a good policy given a batch of data, there are few results that guarantee the obtained policy will perform well in the real system without deploying it. On the other hand, deploying a policy might be risky, and thus requires convincing the product (hospital, investment) manager that it is not going to harm the business. This is why it is extremely important to devise algorithms that generate policies with performance guarantees.

In this talk, we discuss four different approaches to this fundamental problem, which we call model-based, model-free, online, and risk-sensitive. In the model-based approach, we first use the batch of data to build a simulator that mimics the behavior of the dynamical system under study (online advertisement, hospital’s ER, financial market), and then use this simulator to generate data and learn a policy. The main challenge here is to have guarantees on the performance of the learned policy, given the error in the simulator. This line of research is closely related to the area of robust learning and control. In the model-free approach, we learn a policy directly from the batch of data (without building a simulator), and the main question is whether the learned policy is guaranteed to perform at least as well as a baseline strategy. This line of research is related to off-policy evaluation and control. In the online approach, the goal is to control the exploration of the algorithm so that, at no point during its execution, the loss of using it instead of the baseline strategy exceeds a given margin. In the risk-sensitive approach, the goal is to learn a policy that manages risk by minimizing some measure of variability in the performance, in addition to maximizing a standard criterion. We present algorithms based on these approaches and demonstrate their usefulness in real-world applications such as personalized ad recommendation, energy arbitrage, traffic signal control, and American option pricing.
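To make the model-free approach concrete, the snippet below sketches the basic inverse-propensity-scoring estimator used in off-policy evaluation, in a toy contextual bandit; it is background material with assumed names and data, not one of the talk's algorithms.

```python
import numpy as np

# Off-policy evaluation via inverse propensity scoring (IPS): estimate the
# value of a new policy pi from data logged under a baseline policy mu.
# Deployment guarantees of the kind discussed in the talk are built from
# concentration bounds on top of estimators like this one.
rng = np.random.default_rng(0)
n = 50_000

contexts = rng.standard_normal(n)
mu_prob = 0.5                                     # baseline: uniform over {0, 1}
actions = (rng.random(n) < mu_prob).astype(int)   # logged actions a ~ mu
rewards = (actions == (contexts > 0)).astype(float)  # reward 1 if action "matches"

def pi_prob_of(a, x):
    """Target policy: picks action 1 when x > 0, with probability 0.9."""
    p1 = np.where(x > 0, 0.9, 0.1)
    return np.where(a == 1, p1, 1.0 - p1)

weights = pi_prob_of(actions, contexts) / mu_prob  # importance weights pi/mu
v_ips = np.mean(weights * rewards)                 # unbiased estimate of V(pi)
print(v_ips)                                       # ~0.9 in this toy problem
```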

Bio: Mohammad Ghavamzadeh received a Ph.D. degree in Computer Science from the University of Massachusetts Amherst in 2005. From 2005 to 2008, he was a postdoctoral fellow at the University of Alberta. He has been a permanent researcher at INRIA in France since November 2008. He was promoted to first-class researcher in 2010, was the recipient of the “INRIA award for scientific excellence” in 2011, and obtained his Habilitation in 2014. He is currently (from October 2013) on a leave of absence from INRIA working as a senior analytics researcher at Adobe Research in California, on projects related to digital marketing. He has been an area chair and a senior program committee member at NIPS, IJCAI, and AAAI. He has been on the editorial board of Machine Learning Journal (MLJ), has published over 50 refereed papers in major machine learning, AI, and control journals and conferences, and has organized several tutorials and workshops at NIPS, ICML, and AAAI. His research is mainly focused on sequential decision-making under uncertainty, reinforcement learning, and online learning.

Jan 27
Bren Hall 6011
11:00am
Ruslan Salakhutdinov
Associate Professor
Machine Learning Department
Carnegie Mellon University

In this talk, I will first introduce a broad class of unsupervised deep learning models and show that they can learn useful hierarchical representations from large volumes of high-dimensional data with applications in information retrieval, object recognition, and speech perception. I will next introduce deep models that are capable of extracting a unified representation that fuses together multiple data modalities and present the Reverse Annealed Importance Sampling Estimator (RAISE) for evaluating these deep generative models. Finally, I will discuss models that can generate natural language descriptions (captions) of images and generate images from captions using attention, as well as introduce multiplicative and fine-grained gating mechanisms with application to reading comprehension.

Bio: Ruslan Salakhutdinov received his PhD in computer science from the University of Toronto in 2009. After spending two post-doctoral years at the Massachusetts Institute of Technology Artificial Intelligence Lab, he joined the University of Toronto as an Assistant Professor in the Departments of Statistics and Computer Science. In 2016 he joined the Machine Learning Department at Carnegie Mellon University as an Associate Professor. Ruslan’s primary interests lie in deep learning, machine learning, and large-scale optimization. He is an action editor of the Journal of Machine Learning Research and served on the senior programme committee of several learning conferences including NIPS and ICML. He is an Alfred P. Sloan Research Fellow, Microsoft Research Faculty Fellow, Canada Research Chair in Statistical Machine Learning, a recipient of the Early Researcher Award, Google Faculty Award, Nvidia’s Pioneers of AI award, and is a Senior Fellow of the Canadian Institute for Advanced Research.

Jan 30
Bren Hall 4011
1 pm
Pierre Baldi & Peter Sadowski
Chancellor’s Professor
Department of Computer Science
University of California, Irvine

Learning in the Machine is a style of machine learning that takes into account the physical constraints of learning machines, from brains to neuromorphic chips. Taking these constraints into account leads to new insights into the foundations of learning systems, and occasionally also leads to improvements for machine learning performed on digital computers. Learning in the Machine is particularly useful when applied to message-passing algorithms such as backpropagation and belief propagation, and leads to the concepts of local learning and the learning channel. These concepts in turn will be applied to random backpropagation and several new variants. In addition to simulations corroborating the remarkable robustness of these algorithms, we will present new mathematical results establishing interesting connections between machine learning and Hilbert’s 16th problem.
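For readers unfamiliar with random backpropagation, the sketch below illustrates the core idea on a toy two-layer network: the backward pass propagates errors through a fixed random matrix instead of the transpose of the forward weights, removing the weight-symmetry requirement of standard backprop. The architecture and hyperparameters are illustrative assumptions, not the talk's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, n = 10, 32, 1, 256

X = rng.standard_normal((n, d_in))
y = np.sin(X[:, :1])                         # toy regression target

W1 = 0.1 * rng.standard_normal((d_in, d_h))
W2 = 0.1 * rng.standard_normal((d_h, d_out))
B = 0.1 * rng.standard_normal((d_out, d_h))  # fixed random feedback matrix

lr = 0.01
for step in range(2000):
    H = np.tanh(X @ W1)                      # forward pass
    y_hat = H @ W2
    err = y_hat - y                          # dLoss/dy_hat for squared error

    # Random backprop: propagate the error through fixed B, not W2.T
    delta_h = (err @ B) * (1.0 - H**2)

    W2 -= lr * H.T @ err / n
    W1 -= lr * X.T @ delta_h / n

print(np.mean((np.tanh(X @ W1) @ W2 - y)**2))  # training loss decreases
```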
Feb 6
Bren Hall 4011
1 pm
Miles Stoudenmire
Research Scientist
Department of Physics
University of California, Irvine

Tensor networks are a technique for factorizing tensors with hundreds or thousands of indices into a contracted network of low-order tensors. Originally developed at UCI in the 1990s, tensor networks have revolutionized major areas of physics and are starting to be used in applied math and machine learning. I will show that tensor networks fit naturally into a certain class of non-linear kernel learning models, such that advanced optimization techniques from physics can be applied straightforwardly (arXiv:1605.05775). I will discuss many advantages and future directions of tensor network models, for example adaptive pruning of weights and linear scaling with training set size (compared to at least quadratic scaling when using the kernel trick).
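To give a flavor of the model class, the snippet below sketches how a model parameterized by a matrix product state (MPS), in the spirit of arXiv:1605.05775, is evaluated: each input component is mapped through a small local feature map, and the weight tensor is contracted one site at a time. The random cores and dimensions are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, bond_dim = 8, 10

def local_feature(xj):
    """Two-dimensional local feature map (as in the paper) for xj in [0, 1]."""
    return np.array([np.cos(np.pi * xj / 2), np.sin(np.pi * xj / 2)])

# Random MPS: one (left_bond, 2, right_bond) core per site, boundary bonds = 1.
cores = [0.5 * rng.standard_normal(
            (1 if i == 0 else bond_dim, 2,
             1 if i == n_sites - 1 else bond_dim))
         for i in range(n_sites)]

def mps_evaluate(x):
    """Contract the MPS against the product feature map of input x, so the
    cost is a chain of small matrix-vector products rather than an
    exponentially large tensor contraction."""
    msg = np.ones((1,))
    for core, xj in zip(cores, x):
        # contract the physical index with the local features, then the bond
        mat = np.tensordot(core, local_feature(xj), axes=([1], [0]))
        msg = msg @ mat
    return msg.item()

print(mps_evaluate(rng.random(n_sites)))
```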
Feb 13
Bren Hall 4011
1 pm
Qi Lou
PhD Candidate
Department of Computer Science
University of California, Irvine

Bounding the partition function is a key inference task in many graphical models. In this paper, we develop an anytime anyspace search algorithm that takes advantage of AND/OR tree structure and optimized variational heuristics to tighten deterministic bounds on the partition function. We study how our priority-driven best-first search scheme can improve on state-of-the-art variational bounds in an anytime way within limited memory resources, as well as how the AND/OR framework exploits conditional independence structure within the search process in the context of summation. We compare our resulting bounds to a number of existing methods, and show that our approach offers a number of advantages on real-world problem instances taken from recent UAI competitions.
Feb 20
No Seminar (Presidents Day)

Feb 27
Bren Hall 4011
1 pm
Eric Nalisnick
PhD Candidate
Department of Computer Science
University of California, Irvine

Deep generative models (such as the Variational Autoencoder) efficiently couple the expressiveness of deep neural networks with the robustness to uncertainty of probabilistic latent variables. This talk will first give an overview of deep generative models, their applications, and approximate inference strategies for them. Then I’ll discuss our work on placing Bayesian Nonparametric priors on their latent space, which allows the hidden representations to grow as the data necessitates.
Mar 6
Bren Hall 4011
1 pm
Omer Levy
Postdoctoral Researcher
Department of Computer Science & Engineering
University of Washington

Neural word embeddings, such as word2vec (Mikolov et al., 2013), have become increasingly popular in both academic and industrial NLP. These methods attempt to capture the semantic meanings of words by processing huge unlabeled corpora with methods inspired by neural networks and the recent onset of Deep Learning. The result is a vectorial representation of every word in a low-dimensional continuous space. These word vectors exhibit interesting arithmetic properties (e.g. king – man + woman = queen) (Mikolov et al., 2013), and seemingly outperform traditional vector-space models of meaning inspired by Harris’s Distributional Hypothesis (Baroni et al., 2014). Our work attempts to demystify word embeddings, and understand what makes them so much better than traditional methods at capturing semantic properties.

Our main result shows that state-of-the-art word embeddings are actually “more of the same”. In particular, we show that skip-grams with negative sampling, the latest algorithm in word2vec, is implicitly factorizing a word-context PMI matrix, which has been thoroughly used and studied in the NLP community for the past 20 years. We also identify that the root of word2vec’s perceived superiority can be attributed to a collection of hyperparameter settings. While these hyperparameters were thought to be unique to neural-network inspired embedding methods, we show that they can, in fact, be ported to traditional distributional methods, significantly improving their performance. Among our qualitative results is a method for interpreting these seemingly-opaque word-vectors, and the answer to why king – man + woman = queen.
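The shifted-PMI-plus-SVD recipe identified by this line of work can be sketched in a few lines; the snippet below is an illustrative dense-matrix toy with assumed variable names (real corpora require sparse matrices and careful counting).

```python
import numpy as np

def svd_embeddings(counts, k=5, dim=100):
    """Build the shifted positive PMI (SPPMI) matrix from word-context
    co-occurrence counts and factorize it with SVD, the explicit matrix
    counterpart of skip-grams with k negative samples."""
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total   # word marginals
    p_c = counts.sum(axis=0, keepdims=True) / total   # context marginals
    p_wc = counts / total

    with np.errstate(divide="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))
    sppmi = np.maximum(pmi - np.log(k), 0.0)          # shift by log k, clip at 0

    U, S, _ = np.linalg.svd(sppmi, full_matrices=False)
    return U[:, :dim] * np.sqrt(S[:dim])              # symmetric SVD weighting

# Toy usage with random counts standing in for a corpus.
rng = np.random.default_rng(0)
toy_counts = rng.poisson(2.0, size=(500, 500)).astype(float)
print(svd_embeddings(toy_counts, k=5, dim=50).shape)
```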

Bio: Omer Levy is a post-doc in the Department of Computer Science & Engineering at the University of Washington, working with Prof. Luke Zettlemoyer. Previously, he completed his BSc and MSc at Technion – Israel Institute of Technology under the guidance of Prof. Shaul Markovitch, and received his PhD at Bar-Ilan University under the supervision of Prof. Ido Dagan and Dr. Yoav Goldberg. Omer is interested in realizing high-level semantic applications such as question answering and summarization to help people cope with information overload. At the heart of these applications are challenges in textual entailment, semantic similarity, and reading comprehension, which form the core of his current research. He is also interested in current advances in deep learning and how they can facilitate semantic applications.

PhD Research Fellowships

The Computer Science department at UC Irvine is seeking applicants for PhD research fellowships in artificial intelligence, machine learning, and their related applications, including topics such as deep learning, statistical learning, graphical models, information extraction, computer vision, high-dimensional data analysis, and more.

Please see this flier for more information.

MidCareer Faculty Positions at UC Irvine

Application deadline: Dec 9th 2016 (Applications received by November 9, 2016 will receive fullest consideration.)

Apply online at: https://recruit.ap.uci.edu/apply/JPF03719

The University of California, Irvine (UCI) is engaged in a multi-year campuswide strategic expansion and seeks to hire midcareer faculty (advanced assistant, tenured associate, and early full professors) in the area of information and computer sciences who have distinguished publication records and upward trajectories in their research profiles.

Qualified applicants with interests in artificial intelligence, computer vision, machine learning, natural language processing, bioinformatics and related topics are encouraged to apply for these positions. UCI has a very active group of faculty in these areas including Anima Anandkumar, Pierre Baldi, Rina Dechter, Charless Fowlkes, Alex Ihler, Rick Lathrop, Eric Mjolsness, Sameer Singh, Padhraic Smyth, Erik Sudderth, and Xiaohui Xie – primarily in the computer science department, with strong interdisciplinary connections to departments such as cognitive science, informatics, and statistics.

Recently celebrating its 50th anniversary, UCI is part of the premier public university system in the world. It was recently named by U.S. News & World Report as a top ten public university and by the New York Times as No. 1 among U.S. universities that do the most for low-income students. UCI is located in one of the world’s safest and most economically vibrant communities and is Orange County’s second-largest employer, contributing $4.8 billion annually to the local economy.

The University of California, Irvine is an Equal Opportunity/Affirmative Action Employer advancing inclusive excellence. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, age, protected veteran status, or other protected categories covered by the UC nondiscrimination policy.

Fall 2016

Sep 22
Bren Hall 4011
1 pm
Burr Settles
Duolingo

Duolingo is a language education platform that teaches 20 languages to more than 150 million students worldwide. Our free flagship learning app is the #1 way to learn a language online, and is the most-downloaded education app for both Android and iOS devices. In this talk, I will describe the Duolingo system and several of our empirical research projects to date, which combine machine learning with computational linguistics and psychometrics to improve learning, engagement, and even language proficiency assessment through our products.
Sep 26
Bren Hall 4011
1 pm
Golnaz Ghiasi
PhD Candidate
Department of Computer Science
University of California, Irvine

Convolutional Neural Net (CNN) architectures have terrific recognition performance but rely on spatial pooling, which makes it difficult to adapt them to tasks that require dense, pixel-accurate labeling. We make two contributions to solving this problem: (1) We demonstrate that while the apparent spatial resolution of convolutional feature maps is low, the high-dimensional feature representation contains significant sub-pixel localization information. (2) We describe a multi-resolution reconstruction architecture based on a Laplacian pyramid that uses skip connections from higher resolution feature maps and multiplicative gating to successively refine segment boundaries reconstructed from lower-resolution maps. This approach yields state-of-the-art semantic segmentation results on the PASCAL VOC and Cityscapes segmentation benchmarks without resorting to more complex random-field inference or instance detection driven architectures.
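A hedged sketch of one refinement stage in the spirit of this architecture appears below (PyTorch): coarse logits are upsampled, a skip branch predicts a residual from higher-resolution features, and a multiplicative gate controls where the residual may edit the prediction. The layer sizes and exact gating form are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefinementStage(nn.Module):
    """One pyramid stage: upsample coarse logits, then add a gated
    boundary residual computed from higher-resolution features."""
    def __init__(self, feat_channels, num_classes):
        super().__init__()
        self.residual = nn.Conv2d(feat_channels, num_classes, 1)
        self.gate = nn.Conv2d(num_classes, 1, 3, padding=1)

    def forward(self, coarse_logits, highres_feats):
        up = F.interpolate(coarse_logits, scale_factor=2,
                           mode="bilinear", align_corners=False)
        g = torch.sigmoid(self.gate(up))       # learns where edits are allowed
        return up + g * self.residual(highres_feats)

stage = RefinementStage(feat_channels=256, num_classes=21)
coarse = torch.randn(1, 21, 16, 16)
feats = torch.randn(1, 256, 32, 32)
print(stage(coarse, feats).shape)              # torch.Size([1, 21, 32, 32])
```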
Oct 3
Bren Hall 4011
1 pm
Shuang Zhao
Assistant Professor
Department of Computer Science
University of California, Irvine

Despite the rapid development of computer graphics in recent years, complex materials such as fabrics, fur, and human hair remain largely absent from virtual worlds. This is due both to the lack of high-fidelity data and to the inability to efficiently describe these complicated objects via mathematical/statistical models.

In this talk, I will present my research that introduces new means to acquire, model, and render complex materials that are essential to our daily lives, with a focus on fabrics. Leveraging detailed geometric information and sophisticated optical models, our work has led to computer-generated imagery with a new level of accuracy and fidelity. In particular, we measure real-world samples using volume imaging (e.g., computed micro-tomography) to obtain detailed datasets on their micro-geometries. We then fit sophisticated statistical models to the measured data, yielding highly compact yet realistic representations. Lastly, we show how to recover a sample’s optical properties (e.g., colors) using optimization.

Oct 10
No Seminar (Columbus Day)

Oct 17
Bren Hall 4011
1 pm
Stefano Ermon
Assistant Professor of Computer Science
Fellow of the Woods Institute for the Environment
Stanford University

Recent technological developments are creating new spatio-temporal data streams that contain a wealth of information relevant to sustainable development goals. Modern AI techniques have the potential to yield accurate, inexpensive, and highly scalable models to inform research and policy. As a first example, I will present a machine learning method we developed to predict and map poverty in developing countries. Our method can reliably predict economic well-being using only high-resolution satellite imagery. Because images are passively collected in every corner of the world, our method can provide timely and accurate measurements in a very scalable and economical way, and could revolutionize efforts towards global poverty eradication. As a second example, I will present some ongoing work on monitoring agricultural and food security outcomes from space.
Oct 24
No Seminar (cancelled)

Oct 31
Bren Hall 4011
1 pm
Matt Harding
Associate Professor
Department of Economics
University of California, Irvine

This talk explores recent applications of machine learning to large proprietary consumer transaction datasets. These are datasets which record barcode-level transaction information on individual items purchased, grouped by shopping trip and customer. Recent innovations in data collection allow us to go beyond the supermarket scanner to collect such data, and include recent efforts to digitize the universe of customers’ receipts across all channels, from supermarkets to online purchases. Additionally, passive wifi tracking allows us to record search behavior in stores and model how it translates into sales. It also gives us the opportunity to create real-time interventions to nudge consumer shopping behavior. We will explore some of the challenges of modeling consumer behavior using these data and discuss methods such as tensor decompositions for count data, discrete choice modeling with Dirichlet Process Mixtures, and the use of deep autoencoders for producing interpretable statistical hypotheses.
Nov 7
Bren Hall 4011
1 pm
Wei Ping
PhD Candidate
Department of Computer Science
University of California, Irvine

This talk investigates the restricted Boltzmann machine (RBM), which is the building block for many deep probabilistic models. We propose an infinite RBM model, whose maximum likelihood estimation corresponds to a constrained convex optimization. We consider the Frank-Wolfe algorithm to solve the program, which provides a sparse solution that can be interpreted as inserting a hidden unit at each iteration. As a side benefit, this can be used to easily and efficiently identify an appropriate number of hidden units during the optimization. We also investigate different learning algorithms for conditional RBMs. There is a pervasive opinion that loopy belief propagation does not work well on RBM-based models, especially for learning. We demonstrate that, in the conditional setting, learning RBM-based models with belief propagation and its variants can provide much better results than the state-of-the-art contrastive divergence algorithms.
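As background for the Frank-Wolfe connection, the snippet below sketches the generic Frank-Wolfe pattern on a toy convex problem over the simplex, where each iteration adds a single vertex (atom), mirroring the abstract's "inserting a hidden unit at each iteration". The objective here is toy least squares, not the RBM likelihood itself; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
b = rng.standard_normal(30)

# Frank-Wolfe: minimize f(x) = 0.5 * ||A x - b||^2 over the probability
# simplex. The linear minimization oracle over the simplex always returns
# a single vertex, so the iterate stays sparse and grows one atom at a time.
x = np.zeros(50)
x[0] = 1.0                                    # start at a vertex
for t in range(1, 200):
    grad = A.T @ (A @ x - b)
    s = np.zeros(50)
    s[np.argmin(grad)] = 1.0                  # best single vertex of the simplex
    gamma = 2.0 / (t + 2.0)                   # standard step-size schedule
    x = (1 - gamma) * x + gamma * s           # convex combination with new atom

print(np.count_nonzero(x), 0.5 * np.linalg.norm(A @ x - b)**2)
```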
Nov 14
Bren Hall 4011
1 pm
Cheng Zhang
PhD Candidate
Department of Mathematics
University of California, Irvine

Traditionally, the field of computational Bayesian statistics has been divided into two main subfields: variational inference and Markov chain Monte Carlo (MCMC). In recent years, however, several methods have been proposed based on combining variational Bayesian inference and MCMC simulation in order to improve their overall accuracy and computational efficiency. This marriage of fast evaluation and flexible approximation provides a promising means of designing scalable Bayesian inference methods. In this work, we explore the possibility of incorporating variational approximation into a state-of-the-art MCMC method, Hamiltonian Monte Carlo (HMC), to reduce the required expensive computation involved in the sampling procedure, which is the bottleneck for many applications of HMC in big data problems. To this end, we exploit the regularity in parameter space to construct a free-form approximation of the target distribution by a fast and flexible surrogate function using an optimized additive model of proper random basis. The surrogate provides sufficiently accurate approximation while allowing for fast computation, resulting in an efficient approximate inference algorithm. We demonstrate the advantages of our method on both synthetic and real data problems.
Nov 16
Bren Hall 4011
4pm
Arindam Banerjee
Associate Professor
Department of Computer Science and Engineering
University of Minnesota

Many machine learning problems, especially scientific problems in areas such as ecology, climate science, and brain sciences, operate in the so-called ‘low samples, high dimensions’ regime. Such problems typically have numerous possible predictors or features, but the number of training examples is small, often much smaller than the number of features. In this talk, we will discuss recent advances in general formulations and estimators for such problems. These formulations generalize prior work such as the Lasso and the Dantzig selector. We will discuss the geometry underlying such formulations, and how the geometry helps in establishing finite sample properties of the estimators. We will also discuss applications of such results in structure learning in probabilistic graphical models, along with real world applications in ecology and climate science.

This is joint work with Soumyadeep Chatterjee, Sheng Chen, Farideh Fazayeli, Andre Goncalves, Jens Kattge, Igor Melnyk, Peter Reich, Franziska Schrodt, Hanhuai Shan, and Vidyashankar Sivakumar.
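As a concrete illustration of the regime described above, the snippet below runs the Lasso (one estimator that the talk's framework generalizes) on a toy problem with far fewer samples than features; the data and parameters are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

# 'Low samples, high dimensions': n = 50 observations, p = 1000 features,
# with a sparse true signal. Ordinary least squares is hopeless here, but
# the Lasso exploits sparsity to recover the active coordinates.
rng = np.random.default_rng(0)
n, p, k = 50, 1000, 5

w_true = np.zeros(p)
w_true[:k] = 3.0 * rng.standard_normal(k)
X = rng.standard_normal((n, p))
y = X @ w_true + 0.1 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)
support = np.flatnonzero(model.coef_)
print(support)   # typically concentrated on the first k coordinates
```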

Nov 21
Bren Hall 4011
1 pm
Qiang Liu
Assistant Professor
Department of Computer Science
Dartmouth College

Stein’s method provides a remarkable theoretical tool in probability theory but has not been widely known or used in practical machine learning. In this talk, we try to bridge this gap and show that some of the key ideas of Stein’s method can be naturally combined with practical machine learning and probabilistic inference techniques such as kernel methods, variational inference, and variance reduction, which together form a new general framework for deriving new algorithms for handling the kind of highly complex, structured probabilistic models widely used in modern (deep) machine learning. The new algorithms derived in this way often have a simple, untraditional form and have significant advantages over the traditional methods. I will show several applications, including goodness-of-fit tests for evaluating models without knowing the normalization constants, scalable Bayesian inference that combines the advantages of variational inference, Monte Carlo and gradient-based optimization, and approximate maximum likelihood training of deep generative models that can generate realistic-looking images.
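One concrete algorithm in this line of work is Stein variational gradient descent (SVGD); the hedged sketch below implements its particle update for a one-dimensional Gaussian target. Bandwidth, step size, and particle count are illustrative choices.

```python
import numpy as np

# SVGD: particles move along a kernelized Stein direction that combines the
# score of the target with a repulsive kernel term, so the particle set
# approximates the target distribution. Target here is N(2, 1).
rng = np.random.default_rng(0)

def grad_log_p(x):                 # score function of N(2, 1)
    return -(x - 2.0)

x = rng.standard_normal(100)       # initial particles from N(0, 1)
eps, h = 0.1, 0.5                  # step size and RBF bandwidth parameter
for _ in range(500):
    diff = x[:, None] - x[None, :]             # pairwise differences x_j - x_i
    K = np.exp(-diff**2 / (2 * h))             # RBF kernel matrix
    gradK = -diff / h * K                      # d/dx_j k(x_j, x_i)
    phi = (K @ grad_log_p(x) + gradK.sum(axis=0)) / len(x)
    x = x + eps * phi

print(x.mean(), x.std())           # approaches ~2.0 and ~1.0
```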
Nov 28
Bren Hall 4011
1 pm
Wolfgang Gatterbauer
Assistant Professor
Tepper School of Business
Carnegie Mellon University

We develop upper and lower bounds for the probability of Boolean functions by treating multiple occurrences of variables as independent and assigning them new individual probabilities. We call this approach “dissociation” and give an exact characterization of optimal oblivious bounds, i.e. when the new probabilities are chosen independent of the probabilities of all other variables.

Our motivation comes from the weighted model counting problem (or, equivalently, the problem of computing the probability of a Boolean function), which is #P-hard in general. By performing several dissociations, one can transform a Boolean formula whose probability is difficult to compute, into one whose probability is easy to compute, and which is guaranteed to provide an upper or lower bound on the probability of the original formula by choosing appropriate probabilities for the dissociated variables. Our new bounds shed light on the connection between previous relaxation-based and model-based approximations and unify them as concrete choices in a larger design space. We also show how our theory allows a standard relational database management system to both upper and lower bound hard probabilistic queries in guaranteed polynomial time. (Based on joint work with Dan Suciu from TODS 2014, VLDB 2015, and VLDBJ 2016: http://arxiv.org/pdf/1409.6052, http://arxiv.org/pdf/1412.1069, http://arxiv.org/pdf/1310.6257)
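A toy illustration of dissociation appears below: in the monotone formula (x AND y) OR (x AND z), the variable x occurs twice; replacing the occurrences with fresh independent variables makes the probability trivial to compute, and suitable probability choices give upper and lower bounds on the exact value. The specific bound choices follow my reading of the paper's characterization for disjunctive dissociations and should be treated as a hedged sketch, verified only empirically on this example.

```python
import itertools
import math

def exact_prob(p, q, r):
    """Exact P((x and y) or (x and z)) by enumerating the three
    independent variables x, y, z with marginals p, q, r."""
    total = 0.0
    for x, y, z in itertools.product([0, 1], repeat=3):
        pr = (p if x else 1-p) * (q if y else 1-q) * (r if z else 1-r)
        if (x and y) or (x and z):
            total += pr
    return total

def dissociated_prob(p1, p2, q, r):
    """P((x1 and y) or (x2 and z)) with all four variables independent;
    easy to compute in closed form once x is dissociated into x1, x2."""
    return 1.0 - (1.0 - p1 * q) * (1.0 - p2 * r)

p, q, r = 0.6, 0.7, 0.5
upper = dissociated_prob(p, p, q, r)          # keep p(x1) = p(x2) = p
p_low = 1.0 - math.sqrt(1.0 - p)              # symmetric choice satisfying
lower = dissociated_prob(p_low, p_low, q, r)  # (1 - p1)(1 - p2) = 1 - p
print(lower, exact_prob(p, q, r), upper)      # lower <= exact <= upper
```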

Dec 5
No Seminar (Finals Week)

Workshop On Interacting with Robots Through Touch

Center affiliate Prof. Jeff Krichmar is co-organizing a workshop on “Interacting with Robots Through Touch” at UC Irvine on September 13, 2016:

Workshop On Interacting with Robots Through Touch at UC Irvine
September 13, 2016
9AM – 6PM
1517 Social and Behavioral Sciences Gateway, University of California, Irvine.

Register at: http://www.socsci.uci.edu/~jkrichma/haptics_workshop.html

Description:
Robots and autonomous systems are increasingly becoming a part of our everyday life. In particular, co-Robots, in which robots have a symbiotic relationship with people, have the potential to increase social well-being and open up new socioeconomic opportunities. For example, Human-Robot Interaction (HRI), co-Robotics, and Socially Assistive Robots (SARs) are increasingly being used for entertainment, education, telepresence, rehabilitation, and therapy. SARs have the potential to help children with developmental disorders, such as autism or attention deficit disorders. Social robots can act as digital ethnographers by automatically detecting which robot-generated activities children enjoy most and by monitoring the development of social structure within the classroom. To date, most of these co-robots focus on eye contact (e.g., shared attention, shared gaze, etc.) and auditory cues (e.g., catch phrases and music), but tend to neglect other sensory systems important for social behavior, such as tactile interaction.

The purpose of this workshop is to explore the use of tactile sensing in HRI and SARs. The day will include talks by invited speakers and a poster session. If you are interested in presenting a poster on this topic, send your abstract to: jkrichma@uci.edu

Confirmed Speakers:

  • Andrea Chiba, University of California, San Diego
  • Deborah Forster, University of California, San Diego
  • William Harwin, University of Reading
  • Guy Hoffman, Cornell University
  • Jeffrey L. Krichmar, University of California, Irvine
  • Francis McGlone, Liverpool John Moores University
  • David J. Reinkensmeyer, University of California, Irvine
  • Veronica J. Santos, University of California, Los Angeles
  • Michael Tolley, University of California, San Diego