Spring 2018

Apr 2
No Seminar
Apr 9
Bren Hall 4011
1 pm
Sabino Miranda, Ph.D.
CONACyT Researcher
Center for Research and Innovation in Information and Communication Technologies


Sentiment Analysis is a research area concerned with the computational analysis of people’s feelings or beliefs expressed in text, such as emotions, opinions, attitudes, and appraisals. At the same time, with the growth of social media data (review websites, microblogging sites, etc.) on the Web, Twitter has received particular attention because it is a huge source of opinionated information, with potential uses in decision-making tasks ranging from business problems to the analysis of social and political events. In this context, I will present the multilingual and error-robust approaches developed in our group to tackle sentiment analysis as a classification problem, mainly for informal written text such as Twitter posts. Our approaches have been tested in several benchmark contests such as SemEval (International Workshop on Semantic Evaluation), TASS (Workshop for Sentiment Analysis Focused on Spanish), and PAN (Workshop on Digital Text Forensics).
Apr 16
Bren Hall 4011
1 pm
Roman Vershynin
Professor of Mathematics
University of California, Irvine

A simple way to generate a Boolean function in n variables is to take the sign of some polynomial. Such functions are called polynomial threshold functions. How many low-degree polynomial threshold functions are there? This problem was solved for degree d=1 by Zuev in 1989 and had remained open ever since for all higher degrees, including d=2. In joint work with Pierre Baldi (UCI), we settled the problem for all degrees d>1. The solution explores connections of Boolean functions to additive combinatorics and high-dimensional probability. This leads to a program of extending random matrix theory to random tensors, which is mostly uncharted territory at present.
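
To make this concrete, here is a small toy sketch (my own illustration, not from the talk): a degree-1 polynomial threshold function is sign(c0 + c1*x1 + ... + cn*xn) on inputs in {-1,1}^n, and for tiny n one can enumerate how few of the 2^(2^n) Boolean functions arise this way.

    import itertools
    import numpy as np

    n = 3  # number of Boolean variables
    points = list(itertools.product([-1, 1], repeat=n))

    def ptf_table(coeffs):
        # truth table of sign(c0 + c1*x1 + ... + cn*xn) over {-1,1}^n
        return tuple(1 if coeffs[0] + np.dot(coeffs[1:], x) > 0 else -1
                     for x in points)

    # enumerate sign patterns realized by small integer coefficients
    tables = set()
    for coeffs in itertools.product(range(-3, 4), repeat=n + 1):
        if any(coeffs):
            tables.add(ptf_table(np.array(coeffs)))
    print(len(tables), "degree-1 PTFs vs", 2 ** 2 ** n, "Boolean functions on n =", n)
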
Apr 23
Bren Hall 4011
1 pm
PhD Candidate, Computer Science
Brown University

We develop new representations and algorithms for three-dimensional (3D) scene understanding from images and videos. In cluttered indoor scenes, RGB-D images are typically described by local geometric features of the 3D point cloud. We introduce descriptors that account for 3D camera viewpoint, and use structured learning to perform 3D object detection and room layout prediction. We also extend this work by using latent support surfaces to capture style variations of 3D objects and help detect small objects. Contextual relationships among categories and layout are captured via a cascade of classifiers, leading to holistic scene hypotheses with improved accuracy. In outdoor autonomous driving applications, given two consecutive frames from a pair of stereo cameras, 3D scene flow methods simultaneously estimate the 3D geometry and motion of the observed scene. We incorporate semantic segmentation in a cascaded prediction framework to more accurately model moving objects by iteratively refining segmentation masks, stereo correspondences, 3D rigid motion estimates, and optical flow fields.
Apr 30
Cancelled
May 7
Bren Hall 4011
1 pm
Assistant Professor
University of Utah

Natural language processing (NLP) has potential applicability in a broad array of user-facing applications. To realize this potential, however, we need to address several challenges related to representations, data availability, and scalability.

In this talk, I will discuss these concerns and how we may overcome them. First, as a motivating example of NLP’s broad reach, I will present our recent work on using language technology to improve mental health treatment. Then, I will focus on some of the challenges that need to be addressed. The choice of representations can make a big difference in our ability to reason about text; I will discuss recent work on developing rich semantic representations. Finally, I will touch upon the problem of systematically speeding up the entire NLP pipeline without sacrificing accuracy. As a concrete example, I will present a new algebraic characterization of the process of feature extraction, as a direct consequence of which, we can make trained classifiers significantly faster.

May 14
Bren Hall 4011
1 pm
PhD Candidate, Computer Science
University of California, Irvine

Objects may appear at arbitrary scales in perspective images of a scene, posing a challenge for recognition systems that process images at a fixed resolution. We propose a depth-aware gating module that adaptively selects the pooling field size (by fusing multi-scale pooled features) in a convolutional network architecture according to the object scale (inversely proportional to the depth) so that small details are preserved for distant objects while larger receptive fields are used for those nearby. The depth gating signal is provided by stereo disparity or estimated directly from monocular input. We further integrate this depth-aware gating into a recurrent convolutional neural network to refine semantic segmentation, and show state-of-the-art performance on several benchmarks.
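
As a rough illustration of the gating idea, here is a minimal PyTorch sketch (module and tensor names are my own hypothetical simplifications, not the authors' code): features pooled at several scales are fused per pixel with weights predicted from the depth signal.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DepthAwareGating(nn.Module):
        def __init__(self, channels, scales=(1, 2, 4)):
            super().__init__()
            self.scales = scales
            # map a 1-channel depth map to one gate per pooling scale
            self.gate = nn.Conv2d(1, len(scales), kernel_size=1)

        def forward(self, feats, depth):
            h, w = feats.shape[-2:]
            pooled = []
            for s in self.scales:
                p = F.avg_pool2d(feats, kernel_size=s, stride=s, ceil_mode=True)
                pooled.append(F.interpolate(p, size=(h, w), mode='bilinear',
                                            align_corners=False))
            g = torch.softmax(self.gate(depth), dim=1)  # (B, S, H, W) gate weights
            return sum(g[:, i:i+1] * pooled[i] for i in range(len(self.scales)))

    feats = torch.randn(2, 64, 32, 32)       # convolutional feature maps
    depth = torch.rand(2, 1, 32, 32)         # stereo or monocular depth estimate
    print(DepthAwareGating(64)(feats, depth).shape)  # torch.Size([2, 64, 32, 32])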

Moreover, rather than fusing multi-scale pooled features based on estimated depth, we show the “correct” size of pooling field for each pixel can be chosen in an attentional fashion by our Pixel-wise Attentional Gating unit (PAG), which learns to select the pooling size for each pixel. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily “plugged in” to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without extra computation cost, and 2) learning a dynamic computation policy for each pixel to decrease total computation while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-of-the-art performance on these tasks. We also show that PAG learns dynamic spatial allocation of computation over the input image, which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe that PAG reduces computation by 10% without noticeable loss in accuracy, and performance degrades gracefully when imposing stronger computational constraints.

May 21
Bren Hall 4011
1 pm
Principal Researcher
Microsoft Research

In machine learning, a tradeoff must often be made between accuracy and intelligibility: the most accurate models usually are not very intelligible (e.g., deep nets, boosted trees, and random forests), and the most intelligible models usually are less accurate (e.g., logistic regression and decision lists). This tradeoff often limits the accuracy of models that can be safely deployed in mission-critical applications such as healthcare, where being able to understand, validate, edit, and ultimately trust a learned model is important. We have been working on a learning method based on generalized additive models (GAMs) that is often as accurate as full-complexity models, but even more intelligible than linear models. This makes it easy to understand what a model has learned, and also makes it easier to edit the model when it learns inappropriate things because of unanticipated problems with the data. Making it possible for experts to understand a model and repair it is critical because most data has unanticipated landmines. In the talk I’ll present two healthcare case studies where these high-accuracy GAMs discover surprising patterns in the data that would have made deploying a black-box model risky. I’ll also briefly show how we’re using these models to detect bias in domains where fairness and transparency are paramount.
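
For intuition about why GAMs stay intelligible, here is a minimal toy sketch (my own numpy backfitting illustration, not the speaker's software): each feature gets its own one-dimensional shape function, so its learned effect can be plotted and inspected directly.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.uniform(-2, 2, size=(n, 2))
    y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)

    def smooth(x, r, bins=25):
        # crude smoother: piecewise-constant fit via binned means of residual r
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.searchsorted(edges, x, side='right') - 1, 0, bins - 1)
        means = np.array([r[idx == b].mean() if np.any(idx == b) else 0.0
                          for b in range(bins)])
        return means[idx]

    f = [np.zeros(n), np.zeros(n)]
    for _ in range(20):  # backfitting: cycle through the features
        for j in range(2):
            partial = y - y.mean() - f[1 - j]
            f[j] = smooth(X[:, j], partial)
            f[j] -= f[j].mean()  # center each shape function for identifiability

    pred = y.mean() + f[0] + f[1]
    print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))  # close to the noise level
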
May 28
Memorial Day
Jun 4
Bren Hall 4011
1 pm
Stephen McAleer (Pierre Baldi’s group)
Graduate Student, Computer Science
University of California, Irvine

We will present a novel approach to solving the Rubik’s cube effectively without any human knowledge using several ingredients including deep learning, reinforcement learning, and Monte Carlo searches.

At the end, if time permits, we will describe several extensions to the neuronal Boolean complexity results presented by Roman Vershynin a few weeks ago.

Jun 11
No Seminar (finals week)

Winter 2018

Jan 15
No Seminar (MLK Day)

Jan 22
Bren Hall 4011
1 pm
Shufeng Kong
PhD Candidate
Centre for Quantum Software and Information, FEIT
University of Technology Sydney, Australia

The Simple Temporal Problem (STP) is a fundamental temporal reasoning problem and has recently been extended to the Multiagent Simple Temporal Problem (MaSTP). In this paper we present a novel approach that is based on enforcing arc-consistency (AC) on the input (multiagent) simple temporal network. We show that the AC-based approach is sufficient for solving both the STP and MaSTP and provide efficient algorithms for them. As our AC-based approach does not impose new constraints between agents, it does not violate the privacy of the agents and is superior to the state-of-the-art approach to MaSTP. Empirical evaluations on diverse benchmark datasets also show that our AC-based algorithms for STP and MaSTP are significantly more efficient than existing approaches.
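
For intuition, here is a minimal sketch (my own illustration, not the paper's algorithm) of arc-consistency on a simple temporal network: each variable has an interval domain, each constraint bounds a difference x_j - x_i within [a, b], and revision shrinks the domains until a fixed point is reached.

    # toy STP: domains are intervals, constraints are difference bounds
    domains = {'x0': (0, 0), 'x1': (0, 100), 'x2': (0, 100)}
    constraints = [('x0', 'x1', 10, 20),   # x1 - x0 in [10, 20]
                   ('x1', 'x2', 30, 40)]   # x2 - x1 in [30, 40]

    changed = True
    while changed:
        changed = False
        for xi, xj, a, b in constraints:
            lo_i, hi_i = domains[xi]
            lo_j, hi_j = domains[xj]
            new_j = (max(lo_j, lo_i + a), min(hi_j, hi_i + b))
            new_i = (max(lo_i, lo_j - b), min(hi_i, hi_j - a))
            if new_j != (lo_j, hi_j) or new_i != (lo_i, hi_i):
                domains[xj], domains[xi] = new_j, new_i
                changed = True

    print(domains)  # x1 narrows to (10, 20), x2 to (40, 60)
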
Jan 29
Bren Hall 4011
1 pm
Postdoctoral Scholar
Paul Allen School of Computer Science and Engineering
University of Washington

Deep learning is one of the most important techniques used in natural language processing (NLP). A central question in deep learning for NLP is how to design a neural network that can fully utilize the information from training data and make accurate predictions. A key to solving this problem is to design a better network architecture.

In this talk, I will present two examples from my work on how structural information from natural language helps design better neural network models. The first example shows that adding coreference structures of entities not only helps different aspects of text modeling, but also improves the performance of language generation; the second example demonstrates that structures organizing sentences into coherent texts can help neural networks build better representations for various text classification tasks. Along the lines of this topic, I will also propose a few ideas for future work and discuss some potential challenges.

Feb 5
No Seminar (AAAI)

Feb 12
Bren Hall 4011
1 pm
PhD Candidate
Computer Science
University of California, Irvine

Bayesian inference for complex models—the kinds needed to solve complex tasks such as object recognition—is inherently intractable, requiring that analytically difficult integrals be solved in high dimensions. One solution is to turn to variational Bayesian inference (VI): a parametrized family of distributions is proposed, and optimization is carried out to find the member of the family nearest to the true posterior. There is an innate trade-off within VI between expressive and tractable approximations. We wish the variational family to be as rich as possible so that it might include the true posterior (or something very close), but adding structure to the approximation increases the computational complexity of optimization. As a result, there has been much interest in efficient optimization strategies for mixture model approximations. In this talk, I’ll return to the problem of using mixture models for VI. First, to motivate our approach, I’ll discuss the distinction between averaging and combining variational models. We show that optimization objectives aimed at fitting mixtures (i.e., model combination) are, in practice, relaxed into performing something between model combination and averaging. Our primary contribution is to formulate a novel training algorithm for variational model averaging by adapting Stein variational gradient descent to operate on the parameters of the approximating distribution. Then, through a particular choice of kernel, we show the algorithm can be adapted to perform something closer to model combination, providing a new algorithm for optimizing (finite) mixture approximations.
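
Since the contribution builds on Stein variational gradient descent, a minimal numpy sketch of the standard SVGD particle update may help (my own toy version on a 1-D Gaussian target; the talk applies the update to the parameters of the approximating distribution instead):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(3.0, 0.5, size=50)   # particles, deliberately off-target
    h = 1.0                             # RBF kernel bandwidth

    def grad_log_p(x):
        return -x                       # target: standard normal

    for _ in range(500):
        diff = x[:, None] - x[None, :]            # diff[i, j] = x_i - x_j
        k = np.exp(-diff ** 2 / (2 * h ** 2))     # kernel matrix k(x_j, x_i)
        # phi_i = mean_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
        phi = (k * grad_log_p(x)[None, :] + diff * k / h ** 2).mean(axis=1)
        x = x + 0.1 * phi               # driving term pulls to the mode,
                                        # kernel-gradient term keeps particles spread

    print(x.mean(), x.std())            # should drift toward 0 and 1
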
Feb 19
No Seminar (President’s Day)

Feb 26
Bren Hall 4011
1 pm
Jay Pujara
Research Scientist
ISI/USC

Knowledge is an essential ingredient in the quest for artificial intelligence, yet scalable and robust approaches to acquiring knowledge have challenged AI researchers for decades. Often, the obstacle to knowledge acquisition is massive, uncertain, and changing data that obscures the underlying knowledge. In such settings, probabilistic models have excelled at exploiting the structure in the domain to overcome ambiguity, revise beliefs, and produce interpretable results. In my talk, I will describe recent work using probabilistic models for knowledge graph construction and information extraction, including linking subjects across electronic health records, fusing background knowledge from scientific articles with gene association studies, disambiguating user browsing behavior across platforms and devices, and aligning structured data sources with textual summaries. I will also highlight several areas of ongoing research, including fusing embedding approaches with probabilistic modeling and building models that support dynamic data or human-in-the-loop interactions.

Bio:
Jay Pujara is a research scientist at the University of Southern California’s Information Sciences Institute whose principal areas of research are machine learning, artificial intelligence, and data science. He completed a postdoc at UC Santa Cruz, earned his PhD at the University of Maryland, College Park and received his MS and BS at Carnegie Mellon University. Prior to his PhD, Jay spent six years at Yahoo! working on mail spam detection, user trust, and contextual mail experiences, and he has also worked at Google, LinkedIn and Oracle. Jay is the author of over thirty peer-reviewed publications and has received three best paper awards for his work. He is a recognized authority on knowledge graphs, and has organized the Automatic Knowledge Base Construction (AKBC) and Statistical Relational AI (StaRAI) workshops, has presented tutorials on knowledge graph construction at AAAI and WSDM, and has had his work featured in AI Magazine.

Mar 5
Bren Hall 4011
1 pm
Assistant Professor
UC Riverside

Tensors and tensor decompositions have been very popular and effective tools for analyzing multi-aspect data in a wide variety of fields, ranging from Psychology to Chemometrics, and from Signal Processing to Data Mining and Machine Learning. Using tensors in the era of big data presents us with a rich variety of applications, but also poses great challenges, such as scalability and efficiency. In this talk I will first motivate the effectiveness of tensor decompositions as data analytic tools in a variety of exciting, real-world applications. Subsequently, I will discuss recent techniques for tackling the scalability and efficiency challenges by parallelizing and speeding up tensor decompositions, especially for very sparse datasets, including the scenario where the data are continuously updated over time. Finally, I will discuss open problems in unsupervised tensor mining and quality assessment of the results, and present work in progress addressing these problems with very encouraging results.
Mar 12
Bren Hall 4011
1 pm
PhD Student
UC Los Angeles

I will describe the basic elements of the Emergence Theory of Deep Learning, which started as a general theory of representations and comprises three parts: (1) We formalize the desirable properties that a representation should possess, based on classical principles of statistical decision and information theory: invariance, sufficiency, minimality, disentanglement. We then show that such an optimal representation of the data can be learned by minimizing a specific loss function which is related to the notion of Information Bottleneck and Variational Inference. (2) We analyze common empirical losses employed in Deep Learning (such as empirical cross-entropy), and implicit or explicit regularizers, including Dropout and Pooling, and show that they bias the network toward recovering such an optimal representation. Finally, (3) we show that minimizing a suitably (implicitly or explicitly) regularized loss with SGD with respect to the weights of the network implies implicit optimization of the loss described in (1), which relates instead to the activations of the network. Therefore, even when we optimize a DNN as a black-box classifier, we are always biased toward learning minimal, sufficient, and invariant representations. The link between (implicit or explicit) regularization of the classification loss and learning of optimal representations is specific to the architecture of deep networks, and is not found in a general classifier. The theory is related to a new version of the Information Bottleneck that studies the weights of a network, rather than the activations, and can also be derived using PAC-Bayes or Kolmogorov complexity arguments, providing independent validation.
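
For reference, the Information Bottleneck objective mentioned in (1), in its usual Lagrangian form (with input x, representation z, task variable y, and trade-off parameter \beta), is

    \min_{p(z \mid x)} \; I(x; z) \;-\; \beta \, I(z; y)
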
Mar 19
No Seminar (Finals Week)

Fall 2017

Oct 9
No Seminar (Columbus Day)

Oct 16
Bren Hall 3011
1 pm
Bailey Kong
PhD Candidate
Department of Computer Science
University of California, Irvine

We investigate the problem of automatically determining what type of shoe left an impression found at a crime scene. This recognition problem is made difficult by the variability in types of crime scene evidence (ranging from traces of dust or oil on hard surfaces to impressions made in soil) and the lack of comprehensive databases of shoe outsole tread patterns. We find that mid-level features extracted by pre-trained convolutional neural nets are surprisingly effective descriptors for these specialized domains. However, the choice of similarity measure for matching exemplars to a query image is essential to good performance. For matching multi-channel deep features, we propose the use of multi-channel normalized cross-correlation and analyze its effectiveness. Finally, we introduce a discriminatively trained variant and fine-tune our system end-to-end, obtaining state-of-the-art performance.
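
A minimal numpy sketch of the similarity measure in question (my own simplification, assuming same-size feature maps): normalize each channel to zero mean and unit variance, correlate, and average over channels.

    import numpy as np

    def mcncc(query, exemplar, eps=1e-8):
        # query, exemplar: (C, H, W) deep feature maps of equal size
        score = 0.0
        for q, e in zip(query, exemplar):
            q = (q - q.mean()) / (q.std() + eps)
            e = (e - e.mean()) / (e.std() + eps)
            score += np.mean(q * e)     # per-channel normalized cross-correlation
        return score / len(query)       # average across channels

    rng = np.random.default_rng(0)
    feat = rng.normal(size=(8, 16, 16))
    print(mcncc(feat, feat))                          # ~1.0: perfect match
    print(mcncc(feat, rng.normal(size=(8, 16, 16))))  # ~0.0: unrelated map
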
Oct 23
Bren Hall 3011
1 pm
Geng Ji
PhD Candidate
Department of Computer Science
University of California, Irvine

We propose a hierarchical generative model that captures the self-similar structure of image regions as well as how this structure is shared across image collections. Our model is based on a novel, variational interpretation of the popular expected patch log-likelihood (EPLL) method as a model for randomly positioned grids of image patches. While previous EPLL methods modeled image patches with finite Gaussian mixtures, we use nonparametric Dirichlet process (DP) mixtures to create models whose complexity grows as additional images are observed. An extension based on the hierarchical DP then captures repetitive and self-similar structure via image-specific variations in cluster frequencies. We derive a structured variational inference algorithm that adaptively creates new patch clusters to more accurately model novel image textures. Our denoising performance on standard benchmarks is superior to EPLL and comparable to the state-of-the-art, and we provide novel statistical justifications for common image processing heuristics. We also show accurate image inpainting results.
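
For reference, the expected patch log-likelihood objective of Zoran and Weiss that this model reinterprets: with P_i the operator extracting the i-th patch, p a patch prior, and A the corruption operator, restoration solves

    \mathrm{EPLL}_p(x) = \sum_i \log p(P_i x), \qquad
    \hat{x} = \arg\min_x \; \tfrac{\lambda}{2} \, \| A x - y \|^2 \;-\; \mathrm{EPLL}_p(x)
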
Oct 30
Bren Hall 4011
1 pm
Qi Lou
PhD Candidate
Department of Computer Science
University of California, Irvine

Computing the partition function is a key inference task in many graphical models. In this paper, we propose a dynamic importance sampling scheme that provides anytime finite-sample bounds for the partition function. Our algorithm balances the advantages of the three major inference strategies, heuristic search, variational bounds, and Monte Carlo methods, blending sampling with search to refine a variationally defined proposal. Our algorithm combines and generalizes recent work on anytime search and probabilistic bounds of the partition function. By using an intelligently chosen weighted average over the samples, we construct an unbiased estimator of the partition function with strong finite-sample confidence intervals that inherit both the rapid early improvement rate of sampling with the long-term benefits of an improved proposal from search. This gives significantly improved anytime behavior, and more flexible trade-offs between memory, time, and solution quality. We demonstrate the effectiveness of our approach empirically on real-world problem instances taken from recent UAI competitions.
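
The identity at the core of such estimators is ordinary importance sampling, sketched here on a toy discrete model (my own illustration; the paper's scheme additionally refines the proposal with search and provides anytime finite-sample bounds):

    import numpy as np

    rng = np.random.default_rng(0)
    f = np.exp(rng.normal(size=10))         # unnormalized probabilities
    Z_true = f.sum()                        # partition function

    q = np.full(10, 0.1)                    # proposal: uniform over 10 states
    samples = rng.choice(10, size=100000, p=q)
    Z_hat = np.mean(f[samples] / q[samples])  # E_q[f/q] = sum_x f(x) = Z
    print(Z_true, Z_hat)                    # estimates agree closely
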
Nov 6
Bren Hall 3011
1 pm
Vladimir Minin
Professor
Department of Statistics
University of California, Irvine

Estimating evolutionary trees, called phylogenies or genealogies, is a fundamental task in modern biology. Once phylogenetic reconstruction is accomplished, scientists are faced with the challenging problem of interpreting phylogenetic trees. In certain situations, a coalescent process, a stochastic model that randomly generates evolutionary trees, comes to the rescue by probabilistically connecting phylogenetic reconstruction with the demographic history of the population under study. An important application of the coalescent is phylodynamics, an area that aims at reconstructing past population dynamics from genomic data. Phylodynamic methods have been especially successful in analyses of genetic sequences from viruses circulating in human populations. From a Bayesian hierarchical modeling perspective, the coalescent process can be viewed as a prior for evolutionary trees, parameterized in terms of unknown demographic parameters, such as the population size trajectory. I will review Bayesian nonparametric techniques that can accomplish phylodynamic reconstruction, with particular attention to the analysis of genetic data sampled serially through time.
Nov 20
No Seminar (Thanksgiving Week)

Dec 4
No Seminar (NIPS Conference)

Dec 13
Bren Hall 4011
1 pm
Yutian Chen
Research Scientist
Google DeepMind

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.

Spring 2017

Apr 10
Bren Hall 4011
1 pm
Mike Izbicki
PhD Candidate
Department of Computer Science
University of California, Riverside

I’ll present two algorithms that use divide and conquer techniques to speed up learning. The first algorithm (called OWA) is a communication efficient distributed learner. OWA uses only two rounds of communication, which is sufficient to achieve optimal learning rates. The second algorithm is a meta-algorithm for fast cross validation. I’ll show that for any divide and conquer learning algorithm, there exists a fast cross validation procedure whose run time is asymptotically independent of the number of cross validation folds.
Apr 17
Bren Hall 4011
1 pm
James Supancic
PhD Candidate
Department of Computer Science
University of California, Irvine

Cameras can naturally capture sequences of images, or videos. And when understanding videos, connecting the past with the present requires tracking. Sometimes tracking is easy. We focus on two challenges which make tracking harder: long-term occlusions and appearance variations. To handle total occlusion, a tracker must know when it has lost track and how to reinitialize tracking when the target reappears. Reinitialization requires good appearance models. We build appearance models for humans and hands, with a particular emphasis on robustness and occlusion. For the second challenge, appearance variation, the tracker must know when and how to re-learn (or update) an appearance model. This challenge leads to the classic problem of drift: aggressively learning appearance changes allows small errors to compound, as elements of the background environment pollute the appearance model. We propose two solutions. First, we consider self-paced learning, wherein a tracker begins by learning from frames it finds easy. As the tracker becomes better at recognizing the target, it begins to learn from harder frames. We also develop a data-driven approach: train a tracking policy to decide when and how to update an appearance model. To take this direct approach to “learning when to learn”, we exploit large-scale Internet data through reinforcement learning. We interpret the resulting policy and conclude with a generalization for tracking multiple objects.
Apr 24
Bren Hall 4011
1 pm
David R. Thompson
Researcher and Technical Group Lead
Jet Propulsion Laboratory
California Institute of Technology

Imaging spectrometers enable quantitative maps of physical and chemical properties at high spatial resolution. They have a long history of deployments for mapping terrestrial and coastal aquatic ecosystems, geology, and atmospheric properties. They are also critical tools for exploring other planetary bodies. These high-dimensional spatio-spectral datasets pose a rich challenge for computer scientists and algorithm designers. This talk will provide an introduction to remote imaging spectroscopy in the Visible and Shortwave Infrared, describing the measurement strategy and data analysis considerations including atmospheric correction. We will describe historical and current instruments, software, and public datasets.

Bio: David R. Thompson is a researcher and Technical Group Lead in the Imaging Spectroscopy group at the NASA Jet Propulsion Laboratory. He is Investigation Scientist for the AVIRIS imaging spectrometer project. Other roles include software lead for the NEAScout mission, autonomy software lead for the PIXL instrument, and algorithm development for diverse JPL airborne imaging spectrometer campaigns. He is recipient of the NASA Early Career Achievement Medal and the JPL Lew Allen Award.

May 1
Bren Hall 4011
1 pm
Weining Shen
Assistant Professor
Department of Statistics
University of California, Irvine

Bayesian nonparametric (BNP) models have been widely used in modern applications. In this talk, I will discuss some recent theoretical results for the commonly used BNP methods from a frequentist asymptotic perspective. I will cover a set of function estimation and testing problems such as density estimation, high-dimensional partial linear regression, independence testing, and independent component analysis. Minimax optimal convergence rates, adaptation and Bernstein-von Mises theorem will be discussed.
May 8
Bren Hall 4011
1 pm
P. Anandan
VP for Research
Adobe Systems

During the last two decades the experience of consumers has been undergoing a fundamental and dramatic transformation – giving a rich variety of informed choices, online shopping, consumption of news and entertainment on the go, and personalized shopping experiences. All of this has been powered by the massive amounts of data that is continuously being collected and the application of machine learning, data science and AI techniques to it.

Adobe is a leader in Digital Marketing and the leading provider of solutions to enterprises serving customers in both the B2B and B2C spaces. In this talk, we will outline the current state of the industry and the technology behind it, and how Data Science and Machine Learning are gradually beginning to transform the experiences of the consumer as well as the marketer. We will also speculate on how recent developments in Artificial Intelligence will lead to deep personalization and richer experiences for the consumer, as well as more powerful and tailored end-to-end capabilities for the marketer.

Bio: Dr. P. Anandan is Vice President in Adobe Research, responsible for developing research strategy for Adobe, especially in Digital Marketing, and Leading the Adobe India Research lab. An emphasis of this lab is on Big Data Experience and Intelligence. At Adobe, he is also leading efforts in applying A.I. to Big Data. Dr. Anandan is an expert in Computer Vision with more than 60 publications that have earned 14,500 citations in Google Scholar. His research areas include visual motion analysis, video surveillance, and 3D scene modeling from images and video. His papers have won multiple awards including the Helmholtz Prize, for long term fundamental contributions to computer vision research. Prior to joining Adobe Dr. Anandan had a long tenure with Microsoft Research in Redmond, WA, and became a Distinguished Scientist. He was the Managing Director of Microsoft Research India, which he founded. Most recently he was the Managing Director of Microsoft Research’s Worldwide Outreach. He earned a PhD from the University of Massachusetts specializing in Computer Vision and Artificial Intelligence. He started as an assistant professor at Yale University before moving on to work in Video Information Processing at the David Sarnoff Research Center. His research has been used in DARPA’s Video Surveillance and Monitoring program as well as in creating special effects in the movies “What Dreams May Come”, “Prince of Egypt,” and “The Matrix.” Dr. Anandan is the recipient of Distinguished Alumnus awards from both University of Massachusetts and the Indian Institute of Technology Madras, where he earned a B. Tech. in Electrical Engineering. He was inducted into the Nebraska Hall of Computing by the University of Nebraska, from where he obtained an MS in Computer Science. He is currently a member of the Board of Governors of IIT Madras.

May 15
Bren Hall 4011
1 pm
Ndapa Nakashole
Assistant Professor
Computer Science and Engineering
University of California, San Diego

Zero-shot learning is used in computer vision, natural language, and other domains to induce mapping functions that project vectors from one vector space to another. This is a promising approach to learning when we do not have labeled data for every possible label we want a system to recognize. This setting is common when doing NLP for low-resource languages, where labeled data is very scarce. In this talk, I will present our work on improving zero-shot learning methods for the task of word-level translation.
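
A minimal numpy sketch of the general setup (my own illustration with synthetic vectors, not the speaker's method): learn an orthogonal map between embedding spaces from a seed dictionary (the Procrustes solution), then translate unseen words by mapping and nearest-neighbor lookup.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_pairs = 50, 200
    X = rng.normal(size=(n_pairs, d))                 # source-language embeddings
    W_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
    Y = X @ W_true + 0.01 * rng.normal(size=(n_pairs, d))  # target-side embeddings

    # orthogonal Procrustes: minimize ||XW - Y||_F subject to W^T W = I
    U, _, Vt = np.linalg.svd(X.T @ Y)
    W = U @ Vt

    x_new = rng.normal(size=d)                        # an unseen source word
    print(np.linalg.norm(x_new @ W - x_new @ W_true))  # small: the map generalizes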

Bio: Ndapa Nakashole is an Assistant Professor in the Department of Computer Science and Engineering at the University of California, San Diego. Prior to UCSD, she was a Postdoctoral Fellow in the Machine Learning Department at Carnegie Mellon University. She obtained her PhD from Saarland University, Germany, for work done at the Max Planck Institute for Informatics at Saarbrücken.

May 22
Bren Hall 4011
1 pm
Batya Kenig
Postdoctoral Scholar
Department of Information Systems Engineering
Technion – Israel Institute of Technology

We propose a novel framework wherein probabilistic preferences can be naturally represented and analyzed in a probabilistic relational database. The framework augments the relational schema with a special type of a relation symbol, a preference symbol. A deterministic instance of this symbol holds a collection of binary relations. Abstractly, the probabilistic variant is a probability space over databases of the augmented form (i.e., probabilistic database). Effectively, each instance of a preference symbol can be represented as a collection of parametric preference distributions such as Mallows. We establish positive and negative complexity results for evaluating Conjunctive Queries (CQs) over databases where preferences are represented in the Repeated Insertion Model (RIM), Mallows being a special case. We show how CQ evaluation reduces to a novel inference problem (of independent interest) over RIM, and devise a solver with polynomial data complexity.
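
For intuition, a minimal sketch (my own illustration) of the Repeated Insertion Model underlying these results: a ranking is built by inserting item i at position j with probability proportional to phi**(i - j), which yields the Mallows distribution.

    import random

    def rim_mallows(items, phi):
        ranking = []
        for i, item in enumerate(items, start=1):
            # insertion position j in 1..i has weight phi**(i - j)
            weights = [phi ** (i - j) for j in range(1, i + 1)]
            r, j = random.uniform(0, sum(weights)), 0
            while r > weights[j]:
                r -= weights[j]
                j += 1
            ranking.insert(j, item)
        return ranking

    random.seed(0)
    # phi near 0 concentrates on the reference order; phi = 1 is uniform
    print(rim_mallows(['a', 'b', 'c', 'd'], phi=0.1))
    print(rim_mallows(['a', 'b', 'c', 'd'], phi=1.0))
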
May 29
No Seminar (Memorial Day)

Jun 5
Bren Hall 4011
1 pm
Yonatan Bisk
Postdoctoral Scholar
Information Sciences Institute
University of Southern California

The future of self-driving cars, personal robots, smart homes, and intelligent assistants hinges on our ability to communicate with computers. The failures and miscommunications of Siri-style systems are untenable and become more problematic as machines become more pervasive and are given more control over our lives. Despite the creation of massive proprietary datasets to train dialogue systems, these systems still fail at the most basic tasks. Further, their reliance on big data is problematic. First, successes in English cannot be replicated in most of the 6,000+ languages of the world. Second, while big data has been a boon for supervised training methods, many of the most interesting tasks will never have enough labeled data to actually achieve our goals. It is therefore important that we build systems which can learn from naturally occurring data and grounded situated interactions.

In this talk, I will discuss work from my thesis on the unsupervised acquisition of syntax which harnesses unlabeled text in over a dozen languages. This exploration leads us to novel insights into the limits of semantics-free language learning. Having isolated these stumbling blocks, I’ll then present my recent work on language grounding where we attempt to learn the meaning of several linguistic constructions via interaction with the world.

Bio: Yonatan Bisk’s research focuses on Natural Language Processing from naturally occurring data (unsupervised and weakly supervised data). He is a postdoc researcher with Daniel Marcu at USC’s Information Sciences Institute. Previously, he received his Ph.D. from the University of Illinois at Urbana-Champaign under Julia Hockenmaier and his BS from the University of Texas at Austin.

Winter 2017

Jan 16
No Seminar (MLK Day)

Jan 23
Bren Hall 4011
1 pm
Mohammad Ghavamzadeh
Senior Analytics Researcher
Adobe Research

In online advertisement, as well as many other fields such as health informatics and computational finance, we often have to deal with a situation in which we are given a batch of data generated by the current strategy(ies) of the company (hospital, investor) and are asked to generate a good or an optimal strategy. Although there are many techniques to find a good policy given a batch of data, there are few results guaranteeing that the obtained policy will perform well in the real system without deploying it. On the other hand, deploying a policy might be risky, and thus requires convincing the product (hospital, investment) manager that it is not going to harm the business. This is why it is extremely important to devise algorithms that generate policies with performance guarantees.

In this talk, we discuss four different approaches to this fundamental problem, which we call model-based, model-free, online, and risk-sensitive. In the model-based approach, we first use the batch of data to build a simulator that mimics the behavior of the dynamical system under study (online advertisement, a hospital’s ER, a financial market), and then use this simulator to generate data and learn a policy. The main challenge here is to have guarantees on the performance of the learned policy, given the error in the simulator. This line of research is closely related to the area of robust learning and control. In the model-free approach, we learn a policy directly from the batch of data (without building a simulator), and the main question is whether the learned policy is guaranteed to perform at least as well as a baseline strategy. This line of research is related to off-policy evaluation and control. In the online approach, the goal is to control the exploration of the algorithm so that at no point during its execution is the loss of using it instead of the baseline strategy more than a given margin. In the risk-sensitive approach, the goal is to learn a policy that manages risk by minimizing some measure of variability in the performance in addition to maximizing a standard criterion. We present algorithms based on these approaches and demonstrate their usefulness in real-world applications such as personalized ad recommendation, energy arbitrage, traffic signal control, and American option pricing.
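
To make the model-free setting concrete, here is a toy numpy sketch of off-policy evaluation by importance sampling in a one-step (bandit) problem (my own illustration; the talk's algorithms add safety guarantees on top of estimators of this kind):

    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([0.5, 0.3, 0.2])           # logging (baseline) policy
    pi = np.array([0.1, 0.1, 0.8])           # target policy to evaluate
    true_reward = np.array([1.0, 0.5, 2.0])  # expected reward per action

    actions = rng.choice(3, size=100000, p=mu)      # logged data from mu
    rewards = true_reward[actions] + rng.normal(0, 0.1, actions.size)

    V_hat = np.mean(pi[actions] / mu[actions] * rewards)  # importance weighting
    print(V_hat, pi @ true_reward)           # both close to 1.75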

Bio: Mohammad Ghavamzadeh received a Ph.D. degree in Computer Science from the University of Massachusetts Amherst in 2005. From 2005 to 2008, he was a postdoctoral fellow at the University of Alberta. He has been a permanent researcher at INRIA in France since November 2008. He was promoted to first-class researcher in 2010, was the recipient of the “INRIA award for scientific excellence” in 2011, and obtained his Habilitation in 2014. He is currently (from October 2013) on a leave of absence from INRIA, working as a senior analytics researcher at Adobe Research in California on projects related to digital marketing. He has been an area chair and a senior program committee member at NIPS, IJCAI, and AAAI. He has been on the editorial board of Machine Learning Journal (MLJ), has published over 50 refereed papers in major machine learning, AI, and control journals and conferences, and has organized several tutorials and workshops at NIPS, ICML, and AAAI. His research is mainly focused on sequential decision-making under uncertainty, reinforcement learning, and online learning.

Jan 27
Bren Hall 6011
11:00am
Ruslan Salakhutdinov
Associate Professor
Machine Learning Department
Carnegie Mellon University

In this talk, I will first introduce a broad class of unsupervised deep learning models and show that they can learn useful hierarchical representations from large volumes of high-dimensional data with applications in information retrieval, object recognition, and speech perception. I will next introduce deep models that are capable of extracting a unified representation that fuses together multiple data modalities and present the Reverse Annealed Importance Sampling Estimator (RAISE) for evaluating these deep generative models. Finally, I will discuss models that can generate natural language descriptions (captions) of images and generate images from captions using attention, as well as introduce multiplicative and fine-grained gating mechanisms with application to reading comprehension.

Bio: Ruslan Salakhutdinov received his PhD in computer science from the University of Toronto in 2009. After spending two post-doctoral years at the Massachusetts Institute of Technology Artificial Intelligence Lab, he joined the University of Toronto as an Assistant Professor in the Departments of Statistics and Computer Science. In 2016 he joined the Machine Learning Department at Carnegie Mellon University as an Associate Professor. Ruslan’s primary interests lie in deep learning, machine learning, and large-scale optimization. He is an action editor of the Journal of Machine Learning Research and served on the senior programme committee of several learning conferences including NIPS and ICML. He is an Alfred P. Sloan Research Fellow, Microsoft Research Faculty Fellow, Canada Research Chair in Statistical Machine Learning, a recipient of the Early Researcher Award, Google Faculty Award, Nvidia’s Pioneers of AI award, and is a Senior Fellow of the Canadian Institute for Advanced Research.

Jan 30
Bren Hall 4011
1 pm
Pierre Baldi & Peter Sadowski
Chancellor’s Professor
Department of Computer Science
University of California, Irvine

Learning in the Machine is a style of machine learning that takes into account the physical constraints of learning machines, from brains to neuromorphic chips. Taking these constraints into account leads to new insights into the foundations of learning systems, and occasionally also to improvements for machine learning performed on digital computers. Learning in the Machine is particularly useful when applied to message passing algorithms such as backpropagation and belief propagation, and leads to the concepts of local learning and the learning channel. These concepts in turn will be applied to random backpropagation and several new variants. In addition to simulations corroborating the remarkable robustness of these algorithms, we will present new mathematical results establishing interesting connections between machine learning and Hilbert’s 16th problem.
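
A minimal numpy sketch of the random backpropagation idea (my own toy version, in the common feedback-alignment form: the backward pass uses a fixed random matrix B in place of the transposed forward weights, yet learning still succeeds):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X @ rng.normal(size=(10, 1)) > 0).astype(float)  # toy binary labels

    W1 = rng.normal(size=(10, 20)) * 0.1
    W2 = rng.normal(size=(20, 1)) * 0.1
    B = rng.normal(size=(20, 1)) * 0.1   # fixed random feedback matrix

    for _ in range(1000):
        h = np.tanh(X @ W1)
        out = 1 / (1 + np.exp(-h @ W2))
        err = out - y                    # gradient at the output
        dh = (err @ B.T) * (1 - h ** 2)  # random feedback instead of W2.T
        W2 -= 0.5 * h.T @ err / len(X)
        W1 -= 0.5 * X.T @ dh / len(X)

    print("train accuracy:", np.mean((out > 0.5) == y))  # well above chance
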
Feb 6
Bren Hall 4011
1 pm
Miles Stoudenmire
Research Scientist
Department of Physics
University of California, Irvine

Tensor networks are a technique for factorizing tensors with hundreds or thousands of indices into a contracted network of low-order tensors. Originally developed at UCI in the 1990s, tensor networks have revolutionized major areas of physics and are starting to be used in applied math and machine learning. I will show that tensor networks fit naturally into a certain class of non-linear kernel learning models, such that advanced optimization techniques from physics can be applied straightforwardly (arxiv:1605.05775). I will discuss many advantages and future directions of tensor network models, for example adaptive pruning of weights and linear scaling with training set size (compared to at least quadratic scaling when using the kernel trick).
Feb 13
Bren Hall 4011
1 pm
Qi Lou
PhD Candidate
Department of Computer Science
University of California, Irvine

Bounding the partition function is a key inference task in many graphical models. In this paper, we develop an anytime anyspace search algorithm taking advantage of AND/OR tree structure and optimized variational heuristics to tighten deterministic bounds on the partition function. We study how our priority-driven best-first search scheme can improve on state-of-the-art variational bounds in an anytime way within limited memory resources, as well as the effect of the AND/OR framework, which exploits conditional independence structure within the search process, in the context of summation. We compare our resulting bounds to a number of existing methods, and show that our approach offers a number of advantages on real-world problem instances taken from recent UAI competitions.
Feb 20
No Seminar (Presidents Day)

Feb 27
Bren Hall 4011
1 pm
Eric Nalisnick
PhD Candidate
Department of Computer Science
University of California, Irvine

Deep generative models (such as the Variational Autoencoder) efficiently couple the expressiveness of deep neural networks with the robustness to uncertainty of probabilistic latent variables. This talk will first give an overview of deep generative models, their applications, and approximate inference strategies for them. Then I’ll discuss our work on placing Bayesian Nonparametric priors on their latent space, which allows the hidden representations to grow as the data necessitates.
Mar 6
Bren Hall 4011
1 pm
Omer Levy
Postdoctoral Researcher
Department of Computer Science & Engineering
University of Washington

Neural word embeddings, such as word2vec (Mikolov et al., 2013), have become increasingly popular in both academic and industrial NLP. These methods attempt to capture the semantic meanings of words by processing huge unlabeled corpora with methods inspired by neural networks and the recent onset of Deep Learning. The result is a vectorial representation of every word in a low-dimensional continuous space. These word vectors exhibit interesting arithmetic properties (e.g. king – man + woman = queen) (Mikolov et al., 2013), and seemingly outperform traditional vector-space models of meaning inspired by Harris’s Distributional Hypothesis (Baroni et al., 2014). Our work attempts to demystify word embeddings, and understand what makes them so much better than traditional methods at capturing semantic properties.

Our main result shows that state-of-the-art word embeddings are actually “more of the same”. In particular, we show that skip-grams with negative sampling, the latest algorithm in word2vec, is implicitly factorizing a word-context PMI matrix, which has been thoroughly used and studied in the NLP community for the past 20 years. We also identify that the root of word2vec’s perceived superiority can be attributed to a collection of hyperparameter settings. While these hyperparameters were thought to be unique to neural-network inspired embedding methods, we show that they can, in fact, be ported to traditional distributional methods, significantly improving their performance. Among our qualitative results is a method for interpreting these seemingly-opaque word-vectors, and the answer to why king – man + woman = queen.
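
A toy numpy sketch of this result (my own illustration): build the shifted positive PMI matrix for a tiny corpus and factorize it with SVD, which is what skip-gram with negative sampling does implicitly.

    import numpy as np

    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}

    # co-occurrence counts with a +/-1 word window
    C = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(corpus):
        for j in (i - 1, i + 1):
            if 0 <= j < len(corpus):
                C[idx[w], idx[corpus[j]]] += 1

    k = 5                                     # SGNS negative samples
    P = C / C.sum()
    pw, pc = P.sum(axis=1, keepdims=True), P.sum(axis=0, keepdims=True)
    with np.errstate(divide='ignore'):
        pmi = np.log(P / (pw * pc))
    sppmi = np.maximum(pmi - np.log(k), 0)    # shifted positive PMI

    U, S, _ = np.linalg.svd(sppmi)
    vectors = U[:, :2] * np.sqrt(S[:2])       # 2-dimensional word vectors
    print({w: np.round(vectors[idx[w]], 2) for w in vocab})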

Bio: Omer Levy is a post-doc in the Department of Computer Science & Engineering at the University of Washington, working with Prof. Luke Zettlemoyer. Previously, he completed his BSc and MSc at Technion – Israel Institute of Technology under the guidance of Prof. Shaul Markovitch, and received his PhD at Bar-Ilan University under the supervision of Prof. Ido Dagan and Dr. Yoav Goldberg. Omer is interested in realizing high-level semantic applications such as question answering and summarization to help people cope with information overload. At the heart of these applications are challenges in textual entailment, semantic similarity, and reading comprehension, which form the core of his current research. He is also interested in the current advances in deep learning and how they can facilitate semantic applications.

Fall 2016

Sep 22
Bren Hall 4011
1 pm
Burr Settles
Duolingo

Duolingo is a language education platform that teaches 20 languages to more than 150 million students worldwide. Our free flagship learning app is the #1 way to learn a language online, and is the most-downloaded education app for both Android and iOS devices. In this talk, I will describe the Duolingo system and several of our empirical research projects to date, which combine machine learning with computational linguistics and psychometrics to improve learning, engagement, and even language proficiency assessment through our products.
Sep 26
Bren Hall 4011
1 pm
Golnaz Ghiasi
PhD Candidate
Department of Computer Science
University of California, Irvine

Convolutional Neural Net (CNN) architectures have terrific recognition performance but rely on spatial pooling which makes it difficult to adapt them to tasks that require dense, pixel-accurate labeling. We make two contributions to solving this problem: (1) We demonstrate that while the apparent spatial resolution of convolutional feature maps is low, the high-dimensional feature representation contains significant sub-pixel localization information. (2) We describe a multi-resolution reconstruction architecture based on a Laplacian pyramid that uses skip connections from higher resolution feature maps and multiplicative gating to successively refine segment boundaries reconstructed from lower-resolution maps. This approach yields state-of-the-art semantic segmentation results on the PASCAL VOC and Cityscapes segmentation benchmarks without resorting to more complex random-field inference or instance detection driven architectures.
Oct 3
Bren Hall 4011
1 pm
Shuang Zhao
Assistant Professor
Department of Computer Science
University of California, Irvine

Despite the rapid development of computer graphics in recent years, complex materials such as fabrics, fur, and human hair remain largely lacking in virtual worlds. This is due both to the lack of high-fidelity data and to the inability to efficiently describe these complicated objects via mathematical/statistical models.

In this talk, I will present my research that introduces new means to acquire, model, and render complex materials that are essential to our daily lives, with a focus on fabrics. Leveraging detailed geometric information and sophisticated optical models, our work has led to computer-generated imagery with a new level of accuracy and fidelity. In particular, we measure real-world samples using volume imaging (e.g., computed micro-tomography) to obtain detailed datasets on their micro-geometries. We then fit sophisticated statistical models to the measured data, yielding highly compact yet realistic representations. Lastly, we show how to recover a sample’s optical properties (e.g., colors) using optimization.

Oct 10
No Seminar (Columbus Day)

Oct 17
Bren Hall 4011
1 pm
Stefano Ermon
Assistant Professor of Computer Science
Fellow of the Woods Institute for the Environment
Stanford University

Recent technological developments are creating new spatio-temporal data streams that contain a wealth of information relevant to sustainable development goals. Modern AI techniques have the potential to yield accurate, inexpensive, and highly scalable models to inform research and policy. As a first example, I will present a machine learning method we developed to predict and map poverty in developing countries. Our method can reliably predict economic well-being using only high-resolution satellite imagery. Because images are passively collected in every corner of the world, our method can provide timely and accurate measurements in a very scalable and economical way, and could revolutionize efforts towards global poverty eradication. As a second example, I will present some ongoing work on monitoring agricultural and food security outcomes from space.
Oct 24
No Seminar (cancelled)

Oct 31
Bren Hall 4011
1 pm
Matt Harding
Associate Professor
Department of Economics
University of California, Irvine

This talk explores recent applications of machine learning to large proprietary consumer transaction datasets. These are datasets that record barcode-level transaction information on individual items purchased, grouped by shopping trip and customer. Recent innovations in data collection allow us to go beyond the supermarket scanner to collect such data, including recent efforts to digitize the universe of customers’ receipts across all channels, from supermarkets to online purchases. Additionally, passive wifi tracking allows us to record search behavior in stores and model how it translates into sales. It also gives us the opportunity to create real-time interventions to nudge consumer shopping behavior. We will explore some of the challenges of modeling consumer behavior using these data and discuss methods such as tensor decompositions for count data, discrete choice modeling with Dirichlet Process Mixtures, and the use of deep autoencoders for producing interpretable statistical hypotheses.
Nov 7
Bren Hall 4011
1 pm
Wei Ping
PhD Candidate
Department of Computer Science
University of California, Irvine

This talk investigates the restricted Boltzmann machine (RBM), which is the building block for many deep probabilistic models. We propose an infinite RBM model, whose maximum likelihood estimation corresponds to a constrained convex optimization. We consider the Frank-Wolfe algorithm to solve the program, which provides a sparse solution that can be interpreted as inserting a hidden unit at each iteration. As a side benefit, this can be used to easily and efficiently identify an appropriate number of hidden units during the optimization. We also investigate different learning algorithms for conditional RBMs. There is a pervasive opinion that loopy belief propagation does not work well on RBM-based models, especially for learning. We demonstrate that, in the conditional setting, learning RBM-based models with belief propagation and its variants can provide much better results than the state-of-the-art contrastive divergence algorithms.
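
For intuition, here is a generic Frank-Wolfe sketch (my own toy example on the probability simplex, not the paper's RBM objective): each iteration moves toward a single vertex of the feasible set, which mirrors how the infinite-RBM procedure inserts one hidden unit per iteration and yields sparse solutions.

    import numpy as np

    target = np.array([0.1, 0.2, 0.7])
    x = np.ones(3) / 3                   # start at the simplex center

    for t in range(200):
        grad = x - target                # gradient of 0.5 * ||x - target||^2
        s = np.zeros(3)
        s[np.argmin(grad)] = 1.0         # linear minimization over the simplex
        gamma = 2 / (t + 2)              # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible

    print(x)  # approaches target as a sparse combination of vertices
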
Nov 14
Bren Hall 4011
1 pm
Cheng Zhang
PhD Candidate
Department of Mathematics
University of California, Irvine

Traditionally, the field of computational Bayesian statistics has been divided into two main subfields: variational inference and Markov chain Monte Carlo (MCMC). In recent years, however, several methods have been proposed based on combining variational Bayesian inference and MCMC simulation in order to improve their overall accuracy and computational efficiency. This marriage of fast evaluation and flexible approximation provides a promising means of designing scalable Bayesian inference methods. In this work, we explore the possibility of incorporating variational approximation into a state-of-the-art MCMC method, Hamiltonian Monte Carlo (HMC), to reduce the required expensive computation involved in the sampling procedure, which is the bottleneck for many applications of HMC in big data problems. To this end, we exploit the regularity in parameter space to construct a free-form approximation of the target distribution by a fast and flexible surrogate function using an optimized additive model of proper random basis. The surrogate provides sufficiently accurate approximation while allowing for fast computation, resulting in an efficient approximate inference algorithm. We demonstrate the advantages of our method on both synthetic and real data problems.
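
A minimal numpy sketch of plain HMC (my own toy version, without the paper's surrogate) that highlights the cost being attacked: every leapfrog step needs a gradient of the log target, which the proposed surrogate replaces with a cheap approximation.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_p(x):      return -0.5 * np.sum(x ** 2)   # 2-D standard normal
    def grad_log_p(x): return -x                      # expensive in big-data models

    def hmc_step(x, eps=0.1, n_leapfrog=20):
        p = rng.normal(size=x.shape)
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_p(x_new)        # leapfrog half step
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_p(x_new)          # one gradient per step
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_p(x_new)
        log_acc = (log_p(x_new) - 0.5 * p_new @ p_new) - (log_p(x) - 0.5 * p @ p)
        return x_new if np.log(rng.uniform()) < log_acc else x

    x, samples = np.zeros(2), []
    for _ in range(2000):
        x = hmc_step(x)
        samples.append(x)
    print(np.cov(np.array(samples).T))  # approaches the 2x2 identity
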
Nov 16
Bren Hall 4011
4pm
Arindam Banerjee
Associate Professor
Department of Computer Science and Engineering
University of Minnesota

Many machine learning problems, especially scientific problems in areas such as ecology, climate science, and brain sciences, operate in the so-called ‘low samples, high dimensions’ regime. Such problems typically have numerous possible predictors or features, but the number of training examples is small, often much smaller than the number of features. In this talk, we will discuss recent advances in general formulations and estimators for such problems. These formulations generalize prior work such as the Lasso and the Dantzig selector. We will discuss the geometry underlying such formulations, and how the geometry helps in establishing finite sample properties of the estimators. We will also discuss applications of such results in structure learning in probabilistic graphical models, along with real world applications in ecology and climate science.
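
As a reference point, the Lasso, the best-known special case of the estimators being generalized here, solves

    \hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2n} \| y - X \beta \|_2^2 + \lambda_n \| \beta \|_1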

This is joint work with Soumyadeep Chatterjee, Sheng Chen, Farideh Fazayeli, Andre Goncalves, Jens Kattge, Igor Melnyk, Peter Reich, Franziska Schrodt, Hanhuai Shan, and Vidyashankar Sivakumar.

Nov 21
Bren Hall 4011
1 pm
Qiang Liu
Assistant Professor
Department of Computer Science
Dartmouth College

Stein’s method provides a remarkable theoretical tool in probability theory but has not been widely known or used in practical machine learning. In this talk, we try to bridge this gap and show that some of the key ideas of Stein’s method can be naturally combined with practical machine learning and probabilistic inference techniques such as kernel methods, variational inference, and variance reduction, which together form a new general framework for deriving new algorithms for handling the kind of highly complex, structured probabilistic models widely used in modern (deep) machine learning. The new algorithms derived in this way often have a simple, untraditional form and have significant advantages over traditional methods. I will show several applications, including goodness-of-fit tests for evaluating models without knowing the normalization constants, scalable Bayesian inference that combines the advantages of variational inference, Monte Carlo, and gradient-based optimization, and approximate maximum likelihood training of deep generative models that can generate realistic-looking images.
Nov 28
Bren Hall 4011
1 pm
Wolfgang Gatterbauer
Assistant Professor
Tepper School of Business
Carnegie Mellon University

We develop upper and lower bounds for the probability of Boolean functions by treating multiple occurrences of variables as independent and assigning them new individual probabilities. We call this approach “dissociation” and give an exact characterization of optimal oblivious bounds, i.e. when the new probabilities are chosen independent of the probabilities of all other variables.

Our motivation comes from the weighted model counting problem (or, equivalently, the problem of computing the probability of a Boolean function), which is #P-hard in general. By performing several dissociations, one can transform a Boolean formula whose probability is difficult to compute into one whose probability is easy to compute, and which is guaranteed to provide an upper or lower bound on the probability of the original formula by choosing appropriate probabilities for the dissociated variables. Our new bounds shed light on the connection between previous relaxation-based and model-based approximations and unify them as concrete choices in a larger design space. We also show how our theory allows a standard relational database management system to both upper and lower bound hard probabilistic queries in guaranteed polynomial time. (Based on joint work with Dan Suciu from TODS 2014, VLDB 2015, and VLDBJ 2016: http://arxiv.org/pdf/1409.6052, http://arxiv.org/pdf/1412.1069, http://arxiv.org/pdf/1310.6257)
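
A toy sketch of the dissociation idea (my own example, using brute-force enumeration that is feasible only for tiny formulas): splitting the shared variable x of f = (x AND y) OR (x AND z) into independent copies x1, x2 that keep the probability of x yields an easy-to-compute upper bound.

    from itertools import product

    def prob(formula, probs):
        # exact probability by enumeration over independent Boolean variables
        names = sorted(probs)
        total = 0.0
        for values in product([0, 1], repeat=len(names)):
            assign = dict(zip(names, values))
            w = 1.0
            for v in names:
                w *= probs[v] if assign[v] else 1 - probs[v]
            total += w * (1 if formula(assign) else 0)
        return total

    f      = lambda a: (a['x'] and a['y']) or (a['x'] and a['z'])
    f_diss = lambda a: (a['x1'] and a['y']) or (a['x2'] and a['z'])

    px, py, pz = 0.5, 0.6, 0.7
    print(prob(f, {'x': px, 'y': py, 'z': pz}))                  # 0.44, exact
    print(prob(f_diss, {'x1': px, 'x2': px, 'y': py, 'z': pz}))  # 0.545, upper bound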

Dec 5
No Seminar
Finals Week

Spring 2016

Standard

Apr 4
No Seminar (Cancelled)

Apr 11
Bren Hall 4011
1 pm
Venkat Chandrasekaran
Assistant Professor
Computing and Mathematical Sciences & Electrical Engineering
California Institute of Technology

Extracting structured planted subgraphs from large graphs is a fundamental question that arises in a range of application domains. We describe a computationally tractable approach based on convex optimization to recover certain families of structured graphs that are embedded in larger graphs containing spurious edges. Our method relies on tractable semidefinite descriptions of majorization inequalities on the spectrum of a matrix, and we give conditions on the eigenstructure of a planted graph in relation to the noise level under which our algorithm succeeds. (Joint work with Utkan Candogan.)
Apr 18
Bren Hall 4011
1 pm
Zach Chase Lipton
PhD Candidate
Department of Computer Science
University of California, San Diego

Clinical medical data, especially in the intensive care unit (ICU), consist of multivariate time series of observations. For each patient visit (or episode), sensor data and lab test results are recorded in the patient’s Electronic Health Record (EHR). While potentially containing a wealth of insights, the data is difficult to mine effectively, owing to varying length, irregular sampling and missing data. Recurrent Neural Networks (RNNs), particularly those using Long Short-Term Memory (LSTM) hidden units, are powerful and increasingly popular models for learning from sequence data. They effectively model varying length sequences and capture long range dependencies. We present the first study to empirically evaluate the ability of LSTMs to recognize patterns in multivariate time series of clinical measurements. Specifically, we consider multilabel classification of diagnoses, training a model to classify 128 diagnoses given 13 frequently but irregularly sampled clinical measurements. First, we establish the effectiveness of a simple LSTM network for modeling clinical data. Then we demonstrate a straightforward and effective training strategy in which we replicate targets at each sequence step. Trained only on raw time series, our models outperform several strong baselines, including a multilayer perceptron trained on hand-engineered features.
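
The target replication strategy is easy to express in code. Here is a hedged PyTorch sketch; the shapes and hyperparameters are illustrative, not the study's.

# Sketch of an LSTM with target replication: the multilabel loss is applied
# at every time step, not only at the end of the sequence.
import torch
import torch.nn as nn

class ReplicatedTargetLSTM(nn.Module):
    def __init__(self, n_inputs=13, n_hidden=64, n_labels=128):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_labels)

    def forward(self, x):                       # x: (batch, time, n_inputs)
        h, _ = self.lstm(x)
        return self.out(h)                      # logits at every time step

model = ReplicatedTargetLSTM()
x = torch.randn(8, 48, 13)                      # 8 episodes, 48 time steps
y = torch.randint(0, 2, (8, 128)).float()       # episode-level diagnosis labels
logits = model(x)
loss = nn.BCEWithLogitsLoss()(logits, y[:, None, :].expand_as(logits))
loss.backward()
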
Apr 25
Bren Hall 4011
1 pm
Jasper Vrugt
Associate Professor
Department of Environmental Engineering
University of California, Irvine

Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. In this talk I will review the basic elements of the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm developed by Vrugt et al. (2008, 2009) and used for Bayesian inference in fields ranging from physics, chemistry and engineering to ecology, hydrology, and geophysics. I will also discuss recent developments of DREAM, including a diagnostic model evaluation framework using likelihood-free inference, and the use of dimensionality-reduction techniques for calibration of CPU-intensive system models. Practical examples from many different fields of study will be used throughout.
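
The differential-evolution proposal at the heart of DREAM (ter Braak, 2006) fits in a few lines. This is a minimal sketch of that building block only, not the full DREAM algorithm; the target and tuning constants are placeholders.

# Minimal differential-evolution MCMC step: each chain proposes a jump along
# the difference of two other chains.
import numpy as np

def demc_step(chains, log_post, eps=1e-6, rng=np.random):
    n, d = chains.shape
    gamma = 2.38 / np.sqrt(2 * d)
    new = chains.copy()
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) \
                         + eps * rng.standard_normal(d)
        if np.log(rng.rand()) < log_post(prop) - log_post(chains[i]):
            new[i] = prop
    return new

log_post = lambda th: -0.5 * np.sum(th**2)      # toy standard-normal posterior
chains = np.random.randn(10, 3) * 5             # 10 parallel chains in 3-D
for _ in range(2000):
    chains = demc_step(chains, log_post)
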
May 2
Bren Hall 4011
1 pm
Jeffrey Mark Siskind
Associate Professor
Department of Electrical & Computer Engineering
Purdue University

Humans can describe observations and act upon requests. This requires that language be grounded in perception and motor control. I will present several components of my long-term research program to understand the vision-language-motor interface in the human brain and emulate such on computers.

In the first half of the talk, I will present an fMRI investigation of the vision-language interface in the human brain. Subjects were presented with stimuli in different modalities—spoken sentences, textual presentation of sentences, and video clips depicting activity that can be described by sentences—while undergoing fMRI. The scan data is analyzed to allow readout of individual constituent concepts and words—people/names, objects/nouns, actions/verbs, and spatial-relations/prepositions—as well as phrases and entire sentences. This can be done across subjects and across modalities; we use classifiers trained on scan data for one subject to read out from another subject, and classifiers trained on scan data for one modality, say text, to read out from scans of another modality, say video or speech. This analysis indicates that the brain regions involved in processing the different kinds of constituents are largely disjoint from one another, yet largely shared across subjects and modalities. Further, we can determine the predication relations; when the stimuli depict multiple people, objects, and actions, we can read out which people are performing which actions with which objects. This points to a compositional mental semantic representation common across subjects and modalities.

In the second half of the talk, I will use this work to motivate the development of three computational systems. First, I will present a system that can use sentential descriptions of human interaction with previously unseen objects in video to automatically find and track those objects. This is done without any annotation of the objects and without any pretrained object detectors. Second, I will present a system that learns the meanings of nouns and prepositions from video and tracks of a mobile robot navigating through its environment, paired with sentential descriptions of such activity. Such a learned language model then supports both generation of sentential descriptions of new paths driven in new environments and automatic driving of paths to satisfy navigational instructions specified with new sentences in new environments. Third, I will present a system that reads the rules of a physically grounded game of checkers from natural-language instructions and then plays it, using vision to determine the game state and robotic arms to change it.

Joint work with Andrei Barbu, Daniel Paul Barrett, Charles Roger Bradley, Seth Benjamin Scott Alan Bronikowski, Zachary Burchill, Wei Chen, N. Siddharth, Caiming Xiong, Haonan Yu, Jason J. Corso, Christiane D. Fellbaum, Catherine Hanson, Stephen Jose Hanson, Sebastien Helie, Evguenia Malaia, Barak A. Pearlmutter, Thomas Michael Talavage, and Ronnie B. Wilbur.

Bio: Jeffrey M. Siskind received the B.A. degree in computer science from the Technion, Israel Institute of Technology, Haifa, in 1979, the S.M. degree in computer science from the Massachusetts Institute of Technology (M.I.T.), Cambridge, in 1989, and the Ph.D. degree in computer science from M.I.T. in 1992. He did a postdoctoral fellowship at the University of Pennsylvania Institute for Research in Cognitive Science from 1992 to 1993. He was an assistant professor at the University of Toronto Department of Computer Science from 1993 to 1995, a senior lecturer at the Technion Department of Electrical Engineering in 1996, a visiting assistant professor at the University of Vermont Department of Computer Science and Electrical Engineering from 1996 to 1997, and a research scientist at NEC Research Institute, Inc. from 1997 to 2001. He joined the Purdue University School of Electrical and Computer Engineering in 2002 where he is currently an associate professor. His research interests include computer vision, robotics, artificial intelligence, neuroscience, cognitive science, computational linguistics, child language acquisition, automatic differentiation, and programming languages and compilers.

May 9
Bren Hall 4011
1 pm
Forest Agostinelli
PhD Candidate
Department of Computer Science
University of California, Irvine

Circadian rhythms date back to the origins of life, are found in virtually every species and every cell, and play fundamental roles in functions ranging from metabolism to cognition. Modern high-throughput technologies allow the measurement of concentrations of transcripts, metabolites, and other species along the circadian cycle, creating novel computational challenges and opportunities, including the problems of inferring whether a given species oscillates in circadian fashion or not, and inferring the time at which a set of measurements was taken. Because taking these measurements is expensive, the resulting data are sparse, with only a few replicates, which makes many existing methods unreliable. In addition, many differential gene expression experiments, such as those contained in the GEO repository, have been carried out at single time points, without taking into account circadian oscillations, which can act as confounding factors. To address these problems we introduce two deep learning methods: BIO_CYCLE and BIO_CLOCK. BIO_CYCLE takes advantage of synthetic data to determine whether or not a signal oscillates in a circadian fashion, and to infer its period, amplitude, and phase. BIO_CLOCK, using a specialized cost function and real-world data, imputes the time at which a sample was taken from the corresponding gene expression measurements. These tools are a necessary step toward better understanding circadian rhythms at the molecular level and their applications to precision medicine.
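
As a point of reference for what BIO_CYCLE improves upon, the classical cosinor baseline fits a 24-hour cosine by least squares and reads off the amplitude; a toy sketch (not the talk's deep network):

# Classical cosinor baseline: fit y ~ m + a*cos(2*pi*t/24) + b*sin(2*pi*t/24)
# by least squares and report the oscillation amplitude.
import numpy as np

def cosinor(t, y, period=24.0):
    w = 2 * np.pi / period
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (m, a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.hypot(a, b)                       # oscillation amplitude

t = np.array([0.0, 4, 8, 12, 16, 20, 24, 28, 36, 44])   # sparse, irregular times
y = 2 + 1.5 * np.cos(2 * np.pi * (t - 6) / 24) + 0.2 * np.random.randn(len(t))
print(cosinor(t, y))                            # ~1.5 for an oscillating signal
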
May 16
Bren Hall 4011
1 pm
Aparna Chandramowlishwaran
Assistant Professor
Department of Electrical Engineering
University of California, Irvine

In this talk, I’ll present my group’s work on addressing two key challenges in developing parallel algorithms and software for the class of N-body problems on current and future platforms. The first challenge is reducing the apparent performance gap between code generated from high-level forms and hand-tuned code, which we address by extensively characterizing the optimization space for these computations and automating the process through domain-specific code generators. These application-specific compilers give domain scientists the ability to productively harness the power of these large machines and enable large-scale scientific simulations and big data applications.

The second challenge is analyzing and designing algorithms. We are entering the era of exascale. The number of cores is growing at a much faster rate than bandwidth per node. What implications does this trend have for designing algorithms for future systems? If we were to model computation and communication costs, what inferences could we derive from such a model about the time to execute an algorithm? Our model suggests a new kind of high-level analytical co-design of algorithm and architecture, and similar analysis can be applied to algorithm design in general.
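
A back-of-the-envelope version of such a cost model looks like this; all machine numbers below are invented for illustration.

# Toy compute-vs-communication model: execution time is bounded below by
# max(flops / peak, bytes moved / bandwidth). Numbers are illustrative only.
peak_flops = 1e12                # 1 Tflop/s per node
bandwidth = 1e11                 # 100 GB/s per node

def exec_time(flops, bytes_moved):
    return max(flops / peak_flops, bytes_moved / bandwidth)

# An algorithm must do >= peak/bandwidth flops per byte to stay compute-bound.
print("critical intensity:", peak_flops / bandwidth, "flops/byte")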

May 23
Bren Hall 4011
1 pm
Krishnamurthy Dvijotham
Postdoctoral Fellow
Center for Mathematics of Information
California Institute of Technology

Several problems arising in the design, analysis and efficient operation of power systems are naturally posed as graph-structured optimization problems. Due to the nonlinear nature of the physical equations describing the power grid, these problems are often nonconvex and NP-hard. However, practical instances of several graph-structured optimization problems have been solved successfully in the graphical models literature by exploiting graph structure and using message-passing or belief propagation techniques. In this work, we show that a similar approach can be successfully applied to power systems, leading to theoretically and practically efficient algorithms. I will discuss two applications in detail: a) Solving mixed-integer optimal power flow problems on distribution networks, and b) Detecting and mitigating market manipulation by aggregators of renewable generation in a distribution-level market. I will also discuss possible extensions of these approaches to other power system/infrastructure network problems.
Based on joint work with Misha Chertkov, Sidhant Misra, Marc Vuffray, Pascal Van Hentenryck, Niangjun Chen, Navid Azizan Ruhi and Adam Wierman.
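
The message-passing primitive underlying these algorithms is easy to state. Below is a toy min-sum (Viterbi-style) pass on a three-node chain with made-up costs, not the power-flow formulation itself.

# Toy min-sum message passing on a chain-structured model: unary costs per
# node, a shared pairwise disagreement cost, forward messages, then backtrack.
import numpy as np

unary = [np.array([1.0, 0.0]), np.array([0.0, 0.5]), np.array([0.3, 0.0])]
pair = np.array([[0.0, 1.0], [1.0, 0.0]])       # cost 1 if neighbors disagree

msgs = [unary[0]]
for u in unary[1:]:
    msgs.append(u + (msgs[-1][:, None] + pair).min(axis=0))

states = [int(np.argmin(msgs[-1]))]             # backtrack the minimizer
for i in range(len(unary) - 2, -1, -1):
    states.append(int(np.argmin(msgs[i] + pair[:, states[-1]])))
print("MAP assignment:", states[::-1], "cost:", msgs[-1].min())
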
May 30
No Seminar (Memorial Day)

Winter 2016

Standard

Jan 11
Bren Hall 4011
1 pm
Padhraic Smyth
Professor
Department of Computer Science
University of California, Irvine

Social network analysis has a long and successful history in the social sciences, often with a focus on relatively small survey-based data sets. In the past decade, driven by the ease of automatically collecting large-scale network data sets, there has been significant interest in developing new statistical and machine learning techniques for network analysis. In this talk we will focus on two general modeling themes in this context: the use of latent variables for low-dimensional vector-based network representations, and event-based models for temporal network data. We will review the representational capabilities of these models from a generative perspective, discuss some of the challenges of parameter estimation that arise, and emphasize the role of predictive evaluation. The talk will conclude with a brief discussion of future directions in this general area.

Based on joint work with Zach Butler, Chris DuBois, Jimmy Foulds, and Carter Butts.
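
To make the first theme concrete, here is a generative sketch of a latent space network model, where edge probability decays with distance between latent positions (toy parameters, not a model from the talk):

# Latent space network model, generative sketch: nodes get latent positions,
# and closer pairs are more likely to connect (logistic link).
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 30, 2.0
z = rng.standard_normal((n, 2))                 # 2-D latent positions
d = np.linalg.norm(z[:, None] - z[None, :], axis=-1)
p = 1 / (1 + np.exp(d - alpha))                 # edge probability decays with d
A = np.triu((rng.random((n, n)) < p).astype(int), 1)
A = A + A.T                                     # symmetric adjacency, no self-loops
print("edges:", A.sum() // 2)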

Jan 18
No Seminar (MLK Day)

Jan 25
Bren Hall 4011
1 pm
James Foulds
Postdoctoral Fellow
Department of Computer Science
University of California, San Diego

Topic models have become increasingly prominent text-analytic machine learning tools for research in the social sciences and the humanities. In particular, custom topic models can be developed to answer specific research questions. The design of these models requires a nontrivial amount of effort and expertise, motivating general-purpose topic modeling frameworks. In this talk I will introduce latent topic networks, a flexible class of richly structured topic models designed to facilitate applied research. Custom models can straightforwardly be developed in this framework with an intuitive first-order logical probabilistic programming language. Latent topic networks admit scalable training via a parallelizable EM algorithm which leverages ADMM in the M-step. I demonstrate the broad applicability of the models with case studies on modeling influence in citation networks, and U.S. Presidential State of the Union addresses. This talk is based on joint work with Lise Getoor and Shachi Kumar from the University of California, Santa Cruz, published at ICML 2015.
Feb 1
Bren Hall 4011
1 pm
Furong Huang
PhD Candidate
Department of Electrical Engineering
University of California, Irvine

Latent or hidden variable models have applications in almost every domain, e.g., social network analysis, natural language processing, computer vision and computational biology. Training latent variable models is challenging due to the non-convexity of the likelihood objective function. An alternative method is based on the spectral decomposition of low-order moment matrices and tensors. This versatile framework is guaranteed to estimate the correct model consistently. I will discuss my results on convergence to the globally optimal solution for stochastic gradient descent, despite the non-convexity of the objective. I will then discuss large-scale implementations (which are highly parallel and scalable) of spectral methods, carried out on CPU/GPU and Spark platforms. We obtain gains in both accuracy and running time of several orders of magnitude compared to state-of-the-art variational methods. I will discuss the following applications in detail: (1) learning hidden user commonalities (communities) in social networks, and (2) learning sentence embeddings for paraphrase detection using convolutional models. More generally, I have applied these methods to a variety of problems such as text and social network analysis, healthcare analytics, and cataloging neuronal cell types in neuroscience.
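
The computational core of these spectral methods is tensor decomposition. A minimal sketch of the power iteration for a symmetric, orthogonally decomposable third-order tensor (a textbook illustration, not the talk's large-scale implementation):

# Tensor power iteration: repeatedly apply v <- T(I, v, v) and normalize;
# for an orthogonally decomposable tensor this converges to a component a_i.
import numpy as np

a1, a2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
T = 2.0 * np.einsum('i,j,k->ijk', a1, a1, a1) \
  + 1.0 * np.einsum('i,j,k->ijk', a2, a2, a2)

rng = np.random.default_rng(0)
v = rng.standard_normal(3)
v /= np.linalg.norm(v)
for _ in range(30):
    v = np.einsum('ijk,j,k->i', T, v, v)        # the multilinear map T(I, v, v)
    v /= np.linalg.norm(v)
print(v)                                        # one of the components, up to sign
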
Feb 8
Bren Hall 4011
1 pm
Majid Janzamin
PhD Candidate
Department of Electrical Engineering
University of California, Irvine

Optimization lies at the core of machine learning. However, most machine learning problems entail non-convex optimization. In this talk, I will show how spectral and tensor methods can yield guaranteed convergence to globally optimal solutions under transparent conditions for a range of machine learning problems.

In the first part, I will explain how tensor methods are useful for learning latent variable models in an unsupervised manner. The focus of my work is on the overcomplete regime, where the hidden dimension is larger than the observed dimension. I describe how tensor methods enable us to learn these models in the overcomplete regime with theoretical guarantees on recovering the parameters of the model. I also provide efficient sample complexity results for training these models. Next, I will describe a new method for training neural networks for which we provide theoretical guarantees on the performance of the algorithm. We have developed a computationally efficient algorithm for training a two-layer neural network using method-of-moments and tensor decomposition techniques.

Feb 10
Bren Hall 3011
3 pm
Yining Wang
PhD Student
Machine Learning Department
CMU

I will discuss subsampling and sketching, with their applications and analysis in machine learning. They can be viewed not only as tools to improve the computational and storage efficiency of existing learning algorithms, but also as settings that characterize data measurement/availability/privacy constraints in modern machine learning applications. In this talk I will introduce my recent work, which analyzes subsampling and sketching settings in three popular machine learning algorithms: tensor factorization, subspace clustering, and linear regression.
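
The simplest instance of sketching in this sense is sketched least squares: compress the rows with a random projection, then solve the small problem. A hedged sketch with Gaussian projections (sizes are made up):

# Sketch-and-solve least squares: a random m x n Gaussian sketch S compresses
# the problem, and we solve min_w ||S X w - S y|| instead of the full problem.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20_000, 20, 400
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)
w_hat, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
print(np.linalg.norm(w_hat - w_true))           # small: the sketch preserves the fit
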
Feb 15
No Seminar (Presidents Day)

Feb 22
Bren Hall 4011
1 pm
Julian McAuley
Assistant Professor
Computer Science & Engineering
UC San Diego

Understanding the semantics of preferences and behavior is incredibly complicated, especially in settings where the visual appearance of items influences our decisions. Three challenges that I’ll discuss in this talk include (1) how can we uncover the semantics of visual preferences, especially in sparse or long-tailed data, where new items are constantly introduced? (2) How can we use visual data to understand the relationships between items, and in particular what makes two items “visually compatible”? And (3) how can we understand the temporal dynamics of visual preferences, in order to uncover how “fashions” have evolved over time?
Feb 29
No Seminar (Cancelled)

Mar 7
Bren Hall 4011
1 pm
William Lam
PhD Candidate
Department of Computer Science
University of California, Irvine

We investigate the potential of look-ahead in the context of AND/OR search in graphical models, using the mini-bucket heuristic for combinatorial optimization tasks (e.g., MAP/MPE or weighted CSPs). We present and analyze the complexity of computing the residual (a.k.a. Bellman update) of the mini-bucket heuristic, which we call “bucket errors,” and show how this can be used to identify which parts of the search space are more likely to benefit from look-ahead, thereby providing a way to bound its overhead. We also rephrase the look-ahead computation as a graphical model, to make use of structure-exploiting inference schemes. In our empirical results, we demonstrate that our methods can be used to cost-effectively increase the power of branch-and-bound search.

In the second part of the talk, we show how bucket errors can be used to improve the performance of AND/OR best-first search algorithms for providing lower bounds on the min-sum problem. In our preliminary experiments, we show that when expanding nodes for the AO* algorithm, using bucket errors as a subproblem ordering heuristic can allow us to expand fewer nodes to arrive at the optimal solution compared to the existing ordering approach.

Fall 2015

Standard

Sep 16
Bren Hall 4011
1 pm
Hanie Sedghi
Graduate Student
Department of Electrical Engineering
University of Southern California

Learning with big data is a challenging task that requires smart and efficient methods to extract useful information from data. Optimization methods, both convex and nonconvex, are promising approaches for doing this. In this talk I will review two lines of my work on prominent problems in convex and nonconvex optimization.

Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods: Neural networks provide a versatile tool for approximating functions of various inputs. Despite exciting achievements in applications, a theoretical understanding of them is mostly lacking. Training a neural network is a highly nonconvex problem, and backpropagation can get stuck in local optima. For the first time, we have a computationally efficient method for training neural networks that also has guaranteed generalization. This is part of our recently proposed general framework based on method-of-moments and tensor decomposition to efficiently learn different models such as neural networks and mixtures of classifiers.

Breaking the Curse of Dimensionality: Stochastic Optimization in High Dimensions: We have designed an efficient stochastic optimization method based on ADMM that is fast and cheap to implement, can be performed in parallel, and can be used for any regularized optimization framework under some mild assumptions. We have proved that our algorithm obtains minimax-optimal convergence rates for the sparse optimization and robust PCA frameworks. Experimental results show that in the aforementioned scenarios our method outperforms the state of the art, i.e., yields smaller error in equal time.

Oct 5
Bren Hall 4011
1 pm
Gokcan Karakus
Graduate Student
Department of Civil Engineering
Caltech

We propose an algorithm to test the accuracy of the predictions made by earthquake early warning systems. Most warning systems predict the location and the magnitude of an ongoing earthquake from early-arriving seismic wave data. Our algorithm uses the logarithm of the ratios between observed ground motion envelopes and the envelopes predicted by the Virtual Seismologist (Cua G. and Heaton T.) to assess the validity of system predictions. We quantify the uncertainty attached to our parameters using a Bayesian approach.
Oct 12
Bren Hall 4011
1 pm
Alexander Ihler
Associate Professor
Department of Computer Science
University of California, Irvine

Importance sampling (IS) and its variant, annealed IS (AIS), have been widely used for estimating the partition function in graphical models, such as Markov random fields and deep generative models. However, IS tends to underestimate the partition function and is subject to high variance when the proposal distribution is more peaked than the target distribution. On the other hand, “reverse” versions of IS and AIS tend to overestimate the partition function, and degenerate when the target distribution is more peaked than the proposal distribution. We present a simple, general method that gives much more reliable and robust estimates than either IS (AIS) or reverse IS (AIS). Our method works by converting the estimation problem into a simple classification problem that discriminates between samples drawn from the target and from the proposal. We give both theoretical and empirical justification, and show that an annealed version of our method significantly outperforms both AIS and reverse AIS (Burda et al., 2015), which have been the state of the art for likelihood evaluation in deep generative models. Joint work with Qiang Liu, Jian Peng, and John Fisher.
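
For context, the vanilla importance sampling estimator whose bias and variance issues motivate the talk looks like this (a toy one-dimensional example, not the talk's classification-based method):

# Plain importance sampling for a normalizing constant Z = integral of p_tilde;
# it degrades when the proposal is more peaked than the target, as noted above.
import numpy as np

rng = np.random.default_rng(0)
p_tilde = lambda x: np.exp(-0.5 * (x - 1.0)**2)   # unnormalized N(1,1); Z = sqrt(2*pi)
x = rng.normal(0.0, 2.0, size=100_000)            # diffuse proposal q = N(0, 4)
q = np.exp(-0.5 * (x / 2.0)**2) / (2.0 * np.sqrt(2 * np.pi))
print(np.mean(p_tilde(x) / q), np.sqrt(2 * np.pi))   # estimate vs. true Z
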
Oct 19
Bren Hall 4011
1 pm
Zhiying Wang
Assistant Professor
Department of Electrical Engineering
University of California, Irvine

In this talk, we propose the multi-version coding problem for distributed storage. We consider a setting where there are n servers that aim to store v versions of a message, and there is a total ordering on the versions from the earliest to the latest. We assume that each message version has a given number of bits. Each server can receive any subset of the v versions and stores a function of the message versions it receives. The multi-version code we consider ensures that a decoder connecting to any c out of the n servers can recover the message corresponding to the latest common version stored among those servers, or a message corresponding to a later version. We describe a simple and explicit achievable scheme, as well as an information-theoretic converse. Moreover, we apply the multi-version code to one of the problems in distributed algorithms – the emulation of atomic shared memory in a message-passing network – and improve upon previous algorithms by up to a factor of two in storage cost.
Oct 26
Bren Hall 4011
1 pm
Soheil Feizi
Graduate Student
CSAIL
MIT

Network models provide a unifying framework for understanding dependencies among variables in medical, biological, and other sciences. Networks can be used to reveal underlying data structures, infer functional modules, and facilitate experiment design. In practice, however, size, uncertainty and complexity of the underlying associations render these applications challenging.

In this talk, we illustrate the use of spectral, combinatorial, and statistical inference techniques in several significant network science problems. First, we consider the problem of network alignment where the goal is to find a bijective mapping between nodes of two networks to maximize their overlapping edges while minimizing mismatches. To solve this combinatorial problem, we present a new scalable spectral algorithm, and establish its efficiency theoretically and experimentally over several synthetic and real networks. Next, we introduce network maximal correlation (NMC) as an essential measure to capture nonlinear associations in networks. We characterize NMC using geometric properties of Hilbert spaces and illustrate its application in learning network topology when variables have unknown nonlinear dependencies. Finally, we discuss the problem of learning low dimensional structures (such as clusters) in large networks, where we introduce logistic Random Dot Product Graphs, a new class of networks which includes most stochastic block models as well as other low dimensional structures. Using this model, we propose a spectral network clustering algorithm that possesses robust performance under different clustering setups. In all of these problems, we examine underlying fundamental limits and present efficient algorithms for solving them. We also highlight applications of the proposed algorithms to data-driven problems such as functional and regulatory genomics of human diseases, and cancer.

Bio: Soheil Feizi is a PhD candidate at the Massachusetts Institute of Technology (MIT), co-supervised by Prof. Muriel Médard and Prof. Manolis Kellis. His research interests include the analysis of complex networks and the development of inference and learning methods based on Optimization, Information Theory, Machine Learning, Statistics, and Probability, with applications in Computational Biology and beyond. He completed his B.Sc. at Sharif University of Technology, where he was recognized as the best student of his class. He received the Jacobs Presidential Fellowship and the EECS Great Educators Fellowship, both from MIT. He was a finalist in the Qualcomm Innovation contest. He received an Ernst Guillemin Award for his Master of Science thesis in the Department of Electrical Engineering and Computer Science at MIT.

Nov 2
Bren Hall 4011
1 pm
Surya Ganguli
Assistant Professor
Department of Applied Physics
Stanford University

Neuronal networks have enjoyed a resurgence both in the worlds of neuroscience, where they yield mathematical frameworks for thinking about complex neural datasets, and in machine learning, where they achieve state of the art results on a variety of tasks, including machine vision, speech recognition, and language translation. Despite their empirical success, a mathematical theory of how deep neural circuits, with many layers of cascaded nonlinearities, learn and compute remains elusive. We will discuss three recent vignettes in which ideas from statistical physics can shed light on this issue. In particular, we show how dynamical criticality can help in neural learning, how the non-intuitive geometry of high dimensional error landscapes can be exploited to speed up learning, and how modern ideas from non-equilibrium statistical physics, like the Jarzynski equality, can be extended to yield powerful algorithms for modeling complex probability distributions. Time permitting, we will also discuss the relationship between neural network learning dynamics and the developmental time course of semantic concepts in infants.
Nov 9
Bren Hall 4011
1 pm
Javier Larrosa
Professor
Llenguatges i Sistemes Informàtics
Universitat Politècnica de Catalunya

Weighted Max-SAT is an extension of SAT in which each clause has an associated cost. The goal is to minimize the cost of falsified clauses. Max-SAT has been successfully applied to a number of domains including Bioinformatics, Telecommunications and Scheduling.

In this talk I will introduce the Max-SAT framework and discuss the main solving approaches. In particular, I will present Max-resolution and will show how it can be effectively used in the context of Depth-first Branch-and-Bound.
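
The problem statement itself is compact. For reference, here is an exhaustive toy solver (clauses encoded as (weight, literal list), with a negative integer for a negated variable); the branch-and-bound methods in the talk are designed to avoid exactly this exponential enumeration.

# Exhaustive weighted Max-SAT for reference: minimize the total weight of
# falsified clauses. Literal v means x_v is true; -v means x_v is false.
from itertools import product

def solve(n_vars, clauses):
    best = (float('inf'), None)
    for bits in product([False, True], repeat=n_vars):
        cost = sum(w for w, lits in clauses
                   if not any(bits[abs(l) - 1] == (l > 0) for l in lits))
        best = min(best, (cost, bits))
    return best

clauses = [(3, [1, 2]), (1, [-1]), (2, [-2, 3]), (1, [-3])]
print(solve(3, clauses))        # (1, (False, True, True)): weight 1 falsified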

Nov 16
Bren Hall 4011
1 pm
Golnaz Ghiasi
Graduate Student
Department of Computer Science
University of California, Irvine

Occlusion poses a significant difficulty for detecting and localizing object keypoints and subsequent fine-grained identification. In this talk, I will describe a hierarchical deformable part model for face detection and keypoint localization that explicitly models part occlusion. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. However, this model does not exploit bottom-up cues such as detection of occluding contours and image segments. I will talk about how to modify the proposed model to utilize bottom-up class-specific segmentation in order to jointly detect and segment out the foreground pixels belonging to the face.
Nov 23
Thanksgiving week
(no seminar)

Nov 30
Bren Hall 4011
1 pm
Dimitrios Kotzias
Graduate Student
Department of Computer Science
University of California, Irvine

In many classification problems labels are relatively scarce. One context in which this occurs is where we have labels for groups of instances but not for the instances themselves, as in multi-instance learning. Past work on this problem has typically focused on learning classifiers that make predictions at the group level. In this work we focus on the problem of learning classifiers that make predictions at the instance level. To achieve this we propose a new objective function that encourages smoothness of inferred instance-level labels based on instance-level similarity, while at the same time respecting group-level label constraints. We apply this approach to the problem of predicting labels for sentences given labels for reviews, using a convolutional neural network to infer sentence similarity. The approach is evaluated on three large review data sets from IMDB, Yelp, and Amazon, and we demonstrate that it is both accurate and scalable compared to various alternatives.
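
A hedged sketch of the flavor of such an objective (the notation below is illustrative, not the paper's exact formulation): instance scores should vary smoothly with similarity while each group's average matches its label.

# Instance-level objective under group labels: a similarity-weighted
# smoothness term plus a group-average constraint term.
import numpy as np

def objective(y, sim, groups, group_labels, lam=1.0):
    smooth = np.sum(sim * (y[:, None] - y[None, :])**2)
    group = sum((y[g].mean() - t)**2 for g, t in zip(groups, group_labels))
    return smooth + lam * group

y = np.random.rand(6)                           # instance-level scores in [0, 1]
sim = np.random.rand(6, 6); sim = (sim + sim.T) / 2
groups = [np.arange(0, 3), np.arange(3, 6)]     # two "reviews" of three sentences
print(objective(y, sim, groups, [1.0, 0.0]))
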
Dec 7
Finals week
(no seminar)

Spring 2015

Standard

Mar 30
Bren Hall 4011
1 pm
Pierre Baldi
Chancellor’s Professor
Department of Computer Science
UC Irvine

In a physical neural system, where storage and processing are intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons. We propose a systematic framework to define and study the space of local learning rules where one must first define the nature of the local variables, and then the functional form that ties them together into a learning rule. We consider polynomial learning rules and analyze their behavior and capabilities in both linear and non-linear networks. As a byproduct, we also show how this framework enables the discovery of new learning rules and important relationships between learning rules and group symmetries.

Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, we show that it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning, where target information is propagated to the deep layers. The complexity of the propagated information about the targets, and the channel through which this information is propagated, partition the space of learning algorithms and highlight the remarkable power of the backpropagation algorithm. The theory clarifies the concept of Hebbian learning, what is learnable by Hebbian learning, and explains the sparsity of the space of learning rules discovered so far.
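
A classic member of the space of rules being formalized here is Oja's variant of Hebbian learning, in which the update depends only on locally available quantities (a standard textbook example, not one of the talk's new rules):

# Oja's rule: a local learning rule using only presynaptic input x,
# postsynaptic output y = w.x, and the weight itself; it converges to the
# principal eigenvector of the input covariance.
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[3.0, 1.0], [1.0, 2.0]])          # input covariance
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.standard_normal(2)
for x in X:
    y = w @ x
    w += 0.01 * y * (x - y * w)                 # purely local update
print(w / np.linalg.norm(w))                    # ~ top eigenvector of C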

Apr 6
Bren Hall 4011
1 pm
Maryam M. Shanechi
Assistant Professor
Department of Electrical Engineering and Computer Science
University of Southern California

A brain-machine interface (BMI) is a system that interacts with the brain either to allow the brain to control an external device or to control the brain’s state. While these two BMI types serve different applications, they can both be viewed as closed-loop control systems. In this talk, I present our work on developing both types of BMI: motor BMIs for restoring movement in paralyzed patients, and a new BMI for controlling the brain state under anesthesia. Motor BMIs have largely used standard signal processing techniques. However, devising novel algorithmic solutions that are tailored to the neural system can significantly improve the performance of these BMIs. Here, I develop a novel BMI paradigm for restoration of motor function that incorporates an optimal feedback-control model of the brain and directly processes the spiking activity using point process modeling. I show that this paradigm significantly outperforms the state of the art in closed-loop primate experiments. In addition to motor BMIs, I construct a new BMI that controls the state of the brain under anesthesia. This is done by designing stochastic controllers that infer the brain’s anesthetic state from non-invasive observations of neural activity and control the real-time rate of drug administration to achieve a target brain state. I show the reliable performance of this BMI in rodent experiments.

Bio:

Maryam Shanechi is an assistant professor in the Ming Hsieh Department of Electrical Engineering at the University of Southern California (USC). Prior to joining USC, she was an assistant professor in the School of Electrical and Computer Engineering at Cornell University. She received the B.A.Sc. degree in Engineering Science from the University of Toronto in 2004 and the S.M. and Ph.D. degrees in Electrical Engineering and Computer Science from MIT in 2006 and 2011, respectively. She is the recipient of the NSF CAREER Award and has been named by the MIT Technology Review as one of the world’s top 35 innovators under the age of 35 (TR35) for her work on brain-machine interfaces.

Apr 13
Bren Hall 4011
1 pm
Michael Carey
Professor
Department of Computer Science
UC Irvine

AsterixDB is a new BDMS (Big Data Management System) with a feature set that sets it apart from other Big Data platforms in today’s open source ecosystem. Its features make it well-suited to applications including web data warehousing, social data storage and analysis, and other use cases related to Big Data. AsterixDB has a flexible NoSQL-style data model; a query language that supports a wide range of queries; a scalable runtime; partitioned, LSM-based data storage and indexing (including B+ tree, R tree, and text indexes); support for external as well as native data; a rich set of built-in types, including spatial, temporal, and textual types; support for fuzzy, spatial, and temporal queries; a built-in notion of data feeds for ingestion of data; and transaction support akin to that of a NoSQL store.

Development of AsterixDB began in 2009 and led to a mid-2013 initial open source release. This talk will provide an overview of the resulting system. Time permitting, the talk will cover the system’s data model, its query language, and its basic architecture. Also included will be a summary of the current status of the project and a discussion of some of the “plug-in points” where AsterixDB can be made to interoperate with ML technologies. The talk will conclude with some thoughts on opportunities for future ML-related collaborations related to AsterixDB.

Bio:

Michael J. Carey is a Bren Professor of Information and Computer Sciences at UC Irvine. Before joining UCI in 2008, he worked at BEA Systems for seven years and led the development of BEA’s AquaLogic Data Services Platform product for virtual data integration. He also spent a dozen years teaching at the University of Wisconsin-Madison, five years at the IBM Almaden Research Center working on object-relational databases, and a year and a half at e-commerce platform startup Propel Software during the infamous 2000-2001 Internet bubble. Carey is an ACM Fellow, a member of the National Academy of Engineering, and a recipient of the ACM SIGMOD E.F. Codd Innovations Award. His current interests center around data-intensive computing and scalable data management (a.k.a. Big Data).

Apr 20
Bren Hall 4011
1 pm
Cris Cecka
Research Scientist
NVIDIA Research

N-body problems are ubiquitous, with applications ranging from linear algebra to scientific computing and machine learning. N-body methods were identified as one of the original seven “dwarfs,” or motifs, of computation and are believed to be important in the next decade. These methods include FMMs, treecodes, H-matrices, butterfly algorithms, and geometric shattering. The relationship between these approaches is understood, but many of the demonstrated tools for developing and applying these algorithms remain ad hoc, inaccessible, or inefficient.

We present recent developments towards a codebase that is abstracted over the primary domains of research in this field and is optimized for modern multicore systems. Core components including tree construction, tree traversal, and low-rank operators are developed independently and parallelized for multicore CPUs and GPUs. Applications include dense problems in machine learning and computational geometry (k-nearest neighbors, range search, kernel density estimation, Gaussian processes, and RBF kernels), treecode and fast multipole methods in computational physics (gravitational potentials, screened Coulomb interactions, Stokes flow, and Helmholtz equations), and matrix compression, computation, and inversion (PLR, HODLR, H2, and Butterfly).

In this presentation, we will review a high-level perspective of the research domain, the abstraction and parallelization strategies, and how these methods can be made more practical.

Bio:

Cris received his PhD from Stanford University in Computational and Mathematical Engineering in 2011. As a lecturer and research scientist with the new Institute for Applied Computational Science at Harvard University, he developed core courses on parallel computing and robust software development for scientific computing. In 2014, Cris joined the Mathematics Department at the Massachusetts Institute of Technology as a research associate where he focused on developing and applying generalized N-body methods to dense linear algebra using hierarchical methods. Currently, he works in NVIDIA Research to continue to make these techniques accessible with modern parallel programming models. You can read more about his research on his Harvard web page.

Apr 27
Cancelled
(no seminar)

May 4
Bren Hall 4011
1 pm
Roi Weiss
PhD student
Department of Computer Science
Ben Gurion University of the Negev

Hidden Markov models (HMMs) are a standard tool in the modeling and analysis of time series, with a wide variety of applications. Yet, learning their parameters remains a challenging problem. In the first part of the talk I will present a novel approach to learning an HMM whose outputs are distributed according to a parametric family. This is done by decoupling the learning task into two steps: first estimating the output parameters, and then estimating the hidden-state transition probabilities. The first step is accomplished by fitting a mixture model to the output stationary distribution. Given the parameters of this mixture model, the second step is formulated as the solution of an easily solvable convex quadratic program. We provide an error analysis for the estimated transition probabilities and show they are robust to small perturbations in the estimates of the mixture parameters.

The above approach (and other recently proposed spectral/tensor methods) strongly depends on the assumption that all states have distinct output distributions. In various applications, however, some of the hidden states are aliased, having identical output distributions. The minimality, identifiability and learnability of such aliased HMMs have been long-standing problems, with only partial solutions provided thus far. In the second part of the talk, as a first step, I will focus on parametric-output HMMs that have exactly two aliased states. For this class, we present a complete characterization of their minimality and identifiability. Furthermore, we derive computationally efficient and statistically consistent algorithms to detect the presence of aliasing and learn the aliased HMM transition parameters. We illustrate our theoretical analysis with several simulations.

Joint work with Boaz Nadler and Aryeh Kontorovich.
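
A crude sketch of the two-step idea follows; the talk's actual second step is a convex quadratic program with an error analysis, which is replaced here by naive posterior co-occurrence counts.

# Two-step HMM estimation sketch: (1) fit a mixture to the outputs,
# (2) estimate transitions from posterior co-occurrences of consecutive
# observations (a naive stand-in for the convex QP).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.2, 0.8]])          # true transition matrix
means = np.array([-2.0, 2.0])
s, obs = 0, []
for _ in range(20_000):
    obs.append(rng.normal(means[s], 1.0))
    s = rng.choice(2, p=A[s])
obs = np.array(obs)

gmm = GaussianMixture(2, random_state=0).fit(obs[:, None])
r = gmm.predict_proba(obs[:, None])             # posterior state responsibilities
counts = r[:-1].T @ r[1:]                       # soft transition counts
print(counts / counts.sum(axis=1, keepdims=True))  # ~A, up to state relabeling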

May 11
Bren Hall 4011
1 pm
Ananda Theertha Suresh
PhD student
Department of Electrical Engineering
UC San Diego

Many statistical and machine-learning applications call for estimating Gaussian mixtures using a limited number of samples and limited computational time. PAC (proper) learning estimates a distribution in a class by some distribution in the same class to a desired accuracy. Using spectral projections, we show that spherical Gaussian mixtures in d dimensions can be PAC learned with O*(d) samples, and that the same holds for learning the distribution’s parameters. Our algorithm is information-theoretically near-optimal and significantly improves on previously known time and sample complexities.
May 18
Bren Hall 4011
1 pm
Saeed Saremi
Postdoctoral Fellow
The Computational Neurobiology Laboratory
Salk Institute

Natural images are scale invariant with structures at all length scales. After a tutorial on critical phenomena and percolation theory, I will talk about formulating a geometric view of scale invariance. In this model, the scale invariance of natural images is understood as a second-order percolation phase transition. It is further quantified by fractal dimensions, and by the scale-free distribution of clusters in natural images. This formulation leads to a method for identifying clusters in images and a starting point for image segmentation.
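
The cluster-identification step can be prototyped directly: threshold the image, then label connected components. A toy sketch (random intensities stand in for a natural image; the threshold used is the square-lattice site-percolation critical point):

# Toy cluster identification: binarize an image and label connected
# components; near criticality the cluster-size distribution is scale-free.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((256, 256))                    # stand-in for image intensities
occupied = img < 0.5927                         # ~site-percolation threshold p_c
labels, n = ndimage.label(occupied)
sizes = np.bincount(labels.ravel())[1:]         # sizes of each labeled cluster
print(n, "clusters; largest:", sizes.max())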

Bio:

Saeed Saremi received the Ph.D. degree in theoretical physics from MIT. He then joined the lab of Terry Sejnowski at the Salk Institute as a postdoctoral fellow. His research blends machine learning, statistical mechanics, and computational neuroscience, with the long-term goal of understanding the principles for achieving artificial intelligence.

June 1
Bren Hall 4011
1 pm
Leandro Soriano Marcolino
PhD student
Viterbi School of Engineering
University of Southern California

Teams of voting agents have great potential for finding optimal solutions, and they have been used in many important domains, such as machine learning, crowdsourcing, forecasting systems, and even board games. Voting is popular since it is highly parallelizable, easy to implement, and provides theoretical guarantees. However, there are three fundamental challenges: (i) selecting a limited number of agents to compose a team; (ii) combining the opinions of the team members; (iii) assessing the performance of a given team. In this talk, I address all of these challenges, showing both theoretical and experimental results. I explore three different domains: Computer Go, HIV prevention via influencing social networks, and architectural design.

Bio:

Leandro Soriano Marcolino is a PhD student at the University of Southern California (USC), advised by Milind Tambe. He has published in several prestigious conferences in AI, robotics and machine learning, such as AAAI, AAMAS, IJCAI, NIPS, ICRA and IROS. He received the best research assistant award from the Computer Science Department at USC, had a paper nominated for best paper at the leading multi-agent conference AAMAS, and had his undergraduate work selected as the best by the Brazilian Computer Science Society. He has been researching teamwork and cooperation continuously, and obtained his master’s degree in Japan with the highly competitive Monbukagakusho scholarship. Over his career, Leandro has published on a variety of domains, such as swarm robotics, computer Go, social networks, bioinformatics, and architectural design.

June 8
Bren Hall 4011
1 pm
Quentin Berthet
CMI Postdoctoral Fellow
Computing + Mathematical Sciences, Annenberg Center
California Institute of Technology

Statistical estimation in many contemporary settings involves the acquisition, analysis, and aggregation of datasets from multiple sources, which can have significant differences in character and in value. Due to these variations, the effectiveness of employing a given resource – e.g., a sensing device or computing power – for gathering or processing data from a particular source depends on the nature of that source. As a result, the appropriate division and assignment of a collection of resources to a set of data sources can substantially impact the overall performance of an inferential strategy. In this talk, we adopt a general view of the notion of a resource and its effect on the quality of a data source, and we describe a framework for the allocation of a given set of resources to a collection of sources in order to optimize a specified metric of statistical efficiency. We discuss several stylized examples involving inferential tasks such as parameter estimation and hypothesis testing based on heterogeneous data sources, in which optimal allocations can be computed either in closed form or via efficient numerical procedures based on convex optimization. Joint work with V. Chandrasekaran.