
European Institute for Statistics, Probability, Stochastic Operations Research
and their Applications



Variational Inference: Foundations and Innovations
David Blei, Columbia University

One of the core problems of modern statistics and machine learning is
to approximate difficult-to-compute probability distributions. This
problem is especially important in probabilistic modeling, which
frames all inference about unknown quantities as a calculation about a
conditional distribution. In this tutorial I review and discuss
variational inference (VI), a method that approximates probability
distributions through optimization. VI has been used in myriad
applications in machine learning and tends to be faster than more
traditional methods, such as Markov chain Monte Carlo sampling.
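
For readers unfamiliar with the setup, the standard formulation (not
spelled out in this abstract) posits a model p(x, z) with observations
x and latent variables z, and replaces sampling from the posterior
p(z | x) with an optimization over a family Q of tractable
distributions:

\[
q^{*}(\mathbf{z})
\;=\; \arg\min_{q \in \mathcal{Q}} \;
\mathrm{KL}\!\left( q(\mathbf{z}) \,\middle\|\, p(\mathbf{z} \mid \mathbf{x}) \right)
\;=\; \arg\max_{q \in \mathcal{Q}} \;
\mathbb{E}_{q}\!\left[ \log p(\mathbf{x}, \mathbf{z}) \right]
- \mathbb{E}_{q}\!\left[ \log q(\mathbf{z}) \right].
\]

The second objective is the evidence lower bound (ELBO); the two
problems are equivalent because they differ only by the constant
log p(x).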

Variational inference was brought into machine learning in the 1990s;
recent advances in fidelity and in ease of implementation have renewed
interest in this class of methods and widened its range of
applications. This tutorial aims to provide an introduction to VI, a
modern view of the field, and an overview of the role that
probabilistic inference plays in many of the central areas of machine
learning.

First, I will provide a broad review of variational inference. This
serves as an introduction to (or review of) its central concepts.
Second, I develop and connect some of the pivotal tools for VI
developed in the last few years, such as Monte Carlo gradient
estimation, black box variational inference, stochastic variational
inference, and variational autoencoders. These methods have led to a
resurgence of research on and applications of VI. Finally, I discuss
some of the unsolved problems in VI and point to promising research
directions.
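
As a concrete illustration of the Monte Carlo gradient estimation
mentioned above, here is a minimal sketch of gradient-based VI using
the reparameterization trick (the same device used in variational
autoencoders). The toy conjugate-Gaussian model, step size, and sample
counts are illustrative assumptions, not material from the tutorial;
they are chosen only so the fitted approximation can be checked against
the exact posterior.

import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (an assumption for this sketch):
#   prior      mu  ~ N(0, 1)
#   likelihood x_i ~ N(mu, 1), i = 1..n
x = rng.normal(loc=2.0, scale=1.0, size=50)
n = x.size

# Variational family: q(mu) = N(m, exp(log_s)^2), parameters m and log_s.
m, log_s = 0.0, 0.0
lr, num_samples = 0.01, 16

for step in range(2000):
    s = np.exp(log_s)
    eps = rng.standard_normal(num_samples)
    z = m + s * eps                      # reparameterized samples from q
    # d/dz log p(x, z) for this model: -z (prior) + sum_i (x_i - z) (likelihood).
    dlogp = x.sum() - (n + 1) * z
    # Monte Carlo gradients of the ELBO; the entropy of q is handled
    # analytically, H[q] = log_s + const, so dH/dlog_s = 1.
    grad_m = dlogp.mean()
    grad_log_s = (dlogp * s * eps).mean() + 1.0
    m += lr * grad_m
    log_s += lr * grad_log_s

# The exact posterior is N(sum(x)/(n+1), 1/(n+1)); the fit should be close.
print("VI fit:  mean %.3f  std %.3f" % (m, np.exp(log_s)))
print("exact :  mean %.3f  std %.3f" % (x.sum() / (n + 1), (1.0 / (n + 1)) ** 0.5))

Black box variational inference removes the model-specific derivative
d/dz log p(x, z) by using the score-function estimator instead, so that
only evaluations of the log joint are required.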

Slides:
Blei_lectures.pdf

