SCHEDULE - TALK DETAIL


Keynote | Business

Bayesian inference and big data: are we there yet?

Friday 17th | 12:40 - 13:10 | Theatre 20


One-liner summary:

Bayesian inference has a reputation for being complex and only applicable to fairly small datasets. Very recent developments may be changing that: we are starting to see successful real-world applications of Bayesian inference to very large datasets, along with tools that make it accessible to many more data science practitioners.

Keywords defining the session:

- Bayesian methods

- Inference

- Big Data

Bayesian inference offers many compelling features and arguably allows us to extract as much information as possible from real-world data, but it has a reputation for being complex and limited to fairly small datasets. In this talk I will try to challenge both assumptions.

First, I will contrast the “traditional” interpretation of Bayesian probabilities (“a measure of how much we know, given the evidence we have been presented”) with the “modern” interpretation favoured by some authors (“a measure of how trained a model is”), and the corresponding view of Bayesian models as machine learning tools and of inference as model training. LDA is an example of an algorithm that many machine learning practitioners do not realize is simply Bayesian inference over a generative model (see the first sketch below).

Next, I will present the two traditional approaches to Bayesian inference, Monte Carlo methods and variational methods, together with their traditional drawbacks and limitations. I will then explain some advances that overcome those limitations: Hamiltonian Monte Carlo (HMC), the No-U-Turn Sampler (NUTS), stable Monte Carlo methods with subsampling, automatic differentiation, automatic differentiation variational inference (ADVI), ADVI with minibatch processing, and more.

A trend seems to be emerging: Monte Carlo methods may prove best suited to “wide” data (where each sample consists of a lot of data) and variational methods to “tall” data (where the number of samples is very large), as sketched in the second example below.
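As a concrete illustration of the LDA point above (a sketch, not material from the talk): scikit-learn's LatentDirichletAllocation fits the LDA generative model with online variational Bayes, so “training” the topic model is literally approximate Bayesian inference. The corpus, vocabulary size, and hyperparameters below are illustrative choices.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A small illustrative corpus (downloads on first use).
docs = fetch_20newsgroups(remove=("headers", "footers", "quotes")).data[:2000]

# LDA models word counts, so use a bag-of-words representation.
counts = CountVectorizer(max_features=5000, stop_words="english").fit_transform(docs)

# "Fitting" here runs online variational Bayes on the LDA generative
# model: the learned parameters form an approximate posterior.
lda = LatentDirichletAllocation(
    n_components=10,           # number of latent topics (illustrative)
    learning_method="online",  # stochastic variational inference
    batch_size=256,
    random_state=0,
)
doc_topics = lda.fit_transform(counts)  # per-document posterior topic weights
```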
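To make the Monte-Carlo-versus-variational contrast concrete, here is a minimal sketch using PyMC (assuming PyMC ≥ 5); the linear-regression model, data sizes, and hyperparameters are illustrative, not from the talk. NUTS evaluates the full-data gradient at every leapfrog step, while minibatch ADVI only ever touches small batches, which is what makes it attractive for “tall” data.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# Illustrative "tall" linear-regression data.
n, d = 10_000, 3
X = rng.normal(size=(n, d))
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true + rng.normal(scale=0.3, size=n)

# NUTS: gradient-based MCMC over the full dataset. Every step touches
# all n rows, so per-step cost grows with dataset size.
with pm.Model():
    w = pm.Normal("w", mu=0.0, sigma=1.0, shape=d)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("obs", mu=X @ w, sigma=sigma, observed=y)
    mcmc_trace = pm.sample(draws=1000, tune=1000)  # NUTS is the default sampler

# Minibatch ADVI: stochastic variational inference. Each gradient step
# sees only batch_size rows; total_size rescales the likelihood accordingly.
X_mb, y_mb = pm.Minibatch(X, y, batch_size=256)
with pm.Model():
    w = pm.Normal("w", mu=0.0, sigma=1.0, shape=d)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("obs", mu=X_mb @ w, sigma=sigma, observed=y_mb, total_size=n)
    approx = pm.fit(n=20_000, method="advi")  # maximizes the ELBO by SGD
    vi_trace = approx.sample(1000)
```

On this toy problem the two posteriors agree closely; the point of the sketch is the cost structure, not the numbers.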