AI Seminar: Deep latent variable models: estimation and missing data imputation
Speaker
Jes Frellsen, Associate Professor at the Department of Computer Science, IT University of Copenhagen.
Abstract
Deep latent variable models (DLVMs) combine the approximation abilities of deep neural networks with the statistical foundations of generative models. In this talk, we will first discuss how these models are estimated: variational methods are commonly used for inference; however, the exact likelihood of these models has been largely overlooked. We will show that most unconstrained models used for continuous data have an unbounded likelihood function, and we will discuss how to ensure the existence of maximum likelihood estimates. We will then present a simple variational method, called MIWAE, for training DLVMs when the training set contains missing-at-random data. Finally, we will present Monte Carlo algorithms for missing data imputation using the exact conditional likelihood of DLVMs: a Metropolis-within-Gibbs sampler for DLVMs trained on complete data sets and an importance sampler for DLVMs trained on incomplete data sets. For complete training sets, our algorithm consistently and significantly outperforms the usual imputation scheme used for DLVMs. For incomplete training sets, we show that MIWAE-trained models provide accurate single and multiple imputations and are highly competitive with state-of-the-art methods. This is joint work with Pierre-Alexandre Mattei.
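For readers who want a concrete picture of the MIWAE objective mentioned above (this sketch is ours, not part of the abstract): MIWAE maximises an importance-weighted lower bound on the likelihood of the observed entries only, in the spirit of the IWAE bound. Writing \(x^{\text{obs}}\) for the observed part of a data point, \(p_\theta\) for the decoder and prior, \(q_\gamma\) for the encoder, and \(K\) for the number of importance samples, the per-datapoint bound takes roughly the form

\[
\mathcal{L}_K(\theta, \gamma) = \mathbb{E}_{z_1, \dots, z_K \sim q_\gamma(z \mid x^{\text{obs}})} \left[ \log \frac{1}{K} \sum_{k=1}^{K} \frac{p_\theta(x^{\text{obs}} \mid z_k)\, p(z_k)}{q_\gamma(z_k \mid x^{\text{obs}})} \right] \le \log p_\theta(x^{\text{obs}}),
\]

which tightens as \(K\) grows and reduces to the usual importance-weighted bound when no data are missing.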
This seminar is part of the AI Seminar Series organised by the SCIENCE AI Centre. The series highlights advances and challenges in research within Machine Learning, Data Science, and AI. Like the AI Centre itself, the seminar series has a broad scope, covering new methodological contributions, ground-breaking applications, and impacts on society.