September 15, 2017
David Steinberg
Kalman filters are a popular and influential approach to modeling time-varying phenomena. They admit an intuitive probabilistic interpretation, have a simple functional form, and have been successfully applied across a wide variety of disciplines. The classic Kalman filter is a generative dynamic model in which the state of the system evolves over time and is observed only indirectly, with both the state evolution and the observations following linear models with Gaussian noise. Many real applications, however, call for more complex dynamics or non-Gaussian noise, a need that has spurred extensive research on extending the Kalman filter.
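For readers unfamiliar with the setup, a standard way to write the linear-Gaussian state-space model underlying the classic filter (generic notation, not taken from the paper) is:

z_t = A z_{t-1} + B u_t + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, Q)
x_t = C z_t + \delta_t, \qquad\qquad\quad\; \delta_t \sim \mathcal{N}(0, R)

Here z_t is the latent state, u_t an optional control input, and x_t the observation; the classic Kalman filter computes the posterior over z_t in closed form precisely because every term is linear and Gaussian.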
The paper by Krishnan, Shalit and Sontag takes advantage of recent advances in variational methods for learning deep generative models. The authors introduce a unified algorithm that can efficiently learn a broad spectrum of Kalman filters. They show how these ideas can be used to build models suitable for counterfactual inference, introducing the “Healing MNIST” dataset, in which long-term structure, noise, and actions are applied to sequences of digits. They also demonstrate counterfactual inference on electronic health record data from 8,000 patients spanning 4.5 years.
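At a high level, the deep variant keeps the generative structure above but replaces the fixed linear maps with neural networks. A rough sketch of the generative process (the functions G, S, F below are meant to convey the idea; the exact parameterization is in the paper):

z_1 \sim \mathcal{N}(\mu_0, \Sigma_0)
z_t \sim \mathcal{N}\big(G(z_{t-1}, u_{t-1}),\, S(z_{t-1}, u_{t-1})\big)
x_t \sim \Pi\big(F(z_t)\big)

where G, S, and F are neural networks and \Pi can be an arbitrary observation distribution. The model is trained by maximizing a variational lower bound on the likelihood using a recognition network over the latent states, which is what makes learning this broad family tractable.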
Read the paper:
Deep Kalman Filters. Rahul G. Krishnan, Uri Shalit, David Sontag