Bayesian estimation of Bayes net parameters
(1.5 hours to learn)
Summary
Bayesian parameter estimation techniques can be applied to learning Bayes net parameters. With limited data, this yields more stable estimates than maximum likelihood and can improve generalization performance.
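As a minimal sketch of the stability claim, compare the maximum-likelihood estimate of a single Bernoulli parameter against the posterior mean under an assumed Beta(alpha, beta) prior; the specific counts and prior are illustrative:

```python
# Sketch: Bayesian vs. maximum-likelihood estimation of one Bernoulli
# parameter, showing why the Bayesian estimate is more stable when data
# is limited. The Beta(alpha, beta) prior is an assumption.

def mle(heads, tails):
    """Maximum-likelihood estimate of P(heads)."""
    return heads / (heads + tails)

def posterior_mean(heads, tails, alpha=1.0, beta=1.0):
    """Posterior mean under a Beta(alpha, beta) prior
    (Laplace smoothing when alpha = beta = 1)."""
    return (heads + alpha) / (heads + tails + alpha + beta)

# With only 3 observations, all heads:
print(mle(3, 0))             # -> 1.0: extreme, overfits the tiny sample
print(posterior_mean(3, 0))  # -> 0.8: pulled toward the prior mean 0.5
```

The same smoothing applies to every entry of a Bayes net CPT, which is what makes the estimates stable when some parent configurations are rarely observed.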
Context
This concept has the prerequisites:
Goals
- Derive the formulas for Bayesian estimation of Bayes net parameters, and for the predictive distribution over new data, in the simplest case where all variables are fully observed.
- In particular, see why posterior inference and prediction can both decompose into independent problems associated with each CPT. What has to be true of the prior for the problem to decompose this way?
- Be able to represent the Bayesian parameter estimation problem itself as a Bayes net, i.e. build a Bayes net where the parameters and data are represented as separate sets of variables. This ties together the problems of learning and inference.
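The decomposition in the second goal can be sketched concretely: if the prior is a product of independent Dirichlets, one per CPT row (global and local parameter independence), then the posterior and the predictive distribution each reduce to a separate counting problem per row. The tiny two-node net A → B, the data, and the uniform pseudocount below are illustrative assumptions:

```python
# Sketch: with a product-of-Dirichlets prior over the CPTs of a fully
# observed net A -> B (binary variables), posterior updating and
# prediction decompose into one independent problem per CPT row.
from collections import Counter

# Fully observed (A, B) samples -- illustrative data.
data = [(0, 0), (0, 1), (1, 1), (1, 1), (0, 0)]

alpha = 1.0  # Dirichlet pseudocount per CPT entry (assumed uniform prior)

# Sufficient statistics: counts split per node and per parent setting.
counts_a = Counter(a for a, _ in data)
counts_b = Counter((a, b) for a, b in data)

def predictive_a(a):
    """Predictive P(A=a): posterior mean of the Dirichlet over A's CPT."""
    return (counts_a[a] + alpha) / (len(data) + 2 * alpha)

def predictive_b_given_a(b, a):
    """Predictive P(B=b | A=a): uses only counts from A=a's CPT row."""
    row_total = sum(counts_b[(a, v)] for v in (0, 1))
    return (counts_b[(a, b)] + alpha) / (row_total + 2 * alpha)

# The predictive probability of one new joint observation factors the
# same way the network does:
a, b = 1, 1
print(predictive_a(a) * predictive_b_given_a(b, a))  # -> 3/7 * 3/4 = 9/28
```

Parameter independence in the prior is what licenses this: the likelihood always factors over CPT rows, so an independent prior keeps the posterior factored too.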
Core resources (read/watch one of the following)
-Free-
→ Coursera: Probabilistic Graphical Models (2013)
An online course on probabilistic graphical models.
Other notes:
- Click on "Preview" to see the videos.
-Paid-
→ Probabilistic Graphical Models: Principles and Techniques
A very comprehensive textbook for a graduate-level course on probabilistic AI.
Location:
Section 17.4, "Bayesian parameter estimation in Bayesian networks," not including 17.4.1, "MAP estimation," pages 741-751
See also
- These techniques can also be used to learn Bayes net structure, i.e. the pattern of nodes and edges.