It has long been known that for parametric inference problems, posterior distributions based on a large class of reasonable prior distributions possess very desirable large-sample convergence properties, even when viewed from a purely frequentist perspective. For nonparametric or semiparametric problems the story is more complicated, but Bayesian methods still enjoy good frequentist convergence properties if the prior distribution is constructed carefully. The last ten years have witnessed the most significant progress in the study of consistency, convergence rates, and finer frequentist properties. It is now well understood that these properties are controlled by the concentration of prior mass near the true value and by the effective size of the model, measured in terms of metric entropy. Results have poured in for independent and identically distributed data, independent but non-identically distributed data, and dependent data, as well as for a wide spectrum of inference problems such as density estimation, nonparametric regression, and classification. Nonparametric mixtures, random series, and Gaussian processes play particularly significant roles in the construction of the “right” priors. In this talk, we outline the most significant developments of the last decade. In particular, we emphasize the ability of the posterior distribution to choose the correct model effortlessly and to adapt to the unknown level of smoothness.
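As a minimal sketch of how these two ingredients interact in the i.i.d. case, in the spirit of Ghosal, Ghosh and van der Vaart (Ann. Statist., 2000) — the rate \(\varepsilon_n\), prior \(\Pi\), true density \(p_0\), sieve \(\mathcal{P}_n\), covering number \(N\), and metric \(d\) are illustrative notation, not part of the abstract itself: if \(\varepsilon_n \to 0\) with \(n\varepsilon_n^2 \to \infty\), and
\[
\log N(\varepsilon_n, \mathcal{P}_n, d) \le n\varepsilon_n^2 \quad\text{(metric entropy: effective size of the model)},
\]
\[
\Pi\bigl(p : -P_0 \log(p/p_0) \le \varepsilon_n^2,\ P_0 \{\log(p/p_0)\}^2 \le \varepsilon_n^2\bigr) \ge e^{-Cn\varepsilon_n^2} \quad\text{(prior mass near the true value)},
\]
then, under a mild condition on the prior mass of the complement of the sieve \(\mathcal{P}_n\), the posterior contracts at rate \(\varepsilon_n\):
\[
\Pi\bigl(p : d(p, p_0) \ge M\varepsilon_n \mid X_1, \dots, X_n\bigr) \to 0 \quad\text{in } P_0^n\text{-probability}.
\]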
Keywords: Posterior consistency; Posterior convergence rate; Adaptation; Bernstein–von Mises theorem
Biography: Subhashis Ghosal is Professor of Statistics at NC State University in Raleigh. His research topics include asymptotics, Bayesian inference, and nonparametric models, with applications in image processing, biomedical research, multiple testing, survival analysis, variable selection, and related areas. His research has been funded by the U.S. National Science Foundation. He is a Fellow of the IMS and the ASA and has received various other awards.