Consider estimating a regression function that can be well approximated by a linear combination of terms in a dictionary, with the coefficients satisfying an lq constraint for some q in [0,1]. Under such a constraint, the coefficient vector is sparse in the sense that only a few large (in absolute value) components matter for estimating the regression function. We construct model-selection-based estimators that simultaneously achieve the minimax rate of convergence over the whole range of q in [0,1]. We will also discuss how to separate the case of q=0 (strictly sparse) from the rest based on a parametricness index.
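As a side illustration (not part of the abstract; the function names and the decay rate below are hypothetical choices), the weak-sparsity idea behind the lq constraint can be sketched numerically: when the coefficient magnitudes decay quickly, keeping only the few largest-magnitude entries loses little in Euclidean norm.

```python
import numpy as np

def lq_power_sum(beta, q):
    # sum of |beta_j|^q, the quantity bounded by an lq constraint (0 < q <= 1)
    return np.sum(np.abs(beta) ** q)

def top_k_truncate(beta, k):
    # keep the k largest-magnitude coefficients, zero out the rest
    out = np.zeros_like(beta)
    idx = np.argsort(np.abs(beta))[::-1][:k]
    out[idx] = beta[idx]
    return out

rng = np.random.default_rng(0)
# a weakly sparse coefficient vector: magnitudes decay polynomially,
# so it has small lq "size" for small q but no exactly zero entries
beta = np.sign(rng.standard_normal(1000)) / np.arange(1, 1001) ** 2.0

for k in (5, 20, 100):
    err = np.linalg.norm(beta - top_k_truncate(beta, k))
    print(f"k = {k:3d}, truncation error = {err:.2e}")
```

The printed errors shrink rapidly as k grows, which is the sense in which only a few large components matter even though every coefficient is nonzero.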
The talk is based on joint work with Zhan Wang, Sandra Paterlini, and Frank Gao.
Keywords: Adaptive estimation; Minimax rate of convergence; Sparse estimation
Biography: Yuhong is a professor in the School of Statistics at the University of Minnesota. After receiving his Ph.D. from Yale University, he taught at Iowa State University before moving to Minnesota. His research interests include model selection, model combination, and high-dimensional data analysis.