Regularization parameter selection for variable selection in high-dimensional modelling.

Abstract

Variable selection is an important issue in statistical modelling. Classical approaches select models by applying a penalty related to the size of the candidate model. These classical methods require an exhaustive search, which is impractical in high-dimensional modelling. Adopting continuous penalties such as the LASSO and the SCAD makes it possible to cope with high dimensionality. As with classical methods, the size of the regularization plays a crucial role in the asymptotic properties of the resulting estimators. For classical methods, it is well known that AIC-like criteria are asymptotically loss efficient, in the sense that they choose the minimum-loss model when the true model is infinite dimensional. By contrast, when a finite-dimensional correct model exists, BIC-like criteria are consistent, in the sense that they choose the smallest correct model with probability tending to one. This thesis studies parallel properties for penalized estimators. Extending the results of Wang, Li, and Tsai (2007a), we show that a consistent tuning parameter selector yields a penalized estimator that is also consistent in a general likelihood setting. On the other hand, we show that a tuning parameter selector constructed from an efficient criterion is also asymptotically loss efficient for linear regression. Under the conditions imposed in this thesis, the efficiency result extends to generalized linear models in terms of Kullback-Leibler loss. Our simulation studies suggest that the finite-sample performance is in line with the theory we present. A real data application is discussed to advocate the use of penalized likelihood variable selection procedures.
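As a concrete illustration (not taken from the thesis), the following Python sketch shows one common instantiation of a BIC-type tuning parameter selector for the LASSO in linear regression, in the spirit of the consistent selectors studied in Wang, Li, and Tsai (2007a). The criterion form n·log(RSS/n) + df·log(n), the helper name bic_select, and the candidate grid are assumptions made for illustration only.

```python
# Minimal sketch (illustrative, not the thesis's code): select the LASSO
# tuning parameter by minimizing a BIC-type criterion, using the number of
# nonzero estimated coefficients as the model's degrees of freedom.
import numpy as np
from sklearn.linear_model import Lasso

def bic_select(X, y, lambdas):
    """Return the lambda minimizing n*log(RSS/n) + df*log(n)."""
    n = len(y)
    best_lam, best_bic = None, np.inf
    for lam in lambdas:
        fit = Lasso(alpha=lam, max_iter=10_000).fit(X, y)
        rss = np.sum((y - fit.predict(X)) ** 2)
        df = np.count_nonzero(fit.coef_)   # size of the selected model
        bic = n * np.log(rss / n) + df * np.log(n)
        if bic < best_bic:
            best_lam, best_bic = lam, bic
    return best_lam

# Usage on simulated data with a sparse true model (assumed setup).
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]               # only three active predictors
y = X @ beta + rng.standard_normal(n)
print(bic_select(X, y, np.logspace(-3, 0, 50)))
```

The log(n) penalty per selected variable is what distinguishes this consistent, BIC-like selector from an AIC-like (loss-efficient) one, which would charge a constant factor of 2 per variable instead.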

Bibliographic record

  • Author: Zhang, Yiyun
  • Author affiliation: The Pennsylvania State University
  • Degree grantor: The Pennsylvania State University
  • Subject: Statistics
  • Degree: Ph.D.
  • Year: 2009
  • Pages: 104 p.
  • Total pages: 104
  • Format: PDF
  • Language: English
  • CLC classification: Statistics

