
ESTIMATION OF REGRESSION PARAMETERS IN LINEAR REGRESSION MODEL WITH AUTOCORRELATED ERRORS



Abstract

A frequent, if not typical, problem in applied econometrics is autocorrelation in the disturbance terms of a linear regression model. We consider the problem of estimating and making inference about the regression parameters when the errors are autocorrelated.

For any stochastic model, the general interest is not only to produce the "best" possible estimator of the regression parameter but also to use it to make inference. Since the finite-sample distribution is usually unknown, it has been common practice in econometrics to use an estimated asymptotic distribution to make inference in small samples. Obviously, this will give misleading inference unless the estimated asymptotic distribution approximates the exact finite-sample distribution well.

Given the present practice of making inference without knowledge of the finite-sample distribution of an estimator, if we have to choose among a set of estimators, it seems we should choose the one for which the estimated asymptotic distribution is closest to its exact distribution.

We examined the five most commonly used estimators of the regression coefficients of the regression model whose errors follow a first-order autoregressive process, namely the Ordinary Least Squares (OLS), Cochrane-Orcutt (CO), Cochrane-Orcutt as modified by Prais and Winsten (PW), Durbin, and Maximum Likelihood (ML) estimators. Adopting the well-known Kolmogorov-Smirnov distance between two distributions as a measure of closeness, we computed the distance between the asymptotic distribution and its small-sample estimate for each of these estimators. Due to analytical complexity, we resorted to a Monte Carlo study. The general conclusion that can be drawn from our study is that OLS should never be preferred and, even though all the other estimators are comparable over the entire range of the autocorrelation parameter, PW seems to be preferable. This can be contrasted with the well-known conclusion, based on MSE as the choice criterion, that OLS may be preferred when the magnitude of the autocorrelation coefficient does not exceed .30, while for larger values of this coefficient all the other estimators are comparable, with the ML estimator possibly preferred.

We also examined the small-sample first- and second-moment properties of these estimators, except for the ML estimator; analytical difficulties were the main reason for not studying it. The first and second moments of the OLS estimators of the regression parameters were easily derived. For the remaining methods, these moments were computed approximately. We found that the CO, PW, and Durbin estimators are all unbiased. Computing the variances of these estimators requires numerical integration. Convergence problems with the Durbin method further restricted our results to the OLS, CO, and PW methods only. For these three methods, the variances of both regression parameter estimators were found to be monotonically increasing functions of the true value of the autoregressive parameter. This result contradicts previous studies and the general intuition that the variances of the regression parameter estimators should be minimal when the autoregressive coefficient is close to zero. The second important result is that the OLS method is as good as or better than any other method when the autoregressive coefficient is small, possibly when its absolute value does not exceed .3, which agrees with previous studies. Very surprisingly, however, OLS is found to be better than the CO method except possibly for extreme negative values of ρ. Also, in agreement with the result from the distance point of view, the PW method seems to be preferable to all the other methods.
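For readers unfamiliar with the estimators being compared, the following is a minimal Python sketch of the textbook iterated Cochrane-Orcutt and Prais-Winsten procedures for a regression with AR(1) errors. The function names and the residual-based estimate of ρ are illustrative assumptions and are not meant to reproduce the dissertation's own computations.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: solve (X'X) b = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def estimate_rho(resid):
    """First-order autocorrelation of the residuals, used as the AR(1) estimate."""
    return (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])

def cochrane_orcutt(X, y, n_iter=20):
    """Iterated Cochrane-Orcutt: quasi-difference the data, dropping the first observation."""
    beta, rho = ols(X, y), 0.0
    for _ in range(n_iter):
        rho = estimate_rho(y - X @ beta)
        beta = ols(X[1:] - rho * X[:-1], y[1:] - rho * y[:-1])
    return beta, rho

def prais_winsten(X, y, n_iter=20):
    """Prais-Winsten: as Cochrane-Orcutt, but keep the first observation
    rescaled by sqrt(1 - rho^2)."""
    beta, rho = ols(X, y), 0.0
    for _ in range(n_iter):
        rho = estimate_rho(y - X @ beta)
        w = np.sqrt(1.0 - rho**2)
        X_star = np.vstack((w * X[:1], X[1:] - rho * X[:-1]))
        y_star = np.concatenate(([w * y[0]], y[1:] - rho * y[:-1]))
        beta = ols(X_star, y_star)
    return beta, rho
```

The only difference between the two procedures is the treatment of the first observation: CO drops it, while PW keeps it after rescaling by √(1 − ρ²), which is one reason the two can behave differently in small samples.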
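A distance-based comparison of the kind described above can be operationalized roughly as follows: simulate the finite-sample distribution of an estimator by Monte Carlo and measure its Kolmogorov-Smirnov distance to a normal reference. In this sketch the Monte Carlo standard deviation is plugged into the normal reference, which is only an illustrative stand-in for the estimated asymptotic distribution used in the study; `simulate_ar1_errors`, `ks_distance`, and the replication count are assumptions.

```python
import numpy as np
from scipy import stats

def simulate_ar1_errors(n, rho, sigma=1.0, rng=None):
    """Draw a stationary AR(1) series u_t = rho * u_{t-1} + e_t, e_t ~ N(0, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.empty(n)
    u[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))  # stationary starting value
    for t in range(1, n):
        u[t] = rho * u[t - 1] + rng.normal(0.0, sigma)
    return u

def ks_distance(estimator, X, beta_true, rho, n_rep=2000, seed=0):
    """KS distance between the Monte Carlo (finite-sample) distribution of the
    slope estimate and a normal reference centred at the true slope, with the
    Monte Carlo standard deviation plugged in as the scale."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    draws = np.empty(n_rep)
    for r in range(n_rep):
        y = X @ beta_true + simulate_ar1_errors(n, rho, rng=rng)
        draws[r] = estimator(X, y)[1]          # slope coefficient
    ref = stats.norm(loc=beta_true[1], scale=draws.std(ddof=1))
    return stats.kstest(draws, ref.cdf).statistic

# Example with plain OLS as the estimator; CO or PW from the previous sketch
# can be wrapped the same way, e.g. lambda X, y: prais_winsten(X, y)[0].
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 20
    X = np.column_stack((np.ones(n), rng.normal(size=n)))   # intercept + one regressor
    beta_true = np.array([1.0, 2.0])
    ols = lambda X, y: np.linalg.solve(X.T @ X, X.T @ y)
    print(ks_distance(ols, X, beta_true, rho=0.8))
```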

Bibliographic record

  • Author: TRUONG, THUAN VAN.
  • Author affiliation: University of Kentucky.
  • Degree-granting institution: University of Kentucky.
  • Subject: Economics.
  • Degree: Ph.D.
  • Year: 1980
  • Pages: 122 p.
  • Total pages: 122
  • Original format: PDF
  • Language: eng
  • CLC classification:
  • Keywords:
