Exam 17: The Theory of Linear Regression With One Regressor


Asymptotic distribution theory is

(Multiple Choice)
4.7/5
(36)
Correct Answer:
Verified

D

The OLS estimator is a linear estimator, $\hat{\beta}_1 = \sum_{i=1}^{n} \hat{a}_i Y_i$, where $\hat{a}_i =$
a. $\dfrac{X_i - \bar{X}}{\sum_{j=1}^{n} (X_j - \bar{X})^2}$.
b. $\dfrac{1}{n}$.
c. $\dfrac{X_i - \bar{X}}{\sum_{j=1}^{n} (X_j - \bar{X})}$.
d. $\dfrac{X_i}{\sum_{j=1}^{n} (X_j - \bar{X})^2}$.
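As a quick numerical check of the linearity of OLS (a minimal sketch on simulated data; the variable names and parameter values are illustrative), the usual slope formula and the weighted sum $\sum_i \hat{a}_i Y_i$ with the weights from option (a) agree exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)

# OLS slope via the usual covariance/variance formula
beta1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Weights from option (a): a_i = (X_i - Xbar) / sum_j (X_j - Xbar)^2
a = (x - x.mean()) / np.sum((x - x.mean()) ** 2)
beta1_linear = np.sum(a * y)

# The two expressions coincide, so the OLS estimator is linear in the Y's
assert np.isclose(beta1, beta1_linear)
```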

(Short Answer)
4.7/5
(33)
Correct Answer:
Verified

A

The WLS estimator is called the infeasible WLS estimator when

(Multiple Choice)
4.9/5
(39)
Correct Answer:
Verified

B

Discuss the properties of the OLS estimator when the regression errors are homoskedastic and normally distributed. What can you say about the distribution of the OLS estimator when these features are absent?

(Essay)
4.9/5
(49)

The advantage of using heteroskedasticity-robust standard errors is that

(Multiple Choice)
4.7/5
(35)

For this question you may assume that linear combinations of normal variates are themselves normally distributed. Let a, b, and c be non-zero constants.
(a) $X$ and $Y$ are independently distributed as $N(a, \sigma^2)$. What is the distribution of $(bX + cY)$?
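A quick simulation check of part (a) (a minimal sketch with made-up constants): by the stated linearity assumption, $bX + cY \sim N\big((b+c)\,a,\ (b^2+c^2)\,\sigma^2\big)$, and the simulated mean and variance should land close to those values.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, sigma = 1.5, 2.0, -0.5, 3.0
N = 200_000

X = rng.normal(a, sigma, N)  # X ~ N(a, sigma^2)
Y = rng.normal(a, sigma, N)  # Y ~ N(a, sigma^2), independent of X
Z = b * X + c * Y

# Theory: Z ~ N((b + c) * a, (b**2 + c**2) * sigma**2)
assert abs(Z.mean() - (b + c) * a) < 0.1          # mean: (2 - 0.5) * 1.5 = 2.25
assert abs(Z.var() - (b**2 + c**2) * sigma**2) < 0.5  # variance: 4.25 * 9 = 38.25
```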

(Essay)
4.9/5
(35)

When the errors are heteroskedastic, then

(Multiple Choice)
4.7/5
(40)

"I am an applied econometrician and therefore should not have to deal with econometric theory.There will be others who I leave that to.I am more interested in interpreting the estimation results." Evaluate.

(Essay)
4.9/5
(30)

Feasible WLS does not rely on the following condition:

(Multiple Choice)
4.9/5
(46)

Under the five extended least squares assumptions, the homoskedasticity-only t-statistic in this chapter
a. has a Student $t$ distribution with $n-2$ degrees of freedom.
b. has a normal distribution.
c. converges in distribution to a $\chi^2_{n-2}$ distribution.
d. has a Student $t$ distribution with $n$ degrees of freedom.

(Short Answer)
4.9/5
(40)

E(1n2i=1nu^i2)E \left( \frac { 1 } { n - 2 } \sum _ { i = 1 } ^ { n } \hat { u } _ { i } ^ { 2 } \right)

(Multiple Choice)
4.8/5
(38)

Your textbook states that an implication of the Gauss-Markov theorem is that the sample average, $\bar{Y}$, is the most efficient linear estimator of $E(Y_i)$ when $Y_1, \ldots, Y_n$ are i.i.d. with $E(Y_i) = \mu_Y$ and $\operatorname{var}(Y_i) = \sigma_Y^2$. This follows from the regression model with no slope and the fact that the OLS estimator is BLUE. Provide a proof by assuming a linear estimator in the $Y$'s, $\tilde{\mu} = \sum_{i=1}^{n} a_i Y_i$.
(a) State the condition under which this estimator is unbiased.
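One way to sketch part (a) and the efficiency argument (an illustrative outline, not necessarily the official solution): taking expectations,

$$E(\tilde{\mu}) = \sum_{i=1}^{n} a_i E(Y_i) = \mu_Y \sum_{i=1}^{n} a_i,$$

so $\tilde{\mu}$ is unbiased if and only if $\sum_{i=1}^{n} a_i = 1$. By independence, $\operatorname{var}(\tilde{\mu}) = \sigma_Y^2 \sum_{i=1}^{n} a_i^2$, and the Cauchy-Schwarz inequality gives $1 = \left(\sum_{i=1}^{n} a_i\right)^2 \le n \sum_{i=1}^{n} a_i^2$, so $\operatorname{var}(\tilde{\mu}) \ge \sigma_Y^2 / n$, with equality exactly when $a_i = 1/n$ for all $i$, i.e., when $\tilde{\mu} = \bar{Y}$.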

(Essay)
4.7/5
(32)

You need to adjust $S_{\hat{u}}^2$ by the degrees of freedom to ensure that $s_{\hat{u}}^2$ is
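The degrees-of-freedom adjustment can be illustrated by simulation (a minimal sketch; regressor values, sample size, and $\sigma_u$ are made up): dividing the sum of squared residuals by $n-2$ gives an average very close to the true $\sigma_u^2$, while dividing by $n$ is biased downward by the factor $(n-2)/n$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, sigma_u = 10, 20_000, 2.0
x = rng.normal(size=n)  # fixed regressors across replications

s2_n2, s2_n = [], []
for _ in range(reps):
    u = rng.normal(0.0, sigma_u, n)
    y = 1.0 + 2.0 * x + u
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    ssr = np.sum((y - b0 - b1 * x) ** 2)  # sum of squared OLS residuals
    s2_n2.append(ssr / (n - 2))  # degrees-of-freedom adjusted
    s2_n.append(ssr / n)         # unadjusted

# E[SSR/(n-2)] = sigma_u^2 = 4.0, while E[SSR/n] = 4.0 * (n-2)/n = 3.2
assert abs(np.mean(s2_n2) - 4.0) < 0.1
assert abs(np.mean(s2_n) - 3.2) < 0.1
```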

(Multiple Choice)
4.8/5
(43)

It is possible for an estimator of $\mu_Y$ to be inconsistent while

(Multiple Choice)
4.9/5
(40)

The class of linear conditionally unbiased estimators consists of
a. all estimators of $\beta_1$ that are linear functions of $Y_1, \ldots, Y_n$ and that are unbiased, conditional on $X_1, \ldots, X_n$.
b. OLS, WLS, and TSLS.
c. those estimators that are asymptotically normally distributed.
d. all estimators of $\beta_1$ that are linear functions of $X_1, \ldots, X_n$ and that are unbiased, conditional on $X_1, \ldots, X_n$.

(Short Answer)
4.9/5
(41)

(Requires Appendix material) State and prove the Cauchy-Schwarz Inequality.

(Essay)
4.8/5
(31)

"One should never bother with WLS.Using OLS with robust standard errors gives correct inference, at least asymptotically." True, false, or a bit of both? Explain carefully what the quote means and evaluate it critically.

(Essay)
5.0/5
(37)

If, in addition to the least squares assumptions made in the previous chapter on the simple regression model, the errors are homoskedastic, then the OLS estimator is

(Multiple Choice)
4.8/5
(39)

Slutsky's theorem combines the Law of Large Numbers
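Slutsky's theorem combines the Law of Large Numbers with the Central Limit Theorem: since $s_Y \xrightarrow{p} \sigma_Y$ and $\sqrt{n}(\bar{Y}-\mu_Y)/\sigma_Y \xrightarrow{d} N(0,1)$, the studentized statistic $\sqrt{n}(\bar{Y}-\mu_Y)/s_Y$ is also asymptotically standard normal. A simulation sketch (illustrative choices of distribution and sample size), using deliberately non-normal data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 2_000, 10_000
mu = 5.0  # exponential with scale 5 has mean 5 (and is skewed, i.e. non-normal)

t = np.empty(reps)
for r in range(reps):
    y = rng.exponential(mu, n)
    # t-ratio with the ESTIMATED standard deviation in the denominator
    t[r] = np.sqrt(n) * (y.mean() - mu) / y.std(ddof=1)

# Slutsky + CLT: t should be approximately N(0, 1) for large n
assert abs(t.mean()) < 0.08
assert abs(t.std() - 1.0) < 0.05
```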

(Multiple Choice)
4.7/5
(45)

Estimation by WLS

(Multiple Choice)
4.8/5
(29)
Showing 1 - 20 of 39