Exam 17: The Theory of Linear Regression With One Regressor


What does the Gauss-Markov theorem prove? Without giving mathematical details, explain how the proof proceeds. What is its importance?

(Essay)
4.9/5
(42)
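
For orientation, here is a compressed sketch of how the standard proof proceeds (a summary of the usual decomposition argument, not a full answer): write any linear conditionally unbiased estimator as $\tilde{\beta}_1 = \sum_{i=1}^n a_i Y_i$, split its weights into the OLS weights plus a deviation, and show the deviation can only add variance:

$$
a_i = \hat{a}_i + d_i, \qquad \hat{a}_i = \frac{X_i - \bar{X}}{\sum_{j=1}^n (X_j - \bar{X})^2},
$$

$$
\operatorname{var}(\tilde{\beta}_1 \mid X_1, \ldots, X_n) = \sigma_u^2 \sum_{i=1}^n a_i^2 = \sigma_u^2 \left( \sum_{i=1}^n \hat{a}_i^2 + \sum_{i=1}^n d_i^2 \right) \geq \operatorname{var}(\hat{\beta}_1 \mid X_1, \ldots, X_n).
$$

The cross term $\sum_i \hat{a}_i d_i$ vanishes because unbiasedness forces $\sum_i d_i = 0$ and $\sum_i d_i X_i = 0$.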

The following is not part of the extended least squares assumptions for regression with a single regressor: a. $\operatorname{var}(u_i \mid X_i) = \sigma_u^2$. b. $E(u_i \mid X_i) = 0$. c. the conditional distribution of $u_i$ given $X_i$ is normal. d. $\operatorname{var}(u_i \mid X_i) = \sigma_{u,i}^2$.

(Short Answer)
4.7/5
(35)

Consider the model $Y_i = \beta_1 X_i + u_i$, where the $X_i$ and the $u_i$ are mutually independent i.i.d. random variables with finite fourth moments and $E(u_i) = 0$. Let $\widehat{\beta}_1$ denote the OLS estimator of $\beta_1$. Show that $\sqrt{n}\left(\widehat{\beta}_1 - \beta_1\right) = \dfrac{\frac{1}{\sqrt{n}} \sum_{i=1}^n X_i u_i}{\frac{1}{n} \sum_{i=1}^n X_i^2}$.

(Essay)
4.8/5
(29)
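
A sketch of the algebra being asked for (the question is essentially to justify each step): for the regression through the origin, $\widehat{\beta}_1 = \sum_{i=1}^n X_i Y_i \big/ \sum_{i=1}^n X_i^2$, so substituting $Y_i = \beta_1 X_i + u_i$ gives

$$
\widehat{\beta}_1 - \beta_1 = \frac{\sum_{i=1}^n X_i u_i}{\sum_{i=1}^n X_i^2}, \qquad
\sqrt{n}\left(\widehat{\beta}_1 - \beta_1\right) = \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^n X_i u_i}{\frac{1}{n} \sum_{i=1}^n X_i^2},
$$

where the second equality follows from dividing numerator and denominator by $n$ and absorbing the $\sqrt{n}$ into the numerator.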

All of the following are good reasons for an applied econometrician to learn some econometric theory, with the exception of a. turning your statistical software from a "black box" into a flexible toolkit from which you can select the right tool for a given job. b. appreciating why these tools work and what assumptions are required for each tool to work properly. c. learning how to invert a $4 \times 4$ matrix by hand. d. recognizing when a tool will not work well in an application and when it is time to look for a different econometric approach.

(Short Answer)
4.9/5
(40)

The extended least squares assumptions are of interest because

(Multiple Choice)
4.8/5
(33)

Consider the model $Y_i = \beta_1 X_i + u_i$, where $u_i = c X_i^2 e_i$ and all of the $X$'s and $e$'s are i.i.d. and distributed $N(0, 1)$. (a) Which of the extended least squares assumptions are satisfied here? Prove your assertions.

(Essay)
4.8/5
(42)
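
A minimal simulation sketch of the setup in this question (the value $c = 2$ and the variable names are my own choices for illustration): it checks numerically that $E(u_i \mid X_i) = 0$ holds while $\operatorname{var}(u_i \mid X_i) = c^2 X_i^4$ varies with $X_i$, so homoskedasticity fails.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 2.0                     # illustrative value; the question leaves c generic
n = 1_000_000

X = rng.standard_normal(n)  # X ~ N(0, 1), i.i.d.
e = rng.standard_normal(n)  # e ~ N(0, 1), independent of X
u = c * X**2 * e            # the error process from the question

# E(u | X) = c X^2 E(e | X) = 0: sample mean of u on a narrow band of X
band = np.abs(X - 1.0) < 0.05
print("mean of u for X near 1:", u[band].mean())  # approximately 0

# var(u | X) = c^2 X^4 depends on X, so homoskedasticity fails
for x0 in (0.5, 2.0):
    band = np.abs(X - x0) < 0.05
    print(f"var(u | X near {x0}): {u[band].var():.2f}  theory: {c**2 * x0**4:.2f}")
```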

In practice, the most difficult aspect of feasible WLS estimation is

(Multiple Choice)
4.8/5
(41)

The Gauss-Markov Theorem proves that a. the OLS estimator is $t$-distributed. b. the OLS estimator has the smallest mean square error. c. the OLS estimator is unbiased. d. with homoskedastic errors, the OLS estimator has the smallest variance in the class of linear and unbiased estimators, conditional on $X_1, \ldots, X_n$.

(Short Answer)
4.8/5
(40)
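
A Monte Carlo sketch of statement (d) (the design, sample sizes, and names are my own choices): the "endpoint" estimator $(Y_n - Y_1)/(X_n - X_1)$ is also linear in $Y$ and unbiased conditional on $X$, but under homoskedastic errors its variance exceeds that of OLS, as the theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, n, reps = 1.0, 2.0, 50, 20_000

X = np.linspace(1.0, 10.0, n)   # fixed regressor: we condition on X
dX = X - X.mean()
ols = np.empty(reps)
endpoint = np.empty(reps)       # another linear, unbiased slope estimator

for r in range(reps):
    u = rng.standard_normal(n)  # homoskedastic errors
    Y = beta0 + beta1 * X + u
    ols[r] = (dX * Y).sum() / (dX**2).sum()
    endpoint[r] = (Y[-1] - Y[0]) / (X[-1] - X[0])

print("means (both close to beta1 = 2):", ols.mean(), endpoint.mean())
print("variances (OLS smaller):", ols.var(), endpoint.var())
```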

Finite-sample distributions of the OLS estimator and t-statistics are complicated, unless a. the regressors are all normally distributed. b. the regression errors are homoskedastic and normally distributed, conditional on $X_1, \ldots, X_n$. c. the Gauss-Markov Theorem applies. d. the regressor is also endogenous.

(Short Answer)
4.8/5
(32)

(Requires Appendix material) Your textbook considers various distributions such as the standard normal, $t$, $\chi^2$, and $F$ distributions, and relationships between them. (a) Using statistical tables, give examples showing that the following relationship holds: $F_{n_1, \infty} = \frac{\chi^2_{n_1}}{n_1}$.

(Essay)
4.9/5
(38)
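
A quick numerical check of this relationship, read as a statement about critical values (a sketch using scipy.stats, which I am assuming is an acceptable stand-in for printed tables):

```python
from scipy.stats import chi2, f

# 95th percentiles: F(n1, infinity) vs. chi-square(n1) / n1
for n1 in (1, 3, 5, 10):
    f_crit = f.ppf(0.95, dfn=n1, dfd=10_000_000)  # huge dfd approximates infinity
    chi2_crit = chi2.ppf(0.95, df=n1) / n1
    print(f"n1 = {n1:2d}:  F = {f_crit:.4f}   chi2/n1 = {chi2_crit:.4f}")
```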

An implication of $\sqrt{n}\left(\hat{\beta}_1 - \beta_1\right) \stackrel{d}{\rightarrow} N\left(0, \frac{\operatorname{var}(v_i)}{[\operatorname{var}(X_i)]^2}\right)$ is that

(Multiple Choice)
4.7/5
(35)
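
A simulation sketch of what this limit says (design and names are mine; $v_i = (X_i - \mu_X) u_i$ as in the textbook's notation): with $X_i \sim N(1, 1)$ and $u_i \sim N(0, 1)$ independent, $\operatorname{var}(v_i) = \operatorname{var}(X_i)\operatorname{var}(u_i) = 1$, so $n \cdot \operatorname{var}(\hat{\beta}_1)$ should be close to $\operatorname{var}(v_i)/[\operatorname{var}(X_i)]^2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, n, reps = 1.0, 2.0, 2_000, 5_000

est = np.empty(reps)
for r in range(reps):
    X = rng.normal(1.0, 1.0, n)
    u = rng.standard_normal(n)   # independent of X, so var(v) = var(X) * var(u) = 1
    Y = beta0 + beta1 * X + u
    dX = X - X.mean()
    est[r] = (dX * Y).sum() / (dX**2).sum()

print("n * var(beta1_hat):", n * est.var())  # simulated; should be near 1
print("var(v) / var(X)^2 :", 1.0)            # theoretical limit in this design
```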

Consider the simple regression model $Y_i = \beta_0 + \beta_1 X_i + u_i$, where $X_i > 0$ for all $i$, and the conditional variance is $\operatorname{var}(u_i \mid X_i) = \theta X_i^2$, where $\theta$ is a known constant with $\theta > 0$. (a) Write the weighted regression as $\tilde{Y}_i = \beta_0 \tilde{X}_{0i} + \beta_1 \tilde{X}_{1i} + \tilde{u}_i$. How would you construct $\tilde{Y}_i$, $\tilde{X}_{0i}$, and $\tilde{X}_{1i}$?

(Essay)
4.9/5
(30)
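
A sketch of one standard construction for part (a) (the normalization is a choice; dividing by $X_i$ alone also works since $\theta$ is a known constant): divide the regression through by the conditional standard deviation $\sqrt{\theta}\, X_i$,

$$
\frac{Y_i}{\sqrt{\theta} X_i} = \beta_0 \cdot \frac{1}{\sqrt{\theta} X_i} + \beta_1 \cdot \frac{1}{\sqrt{\theta}} + \frac{u_i}{\sqrt{\theta} X_i},
$$

so that $\tilde{Y}_i = Y_i / (\sqrt{\theta} X_i)$, $\tilde{X}_{0i} = 1 / (\sqrt{\theta} X_i)$, $\tilde{X}_{1i} = 1 / \sqrt{\theta}$, and $\operatorname{var}(\tilde{u}_i \mid X_i) = \theta X_i^2 / (\theta X_i^2) = 1$.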

Besides the Central Limit Theorem, the other cornerstone of asymptotic distribution theory is the

(Multiple Choice)
4.8/5
(42)

(Requires Appendix material) This question requires you to work with Chebychev's Inequality. (a) State Chebychev's Inequality.

(Essay)
4.7/5
(33)
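
For reference, the statement part (a) asks for, in its usual form (notation mine): for any random variable $V$ with mean $\mu_V$, finite variance $\sigma_V^2$, and any $\delta > 0$,

$$
\Pr\left( |V - \mu_V| \geq \delta \right) \leq \frac{\sigma_V^2}{\delta^2}.
$$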

If the errors are heteroskedastic, then

(Multiple Choice)
4.9/5
(38)

One of the earlier textbooks in econometrics, first published in 1971, compared "estimation of a parameter to shooting at a target with a rifle. The bull's-eye can be taken to represent the true value of the parameter, the rifle the estimator, and each shot a particular estimate." Use this analogy to discuss small- and large-sample properties of estimators. How do you think the author approached the $n \rightarrow \infty$ condition? (Depending on your view of the world, feel free to substitute a bow and arrow, or a missile, for the rifle.)

(Essay)
4.8/5
(40)

The following is not one of the Gauss-Markov conditions: a. $\operatorname{var}(u_i \mid X_1, \ldots, X_n) = \sigma_u^2$, $0 < \sigma_u^2 < \infty$, for $i = 1, \ldots, n$. b. the errors are normally distributed. c. $E(u_i u_j \mid X_1, \ldots, X_n) = 0$, $i = 1, \ldots, n$, $j = 1, \ldots, n$, $i \neq j$. d. $E(u_i \mid X_1, \ldots, X_n) = 0$.

(Short Answer)
4.7/5
(33)

(Requires Appendix material) If the Gauss-Markov conditions hold, then OLS is BLUE. In addition, assume here that $X$ is nonrandom. Your textbook proves the Gauss-Markov theorem by using the simple regression model $Y_i = \beta_0 + \beta_1 X_i + u_i$ and assuming a linear estimator $\tilde{\beta}_1 = \sum_{i=1}^n a_i Y_i$. Substituting the simple regression model into this expression then yields two conditions for the unbiasedness of the estimator: $\sum_{i=1}^n a_i = 0$ and $\sum_{i=1}^n a_i X_i = 1$. The variance of the estimator is $\operatorname{var}(\tilde{\beta}_1 \mid X_1, \ldots, X_n) = \sigma_u^2 \sum_{i=1}^n a_i^2$. Unlike your textbook, use the Lagrangian method to minimize the variance subject to the two constraints. Show that the resulting weights correspond to the OLS weights.

(Essay)
4.9/5
(38)
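
A sketch of the Lagrangian route the question describes (suppressing the second-order verification):

$$
\mathcal{L} = \sigma_u^2 \sum_{i=1}^n a_i^2 - \lambda_1 \sum_{i=1}^n a_i - \lambda_2 \left( \sum_{i=1}^n a_i X_i - 1 \right),
\qquad
\frac{\partial \mathcal{L}}{\partial a_i} = 2 \sigma_u^2 a_i - \lambda_1 - \lambda_2 X_i = 0,
$$

so each $a_i$ is linear in $X_i$: $a_i = c_1 + c_2 X_i$. The constraint $\sum_i a_i = 0$ forces $c_1 = -c_2 \bar{X}$, and $\sum_i a_i X_i = 1$ then gives $c_2 \sum_i (X_i - \bar{X}) X_i = c_2 \sum_i (X_i - \bar{X})^2 = 1$, hence

$$
a_i = \frac{X_i - \bar{X}}{\sum_{j=1}^n (X_j - \bar{X})^2},
$$

which are exactly the OLS weights.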

The link between the variance of $\bar{Y}$ and the probability that $\bar{Y}$ is within $\pm\delta$ of $\mu_Y$ is provided by

(Multiple Choice)
4.8/5
(40)