
Introductory Econometrics, 4th Edition, by Jeffrey Wooldridge
ISBN: 978-0324660609
Exercise 20
Consider the simple regression model y = β₀ + β₁x + u under the first four Gauss-Markov assumptions. For some function g(x), for example g(x) = x² or g(x) = log(1 + x²), define zᵢ = g(xᵢ). Define a slope estimator as

$$\tilde{\beta}_1 = \frac{\sum_{i=1}^{n} (z_i - \bar{z})\,y_i}{\sum_{i=1}^{n} (z_i - \bar{z})\,x_i}.$$

(i) Show that β̃₁ is linear and unbiased. Remember: because E(u|x) = 0, you can treat both xᵢ and zᵢ as nonrandom in your derivation.
(ii) Add the homoskedasticity assumption, MLR.5. Show that

$$\operatorname{Var}(\tilde{\beta}_1) = \sigma^2\,\frac{\sum_{i=1}^{n} (z_i - \bar{z})^2}{\left[\sum_{i=1}^{n} (z_i - \bar{z})\,x_i\right]^2}.$$
(iii) Show directly that, under the Gauss-Markov assumptions, Var(β̂₁) ≤ Var(β̃₁), where β̂₁ is the OLS estimator. [Hint: The Cauchy-Schwarz inequality in Appendix B implies that

$$\left[\frac{1}{n}\sum_{i=1}^{n} (z_i - \bar{z})(x_i - \bar{x})\right]^2 \le \left[\frac{1}{n}\sum_{i=1}^{n} (z_i - \bar{z})^2\right]\left[\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2\right];$$

notice that we can drop x̄ from the sample covariance.]
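For part (i), writing the estimator as a linear combination of the yᵢ makes the argument nearly immediate. A sketch (treating xᵢ and zᵢ as nonrandom, as the hint allows; the weights wᵢ are introduced here for illustration):

$$\tilde{\beta}_1 = \sum_{i=1}^{n} w_i\, y_i, \qquad w_i \equiv \frac{z_i - \bar{z}}{\sum_{j=1}^{n} (z_j - \bar{z})\,x_j},$$

so substituting yᵢ = β₀ + β₁xᵢ + uᵢ gives

$$E(\tilde{\beta}_1) = \beta_0 \sum_{i} w_i + \beta_1 \sum_{i} w_i x_i + \sum_{i} w_i\, E(u_i \mid x) = \beta_1,$$

since Σᵢ wᵢ = 0 (the deviations zᵢ − z̄ sum to zero), Σᵢ wᵢxᵢ = 1 by construction, and E(uᵢ|x) = 0.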
Explanation
(i)
For simplicity, define ; this is no...
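The explanation above is cut off, but the claims in parts (i)–(iii) can be checked numerically. Below is a small Monte Carlo sketch (my own illustration, not from the text): with g(x) = x², it compares the z-based estimator β̃₁ = Σ(zᵢ − z̄)yᵢ / Σ(zᵢ − z̄)xᵢ against OLS. Both should average out near the true β₁ (unbiasedness), and the OLS estimator should show the smaller variance (Gauss-Markov). All parameter values here are arbitrary choices for the simulation.

```python
import random

random.seed(42)
n, reps = 200, 2000
beta0, beta1 = 1.0, 2.0          # true parameters (arbitrary for illustration)

# Fixed regressor across replications, so x_i and z_i are nonrandom as in part (i)
x = [random.uniform(0.5, 2.0) for _ in range(n)]
z = [xi ** 2 for xi in x]        # z_i = g(x_i) = x_i^2
xbar = sum(x) / n
zbar = sum(z) / n

tilde, ols = [], []
for _ in range(reps):
    u = [random.gauss(0.0, 1.0) for _ in range(n)]   # homoskedastic errors (MLR.5)
    y = [beta0 + beta1 * xi + ui for xi, ui in zip(x, u)]

    # z-based slope estimator from the exercise
    num_t = sum((zi - zbar) * yi for zi, yi in zip(z, y))
    den_t = sum((zi - zbar) * xi for zi, xi in zip(z, x))
    tilde.append(num_t / den_t)

    # ordinary least squares slope
    num_o = sum((xi - xbar) * yi for xi, yi in zip(x, y))
    den_o = sum((xi - xbar) ** 2 for xi in x)
    ols.append(num_o / den_o)

def mean(a):
    return sum(a) / len(a)

def var(a):
    m = mean(a)
    return sum((ai - m) ** 2 for ai in a) / len(a)

print(f"mean(beta_tilde) = {mean(tilde):.3f}, mean(beta_ols) = {mean(ols):.3f}")
print(f"var(beta_tilde)  = {var(tilde):.5f}, var(beta_ols)  = {var(ols):.5f}")
```

Both means should sit near β₁ = 2.0, and var(β̃₁) should come out at least as large as var(β̂₁), consistent with part (iii). With z = x², the two regressors are highly correlated, so the efficiency loss is small; a g(x) less correlated with x would widen the gap.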