Exam 18: The Theory of Multiple Regression


The OLS estimator

(Multiple Choice)

Correct Answer:

A

Write the following four restrictions in the form $\boldsymbol{R}\boldsymbol{\beta} = \boldsymbol{r}$, where the hypotheses are to be tested simultaneously: $\beta_3 = 2\beta_5$, $\beta_1 + \beta_2 = 1$, $\beta_4 = 0$, $\beta_2 = -\beta_6$. Can you write the following restriction $\beta_2 = -\frac{\beta_3}{\beta_1}$ in the same format? Why not?

(Essay)

Correct Answer:

$$\left( \begin{array}{ccccccc} 0 & 0 & 0 & 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array} \right) \left( \begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \\ \beta_5 \\ \beta_6 \end{array} \right) = \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array} \right)$$
The restriction $\beta_2 = -\frac{\beta_3}{\beta_1}$ cannot be written in the same format because it is nonlinear.
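
As a quick numerical check of the answer above, here is a minimal numpy sketch; the vector beta below is one hypothetical coefficient vector chosen only because it satisfies all four restrictions.

```python
import numpy as np

# R and r exactly as in the answer above; coefficients ordered beta_0, ..., beta_6.
R = np.array([
    [0, 0, 0, 1, 0, -2, 0],   # beta_3 = 2 * beta_5
    [0, 1, 1, 0, 0, 0, 0],    # beta_1 + beta_2 = 1
    [0, 0, 0, 0, 1, 0, 0],    # beta_4 = 0
    [0, 0, 1, 0, 0, 0, 1],    # beta_2 = -beta_6
], dtype=float)
r = np.array([0.0, 1.0, 0.0, 0.0])

# Hypothetical beta satisfying all four restrictions (for illustration only).
beta = np.array([7.0, 0.4, 0.6, 2.0, 0.0, 1.0, -0.6])

print(np.allclose(R @ beta, r))   # True
```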

Write an essay on the difference between the OLS estimator and the GLS estimator.

(Essay)

Correct Answer:

Answers will vary by student, but some of the following points should be made.
The multiple regression model is $Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \cdots + \beta_k X_{ki} + u_i$, $i = 1, \ldots, n$, which, in matrix form, can be written as $\boldsymbol{Y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{U}$. The OLS estimator is derived by minimizing the sum of squared prediction mistakes and results in the following formula: $\hat{\boldsymbol{\beta}} = \left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1} \boldsymbol{X}'\boldsymbol{Y}$. There are two GLS estimators. The infeasible GLS estimator is $\hat{\boldsymbol{\beta}}^{GLS} = \left( \boldsymbol{X}'\boldsymbol{\Omega}^{-1}\boldsymbol{X} \right)^{-1} \left( \boldsymbol{X}'\boldsymbol{\Omega}^{-1}\boldsymbol{Y} \right)$, where $\boldsymbol{\Omega} = E(\boldsymbol{U}\boldsymbol{U}' \mid \boldsymbol{X})$ is the conditional variance-covariance matrix of the errors. Since $\boldsymbol{\Omega}$ is typically unknown, the estimator cannot be calculated, hence its name. However, a feasible GLS estimator can be calculated if $\boldsymbol{\Omega}$ is a known function of a number of parameters that can be estimated. Once these parameters have been estimated, they can be used to calculate $\hat{\boldsymbol{\Omega}}$, the estimator of $\boldsymbol{\Omega}$. The feasible GLS estimator is defined as $\hat{\boldsymbol{\beta}}^{GLS} = \left( \boldsymbol{X}'\hat{\boldsymbol{\Omega}}^{-1}\boldsymbol{X} \right)^{-1} \left( \boldsymbol{X}'\hat{\boldsymbol{\Omega}}^{-1}\boldsymbol{Y} \right)$. Under the extended least squares assumptions the errors are homoskedastic and normally distributed conditional on $\boldsymbol{X}$, so $\boldsymbol{\Omega} = \sigma_u^2 \boldsymbol{I}_n$ and the GLS estimator coincides with the OLS estimator; GLS differs from, and is more efficient than, OLS when the errors are heteroskedastic or correlated across observations.
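
To make the contrast concrete, here is a minimal numpy sketch comparing the two estimators under the purely illustrative assumption that $\boldsymbol{\Omega}$ is diagonal with $\mathrm{var}(u_i \mid X_i) = 0.5\,X_{1i}^2$, so that feasible GLS reduces to weighted least squares; the data-generating values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 5, n)
X = np.column_stack([np.ones(n), x])     # regressor matrix with an intercept
sigma2 = 0.5 * x**2                      # assumed skedastic function (illustration only)
u = rng.normal(0.0, np.sqrt(sigma2))     # heteroskedastic errors
y = X @ np.array([1.0, 2.0]) + u         # true beta = (1, 2)'

# OLS: beta_hat = (X'X)^{-1} X'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Feasible GLS with a diagonal Omega_hat: weighting each observation by 1/x_i^2
# is proportional to Omega^{-1} under the assumed skedastic function (i.e., WLS).
W = np.diag(1.0 / x**2)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(beta_ols)   # consistent, but not efficient under heteroskedasticity
print(beta_gls)   # consistent and more efficient here
```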

The multiple regression model can be written in matrix form as follows: a. $\boldsymbol{Y} = \boldsymbol{X}\boldsymbol{\beta}$. b. $\boldsymbol{Y} = \boldsymbol{X} + \boldsymbol{U}$. c. $\boldsymbol{Y} = \boldsymbol{\beta}\boldsymbol{X} + \boldsymbol{U}$. d. $\boldsymbol{Y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{U}$.

(Short Answer)

The Gauss-Markov theorem for multiple regression states that the OLS estimator

(Multiple Choice)

Assume that the data looks as follows: $\boldsymbol{Y} = \left( \begin{array}{c} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{array} \right)$, $\boldsymbol{U} = \left( \begin{array}{c} u_1 \\ u_2 \\ \vdots \\ u_n \end{array} \right)$, $\boldsymbol{X} = \left( \begin{array}{c} X_{11} \\ X_{12} \\ \vdots \\ X_{1n} \end{array} \right)$, and $\boldsymbol{\beta} = \left( \beta_1 \right)$. Using the formula for the OLS estimator $\hat{\boldsymbol{\beta}} = \left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1} \boldsymbol{X}'\boldsymbol{Y}$, derive the formula for $\widehat{\beta}_1$, the only slope in this "regression through the origin."

(Essay)
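
A sketch of the algebra this question is after: with $\boldsymbol{X}$ an $n \times 1$ column vector, both $\boldsymbol{X}'\boldsymbol{X}$ and $\boldsymbol{X}'\boldsymbol{Y}$ are scalars, so the matrix formula collapses to a ratio of sums:
$$\boldsymbol{X}'\boldsymbol{X} = \sum_{i=1}^{n} X_{1i}^{2}, \qquad \boldsymbol{X}'\boldsymbol{Y} = \sum_{i=1}^{n} X_{1i} Y_{i}, \qquad \widehat{\beta}_{1} = \left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1} \boldsymbol{X}'\boldsymbol{Y} = \frac{\sum_{i=1}^{n} X_{1i} Y_{i}}{\sum_{i=1}^{n} X_{1i}^{2}}$$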

The presence of correlated error terms creates problems for inference based on OLS. These can be overcome by

(Multiple Choice)

Write the following three linear equations in matrix format $\boldsymbol{A}\boldsymbol{x} = \boldsymbol{b}$, where $\boldsymbol{x}$ is a $3 \times 1$ vector containing $q$, $p$, and $y$, $\boldsymbol{A}$ is a $3 \times 3$ matrix of coefficients, and $\boldsymbol{b}$ is a $3 \times 1$ vector of constants.
q = 5 + 3p - 2y
q = 10 - p + 10y
p = 6y

(Essay)
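
One way to organize the system above is to move all variables to the left-hand side with $\boldsymbol{x} = (q, p, y)'$; the numpy sketch below builds $\boldsymbol{A}$ and $\boldsymbol{b}$ this way and solves the system as a check (the ordering of equations and variables is one of several valid choices).

```python
import numpy as np

# Rearranged with all variables on the left, x = (q, p, y)':
#   q - 3p +  2y = 5
#   q +  p - 10y = 10
#        p -  6y = 0
A = np.array([[1.0, -3.0,   2.0],
              [1.0,  1.0, -10.0],
              [0.0,  1.0,  -6.0]])
b = np.array([5.0, 10.0, 0.0])

q, p, y = np.linalg.solve(A, b)   # solves A x = b
print(q, p, y)
```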

Given the following matrices $\boldsymbol{A} = \left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right)$, $\boldsymbol{B} = \left( \begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22} \end{array} \right)$, and $\boldsymbol{C} = \left( \begin{array}{ccc} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \end{array} \right)$, show that $(\boldsymbol{A} + \boldsymbol{B})' = \boldsymbol{A}' + \boldsymbol{B}'$ and $(\boldsymbol{A}\boldsymbol{C})' = \boldsymbol{C}'\boldsymbol{A}'$.

(Essay)
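
Not a substitute for the element-by-element proof the question asks for, but a quick numerical spot-check of the two identities with randomly drawn matrices of the stated dimensions (a minimal numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))
C = rng.normal(size=(2, 3))

print(np.allclose((A + B).T, A.T + B.T))   # True: transpose of a sum
print(np.allclose((A @ C).T, C.T @ A.T))   # True: transpose of a product reverses the order
```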

A joint hypothesis that is linear in the coefficients and imposes a number of restrictions can be written as a. $\left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1} \boldsymbol{X}'\boldsymbol{Y}$. b. $\boldsymbol{R}\boldsymbol{\beta} = \boldsymbol{r}$. c. $\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}$. d. $\boldsymbol{R}\boldsymbol{\beta} = \boldsymbol{0}$.

(Short Answer)

The GLS estimator is defined as a. $\left( \boldsymbol{X}'\boldsymbol{\Omega}^{-1}\boldsymbol{X} \right)^{-1} \left( \boldsymbol{X}'\boldsymbol{\Omega}^{-1}\boldsymbol{Y} \right)$. b. $\left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1} \boldsymbol{X}'\boldsymbol{Y}$. c. $\boldsymbol{A}'\boldsymbol{Y}$. d. $\left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1} \boldsymbol{X}'\boldsymbol{U}$.

(Short Answer)

In Chapter 10 of your textbook, panel data estimation was introduced. Panel data consist of observations on the same n entities at two or more time periods T. For two variables, you have $\left( X_{it}, Y_{it} \right)$, $i = 1, \ldots, n$ and $t = 1, \ldots, T$, where n could be the U.S. states. The example in Chapter 10 used annual data from 1982 to 1988 for the fatality rate and beer taxes. Estimation by OLS, in essence, involved "stacking" the data.
(a) What would the variance-covariance matrix of the errors look like in this case if you allowed for homoskedasticity-only standard errors? What is its order? Use an example of a linear regression with one regressor of 4 U.S. states and 3 time periods.

(Essay)
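
A minimal sketch of the structure implied by the homoskedasticity-only assumption once the 4 states and 3 periods are stacked into nT = 12 observations (the value of $\sigma_u^2$ below is a hypothetical placeholder):

```python
import numpy as np

n, T = 4, 3                        # 4 states, 3 time periods -> n*T = 12 stacked observations
sigma_u2 = 1.0                     # hypothetical common error variance
Omega = sigma_u2 * np.eye(n * T)   # homoskedasticity-only: sigma_u^2 * I_{nT}
print(Omega.shape)                 # (12, 12), i.e., the matrix is of order 12
```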

Using the model $\boldsymbol{Y} = \boldsymbol{X}\boldsymbol{\beta} + \boldsymbol{U}$ and the extended least squares assumptions, derive the OLS estimator $\hat{\boldsymbol{\beta}}$. Discuss the conditions under which $\boldsymbol{X}'\boldsymbol{X}$ is invertible.

(Essay)
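
The core steps the derivation turns on (a sketch, consistent with the OLS formula quoted in the questions above): minimize the sum of squared prediction mistakes in $\boldsymbol{b}$ and solve the first-order condition.
$$\min_{\boldsymbol{b}} \; (\boldsymbol{Y} - \boldsymbol{X}\boldsymbol{b})'(\boldsymbol{Y} - \boldsymbol{X}\boldsymbol{b}) \;\;\Rightarrow\;\; \boldsymbol{X}'(\boldsymbol{Y} - \boldsymbol{X}\hat{\boldsymbol{\beta}}) = \boldsymbol{0}_{k+1} \;\;\Rightarrow\;\; \hat{\boldsymbol{\beta}} = \left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1} \boldsymbol{X}'\boldsymbol{Y}$$
$\boldsymbol{X}'\boldsymbol{X}$ is invertible exactly when $\boldsymbol{X}$ has full column rank $k+1$, that is, when no regressor is a perfect linear combination of the others (no perfect multicollinearity) and $n \geq k+1$.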

One of the properties of the OLS estimator is a. $\boldsymbol{X}\hat{\boldsymbol{\beta}} = \boldsymbol{0}_{k+1}$. b. that the coefficient vector $\hat{\boldsymbol{\beta}}$ has full rank. c. $\boldsymbol{X}'(\boldsymbol{Y} - \boldsymbol{X}\hat{\boldsymbol{\beta}}) = \boldsymbol{0}_{k+1}$. d. $\left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1} = \boldsymbol{X}'\boldsymbol{Y}$.

(Short Answer)

The difference between the central limit theorems for scalar- and vector-valued random variables is

(Multiple Choice)

An estimator of $\beta$ is said to be linear if

(Multiple Choice)

In the case when the errors are homoskedastic and normally distributed, conditional on $\boldsymbol{X}$, then a. $\hat{\boldsymbol{\beta}}$ is distributed $N\left( \boldsymbol{\beta}, \Sigma_{\hat{\beta} \mid X} \right)$, where $\Sigma_{\hat{\beta} \mid X} = \sigma_u^2 \boldsymbol{I}_{k+1}$. b. $\hat{\boldsymbol{\beta}}$ is distributed $N\left( \boldsymbol{\beta}, \Sigma_{\hat{\beta}} \right)$, where $\Sigma_{\hat{\beta}} = \Sigma_{\sqrt{n}(\hat{\beta} - \beta)}/n = \boldsymbol{Q}_X^{-1} \Sigma_V \boldsymbol{Q}_X^{-1}/n$. c. $\hat{\boldsymbol{\beta}}$ is distributed $N\left( \boldsymbol{\beta}, \Sigma_{\hat{\beta} \mid X} \right)$, where $\Sigma_{\hat{\beta} \mid X} = \sigma_u^2 \left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1}$. d. $\hat{\boldsymbol{U}} = \boldsymbol{P}_X \boldsymbol{Y}$, where $\boldsymbol{P}_X = \boldsymbol{X}\left( \boldsymbol{X}'\boldsymbol{X} \right)^{-1}\boldsymbol{X}'$.

(Short Answer)

Consider the multiple regression model from Chapter 5, where k = 2 and the assumptions of the multiple regression model hold.
(a) Show what the $\boldsymbol{X}$ matrix and the $\boldsymbol{\beta}$ vector would look like in this case.

(Essay)
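
For reference, with an intercept and k = 2 regressors the stacked regressor matrix and coefficient vector take the form below (a sketch following the notation $X_{ji}$ for regressor j and observation i used elsewhere in this exam):
$$\boldsymbol{X} = \left( \begin{array}{ccc} 1 & X_{11} & X_{21} \\ 1 & X_{12} & X_{22} \\ \vdots & \vdots & \vdots \\ 1 & X_{1n} & X_{2n} \end{array} \right), \qquad \boldsymbol{\beta} = \left( \begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \end{array} \right)$$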

In order for a matrix A to have an inverse, its determinant cannot be zero. Derive the determinant of the following matrices:
$$A = \left( \begin{array}{cc} 3 & 6 \\ -2 & 1 \end{array} \right), \qquad B = \left( \begin{array}{ccc} 1 & -1 & 2 \\ 1 & 0 & 3 \\ 4 & 0 & 2 \end{array} \right), \qquad \boldsymbol{X}'\boldsymbol{X} \text{ where } \boldsymbol{X} = \left( \begin{array}{cc} 1 & 10 \end{array} \right)$$

(Essay)
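
A quick numerical check of the three determinants (a numpy sketch; note that the question's $\boldsymbol{X}$ is a $1 \times 2$ row vector, so $\boldsymbol{X}'\boldsymbol{X}$ is $2 \times 2$):

```python
import numpy as np

A = np.array([[3.0, 6.0],
              [-2.0, 1.0]])
B = np.array([[1.0, -1.0, 2.0],
              [1.0,  0.0, 3.0],
              [4.0,  0.0, 2.0]])
X = np.array([[1.0, 10.0]])          # 1 x 2 row vector

print(np.linalg.det(A))              # 15.0
print(np.linalg.det(B))              # -10.0
print(np.linalg.det(X.T @ X))        # ~0 (exactly 0 analytically): X'X is singular, so no inverse
```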

One implication of the extended least squares assumptions in the multiple regression model is that a. feasible GLS should be used for estimation. b. $E(\boldsymbol{U} \mid \boldsymbol{X}) = \boldsymbol{I}_n$. c. $\boldsymbol{X}'\boldsymbol{X}$ is singular. d. the conditional distribution of $\boldsymbol{U}$ given $\boldsymbol{X}$ is $N\left( \boldsymbol{0}_n, \sigma_u^2 \boldsymbol{I}_n \right)$.

(Short Answer)