Exam 18: The Theory of Multiple Regression

If the errors are homoskedastic and normally distributed, conditional on X, then

Free
(Multiple Choice)
4.9/5
(33)
Correct Answer:
Verified

C

The OLS estimator for the multiple regression model in matrix form is

Free
(Multiple Choice)
5.0/5
(34)
Correct Answer:
Verified

A

In Chapter 10 of your textbook, panel data estimation was introduced. Panel data consist of observations on the same n entities at two or more time periods T. For two variables, you have (Xit, Yit), i = 1,..., n and t = 1,..., T, where the n entities could be the U.S. states. The example in Chapter 10 used annual data from 1982 to 1988 on the fatality rate and beer taxes. Estimation by OLS, in essence, involved "stacking" the data. (a) What would the variance-covariance matrix of the errors look like in this case if you allowed for homoskedasticity-only standard errors? What is its order? Use an example of a linear regression with one regressor, 4 U.S. states, and 3 time periods. (b) Does it make sense that errors in New Hampshire, say, are uncorrelated with errors in Massachusetts during the same time period ("contemporaneously")? Give examples why this correlation might not be zero. (c) If this correlation were known, could you find an estimator which was more efficient than OLS?

Free
(Essay)
4.8/5
(30)
Correct Answer:
Verified

(a) Under the extended least squares assumptions, $E(UU' \mid X) = \sigma_u^2 I_{nT}$. In the example of 4 U.S. states and 3 time periods, the identity matrix is of order 12×12, or $(nT) \times (nT)$ in general. Specifically,
$$E(UU' \mid X) = \left( \begin{array}{cccc} \sigma_u^2 & 0 & \cdots & 0 \\ 0 & \sigma_u^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_u^2 \end{array} \right)$$
(b) It is reasonable to assume that a shock in one state would have an effect on a neighboring state, particularly when the shock hits the larger of the two, as in the case of Massachusetts. Other examples are Texas and Arkansas, Michigan and Indiana, California and Arizona, New York and New Jersey, etc. A negative oil price shock, which affects the demand for automobiles produced in Michigan, will have repercussions for suppliers located not only in Michigan but also elsewhere.
(c) If the variance-covariance matrix $\Omega$ of the error terms were known, the GLS estimator $\hat{\beta}_{GLS} = (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}Y$ could be used. With the data stacked by state, the variance-covariance matrix would be of the form
$$\Omega = \left( \begin{array}{cccccccccccc} \sigma_{11} & 0 & 0 & \sigma_{12} & 0 & 0 & \sigma_{13} & 0 & 0 & \sigma_{14} & 0 & 0 \\ 0 & \sigma_{11} & 0 & 0 & \sigma_{12} & 0 & 0 & \sigma_{13} & 0 & 0 & \sigma_{14} & 0 \\ 0 & 0 & \sigma_{11} & 0 & 0 & \sigma_{12} & 0 & 0 & \sigma_{13} & 0 & 0 & \sigma_{14} \\ \sigma_{12} & 0 & 0 & \sigma_{22} & 0 & 0 & \sigma_{23} & 0 & 0 & \sigma_{24} & 0 & 0 \\ 0 & \sigma_{12} & 0 & 0 & \sigma_{22} & 0 & 0 & \sigma_{23} & 0 & 0 & \sigma_{24} & 0 \\ 0 & 0 & \sigma_{12} & 0 & 0 & \sigma_{22} & 0 & 0 & \sigma_{23} & 0 & 0 & \sigma_{24} \\ \sigma_{13} & 0 & 0 & \sigma_{23} & 0 & 0 & \sigma_{33} & 0 & 0 & \sigma_{34} & 0 & 0 \\ 0 & \sigma_{13} & 0 & 0 & \sigma_{23} & 0 & 0 & \sigma_{33} & 0 & 0 & \sigma_{34} & 0 \\ 0 & 0 & \sigma_{13} & 0 & 0 & \sigma_{23} & 0 & 0 & \sigma_{33} & 0 & 0 & \sigma_{34} \\ \sigma_{14} & 0 & 0 & \sigma_{24} & 0 & 0 & \sigma_{34} & 0 & 0 & \sigma_{44} & 0 & 0 \\ 0 & \sigma_{14} & 0 & 0 & \sigma_{24} & 0 & 0 & \sigma_{34} & 0 & 0 & \sigma_{44} & 0 \\ 0 & 0 & \sigma_{14} & 0 & 0 & \sigma_{24} & 0 & 0 & \sigma_{34} & 0 & 0 & \sigma_{44} \end{array} \right)$$
(There is a subtle issue here for the case of a feasible GLS estimator, where the variances and covariances have to be estimated. It can be shown that, in that case, the GLS estimator does not exist unless n ≤ T, which is not the case for most panels. It is easiest to see that the estimated variance-covariance matrix is singular for n > T if the data are stacked by time period.)
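The GLS formula in (c) can be illustrated numerically. Below is a minimal Python sketch (simulated data; all variable names are hypothetical) for the special case $\Omega = \sigma_u^2 I_{nT}$ from part (a), in which GLS collapses to OLS:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_periods = 4, 3          # as in the example: 4 states, 3 years
N = n_states * n_periods            # 12 stacked observations

# Under homoskedastic, uncorrelated errors: Omega = sigma_u^2 * I_(nT)
sigma_u2 = 2.0
omega = sigma_u2 * np.eye(N)

X = np.column_stack([np.ones(N), rng.normal(size=N)])   # intercept + one regressor
beta = np.array([1.0, 0.5])
Y = X @ beta + rng.normal(scale=np.sqrt(sigma_u2), size=N)

# GLS: (X' Omega^-1 X)^-1 X' Omega^-1 Y
omega_inv = np.linalg.inv(omega)
beta_gls = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ Y)
# OLS: (X'X)^-1 X'Y
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
```

Because Omega is proportional to the identity here, the two estimators coincide; efficiency gains over OLS arise only when Omega has the non-diagonal structure shown above.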

Minimization of $\sum_{i=1}^{n} \left( Y_i - b_0 - b_1 X_{1i} - \ldots - b_k X_{ki} \right)^2$ results in

(Multiple Choice)
4.8/5
(33)

Using the model Y = Xβ + U and the extended least squares assumptions, derive the OLS estimator $\hat{\beta}$. Discuss the conditions under which $X'X$ is invertible.

(Essay)
4.8/5
(36)
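The invertibility condition in the question above ($X'X$ nonsingular, i.e., no perfect multicollinearity) can be sketched numerically. A minimal example with made-up data, in which a duplicated regressor makes $X'X$ rank-deficient:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)

# Full column rank: intercept plus one regressor -> X'X (2x2) is invertible
X_ok = np.column_stack([np.ones(n), x1])
rank_ok = np.linalg.matrix_rank(X_ok.T @ X_ok)

# Perfect multicollinearity: third column is exactly 2*x1,
# so X has rank 2 and X'X (3x3) is singular
X_bad = np.column_stack([np.ones(n), x1, 2 * x1])
rank_bad = np.linalg.matrix_rank(X_bad.T @ X_bad)
```

In the collinear case `np.linalg.solve` on $X'X$ would fail, mirroring the algebraic statement that $\hat{\beta} = (X'X)^{-1}X'Y$ exists only when X has full column rank k+1.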

The linear multiple regression model can be represented in matrix notation as Y= Xβ + U, where X is of order n×(k+1). k represents the number of

(Multiple Choice)
4.9/5
(26)

The Gauss-Markov theorem for multiple regression states that the OLS estimator

(Multiple Choice)
4.7/5
(28)

Consider the multiple regression model from Chapter 5, where k = 2 and the assumptions of the multiple regression model hold.
(a) Show what the X matrix and the β vector would look like in this case.
(b) Having collected data for 104 countries of the world from the Penn World Tables, you want to estimate the effect of the population growth rate ($X_{1i}$) and the saving rate ($X_{2i}$) (average investment share of GDP from 1980 to 1990) on GDP per worker (relative to the U.S.) in 1990. What are your expected signs for the regression coefficients? What is the order of (X'X) here?
(c) You are asked to find the OLS estimator for the intercept and slopes in this model using the formula $\hat{\beta} = (X'X)^{-1}X'Y$. Since you are more comfortable inverting a 2×2 matrix (the inverse of a 2×2 matrix is $\left( \begin{array}{cc} a & b \\ c & d \end{array} \right)^{-1} = \frac{1}{ad - bc} \left( \begin{array}{cc} d & -b \\ -c & a \end{array} \right)$), you decide to write the multiple regression model in deviations-from-means form. Show what the X matrix, the (X'X) matrix, and the X'Y matrix would look like now. (Hint: use small letters to indicate deviations from means, i.e., $z_i = Z_i - \bar{Z}$, and note that $Y_i = \hat{\beta}_0 + \hat{\beta}_1 X_{1i} + \hat{\beta}_2 X_{2i} + \hat{u}_i$ and $\bar{Y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{X}_1 + \hat{\beta}_2 \bar{X}_2$. Subtracting the second equation from the first gives $y_i = \hat{\beta}_1 x_{1i} + \hat{\beta}_2 x_{2i} + \hat{u}_i$.)
(d) Show that the slope for the population growth rate is given by
$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} y_i x_{1i} \sum_{i=1}^{n} x_{2i}^2 - \sum_{i=1}^{n} y_i x_{2i} \sum_{i=1}^{n} x_{1i} x_{2i}}{\sum_{i=1}^{n} x_{1i}^2 \sum_{i=1}^{n} x_{2i}^2 - \left( \sum_{i=1}^{n} x_{1i} x_{2i} \right)^2}$$
(e) The various sums needed to calculate the OLS estimates are given below:
$\sum_{i=1}^{n} y_i^2 = 8.3103$; $\sum_{i=1}^{n} x_{1i}^2 = 0.0122$; $\sum_{i=1}^{n} x_{2i}^2 = 0.6422$; $\sum_{i=1}^{n} y_i x_{1i} = -0.2304$; $\sum_{i=1}^{n} y_i x_{2i} = 1.5676$; $\sum_{i=1}^{n} x_{1i} x_{2i} = -0.0520$.
Find the numerical values for the effect of population growth and the saving rate on per capita income and interpret these.
(f) Indicate how you would find the intercept in the above case. Is this coefficient of interest in the interpretation of the determinants of per capita income? If not, then why estimate it?

(Essay)
5.0/5
(30)
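The arithmetic for part (e) of the question above is mechanical once the formula in (d) is in hand. A short Python sketch plugging the given sums into the deviations-from-means formulas (the symmetric formula for $\hat{\beta}_2$ is obtained by swapping the roles of $x_{1i}$ and $x_{2i}$):

```python
# Sums given in part (e) of the question
sum_y_x1  = -0.2304
sum_y_x2  =  1.5676
sum_x1_sq =  0.0122
sum_x2_sq =  0.6422
sum_x1_x2 = -0.0520

# Shared denominator of both slope formulas
denom = sum_x1_sq * sum_x2_sq - sum_x1_x2 ** 2

beta1_hat = (sum_y_x1 * sum_x2_sq - sum_y_x2 * sum_x1_x2) / denom
beta2_hat = (sum_y_x2 * sum_x1_sq - sum_y_x1 * sum_x1_x2) / denom
# beta1_hat is roughly -12.95 (population growth rate),
# beta2_hat is roughly 1.39 (saving rate)
```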

Consider the following symmetric and idempotent matrix A: $A = I - \frac{1}{n} \iota \iota'$, where $\iota = \left[ \begin{array}{c} 1 \\ 1 \\ \vdots \\ 1 \end{array} \right]$. a. Show that by postmultiplying this matrix by the vector Y (the LHS variable of the OLS regression), you convert all observations of Y into deviations from the mean. b. Derive the expression Y'AY. What is the order of this expression? Under what other name have you encountered this expression before?

(Essay)
4.9/5
(42)
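Both claims in the question above can be checked numerically. A small sketch with a made-up Y vector, showing that AY demeans Y and that the quadratic form Y'AY equals the total sum of squares:

```python
import numpy as np

n = 5
iota = np.ones((n, 1))
A = np.eye(n) - (iota @ iota.T) / n      # A = I - (1/n) * iota * iota'

Y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # hypothetical LHS variable
demeaned = A @ Y                          # each entry becomes Y_i - Ybar

quad_form = Y @ A @ Y                     # Y'AY: a scalar (order 1x1)
tss = np.sum((Y - Y.mean()) ** 2)         # total sum of squares
```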

A joint hypothesis that is linear in the coefficients and imposes a number of restrictions can be written as

(Multiple Choice)
4.9/5
(33)

Write the following three linear equations in matrix format Ax = b, where x is a 3×1 vector containing q, p, and y, A is a 3×3 matrix of coefficients, and b is a 3×1 vector of constants:
q = 5 + 3p - 2y
q = 10 - p + 10y
p = 6y

(Essay)
4.8/5
(41)
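One way to verify a candidate (A, b) for the system above is to solve it numerically. A sketch moving all variables to the left-hand side and solving with numpy:

```python
import numpy as np

# Rearranged with variables (q, p, y) on the left:
#   q - 3p +  2y = 5
#   q +  p - 10y = 10
#        p -  6y = 0
A = np.array([[1.0, -3.0,   2.0],
              [1.0,  1.0, -10.0],
              [0.0,  1.0,  -6.0]])
b = np.array([5.0, 10.0, 0.0])

x = np.linalg.solve(A, b)   # x = (q, p, y)
```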

The TSLS estimator is

(Multiple Choice)
4.8/5
(38)

The extended least squares assumptions in the multiple regression model include four assumptions from Chapter 6 (ui has conditional mean zero; (Xi,Yi), i = 1,…, n are i.i.d. draws from their joint distribution; Xi and ui have nonzero finite fourth moments; there is no perfect multicollinearity). In addition, there are two further assumptions, one of which is

(Multiple Choice)
4.9/5
(39)

Assume that the data look as follows: $Y = \left( \begin{array}{c} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{array} \right)$, $U = \left( \begin{array}{c} u_1 \\ u_2 \\ \vdots \\ u_n \end{array} \right)$, $X = \left( \begin{array}{c} X_{11} \\ X_{12} \\ \vdots \\ X_{1n} \end{array} \right)$, and $\beta = (\beta_1)$. Using the formula for the OLS estimator $\hat{\beta} = (X'X)^{-1}X'Y$, derive the formula for $\hat{\beta}_1$, the only slope in this "regression through the origin."

(Essay)
4.8/5
(38)
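With a single column and no intercept, $(X'X)^{-1}X'Y$ reduces to a ratio of sums, $\hat{\beta}_1 = \sum X_{1i} Y_i / \sum X_{1i}^2$. A sketch with simulated data (names hypothetical) cross-checking the scalar formula against the general matrix formula:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.1, size=n)   # true slope 0.7, no intercept

# Scalar version: sum of cross-products over sum of squares
beta1_hat = np.sum(x * y) / np.sum(x ** 2)

# Matrix version: (X'X)^-1 X'Y with X as an n x 1 column
X = x.reshape(-1, 1)
beta_matrix = np.linalg.solve(X.T @ X, X.T @ y)
```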

Give several economic examples of how to test various joint linear hypotheses using matrix notation. Include specifications of Rβ = r where you test for (i)all coefficients other than the constant being zero, (ii)a subset of coefficients being zero, and (iii)equality of coefficients. Talk about the possible distributions involved in finding critical values for your hypotheses.

(Essay)
4.8/5
(37)
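The three kinds of restrictions in the question above differ only in how the matrix R and vector r are built. A sketch (for a hypothetical regression with an intercept and three slopes, $\beta = (\beta_0, \beta_1, \beta_2, \beta_3)'$) constructing Rβ = r for each case:

```python
import numpy as np

k = 3   # three slopes plus an intercept: beta = (b0, b1, b2, b3)'

# (i) all coefficients other than the constant are zero: b1 = b2 = b3 = 0
R_all = np.hstack([np.zeros((k, 1)), np.eye(k)])   # k restrictions
r_all = np.zeros(k)

# (ii) a subset of coefficients is zero, e.g. b2 = b3 = 0
R_sub = np.array([[0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
r_sub = np.zeros(2)

# (iii) equality of coefficients, e.g. b1 = b2, written as b1 - b2 = 0
R_eq = np.array([[0.0, 1.0, -1.0, 0.0]])
r_eq = np.zeros(1)

# A beta that satisfies restriction (i)
beta_null = np.array([1.5, 0.0, 0.0, 0.0])
```

Under the null, the Wald statistic built from R, r, and the estimated covariance of $\hat{\beta}$ is compared against $F_{q,\infty}$ (or $\chi^2_q / q$) critical values, with q the number of rows of R.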

Your textbook shows that the matrix $M_X = I_n - P_X$ is a symmetric idempotent matrix. Consider a different matrix A, defined as follows: $A = I - \frac{1}{n} \iota \iota'$, where $\iota = \left[ \begin{array}{c} 1 \\ 1 \\ \vdots \\ 1 \end{array} \right]$. a. Show what the elements of A look like. b. Show that A is a symmetric idempotent matrix. c. Show that $A\iota = 0$. d. Show that $A\hat{U} = \hat{U}$, where $\hat{U}$ is the vector of OLS residuals from a multiple regression.

(Essay)
4.9/5
(28)
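Parts b through d of the question above can be verified numerically before proving them algebraically. A sketch with simulated data (names hypothetical); part d holds because OLS residuals from a regression with an intercept sum to zero, so demeaning them changes nothing:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
iota = np.ones((n, 1))
A = np.eye(n) - (iota @ iota.T) / n

sym_ok  = np.allclose(A, A.T)          # b: symmetric
idem_ok = np.allclose(A @ A, A)        # b: idempotent
zero_ok = np.allclose(A @ iota, 0.0)   # c: A annihilates the ones vector

# d: OLS residuals from a regression WITH an intercept
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
u_hat = Y - X @ np.linalg.solve(X.T @ X, X.T @ Y)
```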

Write down, in general, the variance-covariance matrix for the multiple regression error term U. Using the assumptions $\mathrm{cov}(u_i, u_j \mid X_i, X_j) = 0$ and $\mathrm{var}(u_i \mid X_i) = \sigma_u^2$, show that the variance-covariance matrix can be written as $\sigma_u^2 I_n$.

(Essay)
4.9/5
(36)

Given $A = \left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right)$, $B = \left( \begin{array}{cc} b_{11} & b_{12} \\ b_{21} & b_{22} \end{array} \right)$, and $C = \left( \begin{array}{ccc} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \end{array} \right)$, show that $(A + B)' = A' + B'$ and $(AC)' = C'A'$.

(Essay)
4.9/5
(33)
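Before working through the element-by-element proof, the two transpose identities can be sanity-checked on random matrices. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))
C = rng.normal(size=(2, 3))

lhs_sum = (A + B).T    # (A + B)'
rhs_sum = A.T + B.T    # A' + B'

lhs_prod = (A @ C).T   # (AC)' -- the order reverses under transposition
rhs_prod = C.T @ A.T   # C'A'
```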

The difference between the central limit theorems for scalar and vector-valued random variables is

(Multiple Choice)
4.9/5
(34)

The multiple regression model can be written in matrix form as follows:

(Multiple Choice)
4.9/5
(31)