Exam 6: Linear Regression With Multiple Regressors

In the multiple regression model, the SER is given by
a. $\frac{1}{n-2}\sum_{i=1}^{n}\hat{u}_{i}$.
b. $\frac{1}{n-k-2}\sum_{i=1}^{N}u_{i}$.
c. $\frac{1}{n-k-2}\sum_{i=1}^{n}\hat{u}_{i}$.
d. $\frac{1}{n-k-1}\sum_{i=1}^{n}\hat{u}_{i}^{2}$.

(Short Answer)
4.8/5
(38)
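As a cross-check on the degrees-of-freedom correction in the options above, here is a minimal numeric sketch with made-up data, assuming the textbook convention that the squared SER equals the sum of squared residuals divided by $n - k - 1$:

```python
import numpy as np

# Hypothetical small data set: n = 6 observations, k = 2 regressors plus intercept.
rng = np.random.default_rng(0)
n, k = 6, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = rng.normal(size=n)

# OLS fit and residuals.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

# The degrees-of-freedom correction is n - k - 1; note that answer (d) as
# written is the squared SER, and the SER itself is its square root.
ser = np.sqrt(np.sum(residuals**2) / (n - k - 1))
print(round(ser, 4))
```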

In the case of perfect multicollinearity, OLS is unable to calculate the coefficients for the explanatory variables, because it is impossible to change one variable while holding all other variables constant. To see why this is the case, consider the coefficient for the first explanatory variable in the case of a multiple regression model with two explanatory variables: $\hat{\beta}_{1} = \frac{\sum_{i=1}^{n} y_{i} x_{1i} \sum_{i=1}^{n} x_{2i}^{2} - \sum_{i=1}^{n} y_{i} x_{2i} \sum_{i=1}^{n} x_{1i} x_{2i}}{\sum_{i=1}^{n} x_{1i}^{2} \sum_{i=1}^{n} x_{2i}^{2} - \left(\sum_{i=1}^{n} x_{1i} x_{2i}\right)^{2}}$ (small letters refer to deviations from means, as in $z_{i} = Z_{i} - \bar{Z}$). Divide each of the four terms by $\sum_{i=1}^{n} x_{1i}^{2} \sum_{i=1}^{n} x_{2i}^{2}$ to derive an expression in terms of regression coefficients from the simple (one explanatory variable) regression model. In the case of perfect multicollinearity, what would be the $R^{2}$ from the regression of $X_{1i}$ on $X_{2i}$? As a result, what would be the value of the denominator in the above expression for $\hat{\beta}_{1}$?

(Essay)
4.8/5
(35)
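A quick numeric illustration of the question's last step, using made-up data in which $X_2$ is an exact linear function of $X_1$ (so the $R^2$ from regressing $X_1$ on $X_2$ is one), showing that the denominator of the $\hat{\beta}_1$ formula collapses to zero:

```python
import numpy as np

# Hypothetical data: x2 is an exact multiple of x1, i.e. perfect multicollinearity.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = 3.0 * x1

# Work with deviations from means, as the formula does.
x1d = x1 - x1.mean()
x2d = x2 - x2.mean()

# Denominator of the beta_1 formula: sum(x1d^2)*sum(x2d^2) - (sum(x1d*x2d))^2.
denominator = np.sum(x1d**2) * np.sum(x2d**2) - np.sum(x1d * x2d) ** 2
print(denominator)  # zero (up to floating-point error): beta_1 is undefined
```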

One of the least squares assumptions in the multiple regression model is that you have random variables which are "i.i.d." This stands for

(Multiple Choice)
5.0/5
(40)

The main advantage of using multiple regression analysis over differences in means testing is that the regression technique

(Multiple Choice)
4.7/5
(37)

The intercept in the multiple regression model

(Multiple Choice)
4.8/5
(29)

Give at least three examples from macroeconomics and three from microeconomics that involve specified equations in a multiple regression analysis framework. Indicate in each case what the expected signs of the coefficients would be and if theory gives you an indication about the likely size of the coefficients.

(Essay)
4.8/5
(35)

Consider the following multiple regression models (a) to (d) below. DFemme = 1 if the individual is a female, and is zero otherwise; DMale is a binary variable which takes on the value one if the individual is male, and is zero otherwise; DMarried is a binary variable which is unity for married individuals and is zero otherwise; and DSingle is (1 - DMarried). Regressing weekly earnings (Earn) on a set of explanatory variables, you will experience perfect multicollinearity in the following cases unless:
a. $\widehat{\text{Earn}_{i}} = \hat{\beta}_{0} + \hat{\beta}_{1}\text{DFemme} + \hat{\beta}_{2}\text{DMale} + \hat{\beta}_{3} X_{3i}$.
b. $\widehat{\text{Earn}_{i}} = \hat{\beta}_{0} + \hat{\beta}_{1}\text{DMarried} + \hat{\beta}_{2}\text{DSingle} + \hat{\beta}_{3} X_{3i}$.
c. $\widehat{\text{Earn}_{i}} = \hat{\beta}_{0} + \hat{\beta}_{1}\text{DFemme} + \hat{\beta}_{3} X_{3i}$.

(Short Answer)
4.9/5
(29)
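The dummy-variable trap in cases (a) and (b) can be seen as a rank deficiency of the regressor matrix. A small sketch with hypothetical data, checking the matrix rank directly:

```python
import numpy as np

# Hypothetical sample of 5 individuals.
dfemme = np.array([1, 0, 1, 0, 0])
dmale = 1 - dfemme                       # DMale = 1 - DFemme
x3 = np.array([10.0, 12.0, 9.0, 15.0, 11.0])
intercept = np.ones(5)

# Case (a): intercept + DFemme + DMale + X3.
# DFemme + DMale equals the intercept column, so one column is redundant.
X_a = np.column_stack([intercept, dfemme, dmale, x3])
print(np.linalg.matrix_rank(X_a))  # 3, not 4: perfect multicollinearity

# Case (c): dropping one dummy restores full column rank, so OLS is computable.
X_c = np.column_stack([intercept, dfemme, x3])
print(np.linalg.matrix_rank(X_c))  # 3: full rank
```

Case (b) fails for the same reason: DMarried + DSingle = 1 replicates the intercept column.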

If you had a two regressor regression model, then omitting one variable which is relevant

(Multiple Choice)
4.9/5
(38)

The Solow growth model suggests that countries with identical saving rates and population growth rates should converge to the same per capita income level. This result has been extended to include investment in human capital (education) as well as investment in physical capital. This hypothesis is referred to as the "conditional convergence hypothesis," since the convergence is dependent on countries obtaining the same values in the driving variables. To test the hypothesis, you collect data from the Penn World Tables on the average annual growth rate of GDP per worker (g6090) for the 1960-1990 sample period, and regress it on the (i) initial starting level of GDP per worker relative to the United States in 1960 (RelProd60), (ii) average population growth rate of the country (n), (iii) average investment share of GDP from 1960 to 1990 ($s_K$ - remember investment equals savings), and (iv) educational attainment in years for 1985 (Educ). The results for close to 100 countries are as follows:
$\widehat{g6090} = 0.004 - 0.172 \times n + 0.133 \times s_K + 0.002 \times Educ - 0.044 \times RelProd60$, $R^2 = 0.537$, $SER = 0.011$
(a) Interpret the results. Do the coefficients have the expected signs? Why does a negative coefficient on the initial level of per capita income indicate conditional convergence ("beta-convergence")?

(Essay)
4.8/5
(44)

You have collected data for 104 countries to address the difficult questions of the determinants for differences in the standard of living among the countries of the world. You recall from your macroeconomics lectures that the neoclassical growth model suggests that output per worker (per capita income) levels are determined by, among others, the saving rate and population growth rate. To test the predictions of this growth model, you run the following regression:
$\widehat{\text{RelPersInc}} = 0.339 - 12.894 \times n + 1.397 \times s_{K}$, $R^{2} = 0.621$, $SER = 0.177$,
where RelPersInc is GDP per worker relative to the United States, n is the average population growth rate, 1980-1990, and $s_{K}$ is the average investment share of GDP from 1960 to 1990 (remember investment equals saving).
(a) Interpret the results. Do the signs correspond to what you expected them to be? Explain.

(Essay)
4.8/5
(38)
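When interpreting a fitted equation like the one above, it can help to plug in sample values. A small sketch using the reported coefficients with hypothetical country values (n and sK chosen for illustration only):

```python
# Fitted equation from the question: RelPersInc-hat = 0.339 - 12.894*n + 1.397*sK.
# Hypothetical values: 2% average population growth, 20% investment share.
n_growth = 0.02
s_k = 0.20

rel_pers_inc_hat = 0.339 - 12.894 * n_growth + 1.397 * s_k
print(round(rel_pers_inc_hat, 3))  # predicted GDP per worker relative to the US
```

The signs work as the growth model predicts: raising n lowers predicted relative income, while raising sK raises it.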

It is not hard, but tedious, to derive the OLS formulae for the slope coefficient in the multiple regression case with two explanatory variables. The formula for the first regression slope is $\hat{\beta}_{1} = \frac{\sum_{i=1}^{n} y_{i} x_{1i} \sum_{i=1}^{n} x_{2i}^{2} - \sum_{i=1}^{n} y_{i} x_{2i} \sum_{i=1}^{n} x_{1i} x_{2i}}{\sum_{i=1}^{n} x_{1i}^{2} \sum_{i=1}^{n} x_{2i}^{2} - \left(\sum_{i=1}^{n} x_{1i} x_{2i}\right)^{2}}$ (small letters refer to deviations from means, as in $z_{i} = Z_{i} - \bar{Z}$). Show that this formula reduces to the slope coefficient for the linear regression model with one regressor if the sample correlation between the two explanatory variables is zero. Given this result, what can you say about the effect of omitting the second explanatory variable from the regression?

(Essay)
4.9/5
(39)
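The claimed reduction can be verified numerically. A sketch with made-up regressors whose deviations from means are exactly orthogonal (zero sample correlation):

```python
import numpy as np

# Hypothetical data: x2's deviations are orthogonal to x1's by construction.
x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, -1.0, -1.0, 1.0])
y = np.array([2.0, 3.0, 5.0, 4.0])

x1d, x2d, yd = x1 - x1.mean(), x2 - x2.mean(), y - y.mean()
assert np.isclose(np.sum(x1d * x2d), 0.0)  # zero sample correlation

# Multiple-regression slope from the formula in the question.
num = np.sum(yd * x1d) * np.sum(x2d**2) - np.sum(yd * x2d) * np.sum(x1d * x2d)
den = np.sum(x1d**2) * np.sum(x2d**2) - np.sum(x1d * x2d) ** 2
beta1_multiple = num / den

# Simple-regression slope with x1 alone: sum(yd*x1d) / sum(x1d^2).
beta1_simple = np.sum(yd * x1d) / np.sum(x1d**2)
print(beta1_multiple, beta1_simple)  # identical when the cross term vanishes
```

With the cross term zero, omitting $X_2$ leaves the slope on $X_1$ unchanged, which is the omitted-variable-bias intuition the question is after.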

(Requires some Calculus) Consider the sample regression function $Y_{i} = \hat{\beta}_{0} + \hat{\beta}_{1} X_{1i} + \hat{\beta}_{2} X_{2i}$. Take the total derivative. Next show that the partial derivative $\frac{\Delta Y_{i}}{\Delta X_{1i}}$ is obtained by holding $X_{2i}$ constant, or controlling for $X_{2i}$.

(Essay)
4.8/5
(36)
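The "holding $X_2$ constant" step can be mimicked numerically: change $X_1$ by a small amount with $X_2$ fixed, and the change in $Y$ per unit change in $X_1$ recovers $\hat{\beta}_1$ exactly (the function is linear). A sketch with hypothetical coefficient values:

```python
# Hypothetical fitted coefficients for the sample regression function.
beta0, beta1, beta2 = 1.0, 2.5, -0.7

def y_hat(x1, x2):
    return beta0 + beta1 * x1 + beta2 * x2

# Perturb X1 while holding X2 fixed (the dX2 = 0 case of the total derivative).
x1, x2 = 3.0, 4.0
delta = 1e-6
partial_x1 = (y_hat(x1 + delta, x2) - y_hat(x1, x2)) / delta
print(round(partial_x1, 4))  # equals beta1 = 2.5
```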

The administration of your university/college is thinking about implementing a policy of coed floors only in dormitories. Currently there are only single-gender floors. One reason behind such a policy might be to generate an atmosphere of better "understanding" between the sexes. The Dean of Students (DoS) has decided to investigate if such a behavior results in more "togetherness" by attempting to find the determinants of the gender composition at the dinner table in your main dining hall, and in that of a neighboring university, which only allows for coed floors in its dorms. The survey includes 176 students, 63 from your university/college, and 113 from a neighboring institution.
(a) The Dean's first problem is how to define gender composition. To begin with, the survey excludes single persons' tables, since the study is to focus on group behavior. The Dean also eliminates sports teams from the analysis, since a large number of single-gender students will sit at the same table. Finally, the Dean decides to only analyze tables with three or more students, since she worries about "couples" distorting the results. The Dean finally settles for the following specification of the dependent variable:
GenderComp = |50% - % of male students at table|,
where $|Z|$ stands for the absolute value of $Z$. The variable can take on values from zero to fifty. Briefly analyze some of the possible values. What are the implications for gender composition as more female students join a given number of males at the table? Why would you choose the absolute value here? Discuss some other possible specifications for the dependent variable.

(Essay)
4.8/5
(36)
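A small sketch of how the dependent variable behaves, evaluated at a few hypothetical table compositions:

```python
# GenderComp = |50 - % of male students at the table|.
def gender_comp(males, total):
    return abs(50.0 - 100.0 * males / total)

print(gender_comp(2, 4))  # 0.0  -> perfectly mixed table
print(gender_comp(1, 4))  # 25.0
print(gender_comp(0, 4))  # 50.0 -> single-gender table
print(gender_comp(4, 4))  # 50.0 -> all-male scores the same as all-female
```

The absolute value makes the measure symmetric: it records distance from a 50/50 split rather than which gender dominates.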

Under imperfect multicollinearity

(Multiple Choice)
4.7/5
(28)