Exam 17: The Theory of Linear Regression With One Regressor
Exam 1: Economic Questions and Data (17 questions)
Exam 2: Review of Probability (70 questions)
Exam 3: Review of Statistics (65 questions)
Exam 4: Linear Regression With One Regressor (65 questions)
Exam 5: Regression With a Single Regressor: Hypothesis Tests and Confidence Intervals (59 questions)
Exam 6: Linear Regression With Multiple Regressors (65 questions)
Exam 7: Hypothesis Tests and Confidence Intervals in Multiple Regression (64 questions)
Exam 8: Nonlinear Regression Functions (63 questions)
Exam 9: Assessing Studies Based on Multiple Regression (65 questions)
Exam 10: Regression With Panel Data (50 questions)
Exam 11: Regression With a Binary Dependent Variable (50 questions)
Exam 12: Instrumental Variables Regression (50 questions)
Exam 13: Experiments and Quasi-Experiments (50 questions)
Exam 14: Introduction to Time Series Regression and Forecasting (50 questions)
Exam 15: Estimation of Dynamic Causal Effects (50 questions)
Exam 16: Additional Topics in Time Series Regression (50 questions)
Exam 17: The Theory of Linear Regression With One Regressor (49 questions)
Exam 18: The Theory of Multiple Regression (50 questions)
Your textbook states that an implication of the Gauss-Markov theorem is that the sample average, Ȳ, is the most efficient linear estimator of E(Yi) when Y1, ..., Yn are i.i.d. with E(Yi) = μY and var(Yi) = σ²Y. This follows from the regression model with no slope and the fact that the OLS estimator is BLUE.
Provide a proof by assuming a linear estimator in the Y's, μ̃ = a1Y1 + ... + anYn = Σ aiYi.
(a) State the condition under which this estimator is unbiased.
(b) Derive the variance of this estimator.
(c) Minimize this variance subject to the constraint (condition) derived in (a), and show that the sample mean is BLUE.
(Essay)
Correct Answer:
(a) E(μ̃) = Σ aiE(Yi) = μY Σ ai.
Hence for this to be an unbiased estimator, the following condition must hold: Σ ai = 1.
(b) var(μ̃) = E(μ̃ - E(μ̃))² = Σ ai² var(Yi) = σ²Y Σ ai², where the cross terms vanish because the Yi are independent.
(c) Since σ²Y > 0, minimizing the variance is equivalent to minimizing Σ ai². Define the Lagrangian L = Σ ai² + λ(1 - Σ ai), where λ is the Lagrange multiplier. To obtain the first-order conditions, minimize L with respect to the n weights and the Lagrange multiplier, and solve the resulting (n+1) equations in the (n+1) unknowns:
∂L/∂ai = 2ai - λ = 0, i = 1, ..., n;
∂L/∂λ = 1 - Σ ai = 0.
Summing the first set of equations gives 2Σ ai = nλ, and bringing in the second equation (Σ ai = 1) results in λ = 2/n. Substituting this result into the first equation then gives 2ai = 2/n, i = 1, ..., n, or ai = 1/n, i = 1, ..., n. Since these are also the OLS weights, OLS (the sample mean) is BLUE.
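A quick numerical illustration of part (c), under made-up values for the sample size, the variance, and the alternative weights (none of these numbers come from the question): any weights that sum to one give an unbiased linear estimator, but only the equal weights ai = 1/n attain the minimum variance σ²Y/n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 50, 4.0          # illustrative sample size and var(Y_i)

# OLS / sample-mean weights: a_i = 1/n
a_ols = np.full(n, 1.0 / n)

# Some other weights, rescaled to sum to 1 so the estimator stays unbiased
a_alt = rng.uniform(0.5, 1.5, size=n)
a_alt /= a_alt.sum()

# For i.i.d. Y_i, var(sum a_i Y_i) = sigma2 * sum a_i^2
print("variance with equal weights:      ", sigma2 * np.sum(a_ols**2))  # = sigma2 / n
print("variance with alternative weights:", sigma2 * np.sum(a_alt**2))  # >= sigma2 / n
```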
If the errors are heteroskedastic, then
(Multiple Choice)
Correct Answer:
D
"One should never bother with WLS. Using OLS with robust standard errors gives correct inference, at least asymptotically." True, false, or a bit of both? Explain carefully what the quote means and evaluate it critically.
(Essay)
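To make the tradeoff in the quote concrete, here is a hedged simulation sketch; the data-generating process, the sample size, and the use of statsmodels are illustrative assumptions, not part of the question. With heteroskedastic errors, OLS with robust standard errors gives valid inference, while WLS with correctly specified weights is more efficient, i.e. it delivers smaller standard errors.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
X = rng.uniform(1, 10, n)
Y = 1.0 + 2.0 * X + rng.normal(0, X)        # var(u|X) = X^2: heteroskedastic errors

Xc = sm.add_constant(X)
ols_robust = sm.OLS(Y, Xc).fit(cov_type="HC1")   # OLS with heteroskedasticity-robust SEs
wls = sm.WLS(Y, Xc, weights=1.0 / X**2).fit()    # WLS using the true variance weights

print("OLS robust SEs (const, slope):", ols_robust.bse)
print("WLS SEs (const, slope):       ", wls.bse)  # smaller when the weights are right
```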
For this question you may assume that linear combinations of normal variates are themselves normally distributed. Let a, b, and c be non-zero constants.
(a) X and Y are independently distributed as N(a, σ²). What is the distribution of (bX + cY)?
(b) If X1, ..., Xn are distributed i.i.d. as N(a, σ²), what is the distribution of the sample mean X̄?
(c) Draw this distribution for different values of n. What is the asymptotic distribution of this statistic?
(d) Comment on the relationship between your diagram and the concept of consistency.
(e) Let Z = √n(X̄ - a). What is the distribution of Z? Does your answer depend on n?
(Essay)
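A small simulation sketch for parts (b) through (e), under assumed values a = 2 and σ = 3 chosen purely for illustration: the sample mean X̄ is N(a, σ²/n) and collapses onto a as n grows (consistency), while √n(X̄ - a) keeps the same N(0, σ²) distribution for every n.

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma = 2.0, 3.0          # illustrative mean and standard deviation
reps = 100_000

for n in (5, 50, 500):
    X = rng.normal(a, sigma, size=(reps, n))
    xbar = X.mean(axis=1)                 # sample mean: spread shrinks as n grows
    z = np.sqrt(n) * (xbar - a)           # rescaled deviation: spread stays at sigma
    print(f"n={n:4d}  sd(xbar)={xbar.std():.3f}  sd(sqrt(n)*(xbar - a))={z.std():.3f}")
```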
The advantage of using heteroskedasticity-robust standard errors is that
(Multiple Choice)
The extended least squares assumptions are of interest, because
(Multiple Choice)
If, in addition to the least squares assumptions made in the previous chapter on the simple regression model, the errors are homoskedastic, then the OLS estimator is
(Multiple Choice)
Assume that var(ui|Xi) = θ0 + θ1Xi². One way to estimate θ0 and θ1 consistently is to regress
(Multiple Choice)
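One standard way to read this stem (the answer choices are not reproduced here) is as the first step of feasible WLS: regress the squared OLS residuals on Xi². The sketch below simulates data with var(u|X) = θ0 + θ1X²; the numbers and the use of statsmodels are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 10_000
X = rng.uniform(1, 10, n)
u = rng.normal(0, np.sqrt(1.0 + 0.5 * X**2))   # var(u|X) = 1 + 0.5*X^2 by construction
Y = 2.0 + 3.0 * X + u

# Step 1: OLS of Y on X, keep the residuals
ols = sm.OLS(Y, sm.add_constant(X)).fit()
u_hat = ols.resid

# Step 2: regress squared residuals on X^2; intercept and slope estimate theta_0, theta_1
aux = sm.OLS(u_hat**2, sm.add_constant(X**2)).fit()
print(aux.params)        # should be close to (1.0, 0.5) in large samples
```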
(Requires Appendix material) Your textbook considers various distributions, such as the standard normal, t, χ², and F distributions, and relationships between them.
(a) Using statistical tables, give examples showing that the following relationship holds: F(q, ∞) = χ²(q)/q, i.e., the F distribution with (q, ∞) degrees of freedom is the chi-squared distribution with q degrees of freedom, divided by q.
(b) t∞ is distributed standard normal, and the square of a t-distributed random variable with n2 degrees of freedom is distributed F with (1, n2) degrees of freedom. Why does this relationship between the t and F distributions hold?
(Essay)
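These relationships are easy to check numerically rather than with printed tables; the sketch below uses scipy (an assumption about tooling, not part of the question) and a very large denominator degrees of freedom as a stand-in for ∞.

```python
from scipy import stats

# F(q, infinity) critical values equal chi-squared(q) critical values divided by q
q, big_df, p = 3, 10_000_000, 0.95
print(stats.f.ppf(p, q, big_df), stats.chi2.ppf(p, q) / q)    # nearly identical

# The square of a t(n2) variate is F(1, n2): compare critical values
n2 = 20
print(stats.t.ppf(0.975, n2) ** 2, stats.f.ppf(0.95, 1, n2))  # identical
```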
In practice, you may want to use the OLS estimator instead of the WLS because
(Multiple Choice)
The class of linear conditionally unbiased estimators consists of
(Multiple Choice)
(Requires Appendix material) If X and Y are jointly normally distributed and are uncorrelated,
(Multiple Choice)
Assume that the variance depends on a third variable, Wi, which does not appear in the regression function, and that var(ui|Xi, Wi) = θ0 + θ1Wi². One way to estimate θ0 and θ1 consistently is to regress
(Multiple Choice)
It is possible for an estimator of μY to be inconsistent while
(Multiple Choice)
Consider estimating a consumption function from a large cross-section sample of households. Assume that households at lower income levels do not have as much discretion for consumption variation as households with high income levels. After all, if you live below the poverty line, then almost all of your income is spent on necessities, and there is little room to save. On the other hand, if your annual income were $1 million, you could save quite a bit if you were a frugal person, or spend it all if you prefer. Sketch what the scatterplot between consumption and income would look like in such a situation. What functional form do you think could approximate the conditional variance var(ui | Income)?
(Essay)
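A sketch of the kind of scatterplot the question asks for, generated from made-up numbers in which the error spread grows with income (one plausible functional form being var(ui | Income) = θ0 + θ1·Income²):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
n = 2_000
income = rng.uniform(5, 200, n)                       # illustrative income range
u = rng.normal(0, np.sqrt(4.0 + 0.01 * income**2))    # error spread grows with income
consumption = 5.0 + 0.8 * income + u

plt.scatter(income, consumption, s=4, alpha=0.4)
plt.xlabel("Income")
plt.ylabel("Consumption")
plt.title("Fan-shaped scatter: variance increases with income")
plt.show()
```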
"I am an applied econometrician and therefore should not have to deal with econometric theory. There will be others who I leave that to. I am more interested in interpreting the estimation results." Evaluate.
(Essay)