Introduction to Econometrics: Inference and Identification


Asst. Prof. Amine Ouazad outlines the course, which covers identification and inference issues in linear regression. The previous session covered identification and the golden benchmark of randomization.



Presentation Transcript


1. Econometrics Session 2 Introduction: Inference Amine Ouazad, Asst. Prof. of Economics

2. Outline of the course 1. Introduction: Identification 2. Introduction: Inference 3. Linear Regression 4. Identification Issues in Linear Regressions 5. Inference Issues in Linear Regressions

3. Previous session: Identification Golden benchmark: randomization. The treatment effect is Δ = E(Y(1)) − E(Y(0)). We do not in fact observe E(Y(1)) and E(Y(0)) over the whole population; we only observe E(Y(1)|D=1) and E(Y(0)|D=0), the mean outcomes of the treated and of the untreated. Under randomization, E(Y(d)|D=d) = E(Y(d)), so the difference in observed means, E(Y(1)|D=1) − E(Y(0)|D=0), identifies Δ.

4. This session Introduction: Inference What problems appear because of the limited number of observations? Hands-on problem #1: At the dinner table, your brother-in-law suggests playing heads or tails using a coin. You suspect he is cheating. How do you prove that the coin is unbalanced?

5. This session Introduction: Inference Hands-on problem #2: Using a survey of 1,248 subjects in Singapore, you determine that the average income is $29,041 per year. How close is this mean to the true average income of Singaporeans? Do we have enough data?

6. This session Introduction: Inference Convergence The Law of Large Numbers The Central Limit Theorem Hypothesis Testing Inference for the estimation of treatment effects.

7. 1. CONVERGENCE Session 2 - Inference

8. Warning (you can ignore this) Proofs of the LLN and the CLT are omitted, since most of their details are irrelevant to daily econometric practice. There are multiple flavors of the LLN and the CLT; I introduce only one flavor per theorem. I will introduce more versions as needed in the following sessions, but do not put too much emphasis on the distinctions (see Appendix D of Greene).

9. Notations An estimator of a quantity is a function of the observations in the sample. Examples: an estimator of the fraction of women in Singapore; an estimator of the average salary of Chinese CEOs; an estimator of the effect of a medication on patients' health. An estimator is typically denoted with a hat (e.g. θ̂). An estimator sometimes carries an index n for the number of observations in the sample.

10. Convergence Convergence in probability. An estimator θ̂_n of θ converges in probability to θ if, for all ε > 0, P(|θ̂_n − θ| > ε) → 0 as n → ∞. We write plim θ̂_n = θ. An estimator of θ is consistent if it converges in probability to θ.
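
The definition above can be illustrated by Monte Carlo. A minimal sketch, not from the slides: for the sample mean of Bernoulli(0.51) draws (the value 0.51 is an arbitrary illustrative choice), estimate P(|θ̂_n − θ| > ε) at several sample sizes and watch it shrink.

```python
# Monte Carlo sketch: approximate P(|theta_hat_n - theta| > eps) for the
# sample mean of Bernoulli(0.51) draws, at increasing sample sizes n.
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 0.51, 0.05, 2000

probs = {}
for n in (10, 100, 1000):
    means = rng.binomial(n, theta, size=reps) / n      # reps independent sample means
    probs[n] = np.mean(np.abs(means - theta) > eps)    # share of "bad" samples
    print(f"n={n:4d}  P(|mean - theta| > {eps}) ~ {probs[n]:.3f}")
```

The printed probability drops toward zero as n grows, which is exactly what convergence in probability asserts.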

11. 2. LAW OF LARGE NUMBERS Session 2 - Inference

12. Law of Large Numbers Let X_1, …, X_n be an independent sequence of random variables, with finite expected value μ = E(X_j) and finite variance σ² = V(X_j). Let S_n = X_1 + … + X_n. Then, for any ε > 0, P(|S_n/n − μ| > ε) → 0 as n → ∞.

13. Law of large numbers The empirical mean of a sequence of random variables X_1, …, X_n converges in probability to the expected value of the sequence. Application: what is the fraction of women in Singapore? X_i = 1 if individual i is a woman. E(X_i) is the fraction of women in the population. The empirical mean becomes arbitrarily close to the true fraction of women in Singapore. Subtlety?
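
A quick simulation of the application above; the 51% share of women is an assumed illustrative value, not an actual census figure.

```python
import numpy as np

rng = np.random.default_rng(42)
p_true = 0.51                                 # assumed population share, for illustration
x = rng.binomial(1, p_true, size=100_000)     # X_i = 1 if individual i is a woman

# The running empirical mean approaches p_true as n grows (LLN)
for n in (100, 1_000, 100_000):
    print(f"n={n:6d}  empirical mean = {x[:n].mean():.4f}")
```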

14. Another application Load the micro census data. Take 100 samples of the dataset, each containing 5% of the observations. Calculate the fraction of women in each sample. Consider the approximation that the fraction of women is exactly 51%. Illustrate that, for ε = 0.5 percentage points, the number of samples with a mean outside 51% ± 0.5% shrinks as the size of each sample increases.
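
The exercise can be sketched on a synthetic stand-in for the micro census (the real dataset is not available here, so a simulated 0/1 population with a 51% share is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
pop = rng.binomial(1, 0.51, size=200_000)     # synthetic stand-in for the micro census

counts = []
for frac in (0.01, 0.05, 0.25):               # growing sampling fractions
    k = int(frac * pop.size)
    outside = sum(
        abs(rng.choice(pop, size=k, replace=False).mean() - 0.51) > 0.005
        for _ in range(100)                   # 100 samples at each size
    )
    counts.append(outside)
    print(f"sample size {k:6d}: {outside}/100 sample means outside 51% +/- 0.5pp")
```

As the sample size grows, fewer of the 100 sample means land outside the ±0.5 percentage-point band, as the LLN predicts.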

15. 3. CENTRAL LIMIT THEOREM Session 2 - Inference

16. Central Limit Theorem Lindeberg-Levy Central Limit Theorem: if x_1, …, x_n are an independent random sample from a probability distribution with finite mean μ and finite variance σ², and x̄_n = (1/n) Σ x_i, then √n (x̄_n − μ) / σ →d N(0, 1). Proof: Rao (1973, p. 127), using characteristic functions.

17. Applications: Central Limit Theorem Exercise #1: You observe heads, tails, tails, heads, tails, heads. Give an estimate of the probability of heads, with a 95% confidence interval. Exercise #2: Solve hands-on problem #2 from the beginning of these slides. Discuss the assumptions of the CLT.
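
A minimal sketch of Exercise #1, using the CLT-based normal approximation. Note that n = 6 is far too small for the approximation to be reliable, which is precisely the "discuss the assumptions" point.

```python
import numpy as np

flips = ["H", "T", "T", "H", "T", "H"]          # the observed sequence
x = np.array([1 if f == "H" else 0 for f in flips])

n = x.size
p_hat = x.mean()                                 # estimate of P(heads): 3/6 = 0.5
se = np.sqrt(p_hat * (1 - p_hat) / n)            # CLT-based standard error
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se    # 95% confidence interval
print(f"p_hat = {p_hat:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

The interval is very wide (roughly [0.10, 0.90]), so six flips cannot show the coin is unbalanced.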

18. 4. HYPOTHESIS TESTING Session 2 - Inference

19. Hypothesis testing Null hypothesis H_0. Alternative hypothesis H_a. Unknown parameter θ. Typical null hypotheses: Is θ = 0? Is θ > 3? Is θ = μ? (if μ is another unknown parameter). Is θ = 4?

20. Hypothesis Testing: Applications Application #1 (Coin toss): is the coin balanced? Write the null hypothesis. Given the information presented before, can we reject the null hypothesis at 95%? Application #2 (Average income): is the average income greater than $29,000 ? Write the null hypothesis. Given the information presented before, can we reject the null hypothesis at 90%?

21. t-test From the Central Limit Theorem, if the standard deviation σ were known, then under the null hypothesis √n (θ̂_n − θ_0) / σ →d N(0, 1). But the s.d. is estimated, and, under the null hypothesis, the statistic t_n = √n (θ̂_n − θ_0) / σ̂_n follows a Student t distribution with n − 1 degrees of freedom.
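
A sketch of the t statistic on simulated data (the income figures below are invented, not from the survey on the earlier slide), checked against `scipy.stats.ttest_1samp`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(loc=29500, scale=8000, size=50)   # hypothetical income sample

theta0 = 29000                                   # H0: true mean income is 29,000
n = x.size
t = np.sqrt(n) * (x.mean() - theta0) / x.std(ddof=1)   # the statistic above
p = 2 * stats.t.sf(abs(t), df=n - 1)                   # two-sided p-value, n-1 df

t_scipy, p_scipy = stats.ttest_1samp(x, theta0)  # same test, via scipy
print(f"t = {t:.3f}, p = {p:.3f}")
```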

22. Critical region Region for which the null hypothesis is rejected. If the null hypothesis is true, then the null is rejected in 5% of cases if the critical region is {|t_n| > c_0.975}, where c_q is the q-th quantile of the Student t distribution with n − 1 degrees of freedom.

23. Flavors of t-tests One-sample, two-sided. See previous slides. One-sample, one-sided. Two-sample, two-sided. Equal and unequal variances. Two-sample, one-sided. Equal and unequal variances.
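
The two-sample flavors can be sketched with `scipy.stats.ttest_ind`, whose `equal_var` and `alternative` arguments (the latter available in SciPy 1.6+) select the variant; the data here are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(10, 2, size=40)                   # simulated group A
b = rng.normal(11, 3, size=50)                   # simulated group B, larger variance

# Two-sample, two-sided: pooled (equal variances) vs Welch (unequal variances)
t_pooled, p_pooled = stats.ttest_ind(a, b, equal_var=True)
t_welch, p_welch = stats.ttest_ind(a, b, equal_var=False)

# Two-sample, one-sided: H_a is that A's mean is smaller than B's
t_one, p_one = stats.ttest_ind(a, b, equal_var=False, alternative="less")
print(f"pooled p = {p_pooled:.3f}, Welch p = {p_welch:.3f}, one-sided p = {p_one:.3f}")
```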

24. Errors E.g. in judicial trials, medical tests, security checks. Power of a test, 1 − β: probability of rejecting the null when the null is false. Size of a test, α: probability of a Type I error. The four cases: if the null hypothesis is true and not rejected, cool, no worries; if the null is true and rejected, Type I error (probability α); if the null is wrong and not rejected, Type II error (probability β); if the null is wrong and rejected, cool, no worries.
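
Size and power can be illustrated by Monte Carlo: the rejection rate of a 5% one-sample t-test when the null is true (which should land near α = 0.05) versus when it is false. The effect size 0.5 and n = 30 below are arbitrary illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, reps, alpha = 30, 2000, 0.05

# Size: how often we (wrongly) reject when the null mean of 0 is TRUE
size = np.mean([stats.ttest_1samp(rng.normal(0.0, 1, n), 0).pvalue < alpha
                for _ in range(reps)])

# Power: how often we (rightly) reject when the true mean is 0.5, not 0
power = np.mean([stats.ttest_1samp(rng.normal(0.5, 1, n), 0).pvalue < alpha
                 for _ in range(reps)])
print(f"size ~ {size:.3f} (target {alpha}), power ~ {power:.3f}")
```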

25. Quirky Many papers run a large number of tests on the same data. Many papers report only significant tests. What is wrong with this approach? Many papers run robustness checks, i.e. tests where the null hypothesis should not be rejected. What is wrong with this approach? Conclusion: this is wrong, but common practice. For more, see the January 2012 issue of the Strategic Management Journal.

26. 5. INFERENCE FOR TREATMENT EFFECTS Session 2 - Inference

27. Treatment effects: Inference (inspired by Lazear) There are two groups, a treatment group and a control group. 128 employees are randomly allocated to the treatment and to the control. Treatment employees: piece-rate pay. Control employees: fixed pay. Output is 38.3 pieces per hour in the treatment group and 23.1 in the control group.

28. Questions 1. Why do we perform a randomized experiment? 2. Do we have enough information to get an estimator of the treatment effect? 3. Is the estimator consistent? 4. Is the estimator asymptotically normal? 5. Do we have enough information to get a 95% confidence interval around the estimator of the treatment effect? 6. Test the hypothesis that the treatment is effective at raising output.
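
A difference-in-means sketch with the slide's numbers. The 64/64 split and the standard deviations below are assumptions for illustration only; the deck does not report them, which is exactly why question 5 asks whether we have enough information.

```python
import numpy as np

# Slide numbers: 128 workers, means 38.3 (treated) and 23.1 (control).
# The 64/64 split and the standard deviations are ASSUMED for illustration.
n_t, n_c = 64, 64
mean_t, mean_c = 38.3, 23.1
sd_t, sd_c = 12.0, 10.0                        # hypothetical group standard deviations

effect = mean_t - mean_c                       # difference-in-means estimator: 15.2
se = np.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)    # unequal-variance (Welch) standard error
ci = (effect - 1.96 * se, effect + 1.96 * se)  # 95% CI via the CLT
print(f"effect = {effect:.1f}, se = {se:.2f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```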