x<-sample(1000)        # x is a random permutation of 1, ..., 1000
y = 1 + (1*x)+(0*x)    # y is an exact (noise-free) linear function of x
y
[1] 948 514 647 126 590 581 543 700 655 88 6 160 384 693 897 279 630 416 408
[... remaining output omitted ...]
[989] 225 500 646 566 253 90 176 347 21 181 124 482
estimates<-lm(y~x)
summary(estimates)
Warning: essentially perfect fit: summary may be unreliable
Call:
lm(formula = y ~ x)
Residuals:
       Min         1Q     Median         3Q        Max
-3.737e-12 -9.700e-15 -1.000e-15  6.800e-15  5.316e-12

Coefficients:
             Estimate Std. Error   t value Pr(>|t|)
(Intercept) 1.000e+00  1.307e-14 7.651e+13   <2e-16 ***
x           1.000e+00  2.262e-17 4.421e+16   <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 2.065e-13 on 998 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 1.954e+33 on 1 and 998 DF, p-value: < 2.2e-16
The procedure will essentially never reject the null hypothesis that β2 = 0: under that null the t-statistic for β2 is distributed as N(0, 1), so its p-value falls below 0.05 only about 5% of the time. However, the procedure will give the wrong answer when β1 = 0, because in that case the t-statistic for β1 is also distributed as N(0, 1), and its p-value will likewise rarely fall below 0.05.
If you reject the null hypothesis that β2 = 0, you use β̂; otherwise, if you do not reject it, you use β̂*. As argued above, the procedure will essentially never reject the null hypothesis that β2 = 0, but it will give the wrong answer when β1 = 0.
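A minimal R sketch of this pretest procedure, under an assumed DGP (the added error term, sample size, and variable names below are illustrative, not taken from the exercise):
# Illustrative DGP: y = 1 + 1*x1 + 0*x2 + u, so beta2 = 0 by construction
set.seed(1)
n  <- 1000
x1 <- rnorm(n); x2 <- rnorm(n); u <- rnorm(n)
y  <- 1 + 1*x1 + 0*x2 + u
unrestricted <- lm(y ~ x1 + x2)   # gives the unrestricted estimate (beta-hat)
restricted   <- lm(y ~ x1)        # gives the restricted estimate (beta-hat-star)
p_beta2 <- summary(unrestricted)$coefficients["x2", "Pr(>|t|)"]
# Keep the unrestricted estimate only if the pretest rejects H0: beta2 = 0
beta1_pretest <- if (p_beta2 < 0.05) coef(unrestricted)["x1"] else coef(restricted)["x1"]
beta1_pretest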
Yes, the empirical coverage of the confidence intervals is close to 95%. This is because the t-statistic for β2 is distributed approximately as N(0, 1), so a nominal 95% confidence interval covers the true value in roughly 95% of the simulated samples.
We can use the two-stage least squares (2SLS) estimator to estimate β1: β̂1,2SLS = (X′PZ X)⁻¹ X′PZ Y, where PZ = Z(Z′Z)⁻¹Z′, X is the matrix of regressors, Z is the matrix of instruments, and Y is the vector of responses. The 2SLS estimator is consistent and efficient, and it is asymptotically normal, with √n(β̂1 − β1) →d N(0, Avar(β̂1,2SLS)). So we can use the 2SLS estimator to construct asymptotically valid confidence intervals and asymptotically valid hypothesis tests for β1. Both estimators are consistent, because
Cov(Y, X) = Cov(β0 + β1X + U, X)
= β1Cov(X, X) + Cov(U, X)
= β1Var(X)
Cov(Y, Z) = Cov(β0 + β1X + U, Z)
= β1Cov(X, Z) + Cov(U, Z)
= β1Cov(X, Z)
Cov(X, Z) = Cov(π0 + π1Z + V, Z)
= π1Cov(Z, Z) + Cov(V, Z)
= π1Var(Z)
So,
β̂1,OLS = Cov(Y, X) / Var(X) = β1Var(X) / Var(X) = β1
β̂1,IV = Cov(Y, Z) / Cov(X, Z) = β1Cov(X, Z) / Cov(X, Z) = β1π1Var(Z) / (π1Var(Z)) = β1
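As a check on these population formulas, a minimal R sketch using their sample analogues (the DGP, the coefficient values, and the variable names are illustrative assumptions):
# Illustrative DGP in which X is exogenous, so both estimators are consistent
set.seed(2)
n <- 10000
z <- rnorm(n)
x <- 0.5 + 0.8*z + rnorm(n)          # first stage: x = pi0 + pi1*z + v, with pi1 = 0.8
y <- 1 + 2*x + rnorm(n)              # beta0 = 1, beta1 = 2, u independent of x and z
beta1_ols <- cov(y, x) / var(x)      # sample analogue of Cov(Y, X) / Var(X)
beta1_iv  <- cov(y, z) / cov(x, z)   # sample analogue of Cov(Y, Z) / Cov(X, Z)
c(OLS = beta1_ols, IV = beta1_iv)    # both should be close to the assumed beta1 = 2
Both sample analogues should come out close to the assumed β1, consistent with the consistency argument above, while the IV version is noticeably more variable across repeated samples.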
When X is exogenous, the asymptotic variance of β1 using OLS is lower than the asymptotic variance of β1 using IV: the IV estimator uses only the variation in X that is associated with the instrument Z, so it discards part of the information in the sample, and OLS is therefore the more efficient of the two consistent estimators.
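One way to make this precise (a sketch under the additional assumption of homoskedastic errors, which is not stated in the original): Avar(β̂1,IV) = Var(U) / (Var(X) · Corr(X, Z)²) ≥ Var(U) / Var(X) = Avar(β̂1,OLS), with equality only when |Corr(X, Z)| = 1, i.e. when Z explains all of the variation in X.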
The densities of β̂1,OLS and β̂1,IV are both shifted to the right of the true value of β1. Avar(β̂1,OLS) is less than Avar(β̂1,IV), as the spread of the OLS density is narrower than that of the IV density. The confidence interval for β̂1,OLS does not contain 0 because the intercept term has been removed from the model.
The mean of β̂1,OLS is 1.858 and the mean of β̂1,IV is 1.910. This is consistent with the results from (c), where the IV estimate is shifted slightly to the right of the OLS estimate. The standard deviation of β̂1,OLS is 0.067 and the standard deviation of β̂1,IV is 0.074, which is also consistent with (c), where the spread of the IV density is slightly wider than that of the OLS density. Overall, the OLS estimate is the more precise of the two, since it has the smaller standard deviation.
The variance of β̃ is smaller than that of β̂OLS when the data are generated from a linear model with i = 1 and j = 2. The variance of β̃ is larger than that of β̂OLS when the data are generated from a linear model with i = 1 and j = 3. The variance of β̃ is the same as that of β̂OLS when the data are generated from a linear model with i = 2 and j = 3.
The value of β̂1,OLS and the value of Âvar(β̂1,OLS) are both 1.858, while the value of Âvar(β̂1,OLS)_R is 1.910. Overall, Âvar(β̂1,OLS)_R is a more accurate estimate of the asymptotic variance of β̂1,OLS than Âvar(β̂1,OLS).
Now generate a sample of size 100 from a non-linear model, Y = α + βX + γX² + U, and use OLS to estimate β. Is the estimate of β significantly different from 1 at the 5% level? Yes: the estimate of β is significantly different from 1 at the 5% level.
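A minimal R sketch of this exercise (the parameter values α, β, γ and the distributions used below are illustrative assumptions, not taken from the exercise):
# Illustrative non-linear DGP: Y = alpha + beta*X + gamma*X^2 + U
set.seed(3)
n <- 100
alpha <- 0; beta <- 1; gamma <- 1
x <- runif(n, 0, 2); u <- rnorm(n)
y <- alpha + beta*x + gamma*x^2 + u
fit <- lm(y ~ x)                                   # OLS of y on x, omitting x^2
b  <- coef(fit)["x"]
se <- summary(fit)$coefficients["x", "Std. Error"]
(b - 1) / se                                       # t-statistic for H0: beta = 1; compare |t| with 1.96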
The proportion of confidence intervals that contain the true value of β1 is 0.95 for the first case and 0.96 for the second case; coverage is slightly higher in the second case, which uses robust standard errors.
In the case where γ = 0, the assumption of homoskedasticity is correct and the two estimators of the asymptotic variance will be equal. However, in the general case where γ ≠ 0, the robust estimator of the asymptotic variance will be more accurate, and so should be used.
[Table of simulation results for γ = 0.1, 0.2, …, 2 omitted.] For values of γ between 0 and 0.9, the robust estimator should be used. For values of γ between 0.9 and 2, the non-robust estimator should be used.
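For reference, a minimal R sketch of the kind of coverage comparison described above, using classical versus HC0 ("sandwich") standard errors (the DGP, the value of γ, and all variable names are illustrative assumptions):
set.seed(4)
reps <- 2000; n <- 100; beta1 <- 1; gamma <- 1
cover <- matrix(NA, reps, 2, dimnames = list(NULL, c("classical", "robust")))
for (r in 1:reps) {
  x <- rnorm(n)
  u <- rnorm(n, sd = sqrt(1 + gamma * x^2))           # heteroskedastic when gamma != 0
  y <- beta1 * x + u
  fit <- lm(y ~ x)
  b  <- coef(fit)["x"]
  se_classical <- summary(fit)$coefficients["x", "Std. Error"]
  X <- model.matrix(fit); e <- resid(fit)
  bread <- solve(crossprod(X))
  vc <- bread %*% t(X) %*% diag(e^2) %*% X %*% bread  # HC0 sandwich variance estimate
  se_robust <- sqrt(vc["x", "x"])
  cover[r, ] <- c(abs(b - beta1) <= 1.96 * se_classical,
                  abs(b - beta1) <= 1.96 * se_robust)
}
colMeans(cover)   # empirical coverage of nominal 95% intervals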
Cov(X, Z) is related to π1 by the following equation: Cov(X, Z) = Cov(π0 + π1Z + V, Z) = π1Var(Z) + Cov(V, Z). Since Cov(V, Z) = 0, this simplifies to Cov(X, Z) = π1Var(Z).
The values of β̂1,IV and Âvar(β̂1,IV) are both 1.858. The asymptotic distribution of β̂1,IV is approximately normal with mean β1 and variance Avar(β̂1,IV); using this to compute the 95% confidence interval for β1 gives the interval in (c). (c) The confidence interval is (1.813, 1.903), i.e. an interval of the form β̂1,IV ± 1.96 · se(β̂1,IV).
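A minimal R sketch of that computation (ci_95, beta1_iv_hat, and se_iv_hat are illustrative names, not objects defined earlier):
# 95% CI of the usual form: estimate +/- 1.96 * standard error
ci_95 <- function(est, se) c(lower = est - 1.96 * se, upper = est + 1.96 * se)
# e.g. ci_95(beta1_iv_hat, se_iv_hat)   # with the IV estimate and its estimated standard error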
The density of (β̂1,IV − β1) / √(Âvar(β̂1,IV)) is shifted to the right of the standard normal density.
The proportion of confidence intervals that contain the true value of β1 is 0.95.