Comparing MLE Results
Relationship Between Education and Mean Income
library(Zelig)
## Loading required package: survival
data(turnout)
head(turnout)
## race age educate income vote
## 1 white 60 14 3.3458 1
## 2 white 51 10 1.8561 0
## 3 white 24 12 0.6304 0
## 4 white 38 8 3.4183 1
## 5 white 25 12 2.7852 1
## 6 white 67 12 2.3866 1
# Generic normal log-likelihood: `x` is assumed to be a numeric data vector
# already present in the workspace.
logLikFun <- function(param) {
  mu <- param[1]
  sigma <- param[2]
  sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
}
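As a quick sanity check, the function can be maximized on simulated data. This is a minimal sketch; the simulated vector `x`, the starting values, and the call to base R's optim() (minimizing the negative log-likelihood) are illustrative additions, not part of the original analysis.
set.seed(123)
x <- rnorm(500, mean = 5, sd = 2)   # simulated data for illustration
optim(par = c(mu = 0, sigma = 1),
      fn = function(p) -logLikFun(p),        # optim() minimizes, so negate
      method = "L-BFGS-B",
      lower = c(-Inf, 1e-6))$par             # estimates should be near mu = 5, sigma = 2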
# The turnout data contain discrete variables (e.g., race) and continuous
# variables (e.g., income). Here we model mean income as a linear function of education.
ols.lf <- function(param) {
  sigma <- param[1]              # residual standard deviation
  beta  <- param[-1]             # intercept and education slope
  y <- as.vector(turnout$income)
  x <- cbind(1, turnout$educate)
  mu <- x %*% beta               # linear predictor for the mean
  sum(dnorm(y, mu, sigma, log = TRUE))
}
#Maximizing log-likelihood using maxLik
library(maxLik)
## Loading required package: miscTools
##
## Please cite the 'maxLik' package as:
## Henningsen, Arne and Toomet, Ott (2011). maxLik: A package for maximum likelihood estimation in R. Computational Statistics 26(3), 443-458. DOI 10.1007/s00180-010-0217-1.
##
## If you have questions, suggestions, or comments regarding the 'maxLik' package, please use a forum or 'tracker' at maxLik's R-Forge site:
## https://r-forge.r-project.org/projects/maxlik/
mle_ols <- maxLik(logLik = ols.lf, start = c(sigma = 1, beta1 = 1, beta2 = 1))
## Warning in dnorm(y, mu, sigma, log = TRUE): NaNs produced
summary(mle_ols)
## --------------------------------------------
## Maximum Likelihood estimation
## Newton-Raphson maximisation, 12 iterations
## Return code 2: successive function values within tolerance limit
## Log-Likelihood: -4691.256
## 3 free parameters
## Estimates:
## Estimate Std. error t value Pr(> t)
## sigma 2.52613 0.03989 63.326 < 2e-16 ***
## beta1 -0.65207 0.20827 -3.131 0.00174 **
## beta2 0.37613 0.01663 22.612 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## --------------------------------------------
# Replicating the same results with the built-in OLS lm() function
summary(lm(income ~ educate, data = turnout))
##
## Call:
## lm(formula = income ~ educate, data = turnout)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.2028 -1.7363 -0.4273 1.3150 11.0632
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.65207 0.21016 -3.103 0.00194 **
## educate 0.37613 0.01677 22.422 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.527 on 1998 degrees of freedom
## Multiple R-squared: 0.201, Adjusted R-squared: 0.2006
## F-statistic: 502.8 on 1 and 1998 DF, p-value: < 2.2e-16
For this first model we are maximizing the likelihood with respect to the parameters of the conditional mean of income, and the estimates match the OLS coefficients. Beta1 is the intercept and can be read as “ceteris paribus, with zero years of education, average income is -0.65 units (in whatever units income was measured)”. Beta2 is the slope and can be read as “ceteris paribus, an additional year of education is associated with an average increase in income of about 0.376 units”. The estimated sigma of 2.526 corresponds to the residual standard error of 2.527 reported by lm(); the small gap arises because the ML estimate divides the residual sum of squares by n rather than by n - k.
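A minimal sketch of that correspondence, assuming mle_ols and the turnout data from above are still in the workspace (the helper objects fit, n, and k are introduced here only for illustration):
fit <- lm(income ~ educate, data = turnout)
n <- nrow(turnout); k <- length(coef(fit))
sqrt(sum(residuals(fit)^2) / n)         # ML estimate of sigma, about 2.526
sqrt(sum(residuals(fit)^2) / (n - k))   # residual standard error, about 2.527
coef(mle_ols)["sigma"]                  # sigma recovered by maxLik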
Relationship Between Education and Income Inequality
# Looking at the relationship between education and the standard deviation of income:
# the mean of income is held constant and its standard deviation is a linear
# function of education.
ols.lf2 <- function(param) {
  mu <- param[1]                 # constant mean of income
  theta <- param[-1]             # intercept and education slope for the sd
  y <- as.vector(turnout$income)
  x <- cbind(1, turnout$educate)
  sigma <- x %*% theta           # linear predictor for the standard deviation
  sum(dnorm(y, mu, sigma, log = TRUE))
}
mle_ols2 <- maxLik(logLik = ols.lf2, start = c(mu = 1, theta1 = 1, theta2 = 1))
## Warning in dnorm(y, mu, sigma, log = TRUE): NaNs produced
summary(mle_ols2)
## --------------------------------------------
## Maximum Likelihood estimation
## Newton-Raphson maximisation, 9 iterations
## Return code 2: successive function values within tolerance limit
## Log-Likelihood: -4861.964
## 3 free parameters
## Estimates:
## Estimate Std. error t value Pr(> t)
## mu 3.516764 0.070320 50.01 <2e-16 ***
## theta1 1.461011 0.106745 13.69 <2e-16 ***
## theta2 0.109081 0.009185 11.88 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## --------------------------------------------
In the second model the roles are reversed: the mean of income is held constant while the standard deviation is modeled as a linear function of education. This lets us ask a different question about the same variables: not whether more education raises average income, but whether it is associated with more dispersion in income.
Here mu is the constant mean of income, estimated at about 3.52, rather than a residual standard error. Theta1 is the intercept of the standard-deviation equation and can be read as “ceteris paribus, with zero years of education, the standard deviation of income is about 1.46”. Theta2 is the slope and can be read as “ceteris paribus, an additional year of education is associated with an increase of about 0.109 in the standard deviation of income”.
In other words, instead of using the mean parameters to describe shifts in average income, this specification uses the standard deviation to describe how education relates to income inequality.
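To make these magnitudes concrete, the fitted thetas imply different income standard deviations at different education levels. A minimal sketch, assuming mle_ols2 from above; the helper sd_at() and the example values of 8 and 16 years are illustrative:
theta <- coef(mle_ols2)[c("theta1", "theta2")]
sd_at <- function(educ) unname(theta[1] + theta[2] * educ)   # implied sd of income
sd_at(8)    # about 2.33 with 8 years of education
sd_at(16)   # about 3.21 with 16 years of education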
Adding age to both models
# Looking at the relationship between age, education, and mean income:
# mean income as a linear function of both education and age.
ols.lf.age <- function(param) {
  sigma <- param[1]
  beta  <- param[-1]
  y <- as.vector(turnout$income)
  x <- cbind(1, turnout$educate, turnout$age)
  mu <- x %*% beta
  sum(dnorm(y, mu, sigma, log = TRUE))
}
#Maximizing log-likelihood using maxLik
mle_ols_age <- maxLik(logLik = ols.lf.age, start = c(sigma = 1, beta1 = 1, beta2 = 1,beta3=1))
## Warning in dnorm(y, mu, sigma, log = TRUE): NaNs produced
summary(mle_ols_age)
## --------------------------------------------
## Maximum Likelihood estimation
## Newton-Raphson maximisation, 16 iterations
## Return code 2: successive function values within tolerance limit
## Log-Likelihood: -4690.815
## 4 free parameters
## Estimates:
## Estimate Std. error t value Pr(> t)
## sigma 2.525576 0.039919 63.268 <2e-16 ***
## beta1 -0.446047 0.300583 -1.484 0.138
## beta2 0.371011 0.017493 21.209 <2e-16 ***
## beta3 -0.003184 0.003373 -0.944 0.345
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## --------------------------------------------
# Comparing to OLS via lm()
summary(lm(income~educate+age,data=turnout))
##
## Call:
## lm(formula = income ~ educate + age, data = turnout)
##
## Residuals:
## Min 1Q Median 3Q Max
## -6.2128 -1.7471 -0.4217 1.3042 11.1256
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.446084 0.303955 -1.468 0.142
## educate 0.371013 0.017641 21.031 <2e-16 ***
## age -0.003183 0.003394 -0.938 0.348
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.527 on 1997 degrees of freedom
## Multiple R-squared: 0.2014, Adjusted R-squared: 0.2006
## F-statistic: 251.8 on 2 and 1997 DF, p-value: < 2.2e-16
Adding age as a second predictor changes the estimates somewhat. The intercept is now -0.446, the education slope is essentially unchanged at 0.371, and the age coefficient is -0.003: ceteris paribus, an additional year of age is associated with a decrease in average income of 0.003 units. The negative sign may look surprising, but the age coefficient is not statistically significant (p ≈ 0.35), so there is no real evidence that age affects mean income once education is accounted for.
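One way to formalize that conclusion is a likelihood-ratio test of the mean model with age against the one without it. This is a minimal sketch using the two maxLik fits above; it assumes maxLik's logLik() accessor and relies on the reported log-likelihoods of -4690.815 and -4691.256:
# LR statistic: twice the gain in log-likelihood from adding age (1 extra parameter)
lr <- 2 * (as.numeric(logLik(mle_ols_age)) - as.numeric(logLik(mle_ols)))
lr                                        # about 0.88
pchisq(lr, df = 1, lower.tail = FALSE)    # p about 0.35: age adds little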
# Looking at the relationship between age, education, and the standard deviation
# of income: sd as a linear function of both predictors, with a constant mean.
ols.lf2.age <- function(param) {
  mu <- param[1]
  theta <- param[-1]
  y <- as.vector(turnout$income)
  x <- cbind(1, turnout$educate, turnout$age)
  sigma <- x %*% theta
  sum(dnorm(y, mu, sigma, log = TRUE))
}
mle_ols2_age <- maxLik(logLik = ols.lf2.age, start = c(mu = 1, theta1 = 1, theta2 = 1,theta3=1))
## Warning in dnorm(y, mu, sigma, log = TRUE): NaNs produced
summary(mle_ols2_age)
## Warning in sqrt(diag(vc)): NaNs produced
## Warning in sqrt(diag(vc)): NaNs produced
## --------------------------------------------
## Maximum Likelihood estimation
## Newton-Raphson maximisation, 3 iterations
## Return code 3: Last step could not find a value above the current.
## Boundary of parameter space?
## Consider switching to a more robust optimisation method temporarily.
## Log-Likelihood: -7542.444
## 4 free parameters
## Estimates:
## Estimate Std. error t value Pr(> t)
## mu 1.0033 0.3368 2.979 0.00289 **
## theta1 0.9810 NA NA NA
## theta2 0.7662 NA NA NA
## theta3 0.1531 NA NA NA
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## --------------------------------------------
Taken at face value, the intercept of the standard-deviation equation is 0.98, the education slope is 0.766, and the age slope is 0.153, which would be read as “ceteris paribus, an additional year of age increases the standard deviation of income by 0.153 units”. These numbers should be treated with caution, however: the optimizer stopped after only 3 iterations with return code 3 (“Last step could not find a value above the current”) and the standard errors are NA, so the search did not reach a proper maximum.
Both slopes are positive, which is at least consistent with income dispersion rising with both age and education, but given the failed convergence the model should be re-estimated, for example with a more robust optimizer or a parameterization that keeps sigma positive, before drawing that conclusion.
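One simple way to avoid both the NaN warnings and the boundary problem is to model log(sigma) instead of sigma, so the implied standard deviation is always positive. This re-parameterization is a suggested fix rather than part of the original analysis, and its theta coefficients are then on the log scale:
# Same model, but sigma = exp(x %*% theta) can never be negative,
# so dnorm() always receives a valid standard deviation.
ols.lf2.age.log <- function(param) {
  mu <- param[1]
  theta <- param[-1]
  y <- as.vector(turnout$income)
  x <- cbind(1, turnout$educate, turnout$age)
  sigma <- exp(x %*% theta)
  sum(dnorm(y, mu, sigma, log = TRUE))
}
mle_ols2_age_log <- maxLik(logLik = ols.lf2.age.log,
                           start = c(mu = 1, theta1 = 0, theta2 = 0, theta3 = 0),
                           method = "BFGS")
summary(mle_ols2_age_log)
Because exp() is positive for any theta, the BFGS search has no boundary of the parameter space to run into, which is what return code 3 was warning about.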
library(ggplot2)
##
## Attaching package: 'ggplot2'
## The following object is masked from 'package:Zelig':
##
## stat
ggplot(turnout)+
geom_point(aes(x = age, y = income)) + geom_smooth(aes(x = age, y = income))
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'