This example will illustrate typical aspects of dealing with missing data. Topics include mean imputation, modal imputation for categorical data, and multiple imputation of complex patterns of missing data.
For this example I am using the 2020 CDC Behavioral Risk Factor Surveillance System (BRFSS) SMART county data.
Every time we ask a question on a survey, there are a variety of reasons why we may fail to get a complete response
This is a cause for concern for those using survey data
Total nonresponse
This is where a respondent completely refuses to participate in the survey, or some other barrier prevents them from participating (not at home, ill, disability)
This is usually accounted for by survey weights
Noncoverage
When an individual is not included in the survey’s sampling frame
Usually adjusted for in the weights
Item nonresponse
When a respondent fails to provide an acceptable response to a survey question
Could be refusal
Could be “I don’t know”
Could give an inconsistent response
The interviewer could fail to ask the question
Partial nonresponse
Somewhere between total nonresponse and item nonresponse
Respondent could end the interview
e.g. Incomplete responses for a wave in a panel study
Can be dealt with by using weights, which eliminate respondents with partial responses
Can be dealt with by imputation, where the value is filled in by data editors
Missing completely at random (MCAR)
Missing values do not depend on any characteristic of an individual
Missing at random (MAR)
Missing values do not relate to the item of interest itself, but it could be related to another characteristic of the respondent
These two are said to be “ignorable”
Missing Not at random (MNAR)
Think of a question on satisfaction: those who are extremely dissatisfied may be less likely to respond, so the results may be biased because they exclude those respondents
This is “non-ignorable”
One way to assess whether an item is MCAR or MAR is to form a missing-data indicator (1 = missing, 0 = nonmissing) and estimate a logistic regression for that indicator using key characteristics of the respondents
If none of these characteristics shows a significant effect, we can reasonably treat the item as MCAR/MAR; a minimal sketch of this check follows below
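As a concrete illustration, here is a minimal sketch of that check using a small simulated data frame (the variable names are made up for illustration and are not from the BRFSS):
# Sketch: does missingness in income depend on observed characteristics?
set.seed(1)
dat <- data.frame(age  = rnorm(500, mean = 45, sd = 12),
                  educ = factor(sample(1:4, 500, replace = TRUE)))
dat$income <- 30 + 0.2*dat$age + rnorm(500, sd = 10)
dat$income[sample(1:500, 75)] <- NA                  # knock out some incomes
dat$miss_income <- ifelse(is.na(dat$income), 1, 0)   # 1 = missing, 0 = observed
fit.miss <- glm(miss_income ~ age + educ, data = dat, family = binomial)
summary(fit.miss)   # non-significant coefficients are consistent with MCAR/MAR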
Typically, statistical software will do one of two things when it encounters missing data (see the sketch below)
Delete the entire case if any variable in the equation is missing; this is called listwise deletion
Delete the case only for a particular comparison; this is called pairwise deletion
Both of these lead to fewer cases for entire models or for particular tests
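A small sketch of the difference, using a made-up data frame:
# Listwise vs. pairwise deletion (illustrative data, not the BRFSS)
df <- data.frame(x = c(1, 2, NA, 4, 5, 6, 7, 8),
                 y = c(2, NA, 3, 4, 6, 5, 8, 7),
                 z = c(1, 3, 5, NA, 9, 2, 4, 6))
# Listwise: lm() silently drops any row with an NA on any model variable
nobs(lm(y ~ x + z, data = df))          # only the 5 fully complete rows are used
# Pairwise: each correlation uses whichever pairs are complete for those two variables
cor(df, use = "pairwise.complete.obs")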
So, can we do anything about missing data? There are easy ways and hard ways, but the answer is yes.
Easy ways == bad ways
Mean substitution
Plugs in the average of the observed responses for every individual with a missing value
If a large percentage of respondents are missing the value (say >20%), the mean itself may not be estimated very well
The variance of the variable will be driven down, if everyone who is missing is given the mean
Can lead to lower effect sizes/betas in the models using the data
Multiple imputation uses other characteristics of individuals with complete observations to predict the missing value for those who are missing it
This would use the observed characteristics of the respondent to predict their missing value
Think regression!
Income is a good example of such a variable
People don’t like to report their income, but they may be willing to report their education, and other characteristics
We can use those to predict their income
This is sensitive to assumptions of the regression model being used!
This works like the regression imputation method, but applies it recursively to all patterns of missing data
Typically this will be done several times (5 imputations is the default rule most programs use), and the differences in the imputations are compared across runs
i.e. if we impute the values then run a regression for the outcome, how sensitive are the results of the model to the imputation
There are various ways to do this, and it depends on the scale of the variable
Find a set of variables that are associated with your outcome
If there are cases similar to yours (i.e. they are similar on the non-missing variables), you can plug in their variable of interest for the missing case
This is essentially hot-deck imputation, which is what the Census did for years
Regress Y on the observed X’s for the complete cases
Form fitted or “predicted” values
Replace the missing values with a fitted value from the regression from non-missing observations who are “close” to the missing cases
i.e. replace the missing values with fitted values from similar individuals
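Here is a minimal sketch of this regression-imputation idea on simulated data (the mice-based approach shown later is generally preferable, because it also propagates the uncertainty of the imputations):
# Regression imputation "by hand" on simulated data
set.seed(2)
sim <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
sim$y <- 1 + 0.5*sim$x1 - 0.3*sim$x2 + rnorm(200)
sim$y[sample(1:200, 40)] <- NA                     # make 20% of y missing
fit.cc <- lm(y ~ x1 + x2, data = sim)              # fit on the complete cases
miss <- is.na(sim$y)
sim$y.imp <- sim$y
sim$y.imp[miss] <- predict(fit.cc, newdata = sim[miss, ])  # fill in fitted values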
library(car)
library(mice)
library(ggplot2)
library(dplyr)
#load brfss
brfss20<- readRDS(url("https://github.com/coreysparks/DEM7283/blob/master/data/brfss20sm.rds?raw=true"))
### Fix variable names
names(brfss20) <- tolower(gsub(pattern = "_",replacement = "",x = names(brfss20)))
### Generate smaller sample for teaching examples
samps<- sample(1:dim(brfss20)[1], size = 10000, replace = F)
brfss20<- brfss20[samps,]
#Poor or fair self rated health
brfss20$badhealth<-Recode(brfss20$genhlth,
recodes="4:5=1; 1:3=0; else=NA")
#sex
brfss20$male<-as.factor(ifelse(brfss20$sex==1,
"Male",
"Female"))
#Age cut into intervals
brfss20$agec<-cut(brfss20$age80,
breaks=c(0,24,39,59,79,99))
#race/ethnicity
brfss20$black<-Recode(brfss20$racegr3,
recodes="2=1; 9=NA; else=0")
brfss20$white<-Recode(brfss20$racegr3,
recodes="1=1; 9=NA; else=0")
brfss20$other<-Recode(brfss20$racegr3,
recodes="3:4=1; 9=NA; else=0")
brfss20$hispanic<-Recode(brfss20$racegr3,
recodes="5=1; 9=NA; else=0")
brfss20$race_eth<-Recode(brfss20$racegr3,
recodes="1='nhwhite'; 2='nh_black'; 3='nh_other';4='nh_multirace'; 5='hispanic'; else=NA",
as.factor = T)
#insurance
brfss20$ins<-Recode(brfss20$hlthpln1,
recodes ="7:9=NA; 1=1;2=0")
#income grouping
brfss20$inc<-ifelse(brfss20$incomg==9,
NA,
brfss20$incomg)
#education level
brfss20$educ<-Recode(brfss20$educa,
recodes="1:2='0Prim'; 3='1somehs'; 4='2hsgrad'; 5='3somecol'; 6='4colgrad';9=NA",
as.factor=T)
#brfss20$educ<-relevel(brfss20$educ, ref='2hsgrad')
#employment
brfss20$employ<-Recode(brfss20$employ1,
recodes="1:2='Employed'; 2:6='nilf'; 7='retired'; 8='unable'; else=NA",
as.factor=T)
brfss20$employ<-relevel(brfss20$employ,
ref='Employed')
#marital status
brfss20$marst<-Recode(brfss20$marital,
recodes="1='married'; 2='divorced'; 3='widowed'; 4='separated'; 5='nm';6='cohab'; else=NA",
as.factor=T)
brfss20$marst<-relevel(brfss20$marst,
ref='married')
#Age cut into intervals
brfss20$agec<-cut(brfss20$age80,
breaks=c(0,29,39,59,79,99))
brfss20$ageg<-factor(brfss20$ageg)
#BMI: in the BRFSS, the bmi5 variable has 2 implied decimal places, so we must divide by 100 to get real BMI values
brfss20$bmi<-brfss20$bmi5/100
brfss20$obese<-ifelse(brfss20$bmi>=30,
1,
0)
#smoking currently
brfss20$smoke<-Recode(brfss20$smoker3,
recodes="1:2 ='Current'; 3 ='Former';4 ='NeverSmoked'; else = NA",
as.factor=T)
brfss20$smoke<-relevel(brfss20$smoke,
ref = "NeverSmoked")
Now we can get a general idea of the missingness in these variables just by using summary():
summary(brfss20[, c("ins", "smoke", "bmi", "badhealth", "race_eth", "educ", "employ", "marst", "inc")])
## ins smoke bmi badhealth
## Min. :0.0000 NeverSmoked:5850 Min. :12.83 Min. :0.0000
## 1st Qu.:1.0000 Current :1131 1st Qu.:23.91 1st Qu.:0.0000
## Median :1.0000 Former :2449 Median :27.10 Median :0.0000
## Mean :0.9144 NA's : 570 Mean :28.11 Mean :0.1317
## 3rd Qu.:1.0000 3rd Qu.:31.01 3rd Qu.:0.0000
## Max. :1.0000 Max. :93.97 Max. :1.0000
## NA's :54 NA's :1105 NA's :31
## race_eth educ employ marst
## hispanic :1057 0Prim : 198 Employed:5311 married :5060
## nh_black : 967 1somehs : 394 nilf :1394 cohab : 396
## nh_multirace: 176 2hsgrad :2386 retired :2623 divorced :1222
## nh_other : 460 3somecol:2591 unable : 477 nm :2062
## nhwhite :7105 4colgrad:4390 NA's : 195 separated: 209
## NA's : 235 NA's : 41 widowed : 942
## NA's : 109
## inc
## Min. :1.000
## 1st Qu.:3.000
## Median :5.000
## Mean :4.035
## 3rd Qu.:5.000
## Max. :5.000
## NA's :2021
This shows that, among these recoded variables, inc, the income variable, has the most missing data: 2021 people in the BRFSS sample, or 20.21%, are missing a value.
The lowest number of missings is in the bad health variable, which only has 0.31% missing.
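A quick way to compute these percentages for all of the recoded variables at once (this just summarizes the missingness we saw above):
# Percent missing for each recoded variable
vars <- c("ins", "smoke", "bmi", "badhealth", "race_eth",
          "educ", "employ", "marst", "inc")
round(100 * colMeans(is.na(brfss20[, vars])), 2)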
Now, I'm going to illustrate mean imputation of a continuous variable, BMI.
#I'm going to play with three outcomes: bmi, employment status, and income category
summary(brfss20$bmi)
## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
## 12.83 23.91 27.10 28.11 31.01 93.97 1105
#what happens when we replace the missings with the mean?
brfss20$bmi.imp.mean<-ifelse(is.na(brfss20$bmi)==T, mean(brfss20$bmi, na.rm=T), brfss20$bmi)
mean(brfss20$bmi, na.rm=T)
## [1] 28.1084
mean(brfss20$bmi.imp.mean) #no difference!
## [1] 28.1084
fit<-lm(bmi~inc+agec+educ+race_eth, brfss20)
median(brfss20$bmi, na.rm=T)
## [1] 27.1
median(brfss20$bmi.imp.mean) #slight difference
## [1] 27.82
var(brfss20$bmi, na.rm=T)
## [1] 38.325
var(brfss20$bmi.imp.mean) # more noticeable difference!
## [1] 34.08966
So what we see here is that imputing with the mean leaves the mean itself unchanged (though it does shift the median slightly), but it reduces the variance of the outcome. This is because you're replacing every missing case with the single most likely value (the mean), which artificially deflates the variance. That's not good.
We can see this in a histogram, where the imputed values increase the peak in the distribution:
#plot the histogram
library(reshape2)
brfss20%>%
select(bmi.imp.mean, bmi)%>%
melt()%>%
ggplot()+geom_freqpoly(aes(x = value,
y = ..density.., colour = variable))
## No id variables; using all as measure variables
## Warning: attributes are not identical across measure variables; they will be
## dropped
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 1105 rows containing non-finite values (stat_bin).
If we have a categorical variable, an easy way to impute the values is modal imputation: replace missing cases with the mode, i.e. the most common value. It doesn't make sense to use the mean, because the mean of a categorical variable is not defined.
table(brfss20$employ)
##
## Employed nilf retired unable
## 5311 1394 2623 477
#find the most common value
mcv.employ<-factor(names(which.max(table(brfss20$employ))), levels=levels(brfss20$employ))
mcv.employ
## [1] Employed
## Levels: Employed nilf retired unable
#impute the cases
brfss20$employ.imp<-as.factor(ifelse(is.na(brfss20$employ)==T, mcv.employ, brfss20$employ))
levels(brfss20$employ.imp)<-levels(brfss20$employ)
prop.table(table(brfss20$employ))
##
## Employed nilf retired unable
## 0.54166242 0.14217236 0.26751657 0.04864865
prop.table(table(brfss20$employ.imp))
##
## Employed nilf retired unable
## 0.5506 0.1394 0.2623 0.0477
barplot(prop.table(table(brfss20$employ)), main="Original Data", ylim=c(0, .6))
barplot(prop.table(table(brfss20$employ.imp)), main="Imputed Data",ylim=c(0, .6))
Which doesn’t look like much of a difference because only 195 people were missing. Now let’s try modal imputation on income group:
table(brfss20$inc)
##
## 1 2 3 4 5
## 529 1097 652 986 4715
#find the most common value
mcv.inc<-factor(names(which.max(table(brfss20$inc))), levels = levels(factor(brfss20$inc)))
mcv.inc
## [1] 5
## Levels: 1 2 3 4 5
#impute the cases
brfss20$inc.imp<-as.factor(ifelse(is.na(brfss20$inc)==T, mcv.inc, brfss20$inc))
levels(brfss20$inc.imp)<-levels(as.factor(brfss20$inc))
prop.table(table(brfss20$inc))
##
## 1 2 3 4 5
## 0.06629903 0.13748590 0.08171450 0.12357438 0.59092618
prop.table(table(brfss20$inc.imp))
##
## 1 2 3 4 5
## 0.0529 0.1097 0.0652 0.0986 0.6736
barplot(prop.table(table(brfss20$inc)), main="Original Data", ylim=c(0, .6))
barplot(prop.table(table(brfss20$inc.imp)), main="Imputed Data", ylim=c(0, .6))
Which shows how dramatically we alter the distribution of the variable by imputing at the mode.
We can also construct a missing-data flag variable and use it in a regression. This is a useful exercise to see whether values are missing at random with respect to other variables in the data:
fit1<-lm(bmi~is.na(inc), data=brfss20)
fit2<-lm(bmi~is.na(educ), data=brfss20)
fit3<-lm(bmi~is.na(race_eth), data=brfss20)
summary(fit1)
##
## Call:
## lm(formula = bmi ~ is.na(inc), data = brfss20)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.441 -4.092 -1.011 2.919 65.699
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 28.27081 0.07154 395.194 < 2e-16 ***
## is.na(inc)TRUE -1.00882 0.17829 -5.658 1.58e-08 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.18 on 8893 degrees of freedom
## (1105 observations deleted due to missingness)
## Multiple R-squared: 0.003587, Adjusted R-squared: 0.003475
## F-statistic: 32.02 on 1 and 8893 DF, p-value: 1.576e-08
summary(fit2)
##
## Call:
## lm(formula = bmi ~ is.na(educ), data = brfss20)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.279 -4.199 -1.009 2.901 65.861
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 28.1094 0.0657 427.854 <2e-16 ***
## is.na(educ)TRUE -0.5840 1.5999 -0.365 0.715
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.191 on 8893 degrees of freedom
## (1105 observations deleted due to missingness)
## Multiple R-squared: 1.499e-05, Adjusted R-squared: -9.746e-05
## F-statistic: 0.1333 on 1 and 8893 DF, p-value: 0.7151
summary(fit3)
##
## Call:
## lm(formula = bmi ~ is.na(race_eth), data = brfss20)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.269 -4.189 -1.049 2.911 65.871
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 28.09930 0.06624 424.192 <2e-16 ***
## is.na(race_eth)TRUE 0.50294 0.49237 1.021 0.307
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.191 on 8893 degrees of freedom
## (1105 observations deleted due to missingness)
## Multiple R-squared: 0.0001173, Adjusted R-squared: 4.878e-06
## F-statistic: 1.043 on 1 and 8893 DF, p-value: 0.3071
And indeed we see that those with missing incomes have significantly lower BMIs. This implies that income is probably not missing completely at random, since its missingness is related to another observed characteristic (BMI). This is a good process to go through when you are assessing whether your data are missing at random or not.
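We could also run the logistic-regression version of this check described earlier, modeling the missingness indicator for income directly (the predictors here are just an illustrative choice):
# Probability that income is missing, as a function of observed characteristics
miss.inc <- glm(is.na(inc) ~ bmi + agec + educ, data = brfss20, family = binomial)
summary(miss.inc)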
These days, these simple types of imputation have been far surpassed by more complete methods based on regression models. These methods are generally referred to as multiple imputation, because the missing values are imputed several times, producing multiple completed data sets (and, in practice, several variables are imputed simultaneously). Instead of reviewing this perspective in depth here, I suggest you have a look at Joe Schafer's site, which gives a nice treatment of the subject. Here, I will use the imputation techniques in the mice library in R. I have used these in practice in publications and generally like the framework the library uses. Another popular option is the Amelia library by Gary King and colleagues, which I haven't used much. If you are serious about doing multiple imputation, it would be wise to investigate multiple methodologies.
To begin, I explore the various patterns of missingness in the data. The md.pattern function in mice does this nicely. Here, each row corresponds to a particular pattern of missingness (1 = observed, 0 = missing).
#look at the patterns of missingness
md.pattern(brfss20[,c("bmi", "inc", "agec","educ","race_eth")])
## agec educ race_eth bmi inc
## 7360 1 1 1 1 1 0
## 1361 1 1 1 1 0 1
## 496 1 1 1 0 1 1
## 517 1 1 1 0 0 2
## 99 1 1 0 1 1 1
## 60 1 1 0 1 0 2
## 17 1 1 0 0 1 2
## 49 1 1 0 0 0 3
## 4 1 0 1 1 1 1
## 9 1 0 1 1 0 2
## 2 1 0 1 0 1 2
## 16 1 0 1 0 0 3
## 2 1 0 0 1 0 3
## 1 1 0 0 0 1 3
## 7 1 0 0 0 0 4
## 0 41 235 1105 2021 3402
The first row shows the number of observations in the data that are complete on all of these variables.
The second row shows the number of people who are missing only the inc variable.
Rows that have multiple 0’s in the columns indicate missing data patterns where multiple variables are missing.
The bottom row tells how many total people are missing each variable, in ANY combination with other variables.
If you want to see how pairs of variables are missing together, the md.pairs() function will show this. A pair of variables can have exactly four missingness patterns: both variables are observed (pattern rr), the first variable is observed and the second is missing (pattern rm), the first variable is missing and the second is observed (pattern mr), and both are missing (pattern mm).
md.pairs(brfss20[,c("bmi", "inc", "agec","educ","race_eth")])
## $rr
## bmi inc agec educ race_eth
## bmi 8895 7463 8895 8880 8734
## inc 7463 7979 7979 7972 7862
## agec 8895 7979 10000 9959 9765
## educ 8880 7972 9959 9959 9734
## race_eth 8734 7862 9765 9734 9765
##
## $rm
## bmi inc agec educ race_eth
## bmi 0 1432 0 15 161
## inc 516 0 0 7 117
## agec 1105 2021 0 41 235
## educ 1079 1987 0 0 225
## race_eth 1031 1903 0 31 0
##
## $mr
## bmi inc agec educ race_eth
## bmi 0 516 1105 1079 1031
## inc 1432 0 2021 1987 1903
## agec 0 0 0 0 0
## educ 15 7 41 0 31
## race_eth 161 117 235 225 0
##
## $mm
## bmi inc agec educ race_eth
## bmi 1105 589 0 26 74
## inc 589 2021 0 34 118
## agec 0 0 0 0 0
## educ 26 34 0 41 10
## race_eth 74 118 0 10 235
We can perform a basic multiple imputation simply as follows. Note that this may take a very long time with big data sets.
dat2<-brfss20
samp2<-sample(1:dim(dat2)[1], replace = F, size = 500)
dat2$bmiknock<-dat2$bmi
dat2$bmiknock[samp2]<-NA
head(dat2[, c("bmiknock","bmi")])
imp<-mice(data = dat2[,c("bmi", "inc", "agec","educ","race_eth")], seed = 22, m = 10)
##
## iter imp variable
## 1 1 bmi inc educ race_eth
## 1 2 bmi inc educ race_eth
## 1 3 bmi inc educ race_eth
## 1 4 bmi inc educ race_eth
## 1 5 bmi inc educ race_eth
## 1 6 bmi inc educ race_eth
## 1 7 bmi inc educ race_eth
## 1 8 bmi inc educ race_eth
## 1 9 bmi inc educ race_eth
## 1 10 bmi inc educ race_eth
## 2 1 bmi inc educ race_eth
## 2 2 bmi inc educ race_eth
## 2 3 bmi inc educ race_eth
## 2 4 bmi inc educ race_eth
## 2 5 bmi inc educ race_eth
## 2 6 bmi inc educ race_eth
## 2 7 bmi inc educ race_eth
## 2 8 bmi inc educ race_eth
## 2 9 bmi inc educ race_eth
## 2 10 bmi inc educ race_eth
## 3 1 bmi inc educ race_eth
## 3 2 bmi inc educ race_eth
## 3 3 bmi inc educ race_eth
## 3 4 bmi inc educ race_eth
## 3 5 bmi inc educ race_eth
## 3 6 bmi inc educ race_eth
## 3 7 bmi inc educ race_eth
## 3 8 bmi inc educ race_eth
## 3 9 bmi inc educ race_eth
## 3 10 bmi inc educ race_eth
## 4 1 bmi inc educ race_eth
## 4 2 bmi inc educ race_eth
## 4 3 bmi inc educ race_eth
## 4 4 bmi inc educ race_eth
## 4 5 bmi inc educ race_eth
## 4 6 bmi inc educ race_eth
## 4 7 bmi inc educ race_eth
## 4 8 bmi inc educ race_eth
## 4 9 bmi inc educ race_eth
## 4 10 bmi inc educ race_eth
## 5 1 bmi inc educ race_eth
## 5 2 bmi inc educ race_eth
## 5 3 bmi inc educ race_eth
## 5 4 bmi inc educ race_eth
## 5 5 bmi inc educ race_eth
## 5 6 bmi inc educ race_eth
## 5 7 bmi inc educ race_eth
## 5 8 bmi inc educ race_eth
## 5 9 bmi inc educ race_eth
## 5 10 bmi inc educ race_eth
print(imp)
## Class: mids
## Number of multiple imputations: 10
## Imputation methods:
## bmi inc agec educ race_eth
## "pmm" "pmm" "" "polyreg" "polyreg"
## PredictorMatrix:
## bmi inc agec educ race_eth
## bmi 0 1 1 1 1
## inc 1 0 1 1 1
## agec 1 1 0 1 1
## educ 1 1 1 0 1
## race_eth 1 1 1 1 0
plot(imp)
The printed summary shows how many imputations were done, which imputation method was used for each variable (you wouldn't want to use a normal model for a categorical variable!), and the predictor matrix. It also reflects the sequence in which the variables are visited (imputed); the default is left to right.
We may want to make sure the imputed values are plausible by having a look at them. For instance, are the imputed BMI values outside the range of the observed data?
head(imp$imp$badhealth)
## NULL
summary(imp$imp$badhealth)
## Length Class Mode
## 0 NULL NULL
Note that badhealth was not included in the imputation model, so imp$imp$badhealth is NULL; the imputed BMI values are stored in imp$imp$bmi.
summary(brfss20$bmi)
## Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
## 12.83 23.91 27.10 28.11 31.01 93.97 1105
head(imp$imp$inc)
summary(imp$imp$inc)
## 1 2 3 4 5
## Min. :1.000 Min. :1.000 Min. :1.00 Min. :1.000 Min. :1.000
## 1st Qu.:2.000 1st Qu.:2.000 1st Qu.:2.00 1st Qu.:2.000 1st Qu.:3.000
## Median :4.000 Median :5.000 Median :5.00 Median :5.000 Median :5.000
## Mean :3.758 Mean :3.793 Mean :3.81 Mean :3.781 Mean :3.859
## 3rd Qu.:5.000 3rd Qu.:5.000 3rd Qu.:5.00 3rd Qu.:5.000 3rd Qu.:5.000
## Max. :5.000 Max. :5.000 Max. :5.00 Max. :5.000 Max. :5.000
## 6 7 8 9 10
## Min. :1.0 Min. :1.000 Min. :1.00 Min. :1.000 Min. :1.000
## 1st Qu.:2.0 1st Qu.:2.000 1st Qu.:2.00 1st Qu.:3.000 1st Qu.:2.000
## Median :4.0 Median :5.000 Median :5.00 Median :5.000 Median :5.000
## Mean :3.8 Mean :3.828 Mean :3.81 Mean :3.852 Mean :3.788
## 3rd Qu.:5.0 3rd Qu.:5.000 3rd Qu.:5.00 3rd Qu.:5.000 3rd Qu.:5.000
## Max. :5.0 Max. :5.000 Max. :5.00 Max. :5.000 Max. :5.000
This shows the imputed values for the first 6 cases across the 10 different imputations, as well as the numeric summary of the imputed values. We can see that there is variation across the imputations, because the imputed values are not the same.
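A quick numeric check of this variation is to compare the mean of the imputed BMI values in each of the ten imputations (imp$imp$bmi holds one column per imputation):
# Mean of the imputed bmi values in each imputation
round(colMeans(imp$imp$bmi), 2)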
We can also do some plotting. For instance if we want to see how the observed and imputed values of bmi look with respect to race, we can do:
library(lattice)
stripplot(imp,bmi~race_eth|.imp, pch=20)
#stripplot(imp, badhealth~agec|.imp, pch=20)
Here we see the distribution of the original data (blue dots) and the imputed data (red dots) across the levels of race, for each of the ten different imputation runs (the number at the top of each panel shows which run; the first panel is the original data).
This plot shows that the bmi values correspond well with the observed data, so they are probably plausible values.
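Another common diagnostic in mice is a density plot comparing the observed and imputed values of a continuous variable:
# Compare the distributions of observed and imputed bmi values
densityplot(imp, ~bmi)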
If we want to get our new, imputed data, we can use the complete() function, which by default extracts the first imputed data set. If we want a different one, we can do complete(imp, action=3), for example, to get the third imputed data set.
dat.imp<-complete(imp, action = 1)
head(dat.imp, n=10)
#Compare to the original data
head(brfss20[,c("bmi", "inc", "agec","educ","race_eth")], n=10)
While the first few cases don’t show much missingness, we can coax some more interesting cases out and compare the original data to the imputed:
head(dat.imp[is.na(brfss20$bmi)==T,], n=10)
head(brfss20[is.na(brfss20$bmi)==T,c("bmi", "inc", "agec","educ","race_eth")], n=10)
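If we want all of the completed data sets at once, complete() can also return them stacked in long format, with a .imp column indicating which imputation each row comes from:
# All 10 completed data sets stacked, with .imp identifying the imputation
dat.long <- complete(imp, action = "long")
table(dat.long$.imp)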
A key element of using imputed data is that the relationships we want to know about should be maintained after imputation, and presumably the relationships within each imputed data set will be similar. So if we use each of the (10 in this case) imputed data sets in a model, we should see similar results across the different models.
Here I look at a linear model for bmi:
#Now, I will see the variability in the 10 different imputations for each outcome
fit.bmi<-with(data=imp ,expr=lm(bmi~factor(inc)+agec+educ+race_eth))
fit.bmi
## call :
## with.mids(data = imp, expr = lm(bmi ~ factor(inc) + agec + educ +
## race_eth))
##
## call1 :
## mice(data = dat2[, c("bmi", "inc", "agec", "educ", "race_eth")],
## m = 10, seed = 22)
##
## nmis :
## bmi inc agec educ race_eth
## 1105 2021 0 41 235
##
## analyses :
## [[1]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 27.8525 0.2198 0.4682
## factor(inc)4 factor(inc)5 agec(29,39]
## 0.2922 -0.3520 2.1181
## agec(39,59] agec(59,79] agec(79,99]
## 2.8609 2.1037 -0.4230
## educ1somehs educ2hsgrad educ3somecol
## -0.7776 -0.9511 -0.7128
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.8703 1.2776 -1.0894
## race_ethnh_other race_ethnhwhite
## -1.1273 -0.2916
##
##
## [[2]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 27.8270 -0.2495 -0.3776
## factor(inc)4 factor(inc)5 agec(29,39]
## -0.3576 -0.8040 2.1496
## agec(39,59] agec(59,79] agec(79,99]
## 2.7136 2.0239 -0.5047
## educ1somehs educ2hsgrad educ3somecol
## -0.5388 -0.7216 -0.2760
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.5416 1.3620 -0.8451
## race_ethnh_other race_ethnhwhite
## -1.2373 -0.1001
##
##
## [[3]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 28.32669 -0.20254 -0.06278
## factor(inc)4 factor(inc)5 agec(29,39]
## -0.17311 -0.83808 2.21035
## agec(39,59] agec(59,79] agec(79,99]
## 2.76630 2.02206 -0.34664
## educ1somehs educ2hsgrad educ3somecol
## -1.20581 -1.26219 -0.91000
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.93319 1.20291 -0.64889
## race_ethnh_other race_ethnhwhite
## -1.68375 -0.16499
##
##
## [[4]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 27.40476 0.02166 -0.17110
## factor(inc)4 factor(inc)5 agec(29,39]
## 0.20582 -0.61525 1.94349
## agec(39,59] agec(59,79] agec(79,99]
## 2.70771 1.87836 -0.49624
## educ1somehs educ2hsgrad educ3somecol
## -0.01980 -0.54851 -0.21945
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.24244 1.36398 -0.70526
## race_ethnh_other race_ethnhwhite
## -1.34488 -0.01354
##
##
## [[5]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 27.17335 -0.18429 0.14415
## factor(inc)4 factor(inc)5 agec(29,39]
## -0.00065 -0.64582 2.13373
## agec(39,59] agec(59,79] agec(79,99]
## 2.83384 2.02367 -0.32044
## educ1somehs educ2hsgrad educ3somecol
## -0.25281 -0.15819 -0.03535
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.02751 1.43502 -0.74220
## race_ethnh_other race_ethnhwhite
## -1.30138 -0.07008
##
##
## [[6]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 27.104034 0.199915 0.216383
## factor(inc)4 factor(inc)5 agec(29,39]
## 0.116667 -0.380955 2.214487
## agec(39,59] agec(59,79] agec(79,99]
## 2.915570 2.208505 -0.357476
## educ1somehs educ2hsgrad educ3somecol
## -0.238181 -0.254537 -0.008124
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.021519 1.051149 -0.860815
## race_ethnh_other race_ethnhwhite
## -1.640668 -0.256408
##
##
## [[7]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 28.3312117 -0.2668215 0.0007272
## factor(inc)4 factor(inc)5 agec(29,39]
## -0.2160944 -0.8405463 2.0677457
## agec(39,59] agec(59,79] agec(79,99]
## 2.7207385 1.9475187 -0.5287963
## educ1somehs educ2hsgrad educ3somecol
## -1.0703320 -1.0616305 -0.5647512
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.9974240 1.3953227 -0.9851668
## race_ethnh_other race_ethnhwhite
## -1.4544048 -0.2195910
##
##
## [[8]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 27.36394 -0.14047 0.03673
## factor(inc)4 factor(inc)5 agec(29,39]
## -0.14276 -0.76710 2.03518
## agec(39,59] agec(59,79] agec(79,99]
## 2.60890 1.89554 -0.36495
## educ1somehs educ2hsgrad educ3somecol
## -0.42557 -0.22107 0.09446
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.03100 1.45349 -0.64358
## race_ethnh_other race_ethnhwhite
## -1.27061 -0.07629
##
##
## [[9]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 27.76568 -0.57602 -0.32907
## factor(inc)4 factor(inc)5 agec(29,39]
## -0.36111 -1.08921 2.00929
## agec(39,59] agec(59,79] agec(79,99]
## 2.72312 2.16024 -0.41703
## educ1somehs educ2hsgrad educ3somecol
## -0.44658 -0.34482 -0.18082
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -1.28683 1.28661 -0.82017
## race_ethnh_other race_ethnhwhite
## -1.14022 -0.09722
##
##
## [[10]]
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth)
##
## Coefficients:
## (Intercept) factor(inc)2 factor(inc)3
## 26.78971 -0.06103 -0.06199
## factor(inc)4 factor(inc)5 agec(29,39]
## -0.22498 -0.72902 2.06179
## agec(39,59] agec(59,79] agec(79,99]
## 3.06445 2.06600 -0.25356
## educ1somehs educ2hsgrad educ3somecol
## -0.02002 -0.00946 0.27751
## educ4colgrad race_ethnh_black race_ethnh_multirace
## -0.83065 1.29974 -0.76192
## race_ethnh_other race_ethnhwhite
## -1.08968 0.17213
with(data=imp, expr=(sd(bmi)))
## call :
## with.mids(data = imp, expr = (sd(bmi)))
##
## call1 :
## mice(data = dat2[, c("bmi", "inc", "agec", "educ", "race_eth")],
## m = 10, seed = 22)
##
## nmis :
## bmi inc agec educ race_eth
## 1105 2021 0 41 235
##
## analyses :
## [[1]]
## [1] 6.327508
##
## [[2]]
## [1] 6.200309
##
## [[3]]
## [1] 6.254287
##
## [[4]]
## [1] 6.141304
##
## [[5]]
## [1] 6.184319
##
## [[6]]
## [1] 6.181014
##
## [[7]]
## [1] 6.268828
##
## [[8]]
## [1] 6.294114
##
## [[9]]
## [1] 6.196084
##
## [[10]]
## [1] 6.336522
with(data=imp, expr=(prop.table(table(inc))))
## call :
## with.mids(data = imp, expr = (prop.table(table(inc))))
##
## call1 :
## mice(data = dat2[, c("bmi", "inc", "agec", "educ", "race_eth")],
## m = 10, seed = 22)
##
## nmis :
## bmi inc agec educ race_eth
## 1105 2021 0 41 235
##
## analyses :
## [[1]]
## inc
## 1 2 3 4 5
## 0.0706 0.1487 0.0834 0.1254 0.5719
##
## [[2]]
## inc
## 1 2 3 4 5
## 0.0707 0.1438 0.0875 0.1245 0.5735
##
## [[3]]
## inc
## 1 2 3 4 5
## 0.0708 0.1442 0.0846 0.1251 0.5753
##
## [[4]]
## inc
## 1 2 3 4 5
## 0.0715 0.1467 0.0831 0.1238 0.5749
##
## [[5]]
## inc
## 1 2 3 4 5
## 0.0684 0.1435 0.0856 0.1249 0.5776
##
## [[6]]
## inc
## 1 2 3 4 5
## 0.0712 0.1440 0.0830 0.1295 0.5723
##
## [[7]]
## inc
## 1 2 3 4 5
## 0.0686 0.1457 0.0848 0.1255 0.5754
##
## [[8]]
## inc
## 1 2 3 4 5
## 0.0713 0.1449 0.0833 0.1237 0.5768
##
## [[9]]
## inc
## 1 2 3 4 5
## 0.0696 0.1432 0.0841 0.1255 0.5776
##
## [[10]]
## inc
## 1 2 3 4 5
## 0.0711 0.1447 0.0861 0.1239 0.5742
with(data=imp, expr=(prop.table(table(race_eth))))
## call :
## with.mids(data = imp, expr = (prop.table(table(race_eth))))
##
## call1 :
## mice(data = dat2[, c("bmi", "inc", "agec", "educ", "race_eth")],
## m = 10, seed = 22)
##
## nmis :
## bmi inc agec educ race_eth
## 1105 2021 0 41 235
##
## analyses :
## [[1]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1082 0.0996 0.0183 0.0474 0.7265
##
## [[2]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1088 0.0991 0.0181 0.0466 0.7274
##
## [[3]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1083 0.0992 0.0179 0.0470 0.7276
##
## [[4]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1089 0.0989 0.0180 0.0473 0.7269
##
## [[5]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1089 0.0987 0.0184 0.0474 0.7266
##
## [[6]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1085 0.0987 0.0178 0.0471 0.7279
##
## [[7]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1080 0.0995 0.0181 0.0470 0.7274
##
## [[8]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1087 0.0993 0.0177 0.0468 0.7275
##
## [[9]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1080 0.0995 0.0183 0.0473 0.7269
##
## [[10]]
## race_eth
## hispanic nh_black nh_multirace nh_other nhwhite
## 0.1086 0.0996 0.0180 0.0475 0.7263
with(data=imp, expr=(prop.table(table(educ))))
## call :
## with.mids(data = imp, expr = (prop.table(table(educ))))
##
## call1 :
## mice(data = dat2[, c("bmi", "inc", "agec", "educ", "race_eth")],
## m = 10, seed = 22)
##
## nmis :
## bmi inc agec educ race_eth
## 1105 2021 0 41 235
##
## analyses :
## [[1]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0199 0.0396 0.2395 0.2601 0.4409
##
## [[2]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0198 0.0397 0.2396 0.2601 0.4408
##
## [[3]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0199 0.0397 0.2395 0.2604 0.4405
##
## [[4]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0199 0.0398 0.2399 0.2597 0.4407
##
## [[5]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0199 0.0395 0.2395 0.2605 0.4406
##
## [[6]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0198 0.0398 0.2399 0.2603 0.4402
##
## [[7]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0199 0.0396 0.2399 0.2600 0.4406
##
## [[8]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0198 0.0399 0.2396 0.2602 0.4405
##
## [[9]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0198 0.0396 0.2399 0.2600 0.4407
##
## [[10]]
## educ
## 0Prim 1somehs 2hsgrad 3somecol 4colgrad
## 0.0199 0.0395 0.2396 0.2604 0.4406
Now we pool the separate models from each imputed data set:
est.p<-pool(fit.bmi)
print(est.p)
## Class: mipo m = 10
## term m estimate ubar b t dfcom
## 1 (Intercept) 10 27.59389084 0.24281613 0.264839264 0.53413932 9983
## 2 factor(inc)2 10 -0.12392399 0.08006143 0.055426807 0.14103092 9983
## 3 factor(inc)3 10 -0.01363506 0.09992236 0.063797332 0.17009943 9983
## 4 factor(inc)4 10 -0.08616230 0.08760942 0.052559545 0.14542492 9983
## 5 factor(inc)5 10 -0.70620308 0.06948104 0.048786563 0.12314626 9983
## 6 agec(29,39] 10 2.09437077 0.05542056 0.007590811 0.06377045 9983
## 7 agec(39,59] 10 2.79150770 0.04095151 0.017028401 0.05968275 9983
## 8 agec(59,79] 10 2.03295169 0.04074866 0.011498844 0.05339739 9983
## 9 agec(79,99] 10 -0.40128033 0.08241559 0.007918676 0.09112613 9983
## 10 educ1somehs 10 -0.49955786 0.28919757 0.166782949 0.47265881 9983
## 11 educ2hsgrad 10 -0.55330974 0.22230779 0.182496207 0.42305362 9983
## 12 educ3somecol 10 -0.25352496 0.22493938 0.139621080 0.37852257 9983
## 13 educ4colgrad 10 -1.37824134 0.22507429 0.184179194 0.42767140 9983
## 14 race_ethnh_black 10 1.31278002 0.07552801 0.014357380 0.09132113 9983
## 15 race_ethnh_multirace 10 -0.81025118 0.24447901 0.020467305 0.26699305 9983
## 16 race_ethnh_other 10 -1.32901716 0.11739711 0.042794211 0.16447074 9983
## 17 race_ethnhwhite 10 -0.11176697 0.04656957 0.017923551 0.06628547 9983
## df riv lambda fmi
## 1 30.05489 1.19976870 0.54540675 0.57291210
## 2 47.74970 0.76153382 0.43231292 0.45468495
## 3 52.40332 0.70231592 0.41256497 0.43377074
## 4 56.40768 0.65992332 0.39756253 0.41784400
## 5 46.99586 0.77237217 0.43578442 0.45835491
## 6 494.99534 0.15066417 0.13093670 0.13442695
## 7 90.16774 0.45740048 0.31384680 0.32857622
## 8 157.08630 0.31040845 0.23687916 0.24641302
## 9 888.09627 0.10569048 0.09558776 0.09761765
## 10 59.15912 0.63438032 0.38814731 0.40783396
## 11 39.66826 0.90300853 0.47451628 0.49914741
## 12 54.16941 0.68277591 0.40574381 0.42653312
## 13 39.80093 0.90013441 0.47372144 0.49831336
## 14 290.33509 0.20910278 0.17294045 0.17857946
## 15 1111.74785 0.09208985 0.08432443 0.08596727
## 16 108.19772 0.40097779 0.28621281 0.29905097
## 17 100.27450 0.42336462 0.29743933 0.31104503
summary(est.p)
We need to pay attention to the fmi and lambda columns. These convey how much the missing information affects each model coefficient: lambda is the proportion of a coefficient's total variance that is attributable to the between-imputation variance, and fmi is the corresponding fraction of missing information.
lam<-data.frame(lam=est.p$pooled$lambda, param=est.p$pooled$term)
ggplot(data=lam,aes(x=param, y=lam))+geom_col()+theme(axis.text.x = element_text(angle = 45, hjust = 1))
It appears that the intercept, the education terms, and the income terms have relatively large values of lambda. This suggests that there may be noticeable variation in these coefficients depending on which imputed data set we use.
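One way to see this directly is to pull a single coefficient out of each of the ten per-imputation fits stored in the mira object (fit.bmi$analyses is the list of fitted models):
# Coefficient for the highest income group in each imputed-data model
sapply(fit.bmi$analyses, function(m) coef(m)["factor(inc)5"])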
We can also compare to the model fit on the original data, with missings eliminated:
library(dplyr)
bnm<-brfss20%>%
select(bmi, inc, agec, educ, race_eth)%>%
filter(complete.cases(.))%>%
as.data.frame()
summary(lm(bmi~factor(inc)+agec+educ+race_eth, bnm))
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth, data = bnm)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.721 -4.071 -0.954 2.893 64.068
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 27.56818 0.69429 39.707 < 2e-16 ***
## factor(inc)2 0.09522 0.34481 0.276 0.78245
## factor(inc)3 0.20964 0.38155 0.549 0.58273
## factor(inc)4 0.09079 0.35488 0.256 0.79809
## factor(inc)5 -0.63216 0.31723 -1.993 0.04633 *
## agec(29,39] 1.99905 0.27350 7.309 2.97e-13 ***
## agec(39,59] 2.61705 0.23786 11.002 < 2e-16 ***
## agec(59,79] 2.00950 0.23815 8.438 < 2e-16 ***
## agec(79,99] -0.43776 0.35009 -1.250 0.21119
## educ1somehs -0.57155 0.74506 -0.767 0.44304
## educ2hsgrad -0.48404 0.66548 -0.727 0.46703
## educ3somecol -0.24699 0.66694 -0.370 0.71115
## educ4colgrad -1.37526 0.66649 -2.063 0.03911 *
## race_ethnh_black 1.54857 0.32496 4.765 1.92e-06 ***
## race_ethnh_multirace -0.38261 0.58301 -0.656 0.51167
## race_ethnh_other -1.32266 0.41765 -3.167 0.00155 **
## race_ethnhwhite -0.03643 0.25413 -0.143 0.88601
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.09 on 7343 degrees of freedom
## Multiple R-squared: 0.04421, Adjusted R-squared: 0.04213
## F-statistic: 21.23 on 16 and 7343 DF, p-value: < 2.2e-16
Here, I compare the coefficients from the model where we eliminated all missing data to a model fit on a single imputed data set (the first one, extracted above):
fit1<-lm(bmi~factor(inc)+agec+educ+race_eth, data=brfss20)
summary(fit1)
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth, data = brfss20)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.721 -4.071 -0.954 2.893 64.068
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 27.56818 0.69429 39.707 < 2e-16 ***
## factor(inc)2 0.09522 0.34481 0.276 0.78245
## factor(inc)3 0.20964 0.38155 0.549 0.58273
## factor(inc)4 0.09079 0.35488 0.256 0.79809
## factor(inc)5 -0.63216 0.31723 -1.993 0.04633 *
## agec(29,39] 1.99905 0.27350 7.309 2.97e-13 ***
## agec(39,59] 2.61705 0.23786 11.002 < 2e-16 ***
## agec(59,79] 2.00950 0.23815 8.438 < 2e-16 ***
## agec(79,99] -0.43776 0.35009 -1.250 0.21119
## educ1somehs -0.57155 0.74506 -0.767 0.44304
## educ2hsgrad -0.48404 0.66548 -0.727 0.46703
## educ3somecol -0.24699 0.66694 -0.370 0.71115
## educ4colgrad -1.37526 0.66649 -2.063 0.03911 *
## race_ethnh_black 1.54857 0.32496 4.765 1.92e-06 ***
## race_ethnh_multirace -0.38261 0.58301 -0.656 0.51167
## race_ethnh_other -1.32266 0.41765 -3.167 0.00155 **
## race_ethnhwhite -0.03643 0.25413 -0.143 0.88601
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.09 on 7343 degrees of freedom
## (2640 observations deleted due to missingness)
## Multiple R-squared: 0.04421, Adjusted R-squared: 0.04213
## F-statistic: 21.23 on 16 and 7343 DF, p-value: < 2.2e-16
fit.imp<-lm(bmi~factor(inc)+agec+educ+race_eth, data=dat.imp)
summary(fit.imp)
##
## Call:
## lm(formula = bmi ~ factor(inc) + agec + educ + race_eth, data = dat.imp)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.515 -4.105 -1.052 2.908 64.261
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 27.8525 0.4993 55.780 < 2e-16 ***
## factor(inc)2 0.2198 0.2851 0.771 0.44079
## factor(inc)3 0.4682 0.3207 1.460 0.14438
## factor(inc)4 0.2922 0.2992 0.976 0.32885
## factor(inc)5 -0.3520 0.2659 -1.324 0.18556
## agec(29,39] 2.1181 0.2386 8.879 < 2e-16 ***
## agec(39,59] 2.8609 0.2050 13.958 < 2e-16 ***
## agec(59,79] 2.1037 0.2045 10.287 < 2e-16 ***
## agec(79,99] -0.4230 0.2908 -1.455 0.14577
## educ1somehs -0.7776 0.5449 -1.427 0.15356
## educ2hsgrad -0.9511 0.4775 -1.992 0.04641 *
## educ3somecol -0.7128 0.4804 -1.484 0.13794
## educ4colgrad -1.8703 0.4803 -3.894 9.93e-05 ***
## race_ethnh_black 1.2776 0.2783 4.590 4.49e-06 ***
## race_ethnh_multirace -1.0894 0.4984 -2.186 0.02886 *
## race_ethnh_other -1.1273 0.3470 -3.249 0.00116 **
## race_ethnhwhite -0.2916 0.2189 -1.332 0.18279
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.182 on 9983 degrees of freedom
## Multiple R-squared: 0.047, Adjusted R-squared: 0.04548
## F-statistic: 30.77 on 16 and 9983 DF, p-value: < 2.2e-16
In the analysis that uses only complete cases, we see a significant income effect on BMI, but the effect is no longer significant once we impute the missing values. This suggests a meaningful selection effect for the income variable.