In this example, I am demonstrating how to conduct a propensity score matching, multilevel, longitudinal, multiple imputation data analysis using the ECLS-K:2011 data set to evaluate differences in self-control between students in public and private schools. For more information, please see a draft of a paper under review: https://drive.google.com/file/d/1e-2vcEe65vNgy_iogH_50O_EuG19zhol/view?usp=sharing
The data for this analysis can also be found here: https://iu.box.com/s/ess9dulvdauos2wdnkzz9vhnd14ssjjl
The first step is to load (or, if necessary, first install) the packages below.
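If any of these packages are missing, a one-time setup along these lines should install them (a minimal sketch):
# Install any of the required packages that are not already present
pkgs = c("psych", "prettyR", "Amelia", "mitools", "MatchIt",
         "reshape2", "nlme", "dplyr")
install.packages(pkgs[!pkgs %in% rownames(installed.packages())])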
library(psych)
library(prettyR)
##
## Attaching package: 'prettyR'
## The following objects are masked from 'package:psych':
##
## describe, skew
library(Amelia)
## Loading required package: Rcpp
## ##
## ## Amelia II: Multiple Imputation
## ## (Version 1.7.4, built: 2015-12-05)
## ## Copyright (C) 2005-2018 James Honaker, Gary King and Matthew Blackwell
## ## Refer to http://gking.harvard.edu/amelia/ for more information
## ##
library(mitools)
library(MatchIt)
library(reshape2)
library(nlme)
library(dplyr)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:nlme':
##
## collapse
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
I am creating a data set called data1 that contains only the variables I am interested in, to make the data set more manageable. Then I am recoding all -9 values to NA, because -9 indicates missing data in the ECLS-K:2011 data set.
setwd("~/Box Sync/PropScore")
data = read.csv("ELCS-K-2011.csv", header = TRUE)
data1 = cbind(X1TCHCON = data$X1TCHCON, X2TCHCON = data$X2TCHCON, X4TCHCON = data$X4TCHCON,X1RTHET= data$X1RTHET, X2RTHET = data$X2RTHET, X4RTHET = data$X4RTHET, X1MTHET = data$X1MTHET, X2MTHET = data$X2MTHET, X4MTHET = data$X4MTHET, X1BMI = data$X1BMI, X2BMI = data$X2BMI, X4BMI = data$X4BMI,X1HTOTAL = data$X1HTOTAL, X2HTOTAL = data$X2HTOTAL, X4HTOTAL = data$X4HTOTAL, X1PAR1AGE = data$X1PAR1AGE, X2PAR1AGE = data$X2PAR1AGE, X4PAR1AGE = data$X4PAR1AGE, X1PAR1EMP = data$X1PAR1EMP, X2POVTY = data$X2POVTY, X12LANGST = data$X12LANGST,X_CHSEX_R = data$X_CHSEX_R, X1PUBPRI = data$X1PUBPRI, X1PAR1RAC = data$X1PAR1RAC, X1_RACETHP_R = data$X_RACETHP_R)
# Change the -9 to NAs
data1 = apply(data1, 2, function(x){ifelse(x == -9, NA, x)})
#summary(data1)
data1 = as.data.frame(data1)
Because the multiple imputation package can have trouble with multicollinearity, and because the categorical predictors are used in both the imputation and the matching, we dichotomize them. To dichotomize each variable so that one category equals one and everything else equals zero, we can create a new data set called datCat. I then set these variables to NULL in the original data set, because I want to replace them with the new dichotomized variables.
For the language variable X12LANGST, the category of interest is X12LANGST = 2, so it needs to be recoded differently from the other variables; that variable is therefore treated as its own data set.
I use the apply function with ifelse inside a function to recode all the variables in the datCat data set at once.
I also need to recode the public-private indicator separately, because I want to reverse code that variable.
datCat = data1[c(19:25)]
head(datCat)
## X1PAR1EMP X2POVTY X12LANGST X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R
## 1 4 2 2 1 1 1 1
## 2 4 3 2 2 1 1 1
## 3 NA 3 2 1 NA NA 3
## 4 1 3 2 1 1 1 1
## 5 1 2 2 2 2 1 3
## 6 1 2 2 2 1 1 1
data1[c(19:25)] = NULL
datCatLang = datCat$X12LANGST
datCat$X12LANGST = NULL
X12LANGST = ifelse(datCatLang == 2, 1, 0)
datCat = data.frame(apply(datCat, 2, function(x){(ifelse(x > 1, 0, 1))}))
head(datCat)
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R
## 1 0 0 1 1 1 1
## 2 0 0 0 1 1 1
## 3 NA 0 1 NA NA 0
## 4 1 0 1 1 1 1
## 5 1 0 0 0 1 0
## 6 1 0 0 1 1 1
datCat = cbind(datCat, X12LANGST)
## Need to recode the public private indicator
data1 = data.frame(data1, datCat)
data1$X1PUBPRI = ifelse(data1$X1PUBPRI == 1,0,1)
summary(data1)
## X1TCHCON X2TCHCON X4TCHCON X1RTHET
## Min. :1.00 Min. :1.000 Min. :1.000 Min. :-3.0854
## 1st Qu.:2.67 1st Qu.:2.750 1st Qu.:2.750 1st Qu.:-1.1358
## Median :3.25 Median :3.250 Median :3.250 Median :-0.5695
## Mean :3.08 Mean :3.178 Mean :3.216 Mean :-0.5422
## 3rd Qu.:3.50 3rd Qu.:3.750 3rd Qu.:3.750 3rd Qu.:-0.0060
## Max. :4.00 Max. :4.000 Max. :4.000 Max. : 2.9779
## NA's :4624 NA's :2378 NA's :4972 NA's :2505
## X2RTHET X4RTHET X1MTHET X2MTHET
## Min. :-3.0003 Min. :-1.881 Min. :-5.8595 Min. :-5.8657
## 1st Qu.: 0.0005 1st Qu.: 1.135 1st Qu.:-1.0473 1st Qu.: 0.0213
## Median : 0.4950 Median : 1.641 Median :-0.3927 Median : 0.5252
## Mean : 0.4473 Mean : 1.589 Mean :-0.4947 Mean : 0.4472
## 3rd Qu.: 0.9352 3rd Qu.: 2.095 3rd Qu.: 0.1360 3rd Qu.: 0.9506
## Max. : 2.9779 Max. : 3.884 Max. : 5.3202 Max. : 2.8824
## NA's :989 NA's :3059 NA's :2579 NA's :1031
## X4MTHET X1BMI X2BMI X4BMI
## Min. :-1.876 Min. : 8.63 Min. : 7.60 Min. : 7.22
## 1st Qu.: 1.149 1st Qu.:15.06 1st Qu.:15.09 1st Qu.:15.18
## Median : 1.632 Median :15.98 Median :16.05 Median :16.29
## Mean : 1.647 Mean :16.50 Mean :16.60 Mean :17.05
## 3rd Qu.: 2.222 3rd Qu.:17.24 3rd Qu.:17.40 3rd Qu.:18.03
## Max. : 4.731 Max. :42.95 Max. :49.14 Max. :51.75
## NA's :3071 NA's :2472 NA's :1012 NA's :3113
## X1HTOTAL X2HTOTAL X4HTOTAL X1PAR1AGE
## Min. : 2.000 Min. : 2.000 Min. : 2.000 Min. :18.00
## 1st Qu.: 4.000 1st Qu.: 4.000 1st Qu.: 4.000 1st Qu.:29.00
## Median : 4.000 Median : 4.000 Median : 4.000 Median :34.00
## Mean : 4.571 Mean : 4.613 Mean : 4.643 Mean :34.07
## 3rd Qu.: 5.000 3rd Qu.: 5.000 3rd Qu.: 5.000 3rd Qu.:39.00
## Max. :18.000 Max. :15.000 Max. :16.000 Max. :77.00
## NA's :4775 NA's :4647 NA's :5222 NA's :4840
## X2PAR1AGE X4PAR1AGE X1PAR1EMP X2POVTY
## Min. :19.00 Min. :20.00 Min. :0.000 Min. :0.000
## 1st Qu.:30.00 1st Qu.:31.00 1st Qu.:0.000 1st Qu.:0.000
## Median :34.00 Median :35.00 Median :0.000 Median :0.000
## Mean :34.49 Mean :35.61 Mean :0.422 Mean :0.255
## 3rd Qu.:39.00 3rd Qu.:40.00 3rd Qu.:1.000 3rd Qu.:1.000
## Max. :77.00 Max. :76.00 Max. :1.000 Max. :1.000
## NA's :4714 NA's :5273 NA's :5105 NA's :4647
## X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R
## Min. :0.0000 Min. :0.0000 Min. :0.000 Min. :0.000
## 1st Qu.:0.0000 1st Qu.:0.0000 1st Qu.:0.000 1st Qu.:0.000
## Median :1.0000 Median :0.0000 Median :1.000 Median :0.000
## Mean :0.5122 Mean :0.1297 Mean :0.562 Mean :0.474
## 3rd Qu.:1.0000 3rd Qu.:0.0000 3rd Qu.:1.000 3rd Qu.:1.000
## Max. :1.0000 Max. :1.0000 Max. :1.000 Max. :1.000
## NA's :39 NA's :1710 NA's :4878 NA's :1258
## X12LANGST
## Min. :0.0000
## 1st Qu.:1.0000
## Median :1.0000
## Mean :0.8056
## 3rd Qu.:1.0000
## Max. :1.0000
## NA's :2129
Now I want to get the descriptives for the complete data set. I therefore omit all the missing data and create a new data set, dataDesc, containing only complete cases. I need to separate the data into continuous and categorical variables, because I need two different functions: means and standard deviations for the continuous variables, and counts and percentages for the categorical ones. For the means and SDs, I created a function that generates both, which I then run through the apply function.
I then use the describe.factor function inside the apply function to get the counts and percentages for the categorical variables.
dataDesc = data1
dim(dataDesc)
## [1] 18174 25
dataDesc = na.omit(dataDesc)
dim(dataDesc)
## [1] 6360 25
datCon = dataDesc[c(1:18)]
head(datCon)
## X1TCHCON X2TCHCON X4TCHCON X1RTHET X2RTHET X4RTHET X1MTHET X2MTHET
## 1 4.00 3.25 3.25 0.1930 1.7614 3.1523 0.5827 1.5384
## 2 2.67 4.00 3.25 -0.7870 0.7452 2.2023 -0.3473 0.8800
## 4 2.00 1.67 3.00 -1.7087 -0.2922 0.8632 -2.1245 -0.2834
## 8 3.67 4.00 3.00 -0.4149 0.6023 2.3156 -0.2921 1.1411
## 11 4.00 4.00 4.00 -0.5024 -0.4003 1.8947 -0.4124 0.5090
## 12 2.25 2.00 2.25 -0.3999 0.7646 1.9006 -0.2823 0.6684
## X4MTHET X1BMI X2BMI X4BMI X1HTOTAL X2HTOTAL X4HTOTAL X1PAR1AGE
## 1 2.7286 16.40 16.32 16.05 6 6 6 34
## 2 2.0825 22.19 22.92 22.84 4 4 3 33
## 4 1.1082 18.14 19.47 21.71 4 4 5 33
## 8 2.7230 13.35 13.00 13.58 4 4 4 34
## 11 2.4883 14.34 14.45 15.05 4 4 4 42
## 12 1.5960 17.39 17.93 19.89 3 3 2 29
## X2PAR1AGE X4PAR1AGE
## 1 34 35
## 2 33 34
## 4 33 34
## 8 34 35
## 11 42 43
## 12 29 30
mean_sd_fun = function(x){
mean_mean = mean(x)
sd_sd = sd(x)
mean_sd = cbind(mean_mean, sd_sd)
}
round(apply(datCon, 2, mean_sd_fun),3)
## X1TCHCON X2TCHCON X4TCHCON X1RTHET X2RTHET X4RTHET X1MTHET X2MTHET
## [1,] 3.147 3.256 3.254 -0.358 0.608 1.750 -0.275 0.614
## [2,] 0.604 0.611 0.610 0.834 0.717 0.694 0.858 0.691
## X4MTHET X1BMI X2BMI X4BMI X1HTOTAL X2HTOTAL X4HTOTAL X1PAR1AGE
## [1,] 1.836 16.394 16.487 16.913 4.534 4.582 4.596 34.905
## [2,] 0.794 2.303 2.406 2.882 1.286 1.328 1.338 6.560
## X2PAR1AGE X4PAR1AGE
## [1,] 34.897 35.939
## [2,] 6.535 6.559
datCat = dataDesc[c(19:25)]
head(datCat)
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R X12LANGST
## 1 0 0 1 0 1 1 1
## 2 0 0 0 0 1 1 1
## 4 1 0 1 0 1 1 1
## 8 0 0 0 0 1 1 1
## 11 1 0 1 0 1 1 0
## 12 0 0 1 0 0 0 1
apply(datCat, 2, function(x){describe.factor(x)})
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R
## [1,] 3610.00000 5135.00000 3204.00000 5467.00000 4090.00000 3784.00000
## [2,] 56.76101 80.73899 50.37736 85.95912 64.30818 59.49686
## [3,] 2750.00000 1225.00000 3156.00000 893.00000 2270.00000 2576.00000
## [4,] 43.23899 19.26101 49.62264 14.04088 35.69182 40.50314
## X12LANGST
## [1,] 5501.00000
## [2,] 86.49371
## [3,] 859.00000
## [4,] 13.50629
Now I am imputing the data using the Amelia package, creating five data sets. I am treating X1HTOTAL, X2HTOTAL, and X4HTOTAL as logs, because they are count variables, and the other categorical variables as noms, which means they are treated as nominal variables. All other variables are treated as continuous if not specified in Amelia. To check how well Amelia imputed the data, I examined the density plots of observed versus imputed values at the three time points for the outcome of interest, self-control (X1TCHCON, X2TCHCON, X4TCHCON).
#head(data1)
m = 5
a.out = amelia(x = data1, m=m, logs = c("X1HTOTAL", "X2HTOTAL", "X4HTOTAL"), noms = c("X1PAR1EMP", "X2POVTY", "X12LANGST","X_CHSEX_R", "X1PUBPRI", "X1PAR1RAC", "X1_RACETHP_R"))
## -- Imputation 1 --
##
## 1 2 3 4 5 6 7 8
##
## -- Imputation 2 --
##
## 1 2 3 4 5 6 7 8
##
## -- Imputation 3 --
##
## 1 2 3 4 5 6 7 8
##
## -- Imputation 4 --
##
## 1 2 3 4 5 6 7 8
##
## -- Imputation 5 --
##
## 1 2 3 4 5 6 7 8
summary(a.out)
##
## Amelia output with 5 imputed datasets.
## Return code: 1
## Message: Normal EM convergence.
##
## Chain Lengths:
## --------------
## Imputation 1: 8
## Imputation 2: 8
## Imputation 3: 8
## Imputation 4: 8
## Imputation 5: 8
##
## Rows after Listwise Deletion: 6360
## Rows after Imputation: 18174
## Patterns of missingness in the data: 793
##
## Fraction Missing for original variables:
## -----------------------------------------
##
## Fraction Missing
## X1TCHCON 0.254429405
## X2TCHCON 0.130846264
## X4TCHCON 0.273577638
## X1RTHET 0.137834269
## X2RTHET 0.054418400
## X4RTHET 0.168317376
## X1MTHET 0.141906020
## X2MTHET 0.056729394
## X4MTHET 0.168977660
## X1BMI 0.136018488
## X2BMI 0.055683944
## X4BMI 0.171288654
## X1HTOTAL 0.262737977
## X2HTOTAL 0.255694949
## X4HTOTAL 0.287333553
## X1PAR1AGE 0.266314515
## X2PAR1AGE 0.259381534
## X4PAR1AGE 0.290139760
## X1PAR1EMP 0.280895785
## X2POVTY 0.255694949
## X_CHSEX_R 0.002145923
## X1PUBPRI 0.094090459
## X1PAR1RAC 0.268405414
## X1_RACETHP_R 0.069219764
## X12LANGST 0.117145373
compare.density(a.out, var = "X1TCHCON", main = "Observed and Imputed values of Self Control Time 1")
compare.density(a.out, var = "X2TCHCON", main = "Observed and Imputed values of Self Control Time 2")
compare.density(a.out, var = "X4TCHCON", main = "Observed and Imputed values of Self Control Time 4")
disperse(a.out)
Here I am getting the descriptives for the imputed data sets to compare against the complete data set.
Because there are five data sets, I need a way to “loop” across all five. I first use the lapply function to subset each of the five data sets, because I only want to compare the baseline (i.e., the data from the first time point).
I then grab the means using a for loop and the apply function. For the dichotomized variables, the mean is the proportion in the category coded one, so counts and percentages can be recovered directly from the means. I do the same for the SDs and simply ignore the SDs for the dichotomous variables. Then I need to average the means and SDs across the five data sets, which I do using the mi.meld function.
a.out.imputationsDesc = a.out$imputations
a.out.imputationsDesc = lapply(1:m, function(x) subset(a.out.imputationsDesc[[x]]))
descFun = function(x){
x = data.frame(t(x))
}
mean.out = NULL
for(i in 1:m){
mean.out[[i]] = apply(a.out.imputationsDesc[[i]], 2, mean)
mean.out = data.frame(mean.out)
}
mean.out = descFun(mean.out)
mean.out
## X1TCHCON
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 3.073086
## V2 3.072364
## V3 3.071913
## V4 3.071705
## V5 3.071668
## X2TCHCON
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 3.169550
## V2 3.173891
## V3 3.170975
## V4 3.173900
## V5 3.172219
## X4TCHCON
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 3.192319
## V2 3.190072
## V3 3.189786
## V4 3.195101
## V5 3.190473
## X1RTHET
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. -0.5538606
## V2 -0.5509051
## V3 -0.5497517
## V4 -0.5479545
## V5 -0.5495795
## X2RTHET
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.4370644
## V2 0.4365773
## V3 0.4381137
## V4 0.4367744
## V5 0.4389345
## X4RTHET
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 1.569124
## V2 1.567567
## V3 1.565647
## V4 1.566116
## V5 1.567201
## X1MTHET
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. -0.5105451
## V2 -0.5070729
## V3 -0.5070897
## V4 -0.5079170
## V5 -0.5079731
## X2MTHET
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.4339114
## V2 0.4359724
## V3 0.4356764
## V4 0.4344080
## V5 0.4355089
## X4MTHET
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 1.616322
## V2 1.619966
## V3 1.617228
## V4 1.619680
## V5 1.617773
## X1BMI
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 16.50057
## V2 16.49555
## V3 16.50345
## V4 16.50275
## V5 16.50755
## X2BMI
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 16.60001
## V2 16.59791
## V3 16.60488
## V4 16.60175
## V5 16.60432
## X4BMI
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 17.04397
## V2 17.04086
## V3 17.03835
## V4 17.04256
## V5 17.04445
## X1HTOTAL
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 4.834647
## V2 4.827689
## V3 4.827146
## V4 4.826375
## V5 4.835696
## X2HTOTAL
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 4.882500
## V2 4.877841
## V3 4.874438
## V4 4.877869
## V5 4.878548
## X4HTOTAL
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 4.937554
## V2 4.934282
## V3 4.930129
## V4 4.929241
## V5 4.934569
## X1PAR1AGE
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 34.06954
## V2 34.08384
## V3 34.08491
## V4 34.07304
## V5 34.01965
## X2PAR1AGE
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 34.08432
## V2 34.09610
## V3 34.10209
## V4 34.09192
## V5 34.05643
## X4PAR1AGE
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 35.17110
## V2 35.16166
## V3 35.16684
## V4 35.15219
## V5 35.13112
## X1PAR1EMP
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.4280291
## V2 0.4272037
## V3 0.4233520
## V4 0.4274238
## V5 0.4248927
## X2POVTY
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.2846924
## V2 0.2828766
## V3 0.2827116
## V4 0.2842522
## V5 0.2832618
## X_CHSEX_R
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.5122153
## V2 0.5120502
## V3 0.5122153
## V4 0.5123253
## V5 0.5123803
## X1PUBPRI
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.1370089
## V2 0.1377242
## V3 0.1376692
## V4 0.1376692
## V5 0.1376142
## X1PAR1RAC
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.5253109
## V2 0.5252559
## V3 0.5259161
## V4 0.5229999
## V5 0.5227798
## X1_RACETHP_R
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.4725432
## V2 0.4724882
## V3 0.4740839
## V4 0.4710025
## V5 0.4729834
## X12LANGST
## structure.c.3.07308565361625..3.16955013606425..3.19231909520919.. 0.7921206
## V2 0.7916804
## V3 0.7934962
## V4 0.7919005
## V5 0.7932761
sd.out = NULL
for(i in 1:m){
sd.out[[i]] = apply(a.out.imputationsDesc[[i]], 2, sd)
sd.out = data.frame(sd.out)
}
sd.out = descFun(sd.out)
sd.out
## X1TCHCON
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.6300486
## V2 0.6304115
## V3 0.6319556
## V4 0.6319399
## V5 0.6290800
## X2TCHCON
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.6400291
## V2 0.6383063
## V3 0.6406593
## V4 0.6419056
## V5 0.6398563
## X4TCHCON
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.6277081
## V2 0.6285314
## V3 0.6302934
## V4 0.6259970
## V5 0.6331204
## X1RTHET
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.8621002
## V2 0.8654005
## V3 0.8607125
## V4 0.8627672
## V5 0.8620624
## X2RTHET
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.7876392
## V2 0.7902333
## V3 0.7880660
## V4 0.7905667
## V5 0.7882466
## X4RTHET
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.7626537
## V2 0.7619840
## V3 0.7627362
## V4 0.7664715
## V5 0.7653881
## X1MTHET
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.9403576
## V2 0.9426524
## V3 0.9371497
## V4 0.9428357
## V5 0.9413595
## X2MTHET
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.7736925
## V2 0.7751873
## V3 0.7753670
## V4 0.7750347
## V5 0.7754236
## X4MTHET
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.8573594
## V2 0.8552070
## V3 0.8605617
## V4 0.8589855
## V5 0.8627927
## X1BMI
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 2.416665
## V2 2.420390
## V3 2.419474
## V4 2.409569
## V5 2.423612
## X2BMI
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 2.509391
## V2 2.512213
## V3 2.511550
## V4 2.507856
## V5 2.516992
## X4BMI
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 2.968320
## V2 2.983435
## V3 2.973443
## V4 2.969877
## V5 2.983211
## X1HTOTAL
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 1.447839
## V2 1.448425
## V3 1.449744
## V4 1.444685
## V5 1.447773
## X2HTOTAL
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 1.483816
## V2 1.491412
## V3 1.479955
## V4 1.479034
## V5 1.479289
## X4HTOTAL
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 1.493729
## V2 1.499549
## V3 1.489553
## V4 1.491187
## V5 1.492591
## X1PAR1AGE
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 6.826705
## V2 6.803968
## V3 6.800282
## V4 6.787499
## V5 6.806056
## X2PAR1AGE
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 6.767653
## V2 6.752802
## V3 6.775984
## V4 6.766823
## V5 6.769609
## X4PAR1AGE
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 6.851395
## V2 6.862738
## V3 6.865880
## V4 6.827872
## V5 6.840800
## X1PAR1EMP
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.4948067
## V2 0.4946859
## V3 0.4941038
## V4 0.4947183
## V5 0.4943403
## X2POVTY
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.4512802
## V2 0.4504094
## V3 0.4503298
## V4 0.4510699
## V5 0.4505949
## X_CHSEX_R
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.4998645
## V2 0.4998685
## V3 0.4998645
## V4 0.4998618
## V5 0.4998605
## X1PUBPRI
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.3438662
## V2 0.3446198
## V3 0.3445619
## V4 0.3445619
## V5 0.3445041
## X1PAR1RAC
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.4993727
## V2 0.4993755
## V3 0.4993416
## V4 0.4994845
## V5 0.4994946
## X1_RACETHP_R
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.4992593
## V2 0.4992563
## V3 0.4993416
## V4 0.4991722
## V5 0.4992833
## X12LANGST
## structure.c.0.63004860301843..0.640029076789869..0.627708131242317.. 0.4058012
## V2 0.4061177
## V3 0.4048074
## V4 0.4059595
## V5 0.4049668
mean.sd.out= mi.meld(mean.out, sd.out)
mean.sd.out
## $q.mi
## X1TCHCON X2TCHCON X4TCHCON X1RTHET X2RTHET X4RTHET X1MTHET
## [1,] 3.072147 3.172107 3.19155 -0.5504103 0.4374929 1.567131 -0.5081196
## X2MTHET X4MTHET X1BMI X2BMI X4BMI X1HTOTAL X2HTOTAL
## [1,] 0.4350954 1.618194 16.50197 16.60178 17.04204 4.83031 4.878239
## X4HTOTAL X1PAR1AGE X2PAR1AGE X4PAR1AGE X1PAR1EMP X2POVTY X_CHSEX_R
## [1,] 4.933155 34.0662 34.08617 35.15658 0.4261803 0.2835589 0.5122373
## X1PUBPRI X1PAR1RAC X1_RACETHP_R X12LANGST
## [1,] 0.1375371 0.5244525 0.4726202 0.7924948
##
## $se.mi
## X1TCHCON X2TCHCON X4TCHCON X1RTHET X2RTHET X4RTHET X1MTHET
## [1,] 0.6306884 0.6401557 0.6291395 0.8626133 0.788952 0.7638502 0.9408745
## X2MTHET X4MTHET X1BMI X2BMI X4BMI X1HTOTAL X2HTOTAL
## [1,] 0.7749419 0.8589869 2.417951 2.511604 2.975666 1.447702 1.482712
## X4HTOTAL X1PAR1AGE X2PAR1AGE X4PAR1AGE X1PAR1EMP X2POVTY X_CHSEX_R
## [1,] 1.49333 6.804977 6.766607 6.849774 0.4945358 0.450738 0.499864
## X1PUBPRI X1PAR1RAC X1_RACETHP_R X12LANGST
## [1,] 0.3444231 0.4994163 0.499264 0.4055319
Now we can use the matchit function in the MatchIt package to match the students on the baseline (i.e., time point one) covariates. I use a nearest neighbor algorithm and match each student with just one other participant, so each student in a private school is matched with one student in a public school. Then, to extract all the data for the matched students, I use the match.data function, applying it to all five data sets with lapply. We can evaluate the match by reviewing jitter plots and histograms of the matched and unmatched groups to see how similar the matched students are on observed covariates.
m.out = lapply(1:m, function(x) matchit(X1PUBPRI ~ X1TCHCON +X1RTHET + X1MTHET + X1BMI + X1HTOTAL + X1PAR1AGE + X1PAR1EMP + X2POVTY + X12LANGST + X_CHSEX_R + X1PAR1RAC + X1_RACETHP_R, data = a.out$imputations[[x]], method = "nearest", ratio = 1))
plot(m.out[[1]], type = "jitter")
## [1] "To identify the units, use first mouse button; to stop, use second."
## integer(0)
plot(m.out[[1]], type = "hist")
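Beyond the plots, numeric balance statistics are also available from MatchIt's summary method; as a quick check (shown here only for the first imputed data set):
# Covariate balance table (treated vs. control means, before and after matching)
summary(m.out[[1]])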
m.out = lapply(1:m, function(x) match.data(m.out[[x]]))
Unfortunately, I have not found a way to systematically turn each wide data set (i.e., each time point is its own variable: Time1, Time2, Time3) into long format (where instead time is coded 0, 1, 2 and people are repeated over time points), so I reshape each of the five data sets separately (see the lapply sketch after this code for one possible shortcut).
I use the reshape function, grouping each set of time-varying covariates into its own vector, with the three time points coded 0, 1, and 2. Then I put all five data sets into a single list so that I can use loops later on to automate the process.
dat1 = data.frame(m.out[[1]])
dat2 = data.frame(m.out[[2]])
dat3 = data.frame(m.out[[3]])
dat4 = data.frame(m.out[[4]])
dat5 = data.frame(m.out[[5]])
head(dat4)
## X1TCHCON X2TCHCON X4TCHCON X1RTHET X2RTHET X4RTHET X1MTHET
## 3 3.174380 2.75 4.000000 -0.2343125 0.3323 1.186100 0.7329962
## 5 3.750000 3.50 2.894068 -0.0734000 0.8013 2.423292 -0.5545000
## 13 4.000000 3.50 2.987718 0.3953000 0.7352 2.512541 0.3557000
## 16 3.583365 3.50 2.750000 -1.6957029 -0.3519 0.653900 -1.8494516
## 19 4.000000 4.00 4.000000 -0.6029000 0.9554 1.888000 0.4169000
## 23 3.000000 3.75 4.000000 0.8916000 1.5234 2.631900 1.2520000
## X2MTHET X4MTHET X1BMI X2BMI X4BMI X1HTOTAL X2HTOTAL X4HTOTAL
## 3 1.0112 2.208000 16.02650 15.10 17.92000 4.637544 4 4.000000
## 5 0.6163 1.925971 15.20000 15.54 14.71017 5.000000 7 6.889397
## 13 0.9247 2.459014 17.07000 16.94 17.07147 7.069808 5 7.115828
## 16 -0.2024 0.622600 16.80144 17.39 17.57000 5.394088 4 5.000000
## 19 0.7944 2.149900 13.70000 13.82 13.80000 6.000000 6 6.000000
## 23 1.9194 3.424400 14.34000 15.08 15.29000 5.000000 5 5.000000
## X1PAR1AGE X2PAR1AGE X4PAR1AGE X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI
## 3 39.63753 39 40.00000 1 0 1 1
## 5 39.00000 39 39.55334 1 0 0 1
## 13 38.59106 40 40.26659 0 0 1 1
## 16 30.03036 30 31.00000 0 0 0 1
## 19 40.00000 40 41.00000 0 0 1 0
## 23 35.00000 35 36.00000 0 0 1 1
## X1PAR1RAC X1_RACETHP_R X12LANGST distance weights
## 3 0 0 1 0.27092140 1
## 5 1 0 1 0.18378256 1
## 13 1 1 1 0.15978216 1
## 16 0 0 1 0.07413902 1
## 19 1 1 1 0.22983192 1
## 23 1 1 1 0.23039401 1
dat1 = reshape(dat1, varying = list(c("X1TCHCON", "X2TCHCON", "X4TCHCON"), c("X1RTHET", "X2RTHET", "X4RTHET"), c("X1MTHET", "X2MTHET", "X4MTHET"), c("X1BMI", "X2BMI", "X4BMI"), c("X1HTOTAL", "X2HTOTAL", "X4HTOTAL"), c("X1PAR1AGE", "X2PAR1AGE", "X4PAR1AGE")), times = c(0,1,2), direction = "long")
dat2 = reshape(dat2, varying = list(c("X1TCHCON", "X2TCHCON", "X4TCHCON"), c("X1RTHET", "X2RTHET", "X4RTHET"), c("X1MTHET", "X2MTHET", "X4MTHET"), c("X1BMI", "X2BMI", "X4BMI"), c("X1HTOTAL", "X2HTOTAL", "X4HTOTAL"), c("X1PAR1AGE", "X2PAR1AGE", "X4PAR1AGE")), times = c(0,1,2), direction = "long")
dat3 = reshape(dat3, varying = list(c("X1TCHCON", "X2TCHCON", "X4TCHCON"), c("X1RTHET", "X2RTHET", "X4RTHET"), c("X1MTHET", "X2MTHET", "X4MTHET"), c("X1BMI", "X2BMI", "X4BMI"), c("X1HTOTAL", "X2HTOTAL", "X4HTOTAL"), c("X1PAR1AGE", "X2PAR1AGE", "X4PAR1AGE")), times = c(0,1,2), direction = "long")
dat4 = reshape(dat4, varying = list(c("X1TCHCON", "X2TCHCON", "X4TCHCON"), c("X1RTHET", "X2RTHET", "X4RTHET"), c("X1MTHET", "X2MTHET", "X4MTHET"), c("X1BMI", "X2BMI", "X4BMI"), c("X1HTOTAL", "X2HTOTAL", "X4HTOTAL"), c("X1PAR1AGE", "X2PAR1AGE", "X4PAR1AGE")), times = c(0,1,2), direction = "long")
dat5 = reshape(dat5, varying = list(c("X1TCHCON", "X2TCHCON", "X4TCHCON"), c("X1RTHET", "X2RTHET", "X4RTHET"), c("X1MTHET", "X2MTHET", "X4MTHET"), c("X1BMI", "X2BMI", "X4BMI"), c("X1HTOTAL", "X2HTOTAL", "X4HTOTAL"), c("X1PAR1AGE", "X2PAR1AGE", "X4PAR1AGE")), times = c(0,1,2), direction = "long")
datAll = list(dat1, dat2, dat3, dat4, dat5)
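As an aside, the five identical reshape calls above could likely be collapsed into a single lapply call; a minimal, untested sketch that should produce the same datAll list, assuming m.out still holds the five matched data frames:
# Shared list of time-varying variable groups for reshape
longVars = list(c("X1TCHCON", "X2TCHCON", "X4TCHCON"),
                c("X1RTHET", "X2RTHET", "X4RTHET"),
                c("X1MTHET", "X2MTHET", "X4MTHET"),
                c("X1BMI", "X2BMI", "X4BMI"),
                c("X1HTOTAL", "X2HTOTAL", "X4HTOTAL"),
                c("X1PAR1AGE", "X2PAR1AGE", "X4PAR1AGE"))
# Reshape each matched data set to long format in one pass
datAll = lapply(m.out, function(x) reshape(data.frame(x), varying = longVars,
                times = c(0, 1, 2), direction = "long"))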
Now I am getting descriptive statistics for both the public and private school students to allow comparisons. The only unique part of this code is that I am subsetting the data down to the group I want. For example, the first for loop asks for only public school students at time point zero, because we want descriptives for the public school group at baseline.
mean_public = NULL
for(i in 1:m){
mean_public[[i]] = subset(datAll[[i]], X1PUBPRI == 0 & time == 0)
mean_public[[i]]= apply(mean_public[[i]], 2, mean)
mean_public[[i]] = data.frame(t(mean_public[[i]]))
}
mean_public = bind_rows(mean_public); mean_public
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R X12LANGST
## 1 0.5240964 0.1261044 0.5060241 0 0.6281124 0.5919679 0.8630522
## 2 0.5261686 0.1202557 0.4898122 0 0.6244507 0.5773072 0.8713544
## 3 0.5131894 0.1139089 0.4952038 0 0.6163070 0.5815348 0.8796962
## 4 0.5055955 0.1306954 0.5163869 0 0.6119105 0.5567546 0.8705036
## 5 0.5209916 0.1155538 0.4786086 0 0.6137545 0.5733707 0.8700520
## distance weights time X1TCHCON X1RTHET X1MTHET X1BMI X1HTOTAL
## 1 0.1803766 1 0 3.094563 -0.3192869 -0.1901148 16.26921 4.574596
## 2 0.1809783 1 0 3.114713 -0.3502282 -0.2155486 16.14695 4.620088
## 3 0.1826492 1 0 3.104015 -0.3590149 -0.2100860 16.11284 4.593712
## 4 0.1790076 1 0 3.103357 -0.3172040 -0.2019477 16.17577 4.572666
## 5 0.1816359 1 0 3.111177 -0.3375623 -0.1894697 16.13842 4.574915
## X1PAR1AGE id
## 1 36.49836 2513.316
## 2 36.48955 2526.445
## 3 36.37092 2512.171
## 4 36.58129 2495.311
## 5 36.16221 2500.794
sd_public = NULL
for(i in 1:m){
sd_public[[i]] = subset(datAll[[i]], X1PUBPRI == 0 & time == 0)
sd_public[[i]]= apply(sd_public[[i]], 2, sd)
sd_public[[i]] = data.frame(t(sd_public[[i]]))
}
sd_public = bind_rows(sd_public); sd_public
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R X12LANGST
## 1 0.4995193 0.3320337 0.5000641 0 0.4834057 0.4915679 0.3438613
## 2 0.4994145 0.3253253 0.4999961 0 0.4843612 0.4940861 0.3348742
## 3 0.4999259 0.3177641 0.5000769 0 0.4863818 0.4934059 0.3253814
## 4 0.5000686 0.3371344 0.4998313 0 0.4874126 0.4968678 0.3358157
## 5 0.4996591 0.3197530 0.4996421 0 0.4869854 0.4946864 0.3363135
## distance weights time X1TCHCON X1RTHET X1MTHET X1BMI X1HTOTAL
## 1 0.07882501 0 0 0.6337740 0.8271677 0.8328061 2.088561 1.306134
## 2 0.07843892 0 0 0.6213685 0.8627558 0.8833414 2.067585 1.340148
## 3 0.07927461 0 0 0.6305447 0.8311164 0.8430899 2.048942 1.271815
## 4 0.07551889 0 0 0.6177667 0.8449800 0.8293426 2.019531 1.300077
## 5 0.07749274 0 0 0.6227796 0.8330077 0.8510528 2.037831 1.307934
## X1PAR1AGE id
## 1 7.157007 1448.622
## 2 6.787993 1450.947
## 3 7.000066 1450.295
## 4 7.059721 1440.528
## 5 7.053838 1451.394
mean_sd_public = mi.meld(q = mean_public, se = sd_public); mean_sd_public
## $q.mi
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R
## [1,] 0.5180083 0.1213036 0.4972071 0 0.618907 0.576187
## X12LANGST distance weights time X1TCHCON X1RTHET X1MTHET
## [1,] 0.8709317 0.1809295 1 0 3.105565 -0.3366593 -0.2014333
## X1BMI X1HTOTAL X1PAR1AGE id
## [1,] 16.16864 4.587195 36.42047 2509.607
##
## $se.mi
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R
## [1,] 0.4998046 0.3265758 0.5001772 0 0.4857729 0.4943275
## X12LANGST distance weights time X1TCHCON X1RTHET X1MTHET
## [1,] 0.3353635 0.07793577 0 0 0.6253335 0.8401487 0.8482426
## X1BMI X1HTOTAL X1PAR1AGE id
## [1,] 2.0537 1.305592 7.015064 1448.423
mean_private = NULL
for(i in 1:m){
mean_private[[i]] = subset(datAll[[i]], X1PUBPRI == 1 & time == 0)
mean_private[[i]]= apply(mean_private[[i]], 2, mean)
mean_private[[i]] = data.frame(t(mean_private[[i]]))
}
mean_private = bind_rows(mean_private); mean_private
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R X12LANGST
## 1 0.5184739 0.1333333 0.5024096 1 0.6204819 0.5787149 0.8662651
## 2 0.5145825 0.1318418 0.4998002 1 0.6228526 0.5765082 0.8673592
## 3 0.5051958 0.1215028 0.5027978 1 0.6123102 0.5807354 0.8705036
## 4 0.5059952 0.1302958 0.5011990 1 0.6183054 0.5715428 0.8669065
## 5 0.5065974 0.1195522 0.4974010 1 0.6153539 0.5741703 0.8668533
## distance weights time X1TCHCON X1RTHET X1MTHET X1BMI X1HTOTAL
## 1 0.1804023 1 0 3.095661 -0.3257364 -0.1938067 16.19470 4.593133
## 2 0.1810062 1 0 3.106506 -0.3304061 -0.2001301 16.20427 4.578571
## 3 0.1826584 1 0 3.098489 -0.3365088 -0.1987229 16.19040 4.584836
## 4 0.1790187 1 0 3.107754 -0.3165537 -0.1872200 16.19217 4.592755
## 5 0.1816434 1 0 3.108002 -0.3303873 -0.1930724 16.18347 4.574771
## X1PAR1AGE id
## 1 36.60876 2467.684
## 2 36.65035 2480.555
## 3 36.60428 2492.829
## 4 36.47740 2509.689
## 5 36.44698 2502.206
sd_private = NULL
for(i in 1:m){
sd_private[[i]] = subset(datAll[[i]], X1PUBPRI == 1 & time == 0)
sd_private[[i]]= apply(sd_private[[i]], 2, sd)
sd_private[[i]] = data.frame(t(sd_private[[i]]))
}
sd_private = bind_rows(sd_private); sd_private
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R X12LANGST
## 1 0.4997590 0.3400029 0.5000946 0 0.4853645 0.4938643 0.3404357
## 2 0.4998872 0.3383863 0.5000999 0 0.4847692 0.4942106 0.3392539
## 3 0.5000729 0.3267760 0.5000921 0 0.4873206 0.4935374 0.3358157
## 4 0.5000640 0.3366958 0.5000985 0 0.4858994 0.4949541 0.3397437
## 5 0.5000565 0.3245020 0.5000932 0 0.4866088 0.4945670 0.3398012
## distance weights time X1TCHCON X1RTHET X1MTHET X1BMI X1HTOTAL
## 1 0.07889392 0 0 0.6100201 0.8014998 0.8023031 2.146334 1.276103
## 2 0.07852508 0 0 0.6087266 0.8143509 0.8118227 2.152461 1.294426
## 3 0.07926553 0 0 0.6166528 0.8170011 0.7973445 2.144440 1.320124
## 4 0.07554175 0 0 0.6195809 0.8187920 0.8176636 2.147209 1.295750
## 5 0.07749101 0 0 0.6147597 0.8098728 0.7989474 2.173979 1.288364
## X1PAR1AGE id
## 1 6.203628 1426.715
## 2 6.219348 1439.459
## 3 6.164833 1439.258
## 4 6.049800 1449.061
## 5 6.191213 1437.054
mean_sd_private = mi.meld(q = mean_private, se = sd_private); mean_sd_private
## $q.mi
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R
## [1,] 0.510169 0.1273052 0.5007215 1 0.6178608 0.5763343
## X12LANGST distance weights time X1TCHCON X1RTHET X1MTHET
## [1,] 0.8675775 0.1809458 1 0 3.103282 -0.3279185 -0.1945904
## X1BMI X1HTOTAL X1PAR1AGE id
## [1,] 16.193 4.584813 36.55755 2490.593
##
## $se.mi
## X1PAR1EMP X2POVTY X_CHSEX_R X1PUBPRI X1PAR1RAC X1_RACETHP_R
## [1,] 0.5000109 0.3334051 0.5001015 0 0.4860146 0.4942429
## X12LANGST distance weights time X1TCHCON X1RTHET X1MTHET
## [1,] 0.339019 0.07796925 0 0 0.6139941 0.8123674 0.8056739
## X1BMI X1HTOTAL X1PAR1AGE id
## [1,] 2.152928 1.295065 6.166843 1438.445
Here we are evaluating a multilevel model with the time-varying covariates only, because the time-invariant covariates are already accounted for by the matching. The model below is a random intercepts and slopes model.
model2 = lapply(1:m, function(x) lme(X1TCHCON ~ X1PUBPRI*time + X1RTHET + X1MTHET + X1BMI + X1HTOTAL + X1PAR1EMP, random = ~ time | id, data = datAll[[x]], method = "ML"))
summary(model2[[1]])
## Linear mixed-effects model fit by maximum likelihood
## Data: datAll[[x]]
## AIC BIC logLik
## 24148.58 24247.54 -12061.29
##
## Random effects:
## Formula: ~time | id
## Structure: General positive-definite, Log-Cholesky parametrization
## StdDev Corr
## (Intercept) 0.4999545 (Intr)
## time 0.1805769 -0.499
## Residual 0.3962649
##
## Fixed effects: X1TCHCON ~ X1PUBPRI * time + X1RTHET + X1MTHET + X1BMI + X1HTOTAL + X1PAR1EMP
## Value Std.Error DF t-value p-value
## (Intercept) 3.273494 0.05020900 9954 65.19736 0.0000
## X1PUBPRI -0.005585 0.01749754 4977 -0.31919 0.7496
## time -0.102550 0.01114119 9954 -9.20456 0.0000
## X1RTHET 0.100150 0.00974118 9954 10.28105 0.0000
## X1MTHET 0.064961 0.00954294 9954 6.80719 0.0000
## X1BMI -0.012775 0.00265308 9954 -4.81523 0.0000
## X1HTOTAL 0.024819 0.00463117 9954 5.35918 0.0000
## X1PAR1EMP -0.048183 0.01410973 4977 -3.41487 0.0006
## X1PUBPRI:time -0.008518 0.00945234 9954 -0.90118 0.3675
## Correlation:
## (Intr) X1PUBPRI time X1RTHE X1MTHE X1BMI X1HTOT X1PAR1
## X1PUBPRI -0.185
## time -0.028 0.250
## X1RTHET 0.014 0.015 -0.378
## X1MTHET -0.021 -0.011 -0.328 -0.608
## X1BMI -0.851 0.009 -0.092 0.008 0.031
## X1HTOTAL -0.445 0.005 -0.058 0.041 0.006 0.010
## X1PAR1EMP -0.152 0.004 0.010 -0.005 -0.005 -0.050 0.112
## X1PUBPRI:time 0.111 -0.601 -0.425 -0.014 0.016 -0.002 -0.012 -0.001
##
## Standardized Within-Group Residuals:
## Min Q1 Med Q3 Max
## -3.80764362 -0.52721389 0.03879767 0.55900797 2.89191408
##
## Number of Observations: 14940
## Number of Groups: 4980
Now we need to combine the parameter estimates and standard errors across the five models using the mi.meld function.
After we get the combined parameter estimate and standard error for the treatment-by-time interaction, we can get the p-value and an approximate 95% confidence interval (estimate ± 2 standard errors) using the degrees of freedom from the model above.
coefs = NULL
for(i in 1:m){
coefs[[i]] = summary(model2[[i]])
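# Row 9 of the tTable is the X1PUBPRI:time interaction; column 1 is the estimate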
coefs[[i]] = coefs[[i]]$tTable[[9,1]]
}
coefs = t(data.frame(coefs))
se = NULL
for(i in 1:m){
se[[i]] = summary(model2[[i]])
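# Column 2 of the tTable is the standard error for the same interaction term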
se[[i]] = se[[i]]$tTable[[9,2]]
}
se = t(data.frame(se))
parSe = mi.meld(q = coefs, se = se)
tStat = parSe$q.mi/parSe$se.mi
2*pt(-abs(tStat), df = 10078)
## [,1]
## [1,] 0.4737482
upper = parSe$q.mi +(2*parSe$se.mi)
upper
## [,1]
## [1,] 0.01685922
lower = parSe$q.mi-(2*parSe$se.mi)
lower
## [,1]
## [1,] -0.03567893
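The ±2 standard error bounds above are an approximation to the 95% confidence interval; one could instead use the exact t critical value with the same degrees of freedom, for example:
critT = qt(0.975, df = 10078)  # two-sided 95% critical value
c(lower = parSe$q.mi - critT * parSe$se.mi,
  upper = parSe$q.mi + critT * parSe$se.mi)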