Example of a random-coefficients multinomial logistic regression using mlogit

rcr_mlogitPB52Example.Rmd

Mixed-effects multinomial analyses of a pseudo perceptual experiment

DRAFT 0 TM Nearey

Example adapting Peterson and Barney data (from Santiago Barreda’s phonTools package.)

I have not found any R-ready perceptual data of the appropriate structure, so I will instead treat the Peterson and Barney (1952) data as though it represented a perceptual experiment. In the pb52 production data, the speaker variable indicates a unique speaker, and there are two repetitions of each of 10 vowel categories with f0, F1, F2 and F3 measurements. Here I'm pretending the original vowel categories are listeners’ responses, and that each speaker’s data was listened to by a unique pseudo-listener. We can do this by adding a new ‘plistener’ (for pseudo-listener) column to the data. (See below.) [I will try to add an analysis of a real perceptual data set to RPubs in the near future.]

The simplest multinomial logit (MNL) analysis is typically applied to situations we don’t encounter much in phonetics experiments. Here’s a scenario: a sound-proof booth is set up at West Edmonton Mall. Volunteer listeners enter the booth, hear one and only one stimulus, and make a choice. All treatments are typically treated as fixed effects.

Another more realistic scenario where simple MNL can be applied is in the study of the responses of a single listener who responds to all of our stimuli.

In real phonetics experiments, we have a repeated-measures or panel-data or clustered-data situation, where each listener (panelist) responds to more than one stimulus situation. Listeners are probably not identical, and the responses of one listener to two similar stimuli are probably more similar to each other than they are to the responses of another listener to those same stimuli. Simple MNL cannot accommodate the kinds of within-listener correlation [or between-listener heterogeneity] that may be present.

The pseudo-experiment described here represents such a repeated-measures or panel study. Roughly speaking, there are two kinds of random effects: one due to the multinomial sampling at the trial level (just as in a pure multinomial experiment where every trial was responded to by a different listener) and another due to heterogeneity among the panelists.

Origin: This started as an RMD file compiled from a spin of a draft notebook rcr_mlogitPB52Example.R.

CRAN Packages required

  • phonTools version >= 0.2-2.1
  • mlogit version >= 0.2-2.1 (mlogit also loads CRAN packages Formula, maxLik and miscTools )
  • nnet version >=7.3.12

Four kinds of analysis

  1. Simple multinomial MNL (‘fixed effects’, ‘pooled listeners’). We’ll use the easy-to-call nnet::multinom for a “pooled-listener” multinomial analysis to check a couple of things. In such a pooled-listeners analysis all fitted terms are treated as fixed effects and the only variability is assumed to be due to multinomial sampling from the predicted multinomial probabilities. This treats all pseudo-listeners as though they were identical.

  2. A similar fixed-effects analysis will also be run using the package mlogit, which is capable of a staggering variety of multinomial logistic analyses, including the kind of mixed-effects models that seem most natural to repeated-measures perceptual data.

  3. A marginal population-average clustered bootstrap analysis (CBMNL). The next analysis is one that stops short of a full-fledged random-effects analysis, but is often enough to answer key research questions. It involves constructing a bunch of pseudo-panels by resampling listeners with replacement. So if there are 5 listeners in the original experiment and the original panel contains listeners [1 2 3 4 5], a bootstrap sample might be constructed with listeners [1 1 2 3 5] or [1 1 1 1 1] or any collection of 5 numbers from 1 to 5. We repeat a listener’s full data set every time their number is picked in the bootstrapped sample, and we omit listeners who lost the lottery on this bootstrap resample. (A toy sketch of this resampling follows this list.) We draw a large number of bootstrap samples and treat each with a simple MNL analysis. We can then get legitimate estimates of the means and variability of the model coefficients across different putative panels, hence accommodating listener heterogeneity fairly well.

  4. A full-fledged mixed multinomial logit analysis (MXMNL). We’ll use mlogit::mlogit to do a couple of full-fledged random-effects analyses accommodating individual listener variation in a more complete way (and also for a preliminary comparison against the nnet::multinom results).
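As a toy illustration of the clustered resampling in item 3, here is a minimal sketch (hypothetical made-up data, not the analysis code used later in this document): sample listener IDs with replacement, then keep every row belonging to each sampled listener, repeats included.

# Toy sketch of one clustered bootstrap resample (hypothetical data, not the real analysis)
set.seed(1)
toy <- data.frame(listener = rep(1:5, each = 2), resp = rnorm(10))
ids <- sample(unique(toy$listener), replace = TRUE)    # e.g. listeners 1 1 2 3 5
bootRows <- unlist(lapply(ids, function(id) which(toy$listener == id)))
bootSample <- toy[bootRows, ]                          # one pseudo-panel
table(bootSample$listener)                             # some listeners repeated, some dropped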

Some control parameters

If you want to run this code yourself, here are some suggestions for skipping or speeding up some analyses.

| Parameter       | Skip | Quick | Serious | Description                                                  |
|-----------------|------|-------|---------|--------------------------------------------------------------|
| NBOOT           |      | 100   | 1-5 k   | Clustered bootstrap population-average nnet analysis         |
| NRCRDRAWS       | 0    | 100   | 1-5 k   | Random coefficients (slopes and intercepts) mlogit analysis  |
| NINTERCEPTDRAWS | 0    | 100   | 1-5 k   | Random intercepts (fixed slopes) mlogit analysis             |

NBOOT=1000 # 20 100 5000
NRCRDRAWS=1000 # 0 100 5000
NINTERCEPTDRAWS=1000 # 0 100 5000

Load and inspect the data

# library(phonTools)  -- you will need this in your library for the data call below.
catln <- function(...)cat(...,'\n')
data(pb52,package='phonTools')
catln('Variable names of pb52:',names(pb52))
## Variable names of pb52: type sex speaker vowel repetition f0 f1 f2 f3
print(summary(pb52))
##  type    sex        speaker          vowel       repetition 
##  c:300   f:720   Min.   : 1.00   {      :152   Min.   :1.0  
##  m:660   m:800   1st Qu.:19.75   3'     :152   1st Qu.:1.0  
##  w:560           Median :38.50   A      :152   Median :1.5  
##                  Mean   :38.50   E      :152   Mean   :1.5  
##                  3rd Qu.:57.25   i      :152   3rd Qu.:2.0  
##                  Max.   :76.00   I      :152   Max.   :2.0  
##                                  (Other):608                
##        f0              f1               f2             f3      
##  Min.   : 91.0   Min.   : 190.0   Min.   : 560   Min.   :1400  
##  1st Qu.:133.0   1st Qu.: 420.0   1st Qu.:1100   1st Qu.:2370  
##  Median :200.0   Median : 540.0   Median :1470   Median :2680  
##  Mean   :191.3   Mean   : 563.3   Mean   :1624   Mean   :2708  
##  3rd Qu.:238.0   3rd Qu.: 681.0   3rd Qu.:2100   3rd Qu.:3030  
##  Max.   :350.0   Max.   :1300.0   Max.   :3610   Max.   :4380  
## 
library(nnet)
library(mlogit)
## Loading required package: Formula
## Loading required package: maxLik
## Loading required package: miscTools
## 
## Please cite the 'maxLik' package as:
## Henningsen, Arne and Toomet, Ott (2011). maxLik: A package for maximum likelihood estimation in R. Computational Statistics 26(3), 443-458. DOI 10.1007/s00180-010-0217-1.
## 
## If you have questions, suggestions, or comments regarding the 'maxLik' package, please use a forum or 'tracker' at maxLik's R-Forge site:
## https://r-forge.r-project.org/projects/maxlik/
library(knitr)

Adding pseudo-listeners

The pb52 data set is production data. We’ll convert it to a pseudo listening experiment in the following way: we’ll pretend that each value of the new plistener variable represents a unique listener. This would be possible in a real experiment if each listener heard the vowels of one and only one speaker and the vowel variable represented the label assigned by that listener.

pb52$plistener <- pb52$speaker

Role of the plistener variable

The plistener variable will be used as the ID variable in the mlogit.data call in the Data preparation for mlogit section later in this document (and also for resampling in the clustered bootstrap section). We will ignore the speaker variable from here on. (Think of this as assuming that all the relevant perceptual information about the vowels is contained in the measurements and that there are no additional speaker effects, fixed or random; there are only random listener-related effects.)

So pretending that speaker is really listener, each row of input data looks like this:

type sex speaker vowel repetition f0 f1 f2 f3 plistener

The dependent categorical (factor) variable is vowel. The independent continuous variables that we’ll be using are f0, f1, f2, and f3, plus the independent categorical (factor) variable plistener. The other variables (type, sex, speaker, and repetition) will be ignored. Real perceptual data for a “repeated-measures” type experiment could be put in an analogous form. The key thing is that each row of the data represents a single ‘trial’, i.e., a single categorical response to a single stimulus by a single listener, and that all the relevant information about the trial, including a listener identifier, is contained in one row.
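As a quick structural check (a small sketch added here, not part of the original script), we can confirm the one-trial-per-row layout: each pseudo-listener should contribute exactly 20 rows, one per vowel token (10 vowels x 2 repetitions).

# Sketch: tabulate rows per pseudo-listener, then tabulate those counts.
# Every plistener should contribute exactly 20 rows (10 vowels x 2 repetitions).
table(table(pb52$plistener))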

Preprocessing

I’m log-transforming and then standardizing the frequency data to improve scaling; with raw log values, the convergence criteria in mlogit didn’t look great. I’m also moving the vowel SAMPA [[V]] (wedge) to the first position, making it the reference category, to make coefficient interpretation easier. In the production data [[V]] shows non-extreme values on all formants (and f0). Vowel x stimulus-property coefficients with more positive (more negative) values indicate that higher (lower) values of that property favor that vowel compared to [[V]] (wedge).

for(v in c('f0','f1','f2','f3')){
  pb52[[v]]=scale(log(pb52[[v]]))
}
# There is no ideal middle vowel; we'll choose V (good f0, f2 and f3 centrality, but f1 is fairly high).
pb52$vowel=factor(pb52$vowel, levels= c("V" , "i" , "I" , "E",  "{" , "A" , "O" , "U" , "u", "3'"))

Preliminary simple MNL “pooled listener analysis” via nnet::multinom

If we ignore the repeated-measures nature of the data and assume that our pseudo-listeners are all homogeneous, we can analyze the data simply using the multinom function in the nnet package. This step is not necessary, but it is useful as a check on the structure of our data and as a point of comparison with the simple multinomial model in mlogit below.

Notice that I’ve set the formulas up as strings first and converted them to formulas with the as.formula function. This is useful in the final model, so we’ll get used to it now. We’re using a large value of maxit here.

nnetFixedModelStr='vowel ~ f0+f1+f2+f3'
nnetFixedFit = multinom(as.formula(nnetFixedModelStr), pb52, maxit=10000, abstol=1e-8, reltol=1e-10)
## # weights:  60 (45 variable)
## initial  value 3499.929341 
## iter  10 value 1303.227481
## iter  20 value 1080.263208
## iter  30 value 805.503273
## iter  40 value 527.808739
## iter  50 value 481.761371
## iter  60 value 468.159612
## iter  70 value 465.550659
## iter  80 value 464.843689
## iter  90 value 463.740884
## iter 100 value 463.249972
## iter 110 value 462.926103
## iter 120 value 462.808703
## iter 130 value 462.718100
## iter 140 value 462.545959
## iter 150 value 461.905625
## iter 160 value 461.847747
## final  value 461.847181 
## converged
print(summary(nnetFixedFit))
## Call:
## multinom(formula = as.formula(nnetFixedModelStr), data = pb52, 
##     maxit = 10000, abstol = 1e-08, reltol = 1e-10)
## 
## Coefficients:
##    (Intercept)         f0          f1         f2          f3
## i   -28.344933 -1.2281977 -22.5606856  81.131202 -15.4477987
## I   -11.842114 -0.3195312 -16.8686658  73.360273 -16.6267913
## E    -8.818741 -1.3420482  -8.9446217  72.270880 -18.4238195
## {   -13.759556 -3.4412906  -0.1992565  71.352756 -19.5905134
## A   -13.579694 -0.6416880   8.9228204  -9.285372  -0.1626118
## O   -11.469784  2.8337255  -0.3049604 -15.750315   1.9178155
## U     2.415237  1.7284651 -12.2496208  -2.541564   2.5085866
## u    -1.725847  2.7467556 -17.5150296  -3.125891   3.5608712
## 3'  -16.898581  9.3495105 -11.1885206  75.452502 -33.8348256
## 
## Std. Errors:
##    (Intercept)        f0        f1        f2         f3
## i     5.605505 1.6335440 3.2133852 30.334500 10.3500086
## I     4.968458 1.5381762 3.1033930 30.265136 10.2958460
## E     4.819566 1.5097715 2.9773074 30.224155 10.2714198
## {     4.971785 1.5552807 3.0068353 30.298655 10.3024958
## A     1.434827 0.3311675 0.9841078  1.114449  0.4171751
## O     1.439780 0.4781283 1.1128941  1.502390  0.5476475
## U     1.238275 0.5190419 1.6248323  1.570485  0.6374793
## u     1.337348 0.5903592 1.7479099  1.618677  0.7053547
## 3'    6.436586 2.3857158 3.1061287 30.587770 11.9990914
## 
## Residual Deviance: 923.6944 
## AIC: 1013.694
nnetCoefMat=coef(nnetFixedFit)
# > str(nnetCoefMat) this is a matrix with rows = categories and columns = intercept+covariates
#  num [1:9, 1:5] 52.1 70.6 11.5 11.9 18.5 ...
#  - attr(*, "dimnames")=List of 2
#   ..$ : chr [1:9] "3'" "A" "E" "i" ...
#   ..$ : chr [1:5] "(Intercept)" "f0" "f1" "f2" ...
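To make the coefficient interpretation concrete, here is a minimal sketch (not part of the original analysis) that hand-computes the predicted vowel probabilities for the first trial from nnetCoefMat using the usual multinomial-logit softmax, with the reference vowel V as the zero baseline, and checks the result against predict().

# Sketch: predicted probabilities for one trial from the coefficient matrix.
# The coefficients are log-odds relative to the reference vowel V.
x1 <- c(1, unlist(pb52[1, c('f0','f1','f2','f3')]))  # intercept + scaled log frequencies
eta <- as.vector(nnetCoefMat %*% x1)                 # linear predictors, non-reference vowels
probs <- exp(c(0, eta)) / sum(exp(c(0, eta)))        # softmax; reference V contributes exp(0)
names(probs) <- c('V', rownames(nnetCoefMat))
print(round(probs, 4))
# Should agree with the package's own prediction:
print(round(predict(nnetFixedFit, newdata = pb52[1, ], type = 'probs'), 4))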

A simple population-average method: CBMNL Bootstrapping on (pseudo) listeners

For most phonetics experiments, we’re not very interested in individual variation or differences, but rather in the aggregate analysis of a group of listeners, and whether those results would be expected to generalize to another group of listeners. One very simple method is a clustered bootstrap. This approach is discussed in Field & Welsh (2007), De Rooij & Worku (2012), and Shao and Tu (1995). There are comments in the code section to clarify the logic. There’s also a CRAN package clusterSEs that has a function cluster.bs.mlogit that I haven’t tried yet. It may do a much better job.

De Rooij, M., & Worku, H. M. (2012). A warning concerning the estimation of multinomial logistic models with correlated responses in SAS. Computer Methods and Programs in Biomedicine, 107(2), 341–346.
Shao, J., & Tu, D. (1995). The Jackknife and Bootstrap. Berlin: Springer.
Field, C. A., & Welsh, A. H. (2007). Bootstrapping clustered data. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(3), 369–390.

# Simple cluster bootstrap Field & Welsh 2007
sjIdVarName = "plistener"
# Get a char vector of plistener id's and the number of distinct listeners
uniqueID = unique(pb52[[sjIdVarName]])
nID = length(uniqueID)
# Get list of rows for each individual plistener
IDRowsList=list()
for ( i in 1 : nID){
  IDRowsList[[i]]=which(pb52[[sjIdVarName]]==uniqueID[i])
}

# We create a 3 d array. bootSampArray [ ,  , i ] will contain
# the matrix of coefficients returned by `nnet::multinom` for the i-th bootstrap sample. 
bootSampArray=array(NA,c(9,5,NBOOT))
fullSampModel=multinom(as.formula(nnetFixedModelStr), pb52, maxit=10000, trace=FALSE)
fullSampCoefs=coef(fullSampModel)
dimnames(bootSampArray) = list(rownames(fullSampCoefs),colnames(fullSampCoefs),NULL)
elementSeq=1:45
bootMultinom=function(){
  # Generate a single bootstrap sample
  inxUseIDs=sample(uniqueID,nID,replace=TRUE)
  # This next line will get row numbers of our bootstrap sample in the original data.
  # Some rows may be repeated multiple times, some may be skipped, depending
  # on how many times each p-listener was selected in inxUseIDs
  inxUseRows=unlist(IDRowsList[inxUseIDs])
  stopifnot(length(inxUseIDs)==nID)
  # Just use defaults of other arguments.
  # Note that subsetting with inxUseRows will create a new data frame for each of the NBOOT analyses.
nnetFitBoot = multinom(as.formula(nnetFixedModelStr), pb52[inxUseRows, ], maxit=10000,trace=FALSE) # suppress trace output
# This returns a matrix (nvowel-1)  x  (nstimProp+1) with first column being intercepts
  multinomBootCoefs=coef(nnetFitBoot)
 # print(multinomBootCoefs)
  return(multinomBootCoefs)
}
for (iboot in 1:NBOOT){
  if ((iboot%%100) == 0 ) cat('.')
  if  ((iboot%%1000) ==0 ) cat('1k\n')
 tboot=bootMultinom()
 # print(tboot)
 bootSampArray[elementSeq]=as.vector(tboot)
 elementSeq=elementSeq+45
}
## ..........1k
cat('\n')

Collect statistics on the bootstrapped parameter estimates in bootSampArray

# means
  bootMeans=apply(bootSampArray,c(1,2),mean) 
# standard errors
  bootSDs=apply(bootSampArray,c(1,2),sd) 
# t-scores 
  bootTvals=bootMeans/bootSDs
# 5th and 95th percentiles of the bootstrap sample (bounds of a 90% percentile interval)
  boot5=apply(bootSampArray,c(1,2),function(x) quantile(x, c(.05)))
  boot95=apply(bootSampArray,c(1,2),function(x) quantile(x, c(.95)))

Organize and report summary statistics

Note: the p-levels here need to be taken with a larger grain of salt than usual, as they depend on normality of the bootstrap coefficients. There are also better, though more complex, ways of getting bootstrap confidence intervals, usually involving another layer of sampling. The function cluster.bs.mlogit in the package clusterSEs may help here.

  famNames=colnames(bootMeans)
  famNames[1]='Intercept'
# Show summaries
  for (icol in 1:5){ # For intercept and effect type
    tdf=data.frame(bootMeans[,icol,drop=FALSE])
    tdf$se = bootSDs[,icol]
    tdf$t =   bootTvals[,icol]
    # This treats t-squared as a chi2 variable and calculates the tail probability
    
    tdf$p = 1-pchisq(tdf$t^2,1)
    tdf$ci05=boot5[,icol]
    tdf$ci95=boot95[,icol]
    print(kable(tdf,caption=famNames[icol],row.names=TRUE))
    
  }
## 
## 
## Table: Intercept
## 
##       X.Intercept.           se            t           p          ci05         ci95
## ---  -------------  -----------  -----------  ----------  ------------  -----------
## i       -84.572787   120.739290   -0.7004579   0.4836414   -329.827697   -23.042251
## I       -62.777402   118.878519   -0.5280803   0.5974436   -301.794420    -6.145069
## E       -59.550063   118.823362   -0.5011646   0.6162553   -297.449141    -3.579032
## {       -65.153205   118.623165   -0.5492452   0.5828372   -303.151348    -7.909562
## A       -14.580585     2.160814   -6.7477295   0.0000000    -18.422707   -11.521387
## O       -12.584556     2.240384   -5.6171439   0.0000000    -16.644699    -9.210265
## U         3.334632     1.363874    2.4449708   0.0144864      1.398405     5.602914
## u        -1.122858     1.589426   -0.7064555   0.4799049     -3.627556     1.347661
## 3'     -104.030411   396.536477   -0.2623476   0.7930534   -392.906781    -9.402578
## 
## 
## Table: f0
## 
##                f0            se            t           p          ci05         ci95
## ---  ------------  ------------  -----------  ----------  ------------  -----------
## i     -13.2907818    26.2316539   -0.5066696   0.6123867   -55.6925368    2.3292986
## I     -11.7433059    26.0287005   -0.4511676   0.6518688   -53.3611810    2.9626967
## E     -12.9391218    25.9863881   -0.4979192   0.6185410   -54.8650987    1.6740575
## {     -15.2281365    25.9050591   -0.5878441   0.5566369   -57.5442477   -0.6799233
## A      -0.7026036     0.4148560   -1.6936084   0.0903397    -1.4343032   -0.0740346
## O       3.0365907     0.6456852    4.7028964   0.0000026     2.1144240    4.2034158
## U       2.0067632     0.8055092    2.4912976   0.0127277     0.7934693    3.4232710
## u       3.1525880     1.1022440    2.8601543   0.0042343     1.4829898    5.1249771
## 3'     20.4202012   131.3943436    0.1554116   0.8764969   -38.3825099   65.6927732
## 
## 
## Table: f1
## 
##                f1          se            t           p          ci05         ci95
## ---  ------------  ----------  -----------  ----------  ------------  -----------
## i     -33.3807629   39.487319   -0.8453540   0.3979132   -112.330482    -3.531502
## I     -26.3242106   39.186980   -0.6717591   0.5017371   -105.271793     3.326150
## E     -17.5469802   39.572087   -0.4434181   0.6574633    -98.415921    12.936186
## {      -7.8822627   39.820058   -0.1979470   0.8430865    -88.388229    22.663031
## A       9.5928229    1.614718    5.9408642   0.0000000      7.241725    12.527643
## O      -0.3121344    1.461067   -0.2136346   0.8308320     -2.749933     1.902645
## U     -14.4141519    2.505243   -5.7535948   0.0000000    -18.893170   -11.307903
## u     -20.0712964    2.959584   -6.7817969   0.0000000    -25.297865   -16.314458
## 3'    -26.6541576   45.113433   -0.5908253   0.5546375   -106.540108     2.387460
## 
## 
## Table: f2
## 
##               f2           se            t           p         ci05          ci95
## ---  -----------  -----------  -----------  ----------  -----------  ------------
## i     401.076642   766.726247    0.5231028   0.6009027    61.466421   1989.865006
## I     390.181934   766.895901    0.5087808   0.6109059    52.904707   1983.978353
## E     389.055549   766.790714    0.5073817   0.6118870    52.676434   1984.849836
## {     388.224204   766.522806    0.5064744   0.6125236    52.494829   1983.419738
## A      -9.973102     1.524390   -6.5423544   0.0000000   -12.666230     -7.793481
## O     -17.181713     2.590149   -6.6334845   0.0000000   -21.986306    -13.435170
## U      -1.893537     2.228946   -0.8495214   0.3955913    -5.462485      1.603190
## u      -2.489457     2.400459   -1.0370753   0.2997007    -6.378086      1.308251
## 3'    412.028038   812.632841    0.5070285   0.6121348    56.011043   2037.203345
## 
## 
## Table: f3
## 
##                 f3            se            t           p           ci05          ci95
## ---  -------------  ------------  -----------  ----------  -------------  ------------
## i     -102.9474738   231.5784695   -0.4445468   0.6566473   -587.4579646    -1.7032775
## I     -104.0820669   231.5221809   -0.4495555   0.6530310   -589.2948120    -3.2677699
## E     -105.9269465   231.5461603   -0.4574766   0.6473285   -590.8194573    -4.3542936
## {     -107.3314215   231.3920857   -0.4638509   0.6427546   -591.9700769    -6.4897347
## A       -0.1349669     0.5675995   -0.2377855   0.8120474     -1.0448193     0.8337798
## O        2.1126800     0.8300432    2.5452650   0.0109195      0.8304734     3.4685210
## U        2.5612248     1.2065662    2.1227386   0.0337758      0.6022133     4.3904843
## u        3.5906036     1.4861615    2.4160251   0.0156910      0.9943237     5.8450388
## 3'    -167.5355413   457.0784015   -0.3665357   0.7139654   -701.2965839   -25.1434039

Using the mlogit package

The method used by mlogit for the random effects is known as maximum simulated likelihood (Train 2009). It’s well known in econometrics. Demidenko (2005) refers to it as ‘fixed-sample’ estimation and thinks highly of it for generalized linear and nonlinear models. The basic idea is that the random effects are (approximately) integrated out by drawing a random sample of their values, evaluating the likelihood for each draw, and averaging. (Actually, it’s biased, because it approximates the likelihood by a finite simulation average instead of the exact likelihood itself. The bias goes to zero in probability as N, the number of subjects, increases. Furthermore, the estimates are efficient and approach normal distributions if the number of draws R increases at least as fast as sqrt(N). So the more subjects, the more draws needed to get a guarantee of normality. See Train 2009.)
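To illustrate the averaging idea (a toy sketch with made-up data, not mlogit’s internal code), consider one listener with a scalar random intercept b ~ N(mu, sigma^2) in a binary logit. The exact per-listener likelihood integrates the conditional likelihood over b; the simulated version replaces the integral with an average over R random draws of b.

# Toy sketch of simulated likelihood for one listener (binary logit, random intercept)
set.seed(1)
y <- c(1, 0, 1, 1, 0)                 # hypothetical responses from one listener
x <- c(0.2, -1.0, 0.8, 1.5, -0.3)     # hypothetical stimulus property
simLik <- function(mu, sigma, beta, R = 1000) {
  b <- rnorm(R, mu, sigma)            # R draws of the random intercept
  condLik <- sapply(b, function(bi) {
    p <- plogis(bi + beta * x)        # conditional response probabilities for this draw
    prod(ifelse(y == 1, p, 1 - p))    # conditional likelihood given this draw
  })
  mean(condLik)                       # the average over draws approximates the integral over b
}
simLik(mu = 0, sigma = 1, beta = 2)   # estimation maximizes the product of such terms over listeners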

References:

Demidenko, E. (2005). Mixed Models: Theory and Applications. Wiley-Interscience.
Train, K. (2009). Discrete Choice Methods with Simulation (Second Edition). Cambridge: Cambridge University Press.

Data preparation for mlogit: preparing the mlogit.data object

mlogit requires data in a special format. The included mlogit.data() function will convert a standard R data.frame object to a special ‘mlogit.data’ object. To fully understand this you should type ?mlogit.data() and study the mlogit vignettes, especially Croissant (2013b) “Estimation of multinomial logit models in R” section 1. However, for many experiments where the data has been prepared in the one-trial-per-row format described above, it should be easy to modify this example. [ Note that mlogit, being designed for econometric problems, such as choice of consumer products, deals with more general choice situations, where a single choice situation might involve several different products with different ‘stimulus’ properties, like price, color, packaging, etc. For simple phonetics categorization experiments, we have one set of stimulus properties per trial and we’re deciding which among several “products” those same properties best represent.]

The key arguments are described below:

  • choice = vowel specifies the name of the category choice factor variable, here vowel.

  • alt.levels = c(...) allows us to control the order of the categories. I’ve done this here so they correspond to those in the nnet model above. This apparently does not happen automatically if you have specified a non-default order of levels for the choice factor.

  • shape = “wide” means that each row of the data involves a single choice situation; that is, for a perceptual experiment, a single response to a single trial.

  • id.var='plistener' specifies the identity of the “random unit”, in this case our pseudo-listener. (This variable is not actually used unless panel=TRUE is specified in the call to mlogit(). It is necessary for a repeated-measures or panel study. See below.)

mld_pb52= mlogit.data(data=pb52, choice='vowel', alt.levels= c("V" , "i" , "I" , "E",  "{" , "A" , "O" , "U" , "u", "3'"), shape="wide", id.var='plistener')
print(head(mld_pb52,50))
##      type sex speaker vowel repetition          f0         f1        f2
## 1.V     m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 1.i     m   m       1  TRUE          1 -0.38677791 -2.1645982 1.0529806
## 1.I     m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 1.E     m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 1.{     m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 1.A     m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 1.O     m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 1.U     m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 1.u     m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 1.3'    m   m       1 FALSE          1 -0.38677791 -2.1645982 1.0529806
## 2.V     m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 2.i     m   m       1  TRUE          2  0.07332434 -1.7415520 1.1828350
## 2.I     m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 2.E     m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 2.{     m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 2.A     m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 2.O     m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 2.U     m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 2.u     m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 2.3'    m   m       1 FALSE          2  0.07332434 -1.7415520 1.1828350
## 3.V     m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 3.i     m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 3.I     m   m       1  TRUE          1  0.34057187 -0.8321861 0.7589610
## 3.E     m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 3.{     m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 3.A     m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 3.O     m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 3.U     m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 3.u     m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 3.3'    m   m       1 FALSE          1  0.34057187 -0.8321861 0.7589610
## 4.V     m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 4.i     m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 4.I     m   m       1  TRUE          2  0.17033816 -1.4622229 0.6958254
## 4.E     m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 4.{     m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 4.A     m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 4.O     m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 4.U     m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 4.u     m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 4.3'    m   m       1 FALSE          2  0.17033816 -1.4622229 0.6958254
## 5.V     m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
## 5.i     m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
## 5.I     m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
## 5.E     m   m       1  TRUE          1 -0.36773935 -0.2057603 0.5511229
## 5.{     m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
## 5.A     m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
## 5.O     m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
## 5.U     m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
## 5.u     m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
## 5.3'    m   m       1 FALSE          1 -0.36773935 -0.2057603 0.5511229
##               f3 plistener chid alt
## 1.V   0.35694696         1    1   {
## 1.i   0.35694696         1    1  3'
## 1.I   0.35694696         1    1   A
## 1.E   0.35694696         1    1   E
## 1.{   0.35694696         1    1   i
## 1.A   0.35694696         1    1   I
## 1.O   0.35694696         1    1   O
## 1.U   0.35694696         1    1   u
## 1.u   0.35694696         1    1   U
## 1.3'  0.35694696         1    1   V
## 2.V   0.24843436         1    2   {
## 2.i   0.24843436         1    2  3'
## 2.I   0.24843436         1    2   A
## 2.E   0.24843436         1    2   E
## 2.{   0.24843436         1    2   i
## 2.A   0.24843436         1    2   I
## 2.O   0.24843436         1    2   O
## 2.U   0.24843436         1    2   u
## 2.u   0.24843436         1    2   U
## 2.3'  0.24843436         1    2   V
## 3.V  -0.03339976         1    3   {
## 3.i  -0.03339976         1    3  3'
## 3.I  -0.03339976         1    3   A
## 3.E  -0.03339976         1    3   E
## 3.{  -0.03339976         1    3   i
## 3.A  -0.03339976         1    3   I
## 3.O  -0.03339976         1    3   O
## 3.U  -0.03339976         1    3   u
## 3.u  -0.03339976         1    3   U
## 3.3' -0.03339976         1    3   V
## 4.V  -0.21029263         1    4   {
## 4.i  -0.21029263         1    4  3'
## 4.I  -0.21029263         1    4   A
## 4.E  -0.21029263         1    4   E
## 4.{  -0.21029263         1    4   i
## 4.A  -0.21029263         1    4   I
## 4.O  -0.21029263         1    4   O
## 4.U  -0.21029263         1    4   u
## 4.u  -0.21029263         1    4   U
## 4.3' -0.21029263         1    4   V
## 5.V  -0.47714906         1    5   {
## 5.i  -0.47714906         1    5  3'
## 5.I  -0.47714906         1    5   A
## 5.E  -0.47714906         1    5   E
## 5.{  -0.47714906         1    5   i
## 5.A  -0.47714906         1    5   I
## 5.O  -0.47714906         1    5   O
## 5.U  -0.47714906         1    5   u
## 5.u  -0.47714906         1    5   U
## 5.3' -0.47714906         1    5   V

Specifying alternative-specific variables in a fixed-effects mlogit model

mlogit uses a special three-part formula called an mFormula. Although all three parts can be useful for some problems in econometrics, it turns out that only part 2 is relevant for typical categorization experiments. Alternative-specific variables are of key importance here. These are effectively interactions of stimulus properties (or other listening or grouping conditions) with response-category contrasts. They are specified in the second part of the three-part mFormula. The format is:

choiceVar ~ firstPart | secondPart | thirdPart

For our examples, the first and third parts are always just 0. All the action is in the second part.

choiceVar ~ 0 | secondPart | 0

(The third part, but not the first part, can be left off entirely.)

The next code chunk runs an analysis similar to that in the nnet::multinom example above, though using a different optimization algorithm. Note that the second part of the formula string looks the same as the right-hand side of the nnet::multinom formula above.

Note also that, like multinom, this analysis ignores the plistener random effect. This analysis is also useful for data and program checking. While plistener is available in the mlogit.data object, it is not used by mlogit() unless panel = TRUE is set in the call. A small tolerance and a large number of iterations are specified here to try to get a better match to the multinom analysis above. An examination of the coefficients reveals them to be quite similar.

Using the ‘fixed-effects’ call to produce coefficient names

This preliminary run is also very useful because it produces by-products in the form of variable names that take some of the tedium (and guesswork) out of running the full-fledged simulated-maximum-likelihood mixed-effects model.

fixed_mFormStr= 'vowel ~ 0 | f0+f1+f2+f3 |0'

mlogitFitFixed=mlogit(as.formula(fixed_mFormStr), data=mld_pb52, panel=FALSE, iterlim=10000, tol = 1e-8)


print(summary(mlogitFitFixed))
## 
## Call:
## mlogit(formula = as.formula(fixed_mFormStr), data = mld_pb52, 
##     panel = FALSE, iterlim = 10000, tol = 1e-08, method = "nr", 
##     print.level = 0)
## 
## Frequencies of alternatives:
##   V   i   I   E   {   A   O   U   u  3' 
## 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 
## 
## nr method
## 16 iterations, 0h:0m:2s 
## g'(-H)^-1g = 0.000119 
## successive function values within tolerance limits 
## 
## Coefficients :
##                 Estimate Std. Error  t-value  Pr(>|t|)    
## i:(intercept)  -28.34524    5.60553  -5.0567 4.267e-07 ***
## I:(intercept)  -11.84240    4.96848  -2.3835 0.0171487 *  
## E:(intercept)   -8.81900    4.81959  -1.8298 0.0672761 .  
## {:(intercept)  -13.75979    4.97180  -2.7676 0.0056477 ** 
## A:(intercept)  -13.57970    1.43483  -9.4643 < 2.2e-16 ***
## O:(intercept)  -11.46983    1.43978  -7.9664 1.554e-15 ***
## U:(intercept)    2.41526    1.23828   1.9505 0.0511170 .  
## u:(intercept)   -1.72583    1.33735  -1.2905 0.1968834    
## 3':(intercept) -16.89881    6.43662  -2.6254 0.0086543 ** 
## i:f0            -1.22824    1.63355  -0.7519 0.4521217    
## I:f0            -0.31957    1.53818  -0.2078 0.8354187    
## E:f0            -1.34208    1.50977  -0.8889 0.3740420    
## {:f0            -3.44132    1.55528  -2.2127 0.0269209 *  
## A:f0            -0.64169    0.33117  -1.9376 0.0526664 .  
## O:f0             2.83373    0.47813   5.9267 3.091e-09 ***
## U:f0             1.72846    0.51904   3.3301 0.0008682 ***
## u:f0             2.74675    0.59036   4.6527 3.277e-06 ***
## 3':f0            9.34944    2.38572   3.9189 8.895e-05 ***
## i:f1           -22.56069    3.21343  -7.0208 2.207e-12 ***
## I:f1           -16.86867    3.10343  -5.4355 5.465e-08 ***
## E:f1            -8.94462    2.97735  -3.0042 0.0026626 ** 
## {:f1            -0.19927    3.00687  -0.0663 0.9471625    
## A:f1             8.92282    0.98411   9.0669 < 2.2e-16 ***
## O:f1            -0.30493    1.11289  -0.2740 0.7840839    
## U:f1           -12.24965    1.62484  -7.5390 4.730e-14 ***
## u:f1           -17.51506    1.74792 -10.0205 < 2.2e-16 ***
## 3':f1          -11.18849    3.10617  -3.6020 0.0003157 ***
## i:f2            81.13261   30.33469   2.6746 0.0074822 ** 
## I:f2            73.36168   30.26532   2.4240 0.0153527 *  
## E:f2            72.27225   30.22434   2.3912 0.0167937 *  
## {:f2            71.35412   30.29884   2.3550 0.0185221 *  
## A:f2            -9.28538    1.11445  -8.3318 < 2.2e-16 ***
## O:f2           -15.75035    1.50239 -10.4835 < 2.2e-16 ***
## U:f2            -2.54154    1.57049  -1.6183 0.1055951    
## u:f2            -3.12586    1.61868  -1.9311 0.0534680 .  
## 3':f2           75.45386   30.58796   2.4668 0.0136333 *  
## i:f3           -15.44822   10.35007  -1.4926 0.1355495    
## I:f3           -16.62721   10.29591  -1.6149 0.1063249    
## E:f3           -18.42424   10.27148  -1.7937 0.0728566 .  
## {:f3           -19.59093   10.30255  -1.9016 0.0572287 .  
## A:f3            -0.16261    0.41718  -0.3898 0.6966914    
## O:f3             1.91781    0.54765   3.5019 0.0004619 ***
## U:f3             2.50859    0.63748   3.9352 8.314e-05 ***
## u:f3             3.56088    0.70536   5.0483 4.457e-07 ***
## 3':f3          -33.83519   11.99919  -2.8198 0.0048055 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Log-Likelihood: -461.85
## McFadden R^2:  0.86804 
## Likelihood ratio test : chisq = 6076.2 (p.value = < 2.22e-16)

Comparison of nnet and mlogit fixed effects analyses

We can compare the coefficient values of the two analyses by reshaping the mlogit output to the same matrix shape as the nnet output and printing the differences. You can do a more detailed comparison by comparing the two summaries.

mlogitFixedCoef=coef(mlogitFitFixed)
# This is a 45-element vector which can be folded into a 9 [response contrasts] x 5 [covariates] matrix to match nnetCoefMat
mlogitFixedCoefMat=matrix(mlogitFixedCoef,9 , 5)
print(mlogitFixedCoefMat-nnetCoefMat)
##      (Intercept)            f0            f1            f2            f3
## i  -3.052162e-04 -3.764595e-05 -7.368729e-06  1.408982e-03 -4.168136e-04
## I  -2.869948e-04 -3.613314e-05 -1.474623e-06  1.402226e-03 -4.213794e-04
## E  -2.576570e-04 -3.122207e-05  6.275127e-06  1.374445e-03 -4.180028e-04
## {  -2.367901e-04 -2.485827e-05 -1.025842e-05  1.359919e-03 -4.128739e-04
## A  -5.168388e-06  2.708459e-06  1.139932e-06 -1.187814e-05  7.964196e-07
## O  -4.378944e-05  9.988786e-07  2.637800e-05 -3.160947e-05 -1.509921e-06
## U   1.955568e-05 -7.080946e-06 -2.829584e-05  2.544399e-05  4.126926e-06
## u   2.033027e-05 -7.850225e-06 -2.932784e-05  2.668659e-05  5.339398e-06
## 3' -2.302230e-04 -6.834396e-05  3.036739e-05  1.360685e-03 -3.628415e-04

Extracting variable names from mlogitFitFixed analysis for use in a full mixed logistic

Note what variable names have been created. We’ll use these later in the mixed modeling for the creation of the random-effects arguments through the rpar parameter.

parNames=names(mlogitFitFixed$coefficients)
print(parNames)
##  [1] "i:(intercept)"  "I:(intercept)"  "E:(intercept)"  "{:(intercept)" 
##  [5] "A:(intercept)"  "O:(intercept)"  "U:(intercept)"  "u:(intercept)" 
##  [9] "3':(intercept)" "i:f0"           "I:f0"           "E:f0"          
## [13] "{:f0"           "A:f0"           "O:f0"           "U:f0"          
## [17] "u:f0"           "3':f0"          "i:f1"           "I:f1"          
## [21] "E:f1"           "{:f1"           "A:f1"           "O:f1"          
## [25] "U:f1"           "u:f1"           "3':f1"          "i:f2"          
## [29] "I:f2"           "E:f2"           "{:f2"           "A:f2"          
## [33] "O:f2"           "U:f2"           "u:f2"           "3':f2"         
## [37] "i:f3"           "I:f3"           "E:f3"           "{:f3"          
## [41] "A:f3"           "O:f3"           "U:f3"           "u:f3"          
## [45] "3':f3"

We’ll create an rpar argument naming the random effects, stealing the names from the coefficient names created in the mlogitFitFixed analysis. We’ll specify all the random-effect types as ‘n’ for normal (others are possible; see the mlogit docs). We have to stamp the names back onto the vector of values. Then we’ll print it out to see what the rpar argument looks like.

rparArg=rep('n',length(parNames))
names(rparArg)=parNames
print(rparArg)
##  i:(intercept)  I:(intercept)  E:(intercept)  {:(intercept)  A:(intercept) 
##            "n"            "n"            "n"            "n"            "n" 
##  O:(intercept)  U:(intercept)  u:(intercept) 3':(intercept)           i:f0 
##            "n"            "n"            "n"            "n"            "n" 
##           I:f0           E:f0           {:f0           A:f0           O:f0 
##            "n"            "n"            "n"            "n"            "n" 
##           U:f0           u:f0          3':f0           i:f1           I:f1 
##            "n"            "n"            "n"            "n"            "n" 
##           E:f1           {:f1           A:f1           O:f1           U:f1 
##            "n"            "n"            "n"            "n"            "n" 
##           u:f1          3':f1           i:f2           I:f2           E:f2 
##            "n"            "n"            "n"            "n"            "n" 
##           {:f2           A:f2           O:f2           U:f2           u:f2 
##            "n"            "n"            "n"            "n"            "n" 
##          3':f2           i:f3           I:f3           E:f3           {:f3 
##            "n"            "n"            "n"            "n"            "n" 
##           A:f3           O:f3           U:f3           u:f3          3':f3 
##            "n"            "n"            "n"            "n"            "n"

Mixed multinomial logit random coefficients analysis

We now finally get to use mlogit to do a mixed-effects analysis, instead of the simple multinomial (fixed-effects) analyses so far. We can use the same formula, represented by fixed_mFormStr above, to specify the model as before. Telling mlogit we have a repeated-measures type analysis with random coefficients requires two additional arguments.

  1. panel = TRUE. This indicates we have the same individual (plistener here) responding to multiple different choice situations. This is variously referred to as a repeated-measures, a longitudinal, or a panel study.

  2. rpar = rparArg. This tells mlogit which coefficients are to be treated as random effects (and what kind of random distribution is involved for each random coefficient). We constructed the variable rparArg above to code this information.

That is all we need for a basic random-effects analysis.

if (NRCRDRAWS>0){
mlogitFitRcr=mlogit(as.formula(fixed_mFormStr), data=mld_pb52, rpar=rparArg, 
                panel=TRUE, R=NRCRDRAWS,print.level=1)
print(summary(mlogitFitRcr))
}
## Initial value of the function : 460.790257771379 
## iteration 1, step = 0.03125, lnL = 454.79789646, chi2 = 238.41494633
## iteration 2, step = 0.0009765625, lnL = 454.6007449, chi2 = 1480.38430655
## iteration 3, step = 0.0009765625, lnL = 454.55236312, chi2 = 1681.91367275
## iteration 4, step = 0.0009765625, lnL = 454.20596921, chi2 = 994.23848572
## iteration 5, step = 0.0009765625, lnL = 453.71700553, chi2 = 1072.56258339
## iteration 6, step = 0.00390625, lnL = 452.888166, chi2 = 472.22049039
## iteration 7, step = 0.00390625, lnL = 452.0288828, chi2 = 402.65640547
## iteration 8, step = 0.0078125, lnL = 450.51836292, chi2 = 429.56969778
## iteration 9, step = 0.0078125, lnL = 448.9962302, chi2 = 487.69732501
## iteration 10, step = 0.0078125, lnL = 448.71665745, chi2 = 377.10298327
## iteration 11, step = 0.0078125, lnL = 447.72727021, chi2 = 278.54489979
## iteration 12, step = 0.015625, lnL = 447.24120767, chi2 = 153.36303774
## iteration 13, step = 0.015625, lnL = 445.69403331, chi2 = 275.86602377
## iteration 14, step = 0.015625, lnL = 444.59897911, chi2 = 242.58200748
## iteration 15, step = 0.015625, lnL = 442.83324817, chi2 = 243.47599933
## iteration 16, step = 0.015625, lnL = 442.29659928, chi2 = 407.15584057
## iteration 17, step = 0.015625, lnL = 441.116236, chi2 = 212.97312371
## iteration 18, step = 0.015625, lnL = 439.32756539, chi2 = 249.25065636
## iteration 19, step = 0.015625, lnL = 437.39866, chi2 = 483.99824853
## iteration 20, step = 0.03125, lnL = 434.83426949, chi2 = 269.08730308
## iteration 21, step = 0.015625, lnL = 432.28019094, chi2 = 365.35242252
## iteration 22, step = 0.015625, lnL = 429.98733216, chi2 = 362.53451418
## iteration 23, step = 0.03125, lnL = 429.29656779, chi2 = 362.84810666
## iteration 24, step = 0.03125, lnL = 427.17292874, chi2 = 236.24738338
## iteration 25, step = 0.03125, lnL = 425.95607766, chi2 = 195.2031087
## iteration 26, step = 0.03125, lnL = 424.88022259, chi2 = 83.35680176
## iteration 27, step = 0.0625, lnL = 424.7477432, chi2 = 117.8036026
## iteration 28, step = 0.0625, lnL = 422.84911645, chi2 = 152.27092093
## iteration 29, step = 0.0625, lnL = 422.01929089, chi2 = 74.67427252
## iteration 30, step = 0.0625, lnL = 420.77259874, chi2 = 121.24362504
## iteration 31, step = 0.03125, lnL = 418.68204955, chi2 = 155.46154532
## iteration 32, step = 0.0625, lnL = 417.3036368, chi2 = 147.23806807
## iteration 33, step = 0.0625, lnL = 415.2173113, chi2 = 114.86782031
## iteration 34, step = 0.125, lnL = 413.72372721, chi2 = 82.54979632
## iteration 35, step = 0.03125, lnL = 413.42343208, chi2 = 141.11816724
## iteration 36, step = 0.125, lnL = 411.56745755, chi2 = 52.97809495
## iteration 37, step = 0.0625, lnL = 409.38869533, chi2 = 80.97600781
## iteration 38, step = 0.0625, lnL = 406.20912525, chi2 = 119.53870001
## iteration 39, step = 0.03125, lnL = 405.0365705, chi2 = 90.97297741
## iteration 40, step = 0.125, lnL = 403.86436699, chi2 = 38.93027923
## iteration 41, step = 0.125, lnL = 402.5518656, chi2 = 32.70883273
## iteration 42, step = 0.25, lnL = 401.64991549, chi2 = 19.33193902
## iteration 43, step = 0.25, lnL = 400.68934565, chi2 = 28.0337415
## iteration 44, step = 0.5, lnL = 399.65835734, chi2 = 17.34899653
## iteration 45, step = 0.5, lnL = 397.97693331, chi2 = 20.80123578
## iteration 46, step = 0.25, lnL = 396.76898225, chi2 = 18.25345971
## iteration 47, step = 0.125, lnL = 395.67635971, chi2 = 21.15053112
## iteration 48, step = 0.25, lnL = 394.67979087, chi2 = 14.57673521
## iteration 49, step = 0.25, lnL = 393.2046181, chi2 = 14.16027938
## iteration 50, step = 0.25, lnL = 392.68867807, chi2 = 9.48938519
## iteration 51, step = 1, lnL = 392.30744898, chi2 = 4.69962957
## iteration 52, step = 0.5, lnL = 392.1856113, chi2 = 8.05932812
## iteration 53, step = 0.5, lnL = 391.21543858, chi2 = 12.76377707
## iteration 54, step = 0.25, lnL = 389.46686903, chi2 = 14.97361484
## iteration 55, step = 0.5, lnL = 388.57777579, chi2 = 8.12058783
## iteration 56, step = 1, lnL = 388.00272431, chi2 = 5.12675349
## iteration 57, step = 1, lnL = 386.54447482, chi2 = 4.1053499
## iteration 58, step = 0.5, lnL = 385.8014803, chi2 = 3.64152034
## iteration 59, step = 1, lnL = 385.22810409, chi2 = 1.20568696
## iteration 60, step = 1, lnL = 384.74657077, chi2 = 1.52050037
## iteration 61, step = 1, lnL = 384.28356711, chi2 = 0.81477189
## iteration 62, step = 1, lnL = 384.06260648, chi2 = 0.69289912
## iteration 63, step = 1, lnL = 383.7318123, chi2 = 0.47925563
## iteration 64, step = 1, lnL = 383.4298645, chi2 = 0.4914693
## iteration 65, step = 1, lnL = 383.15694505, chi2 = 0.58209818
## iteration 66, step = 1, lnL = 382.82779202, chi2 = 0.50734996
## iteration 67, step = 1, lnL = 382.47712035, chi2 = 0.54152153
## iteration 68, step = 1, lnL = 382.11644128, chi2 = 0.56163114
## iteration 69, step = 1, lnL = 381.78360563, chi2 = 0.57558863
## iteration 70, step = 1, lnL = 381.46364514, chi2 = 0.44900754
## iteration 71, step = 1, lnL = 381.18668842, chi2 = 0.43966331
## iteration 72, step = 1, lnL = 381.02649809, chi2 = 0.22293848
## iteration 73, step = 1, lnL = 380.81749898, chi2 = 0.29234621
## iteration 74, step = 1, lnL = 380.5893455, chi2 = 0.3137643
## iteration 75, step = 1, lnL = 380.32431191, chi2 = 0.36475136
## iteration 76, step = 1, lnL = 380.011627, chi2 = 0.45909981
## iteration 77, step = 1, lnL = 379.71216691, chi2 = 0.45440012
## iteration 78, step = 0.25, lnL = 379.70210274, chi2 = -32.0980525
## 
## Call:
## mlogit(formula = as.formula(fixed_mFormStr), data = mld_pb52, 
##     rpar = rparArg, R = NRCRDRAWS, panel = TRUE, print.level = 1)
## 
## Frequencies of alternatives:
##   V   i   I   E   {   A   O   U   u  3' 
## 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 
## 
## bfgs method
## 79 iterations, 0h:18m:22s 
## g'(-H)^-1g = -32.1 
## last step couldn't find higher value 
## 
## Coefficients :
##                     Estimate Std. Error t-value  Pr(>|t|)    
## i:(intercept)     -230.20722  345.90597 -0.6655  0.505718    
## I:(intercept)      -61.23484   97.67575 -0.6269  0.530712    
## E:(intercept)      -50.35461   95.77408 -0.5258  0.599052    
## {:(intercept)      -93.59370  105.52575 -0.8869  0.375118    
## A:(intercept)      -22.50544    3.71843 -6.0524 1.427e-09 ***
## O:(intercept)      -23.90471    4.76071 -5.0212 5.134e-07 ***
## U:(intercept)        8.90282    6.50821  1.3679  0.171332    
## u:(intercept)       -7.55770    7.03442 -1.0744  0.282648    
## 3':(intercept)     -95.36307  150.07256 -0.6354  0.525137    
## i:f0               -12.79971   65.90490 -0.1942  0.846008    
## I:f0                -2.67684   66.31810 -0.0404  0.967803    
## E:f0                -9.28091   64.98408 -0.1428  0.886434    
## {:f0               -29.02642   62.51722 -0.4643  0.642437    
## A:f0                -1.05283    0.74647 -1.4104  0.158422    
## O:f0                 4.91761    1.58958  3.0936  0.001977 ** 
## U:f0                 3.65036    1.64209  2.2230  0.026216 *  
## u:f0                 9.19531    2.29276  4.0106 6.057e-05 ***
## 3':f0               41.22971   90.15624  0.4573  0.647445    
## i:f1              -115.13577  113.15457 -1.0175  0.308911    
## I:f1               -59.95590   49.14864 -1.2199  0.222507    
## E:f1               -23.80722   49.10527 -0.4848  0.627804    
## {:f1                35.38252   66.25282  0.5341  0.593305    
## A:f1                14.98302    2.38922  6.2711 3.586e-10 ***
## O:f1                -0.31116    3.12409 -0.0996  0.920661    
## U:f1               -27.01245   10.45909 -2.5827  0.009804 ** 
## u:f1               -43.89552   10.91335 -4.0222 5.766e-05 ***
## 3':f1              -38.20557   54.90829 -0.6958  0.486550    
## i:f2               392.58686  168.64175  2.3279  0.019916 *  
## I:f2               307.28632  198.13698  1.5509  0.120931    
## E:f2               304.80697  198.11360  1.5385  0.123915    
## {:f2               310.85153  199.48423  1.5583  0.119168    
## A:f2               -15.51456    3.16286 -4.9052 9.332e-07 ***
## O:f2               -32.33396    5.84286 -5.5339 3.131e-08 ***
## U:f2                 0.12496    5.78525  0.0216  0.982767    
## u:f2                -5.12072    5.90615 -0.8670  0.385934    
## 3':f2              316.42682  218.96588  1.4451  0.148431    
## i:f3               -65.00112  121.30045 -0.5359  0.592049    
## I:f3               -72.77189  117.87560 -0.6174  0.536996    
## E:f3               -78.28185  117.63778 -0.6654  0.505764    
## {:f3               -87.06523  119.50495 -0.7285  0.466277    
## A:f3                -0.23719    0.84491 -0.2807  0.778918    
## O:f3                 4.09273    1.36024  3.0088  0.002623 ** 
## U:f3                 5.01378    2.09281  2.3957  0.016588 *  
## u:f3                 6.47916    2.38149  2.7206  0.006516 ** 
## 3':f3             -156.22406  150.11472 -1.0407  0.298016    
## sd.i:(intercept)     0.27520   15.93115  0.0173  0.986218    
## sd.I:(intercept)    -4.65645    3.81296 -1.2212  0.222005    
## sd.E:(intercept)    -2.50927    3.73240 -0.6723  0.501396    
## sd.{:(intercept)     2.69144    8.46192  0.3181  0.750436    
## sd.A:(intercept)     0.69104    1.36378  0.5067  0.612359    
## sd.O:(intercept)    -0.96377    1.95934 -0.4919  0.622803    
## sd.U:(intercept)    -0.87507    1.25388 -0.6979  0.485248    
## sd.u:(intercept)    -1.53201    1.90919 -0.8024  0.422298    
## sd.3':(intercept)   -8.96710   30.65159 -0.2925  0.769867    
## sd.i:f0             10.70005   17.97716  0.5952  0.551708    
## sd.I:f0              0.29476    4.23155  0.0697  0.944465    
## sd.E:f0             -0.26772    3.13274 -0.0855  0.931897    
## sd.{:f0              6.34755    5.75005  1.1039  0.269632    
## sd.A:f0              0.32032    1.43581  0.2231  0.823461    
## sd.O:f0              0.40473    2.01156  0.2012  0.840539    
## sd.U:f0              0.68528    1.34731  0.5086  0.611015    
## sd.u:f0              1.37363    1.18824  1.1560  0.247672    
## sd.3':f0            -1.36771   33.91967 -0.0403  0.967836    
## sd.i:f1             -4.38977   10.51336 -0.4175  0.676282    
## sd.I:f1              2.81596    4.98971  0.5644  0.572514    
## sd.E:f1             10.68225    7.23348  1.4768  0.139735    
## sd.{:f1             -1.43659   11.50550 -0.1249  0.900633    
## sd.A:f1              0.73150    0.73346  0.9973  0.318611    
## sd.O:f1             -0.13457    1.56565 -0.0860  0.931505    
## sd.U:f1              2.55833    1.12176  2.2806  0.022570 *  
## sd.u:f1              2.15805    1.93464  1.1155  0.264646    
## sd.3':f1            -6.99439   60.88700 -0.1149  0.908544    
## sd.i:f2              6.47596   19.46767  0.3327  0.739397    
## sd.I:f2             -0.72663    2.97104 -0.2446  0.806790    
## sd.E:f2              2.78918    2.78827  1.0003  0.317153    
## sd.{:f2             13.77431    9.70513  1.4193  0.155817    
## sd.A:f2             -1.87121    0.86666 -2.1591  0.030842 *  
## sd.O:f2              2.99391    0.95630  3.1307  0.001744 ** 
## sd.U:f2              0.52304    1.72652  0.3029  0.761930    
## sd.u:f2             -1.59023    0.92126 -1.7262  0.084319 .  
## sd.3':f2           -11.53363   32.74337 -0.3522  0.724656    
## sd.i:f3              7.71864   15.78681  0.4889  0.624892    
## sd.I:f3              0.44548    2.92796  0.1521  0.879071    
## sd.E:f3             -1.72911    2.01098 -0.8598  0.389881    
## sd.{:f3             -1.20633   13.08953 -0.0922  0.926571    
## sd.A:f3              1.17149    1.02075  1.1477  0.251102    
## sd.O:f3             -0.57746    4.01934 -0.1437  0.885761    
## sd.U:f3             -1.86215    1.71843 -1.0836  0.278525    
## sd.u:f3              4.50956    1.07908  4.1791 2.927e-05 ***
## sd.3':f3            -2.89233   58.34028 -0.0496  0.960460    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Log-Likelihood: -379.7
## McFadden R^2:  0.89151 
## Likelihood ratio test : chisq = 6240.5 (p.value = < 2.22e-16)
## 
## random coefficients
##                Min.      1st Qu.       Median         Mean      3rd Qu.
## i:(intercept)  -Inf -230.3928346 -230.2072182 -230.2072182 -230.0216018
## I:(intercept)  -Inf  -64.3755628  -61.2348363  -61.2348363  -58.0941098
## E:(intercept)  -Inf  -52.0470918  -50.3546135  -50.3546135  -48.6621353
## {:(intercept)  -Inf  -95.4090448  -93.5936986  -93.5936986  -91.7783524
## A:(intercept)  -Inf  -22.9715422  -22.5054423  -22.5054423  -22.0393425
## O:(intercept)  -Inf  -24.5547574  -23.9047069  -23.9047069  -23.2546563
## U:(intercept)  -Inf    8.3125982    8.9028219    8.9028219    9.4930457
## u:(intercept)  -Inf   -8.5910294   -7.5577027   -7.5577027   -6.5243760
## 3':(intercept) -Inf -101.4112872  -95.3630669  -95.3630669  -89.3148466
## i:f0           -Inf  -20.0167839  -12.7997072  -12.7997072   -5.5826306
## I:f0           -Inf   -2.8756509   -2.6768351   -2.6768351   -2.4780194
## E:f0           -Inf   -9.4614833   -9.2809098   -9.2809098   -9.1003363
## {:f0           -Inf  -33.3077729  -29.0264186  -29.0264186  -24.7450644
## A:f0           -Inf   -1.2688795   -1.0528255   -1.0528255   -0.8367714
## O:f0           -Inf    4.6446163    4.9176058    4.9176058    5.1905952
## U:f0           -Inf    3.1881443    3.6503572    3.6503572    4.1125701
## u:f0           -Inf    8.2688158    9.1953133    9.1953133   10.1218107
## 3':f0          -Inf   40.3072038   41.2297129   41.2297129   42.1522220
## i:f1           -Inf -118.0966268 -115.1357696 -115.1357696 -112.1749125
## I:f1           -Inf  -61.8552354  -59.9558998  -59.9558998  -58.0565641
## E:f1           -Inf  -31.0122892  -23.8072243  -23.8072243  -16.6021594
## {:f1           -Inf   34.4135492   35.3825161   35.3825161   36.3514829
## A:f1           -Inf   14.4896333   14.9830198   14.9830198   15.4764063
## O:f1           -Inf   -0.4019303   -0.3111639   -0.3111639   -0.2203975
## U:f1           -Inf  -28.7380212  -27.0124548  -27.0124548  -25.2868884
## u:f1           -Inf  -45.3510961  -43.8955152  -43.8955152  -42.4399343
## 3':f1          -Inf  -42.9232133  -38.2055690  -38.2055690  -33.4879247
## i:f2           -Inf  388.2188916  392.5868586  392.5868586  396.9548257
## I:f2           -Inf  306.7962182  307.2863201  307.2863201  307.7764220
## E:f2           -Inf  302.9256969  304.8069673  304.8069673  306.6882377
## {:f2           -Inf  301.5609011  310.8515331  310.8515331  320.1421651
## A:f2           -Inf  -16.7766701  -15.5145563  -15.5145563  -14.2524425
## O:f2           -Inf  -34.3533204  -32.3339586  -32.3339586  -30.3145968
## U:f2           -Inf   -0.2278289    0.1249581    0.1249581    0.4777450
## u:f2           -Inf   -6.1933174   -5.1207214   -5.1207214   -4.0481254
## 3':f2          -Inf  308.6475086  316.4268215  316.4268215  324.2061343
## i:f3           -Inf  -70.2072670  -65.0011249  -65.0011249  -59.7949829
## I:f3           -Inf  -73.0723607  -72.7718893  -72.7718893  -72.4714179
## E:f3           -Inf  -79.4481132  -78.2818463  -78.2818463  -77.1155793
## {:f3           -Inf  -87.8788893  -87.0652335  -87.0652335  -86.2515777
## A:f3           -Inf   -1.0273526   -0.2371912   -0.2371912    0.5529701
## O:f3           -Inf    3.7032395    4.0927287    4.0927287    4.4822180
## U:f3           -Inf    3.7577741    5.0137761    5.0137761    6.2697781
## u:f3           -Inf    3.4375017    6.4791554    6.4791554    9.5208091
## 3':f3          -Inf -158.1749000 -156.2240553 -156.2240553 -154.2732107
##                Max.
## i:(intercept)   Inf
## I:(intercept)   Inf
## E:(intercept)   Inf
## {:(intercept)   Inf
## A:(intercept)   Inf
## O:(intercept)   Inf
## U:(intercept)   Inf
## u:(intercept)   Inf
## 3':(intercept)  Inf
## i:f0            Inf
## I:f0            Inf
## E:f0            Inf
## {:f0            Inf
## A:f0            Inf
## O:f0            Inf
## U:f0            Inf
## u:f0            Inf
## 3':f0           Inf
## i:f1            Inf
## I:f1            Inf
## E:f1            Inf
## {:f1            Inf
## A:f1            Inf
## O:f1            Inf
## U:f1            Inf
## u:f1            Inf
## 3':f1           Inf
## i:f2            Inf
## I:f2            Inf
## E:f2            Inf
## {:f2            Inf
## A:f2            Inf
## O:f2            Inf
## U:f2            Inf
## u:f2            Inf
## 3':f2           Inf
## i:f3            Inf
## I:f3            Inf
## E:f3            Inf
## {:f3            Inf
## A:f3            Inf
## O:f3            Inf
## U:f3            Inf
## u:f3            Inf
## 3':f3           Inf

Random intercepts only

The first 9 elements of rparArg are the random intercepts. We can make the slopes fixed effects by keeping only those 9 elements, which are assigned to rparArgInterceptsOnly.

if (NINTERCEPTDRAWS > 0) {
  # keep only the 9 category:(intercept) terms; all slopes become fixed effects
  rparArgInterceptsOnly <- rparArg[1:9]
  mlogitFitRandIntercepts <- mlogit(as.formula(fixed_mFormStr), data = mld_pb52,
                                    rpar = rparArgInterceptsOnly,
                                    panel = TRUE, R = NINTERCEPTDRAWS, print.level = 1)
  print(summary(mlogitFitRandIntercepts))
}
## Initial value of the function : 461.744159469456 
## iteration 1, step = 0.015625, lnL = 461.15553948, chi2 = 80.41895333
## iteration 2, step = 0.001953125, lnL = 460.44680608, chi2 = 654.96951581
## iteration 3, step = 0.0625, lnL = 459.92872897, chi2 = 23.48525993
## iteration 4, step = 0.125, lnL = 459.63351031, chi2 = 15.10758294
## iteration 5, step = 0.03125, lnL = 459.08418886, chi2 = 37.7863775
## iteration 6, step = 0.125, lnL = 457.76886582, chi2 = 34.62542292
## iteration 7, step = 2.38418579101562e-07, lnL = 456.28354764, chi2 = 8861087.70210088
## iteration 8, step = 0.03125, lnL = 455.15869647, chi2 = 83.17609325
## iteration 9, step = 0.125, lnL = 451.78672544, chi2 = 99.72230901
## iteration 10, step = 0.0625, lnL = 449.94650243, chi2 = 165.69158278
## iteration 11, step = 0.0625, lnL = 447.30010507, chi2 = 83.13759845
## iteration 12, step = 0.125, lnL = 445.18984161, chi2 = 61.45521675
## iteration 13, step = 0.0625, lnL = 444.18146825, chi2 = 100.54924015
## iteration 14, step = 0.5, lnL = 442.86960093, chi2 = 20.92288644
## iteration 15, step = 0.5, lnL = 442.03125091, chi2 = 19.19184809
## iteration 16, step = 0.5, lnL = 440.25899208, chi2 = 11.13535259
## iteration 17, step = 0.5, lnL = 440.03191991, chi2 = 11.76588265
## iteration 18, step = 0.5, lnL = 438.97620153, chi2 = 8.18427492
## iteration 19, step = 1, lnL = 437.51507188, chi2 = 3.53264687
## iteration 20, step = 0.5, lnL = 437.38990992, chi2 = 1.67484378
## iteration 21, step = 1, lnL = 436.88117044, chi2 = 1.0791254
## iteration 22, step = 1, lnL = 436.55802523, chi2 = 0.53907214
## iteration 23, step = 1, lnL = 435.80831915, chi2 = 1.29135626
## iteration 24, step = 1, lnL = 435.1551961, chi2 = 0.87983792
## iteration 25, step = 1, lnL = 434.49020542, chi2 = 1.44900889
## iteration 26, step = 1, lnL = 433.85865357, chi2 = 1.14924302
## iteration 27, step = 1, lnL = 433.2405474, chi2 = 0.71691479
## iteration 28, step = 1, lnL = 431.79054615, chi2 = 2.53553001
## iteration 29, step = 1, lnL = 431.18065408, chi2 = 1.3980893
## iteration 30, step = 1, lnL = 430.28820801, chi2 = 1.19906763
## iteration 31, step = 1, lnL = 428.71686297, chi2 = 3.16847322
## iteration 32, step = 1, lnL = 427.88359749, chi2 = 1.08450879
## iteration 33, step = 1, lnL = 426.88761472, chi2 = 1.61641908
## iteration 34, step = 1, lnL = 426.44118171, chi2 = 0.5845967
## iteration 35, step = 1, lnL = 425.90215885, chi2 = 0.77737227
## iteration 36, step = 1, lnL = 425.49079028, chi2 = 0.55114377
## iteration 37, step = 1, lnL = 424.95808662, chi2 = 0.7412621
## iteration 38, step = 1, lnL = 424.38824633, chi2 = 0.86566746
## iteration 39, step = 1, lnL = 423.93794572, chi2 = 0.69885492
## iteration 40, step = 1, lnL = 423.50853134, chi2 = 0.62506883
## iteration 41, step = 1, lnL = 423.06592285, chi2 = 0.65599196
## iteration 42, step = 1, lnL = 422.78619487, chi2 = 0.39905831
## iteration 43, step = 1, lnL = 422.54516084, chi2 = 0.32936026
## iteration 44, step = 1, lnL = 422.31144391, chi2 = 0.32661457
## iteration 45, step = 1, lnL = 422.12927218, chi2 = 0.25536313
## iteration 46, step = 1, lnL = 421.98046332, chi2 = 0.20216737
## iteration 47, step = 1, lnL = 421.82621486, chi2 = 0.20715902
## iteration 48, step = 1, lnL = 421.82299744, chi2 = -47.04540704
## iteration 49, step = 1, lnL = 421.71900458, chi2 = 0.39875996
## iteration 50, step = 1, lnL = 421.61214557, chi2 = 0.14710853
## iteration 51, step = 1, lnL = 421.50780127, chi2 = 0.14629092
## iteration 52, step = 1, lnL = 421.41965151, chi2 = 0.12304053
## iteration 53, step = 1, lnL = 421.35412568, chi2 = 0.09377477
## iteration 54, step = 1, lnL = 421.30905763, chi2 = 0.06056404
## iteration 55, step = 1, lnL = 421.25859933, chi2 = 0.06760419
## iteration 56, step = 1, lnL = 421.20018306, chi2 = 0.07663636
## iteration 57, step = 1, lnL = 421.11026583, chi2 = 0.10645518
## iteration 58, step = 0.5, lnL = 421.10791357, chi2 = 0.34920838
## iteration 59, step = 1, lnL = 421.00334187, chi2 = 0.2358775
## iteration 60, step = 1, lnL = 420.95605484, chi2 = 0.05374365
## iteration 61, step = 1, lnL = 420.89527807, chi2 = 0.15454527
## iteration 62, step = 1, lnL = 420.87784283, chi2 = 0.03110418
## iteration 63, step = 1, lnL = 420.86933454, chi2 = 0.01011216
## iteration 64, step = 1, lnL = 420.85547076, chi2 = 0.02261442
## iteration 65, step = 1, lnL = 420.85049931, chi2 = 0.00612496
## iteration 66, step = 1, lnL = 420.84118376, chi2 = 0.01329259
## iteration 67, step = 1, lnL = 420.83325363, chi2 = 0.01035088
## iteration 68, step = 1, lnL = 420.82199586, chi2 = 0.01524097
## iteration 69, step = 1, lnL = 420.80826935, chi2 = 0.01801914
## iteration 70, step = 1, lnL = 420.78898629, chi2 = 0.02628955
## iteration 71, step = 1, lnL = 420.76925482, chi2 = 0.02794673
## iteration 72, step = 1, lnL = 420.75398563, chi2 = 0.02237053
## iteration 73, step = 1, lnL = 420.74374323, chi2 = 0.01497504
## iteration 74, step = 1, lnL = 420.73774551, chi2 = 0.00883823
## iteration 75, step = 1, lnL = 420.73460328, chi2 = 0.00462335
## iteration 76, step = 1, lnL = 420.73295595, chi2 = 0.00247367
## iteration 77, step = 1, lnL = 420.73214517, chi2 = 0.00123851
## iteration 78, step = 1, lnL = 420.73175283, chi2 = 0.00065088
## iteration 79, step = 1, lnL = 420.73159397, chi2 = 0.0002651
## iteration 80, step = 1, lnL = 420.73155846, chi2 = 7.197e-05
## 
## Call:
## mlogit(formula = as.formula(fixed_mFormStr), data = mld_pb52, 
##     rpar = rparArgInterceptsOnly, R = NINTERCEPTDRAWS, panel = TRUE, 
##     print.level = 1)
## 
## Frequencies of alternatives:
##   V   i   I   E   {   A   O   U   u  3' 
## 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 
## 
## bfgs method
## 81 iterations, 0h:7m:13s 
## g'(-H)^-1g = 7.2E-05 
## successive function values within tolerance limits 
## 
## Coefficients :
##                      Estimate  Std. Error t-value  Pr(>|t|)    
## i:(intercept)     -440.142958 1301.052032 -0.3383 0.7351388    
## I:(intercept)      -26.361028   52.511096 -0.5020 0.6156614    
## E:(intercept)      -20.154554   52.120505 -0.3867 0.6989846    
## {:(intercept)      -31.820141   52.261903 -0.6089 0.5426178    
## A:(intercept)      -14.582936    1.690929 -8.6242 < 2.2e-16 ***
## O:(intercept)      -15.175629    2.858057 -5.3098 1.098e-07 ***
## U:(intercept)        4.784326    3.300008  1.4498 0.1471164    
## u:(intercept)       -4.757019    3.779014 -1.2588 0.2081029    
## 3':(intercept)     -45.152418   76.544784 -0.5899 0.5552695    
## i:f0               -30.918228   88.275191 -0.3502 0.7261524    
## I:f0                -4.410059   28.938285 -0.1524 0.8788752    
## E:f0                -6.999660   29.095348 -0.2406 0.8098833    
## {:f0               -12.363429   29.150809 -0.4241 0.6714786    
## A:f0                -0.720097    0.407875 -1.7655 0.0774825 .  
## O:f0                 3.533094    0.935679  3.7760 0.0001594 ***
## U:f0                 2.593630    0.920088  2.8189 0.0048190 ** 
## u:f0                 5.436060    1.204919  4.5116 6.435e-06 ***
## 3':f0               22.049375   34.662662  0.6361 0.5247027    
## i:f1              -157.520129  389.149619 -0.4048 0.6856390    
## I:f1               -34.171677   21.787236 -1.5684 0.1167816    
## E:f1               -11.761154   20.067660 -0.5861 0.5578251    
## {:f1                 6.616977   20.211781  0.3274 0.7433788    
## A:f1                 9.594369    1.060236  9.0493 < 2.2e-16 ***
## O:f1                -0.941996    1.825732 -0.5160 0.6058858    
## U:f1               -16.696362    4.580849 -3.6448 0.0002676 ***
## u:f1               -26.881134    5.088841 -5.2824 1.275e-07 ***
## 3':f1              -21.962018   32.278601 -0.6804 0.4962579    
## i:f2               330.489445  683.806428  0.4833 0.6288767    
## I:f2               132.496047  257.339884  0.5149 0.6066453    
## E:f2               132.714287  257.442107  0.5155 0.6061959    
## {:f2               131.859455  257.521193  0.5120 0.6086276    
## A:f2                -9.952437    1.330223 -7.4818 7.327e-14 ***
## O:f2               -20.777279    3.047043 -6.8188 9.178e-12 ***
## U:f2                -1.524053    2.949167 -0.5168 0.6053138    
## u:f2                -4.222635    2.974387 -1.4197 0.1557051    
## 3':f2              137.377727  255.461621  0.5378 0.5907409    
## i:f3                17.708874  149.434762  0.1185 0.9056670    
## I:f3               -21.764706   88.099998 -0.2470 0.8048730    
## E:f3               -29.124316   88.146435 -0.3304 0.7410914    
## {:f3               -30.975060   88.158584 -0.3514 0.7253212    
## A:f3                -0.151722    0.443408 -0.3422 0.7322218    
## O:f3                 2.937494    0.834416  3.5204 0.0004309 ***
## U:f3                 2.760843    1.148660  2.4035 0.0162375 *  
## u:f3                 4.641750    1.320217  3.5159 0.0004383 ***
## 3':f3              -73.734245  112.580992 -0.6549 0.5125038    
## sd.i:(intercept)    51.194630  154.128563  0.3322 0.7397719    
## sd.I:(intercept)    -5.191022    2.955059 -1.7567 0.0789765 .  
## sd.E:(intercept)    -0.251700    0.950805 -0.2647 0.7912233    
## sd.{:(intercept)     3.053248    0.897711  3.4011 0.0006710 ***
## sd.A:(intercept)    -0.035006    0.681876 -0.0513 0.9590569    
## sd.O:(intercept)     2.223445    0.611544  3.6358 0.0002771 ***
## sd.U:(intercept)     0.145066    1.074121  0.1351 0.8925678    
## sd.u:(intercept)    -3.179487    0.868062 -3.6627 0.0002495 ***
## sd.3':(intercept)  -10.909600   22.204863 -0.4913 0.6232031    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Log-Likelihood: -420.73
## McFadden R^2:  0.87979 
## Likelihood ratio test : chisq = 6158.4 (p.value = < 2.22e-16)
## 
## random coefficients
##                Min.     1st Qu.      Median        Mean     3rd Qu. Max.
## i:(intercept)  -Inf -474.673212 -440.142958 -440.142958 -405.612705  Inf
## I:(intercept)  -Inf  -29.862318  -26.361028  -26.361028  -22.859737  Inf
## E:(intercept)  -Inf  -20.324323  -20.154554  -20.154554  -19.984785  Inf
## {:(intercept)  -Inf  -33.879526  -31.820141  -31.820141  -29.760757  Inf
## A:(intercept)  -Inf  -14.606547  -14.582936  -14.582936  -14.559325  Inf
## O:(intercept)  -Inf  -16.675320  -15.175629  -15.175629  -13.675938  Inf
## U:(intercept)  -Inf    4.686481    4.784326    4.784326    4.882172  Inf
## u:(intercept)  -Inf   -6.901550   -4.757019   -4.757019   -2.612488  Inf
## 3':(intercept) -Inf  -52.510832  -45.152418  -45.152418  -37.794005  Inf

Notes on the analysis.

  1. The first mlogit analysis above is what is sometimes called a random-coefficients model. It includes random pseudo-listener effects for both the intercepts [the category:(intercept) terms] and the slopes [the category:stimulus-measurement terms].

  2. A random-intercepts-only analysis (analogous to those often fitted with lmer) is obtained by deleting all but the first 9 elements of the rpar argument, so that only the category:(intercept) terms remain random. That is what the second fit above does.

  3. The mlogit models so far assume no correlation among the random effects. Adding the argument correlation = TRUE to the mlogit(...) call allows a general correlation structure among the random effects, though this can be expensive in time and memory when there are many random effects. (A sketch of such a call appears after this list.)

  4. The argument R = 1000 sets the number of simulation draws to 1000, which may be too small for a ‘production’ run; increasing R to 10000 or more will reduce the simulation noise. In preliminary modelling it is reasonable to keep the number of draws modest while comparing alternative models, and then, once you have zeroed in on a few promising models, to increase the number of draws and check that the results hold up.
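
As a concrete illustration of notes 3 and 4, the random-intercepts fit above could be re-run with correlated random intercepts and a larger number of draws. This is an untested sketch; the value R = 2000 is an arbitrary illustrative choice, not a recommendation.

# Untested sketch for notes 3 and 4: the same random-intercepts model as above,
# but with correlated random effects and more simulation draws.
mlogitFitRandInterceptsCor <- mlogit(as.formula(fixed_mFormStr), data = mld_pb52,
                                     rpar = rparArgInterceptsOnly,
                                     panel = TRUE, correlation = TRUE,
                                     R = 2000, print.level = 1)
print(summary(mlogitFitRandInterceptsCor))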

Wishlist

# This shorter rpar setup doesn't work. See the passing comment in https://stackoverflow.com/a/22027859/1795127
# It refers to this post: https://stats.stackexchange.com/a/51716/144281
# which is similar to the strategy I've adopted above.
# (The clusterSEs package is discussed under 'Future reference' below.)
RCRDRAWS <- 100
doRCRShort <- FALSE
if (doRCRShort) {
  # one generic normal random effect per predictor rather than one per category:predictor term
  rparArgShort <- c("(Intercept)" = "n", f0 = "n", f1 = "n", f2 = "n")
  print(rparArgShort)
  mlogitFitRcrShort <- mlogit(as.formula(fixed_mFormStr), data = mld_pb52,
                              rpar = rparArgShort,
                              panel = TRUE, R = RCRDRAWS, print.level = 1)
  print(summary(mlogitFitRcrShort))
}

Future reference: issues in bootstrapping a simple multinomial model with mlogit

The function cluster.bs.mlogit in the clusterSEs package may be a ready-made solution to this clustered-bootstrap problem.
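
A call might look roughly like the sketch below. This is untested: the argument names (mod, dat, cluster, boot.reps) follow my reading of the clusterSEs documentation and should be checked against ?cluster.bs.mlogit, mlogitFitPooled is a hypothetical pooled (fixed-effects) fit, and it assumes the plistener column is carried along in mld_pb52.

library(clusterSEs)
# hypothetical pooled (fixed-effects) fit, then a pairs-cluster bootstrap over pseudo-listeners
mlogitFitPooled <- mlogit(as.formula(fixed_mFormStr), data = mld_pb52)
cluster.bs.mlogit(mlogitFitPooled, dat = mld_pb52,
                  cluster = ~ plistener, boot.reps = 1000)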

It seems possible that mlogit has a better algorithm for fixed-effects multinomial logit. However, it works on its own mlogit.data object, whose row names cannot be duplicated and appear to be checked when mlogit() runs. So each bootstrap replicate would either have to build its own mlogit.data object from the resampled rows, or the mlogit.data object would have to be manipulated so that the id and row.names stay unique.

# save the original row names, then clear them so that duplicated (resampled) rows don't collide
rowNamesSave <- row.names(mld_pb52)
row.names(mld_pb52) <- NULL
bootMlogit <- function() {
  # resample pseudo-listeners (clusters) with replacement and collect all of their rows
  inxUseIDs <- sample(uniqueID, nID, replace = TRUE)
  stopifnot(length(inxUseIDs) == nID)
  inxUseRows <- unlist(IDRowsList[inxUseIDs])
  # fit a pooled (fixed-effects) mlogit to the resampled data; otherwise just use defaults
  mlogitFitBoot <- mlogit(as.formula(fixed_mFormStr), data = mld_pb52[inxUseRows, ], panel = FALSE)
  mlogitBootCoefs <- coef(mlogitFitBoot)   # coef(), not coefs()
  print(mlogitBootCoefs)
  return(mlogitBootCoefs)
}
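
The other route mentioned above, rebuilding the mlogit.data object inside each bootstrap replicate, might look roughly like the sketch below. It is untested, and the names wide_pb52 (a wide-format data frame with one row per trial), the choice column "vowel", and the new cluster id bootid are placeholders for whatever was actually used to build mld_pb52 earlier.

# Untested sketch: rebuild an mlogit.data object for each cluster-bootstrap resample,
# giving every drawn copy of a pseudo-listener a fresh cluster id so ids and row names stay unique.
bootMlogitRebuild <- function() {
  useIDs <- sample(uniqueID, nID, replace = TRUE)
  pieces <- lapply(seq_along(useIDs), function(k) {
    d <- wide_pb52[wide_pb52$plistener == useIDs[k], ]  # all trials for this (possibly repeated) listener
    d$bootid <- k                                       # fresh cluster id for this copy
    d
  })
  wideBoot <- do.call(rbind, pieces)
  row.names(wideBoot) <- NULL                           # let mlogit.data build fresh row names
  mldBoot <- mlogit.data(wideBoot, choice = "vowel", shape = "wide", id.var = "bootid")
  fitBoot <- mlogit(as.formula(fixed_mFormStr), data = mldBoot)
  coef(fitBoot)
}

Replicating either function many times (for example with replicate()) and summarizing the resulting coefficient draws would give clustered-bootstrap means and standard errors for the pooled model.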