As with all rties analyses, the first step for a Coupled-Oscillator analysis is to follow the instructions in “overview_data_prep” to visualize and prepare the data. We include only the minimal required steps here:
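For orientation, a minimal sketch of those steps might look like the following, assuming the example dataset and the “dataPrep” function described in the overview_data_prep vignette (the variable and argument names here are assumptions based on that vignette and should be verified there):

library(rties)
data1 <- rties_ExampleDataFull
# produce the processed dataframe used throughout this vignette; the
# variable names ("couple", "person", "dial", "female", "time") are
# assumptions based on the example data in overview_data_prep
data2 <- dataPrep(basedata=data1, dyadId="couple", personId="person", obs_name="dial", dist_name="female", time_name="time")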
Variations on the Coupled-Oscillator model are fairly common in the literature on close relationships and emotion (Boker & Laurenceau, 2006, 2007; Butner, Diamond, & Hicks, 2007; Helm, Sbarra, & Ferrer, 2012; Reed, Barnard, & Butler, 2015; Steele & Ferrer, 2011). All the models have in common that they: 1) are based on differential equations, 2) characterize oscillatory phenomena, 3) include some form of coupling (mutual influence between partners), and 4) represent damping or amplification of the oscillations over time. The last distinction is particularly important because it is central to defining co-regulation, whereby partners’ mutual influence has a homeostatic effect, resulting in both partners returning to a stable set-point following some disruption to the system, versus co-dysregulation, whereby partners’ mutual influence results in increasingly volatile fluctuations away from a set-point (Butler & Randall, 2013; Reed et al., 2015). This contrast is shown in Figure 2 in the overview_data_prep vignette.
One of the challenges of using COMs is that they rely on derivatives. The typical approach in social science is to estimate those derivatives from the data, which has limitations but is tractable, and this is the approach we use in rties (for a discussion see Butler et al., 2017). We make use of a Local Linear Approximation suggested and implemented in R by S. Boker (Boker, Deboeck, Edler, & Keel, 2010; Boker & Nesselroade, 2002). The rties version of a COM predicts the second derivative of the observed state variable from: 1) a person’s own state variable, which is related to the frequency of oscillations, 2) a person’s own first derivative of the state variable, which indicates damping/amplification, 3) a person’s partner’s state variable, which indicates coupling with respect to frequency, and 4) a person’s partner’s first derivative of the state variable, which indicates coupling with respect to damping/amplification. The model includes separate estimates for both partner types (e.g., men and women), resulting in a total of 8 parameters. The lm model used by the rties functions is:
lm(d2 ~ -1 + dist0:obs_deTrend + dist0:d1 + dist0:p_obs_deTrend + dist0:p_d1 + dist1:obs_deTrend + dist1:d1 + dist1:p_obs_deTrend + dist1:p_d1, na.action=na.exclude, data=datai)
where “d2” is the second derivative of the observed state variable (the time-series emotional experience variable in our example) with linear trends removed (e.g., it is the second derivative of the residuals from each person’s state variable predicted from time). The “-1” results in the intercept being omitted (which is part of the formulation of any Coupled-Oscillator model). The terms “dist0” and “dist1” are indicator variables, scored 0 or 1 to indicate which partner type is represented. In other words, terms multiplied by “dist0” indicate the estimate of that term for the partner scored 0 on the distinguishing variable provided by the user (see “overview_data_prep”), while terms multiplied by “dist1” indicate the estimate for the partner scored 1 on the original distinguishing variable. The term “obs_deTrend” is the observed state variable with individual linear trends removed. This parameter represents how quickly the observed process oscillates, but its values are not interpretable until transformed into frequency (cycles per unit time), or its reciprocal, period (time for one cycle). Estimates for the parameter (we will call it η here) also need to be negative in order to be interpretable. Assuming a negative estimate, η < 0, the time for one complete cycle (the period) is estimated by 2π / sqrt(-η). Larger absolute values of η indicate more rapid oscillations. The term “p_obs_deTrend” is the observed state variable for a person’s partner with individual linear trends removed and represents coupling with respect to frequency (e.g., the impact of the partner on one’s own oscillatory frequency). The term “d1” is the first derivative of the observed state variable with linear trends removed. Negative estimates of this term represent damping, or the tendency for the state variable to converge back to homeostatic levels. Positive estimates, in contrast, represent amplification, or the tendency of the state variable to increasingly deviate away from homeostatic levels. A zero estimate suggests a continuously oscillating process of constant amplitude. Finally, the term “p_d1” is the first derivative of a person’s partner’s state variable with linear trends removed and represents coupling with respect to damping/amplification (e.g., the impact of the partner on one’s own damping/amplification). Note that we estimate this model separately for each dyad (e.g., “datai” is the data from couple “i”) and hence it is not a multilevel model.
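As a quick worked example of the period formula, the following snippet converts a hypothetical frequency estimate (the value of eta is made up for illustration) into a period expressed in the time units of the observed series:

eta <- -0.04 # hypothetical frequency parameter; must be negative to be interpretable
period <- (2*pi) / sqrt(-eta) # time for one complete cycle, in observation units
period
#> [1] 31.41593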
There are two sample size considerations for any of the models implemented in rties. The first pertains to the number of observations per dyad that are required, while the second is the number of dyads required. The first consideration comes into play when we estimate the dynamics one dyad at a time. Greater complexity requires finer-grained measurement of time and hence more observations per dyad. The Coupled-Oscillator model represents fairly complex dynamics and hence requires more observations per dyad than simpler models. The exact number of observations required, and the spacing between them, will depend upon the temporal processes involved, however. For a good discussion of these issues see Boker & Nesselroade, 2002. As an over-simplified summary, the goal is to have enough observations per oscillatory cycle, across at least 2 cycles, to be able to “see” the oscillatory pattern. For example, if there were only 2 observations per cycle, there would be no way to estimate the curvature. Or if there were observations for only one cycle, there would be no way to estimate damping/amplification across cycles. A rough guideline that has been suggested is that between 16 and 90 observations per dyad represents the minimum, but again whether this is enough to recover the “true” oscillatory dynamics depends on a number of assumptions (Boker & Nesselroade, 2002). A pragmatic approach is to try using a Coupled-Oscillator model and if you do not get any convergence errors or any out-of-bound estimates, and you achieve an adequate fit to the data for most dyads, you have enough observations per dyad to make progress.
The second sample size consideration comes into play when we use latent profiles based on the estimated dynamic parameters for each dyad to either predict the system variable, or be predicted by it (these steps are described in detail in sections below). In both cases, the system variable can be either a dyadic variable (e.g., both partners have the same score, as in relationship length) or an individual variable (e.g., partners can have different scores, as in age). In the case of predicting a dyadic system variable, a regular regression model is appropriate for normally distributed system variables, or a generalized linear model for non-normal ones (any of the families available for glm can be used). In this case, the shared system variable is predicted by the categorical latent profile membership and you can use your favorite rule of thumb along the lines of “n observations for a one-predictor regression model” to choose your sample size. Alternatively, you could conduct a power analysis for the linear (e.g., regular regression) or generalized linear model you choose.
The situation is more complicated when the system variable is assessed at the individual level, or when it is the predictor of the latent profiles. In the former case, the system variable is predicted using a cross-sectional random-intercept dyadic model with the latent profile membership, the distinguisher variable and their interaction as fixed effects. For these models, it should be possible to use the R package simr to estimate the necessary sample size to obtain desired power (see the sketch below). In the latter case, profile membership is predicted using either a binomial (for 2 latent profiles) or multinomial (for more than 2 latent profiles) model, with either just the system variable as a predictor (for dyadic system variables), or each partner’s system variable and their interaction as predictors (for individual system variables). For these models, it should be possible to use G*Power, or some other software for logistic and multinomial regression, to assess the needed sample size for a given power level.
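For the individual-level case, a power analysis with simr might look like the following sketch; the model formula, variable names, and pilot data are purely illustrative assumptions, not rties output:

library(lme4)
library(simr)
# hypothetical pilot model: individual system variable predicted by profile
# membership, the distinguisher, and their interaction, with a random
# intercept for dyad ("pilotData" is assumed to exist)
pilot <- lmer(sysVar ~ profile * dist + (1 | dyad), data=pilotData)
# simulate power for the interaction term via a likelihood-ratio test
powerSim(pilot, test=fixed("profile:dist", "lr"), nsim=200)
# extend the pilot to 150 dyads and re-estimate power
pilotBigger <- extend(pilot, along="dyad", n=150)
powerSim(pilotBigger, test=fixed("profile:dist", "lr"), nsim=200)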
Having prepared the data, the next step for the Coupled-Oscillator model is to estimate first and second derivatives of the time-series state variable. As mentioned above, we use a Local Linear Approximation suggested and implemented in R by S. Boker (Boker, Deboeck, Edler, & Keel, 2010; Boker & Nesselroade, 2002). This method requires the user to provide settings for 3 control parameters: tau, embed, and delta. Tau is the number of time points to include when estimating the first derivative, which is the mean of two adjacent slopes across that number of time points on either side of time t (e.g., if tau = 2, then the estimate of the first derivative at time = t is based on the mean of the slopes to the left and right of time t, across 2 observations each). The second derivative is the difference in the two slopes with respect to time. Tau = 1 is sensitive to noise and increasing its value acts as smoothing. Embed determines the highest order of derivative that can be estimated; the minimum embed is 3 for second-order derivatives, and higher values increase smoothing. Finally, delta is the inter-observation interval and is typically set to one (e.g., if delta = 2, then every second observation is used).
Choosing optimal values for tau and embed is a complex process and the resulting derivative estimates are highly sensitive to them. In rties we provide the “estDerivs” function to investigate sets of tau and embed for a given delta with respect to the quality of fit for an individual oscillator model for each person’s data. In other words, the user provides vectors of tau and embed values and the function fits an oscillator model to each person’s data using each pair of tau and embed values, and returns a list with the maximal R^2 for each person, the values of tau and embed that resulted in that maximal R^2, and the period of oscillation associated with that tau/embed combination. This information can be used to adjust the set of tau and embed until the model fit is fairly good for all people and the range of periods of oscillation is appropriate for the process being investigated. For example, if we expect the process we are studying to oscillate every 2-3 minutes, then we may choose values of tau and embed that result in lower overall R^2 but produce a range of periods from 1-5 minutes in preference to those that produce a higher average R^2, but a range of periods from 10-15 minutes. The reasoning here is that it is preferable to have somewhat worse measurement of the right process, than better measurement of the wrong process. Note that you may encounter a variety of error messages during the process of selecting tau and embed vectors, all of which imply an inappropriate combination of tau and embed. The solution is to change the selection of tau and embed until no errors are encountered. The following code assigns values for taus, embeds and delta and then provides those as arguments to “estDerivs”. Note that the vectors for taus and embeds can be any length, but the longer they are, the longer the function will take to run. We chose these taus and embeds based on prior experimentation that showed that smaller values gave poor fits, while larger values either produced inappropriate period lengths for our process of interest or resulted in error messages.
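The tau and embed values shown here are illustrative choices consistent with the fitTable output below, and the idConvention value follows the ID convention described later in this vignette (500 was added to the partner-0 ID to create the partner-1 ID); check the estDerivs documentation for the exact argument names:

taus <- c(3, 5, 7)
embeds <- c(3, 4, 5, 7)
delta <- 1
derivs <- estDerivs(data2, taus=taus, embeds=embeds, delta=delta, idConvention=500)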
The object “derivs” that was created from “estDerivs” contains a dataframe called “data” that holds the derivative estimates for each person using the tau/embed combination that maximized that person’s R^2. It also contains a dataframe called “fitTable” with the fit information. Here are the first 6 entries of the fitTable from our example:
head(derivs$fitTable)
#> id tau embed Max-Rsqr Period
#> [1,] 2 7 5 0.84 63.95
#> [2,] 502 7 3 0.66 86.53
#> [3,] 3 7 7 0.51 98.68
#> [4,] 503 7 4 0.52 93.84
#> [5,] 5 7 7 0.65 104.96
#> [6,] 505 7 5 0.26 147.85

The first column is the person ID, followed by the tau and embed that maximized the R^2 for each person. From these first entries, we can see that each of the embed values was chosen for at least one person, while a tau value of 7 was chosen for everyone. If that were true for the full sample, we could conclude that tau values of 3 and 5 were not helpful and we might consider dropping them and trying higher values. In this example, however, an inspection of the full “fitTable” shows that they were chosen for at least a few people, making it reasonable to keep them as options. The next column has the maximal R^2s, which we can inspect in the usual ways (summary, hist, etc.):
summary(derivs$fitTable[ ,4])
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 0.2100 0.4300 0.5350 0.5232 0.6200 0.8400

We can see that an individual oscillator model, with the tau and embed options we have provided, gives a fairly good fit to the data, with a mean R^2 of .52. The next consideration is whether these tau/embed combinations are also picking up a period of oscillation that is relevant to our process of interest. In our case, we are investigating emotional experience, which we speculate should oscillate every few minutes based on theories about the time-course of emotions and our prior research. To address this, the last column in fitTable gives the estimated period of oscillation for each person, given the tau/embed combination that resulted in their maximal R^2. We can inspect this as usual:
summary(derivs$fitTable[ ,5])
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 63.95 85.83 96.47 102.55 114.64 180.18

The values are given in the temporal units of the raw data, which in our example were 2-second units. To translate the periods back into the original time units of seconds, we multiply the period by the size of the time units over which the observed variable was aggregated. For example, for a period of 102 (the mean estimated period) and an observed variable in 2-second units, we have 102 * 2 = 204 seconds. We can then divide by 60 to get the estimate in minutes if desired, which in this case would be 3.4 minutes. A period of this length is theoretically about right for emotional experience, which supports using this combination of tau/embed for estimating the derivatives.
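That conversion is easy to script. The following sketch assumes, as in our example, that each observation represents 2 seconds:

secondsPerUnit <- 2
periodSec <- derivs$fitTable[ ,5] * secondsPerUnit # periods in seconds
periodMin <- periodSec / 60 # periods in minutes
summary(periodMin)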
The next step, which is often neglected in the literature, is to assess how well different variants of the Coupled-Oscillator model fit the observed temporal data. Note that in the prior step we assessed the fit of an individual oscillator model to each person’s data, one at a time, in order to help choose tau/embed values for estimating derivatives. In this next step, we compare the fit of the Coupled-Oscillator model for each dyad’s data against an Uncoupled-Oscillator (i.e., one in which both partners are represented, but there is no mutual influence). Our ultimate goal is to either predict outcomes of interest from the dynamics assessed by the model, or to test whether other variables predict those dynamics. Either way, the results are only meaningful if the Coupled-Oscillator model does, in fact, realistically represent the dynamics of the system.
The function “indivCloCompare” fits an uncoupled and a coupled oscillator model to each dyad’s observed state variables and returns the adjusted R-squares for each model (called “R2uncouple” and “R2couple”) for each dyad, along with the difference between them (called “R2dif”, calculated as coupled minus uncoupled, so positive values indicate better fit for the more complex model). The function takes the name of the dataframe created by the “estDerivs” function (called “derivs$data” in this example) and the results can be accessed with the “summary” function:
compare <- indivCloCompare(derivs$data)
summary(compare$R2uncouple)
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 0.2450 0.3847 0.4808 0.4848 0.5782 0.7206
summary(compare$R2couple)
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 0.2505 0.4584 0.5397 0.5322 0.6123 0.7407
summary(compare$R2dif)
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> -0.003259 0.010639 0.038700 0.047422 0.068894 0.207241

In this example, we see that the simpler uncoupled model accounts on average for about 48% of the variance in the observed state variables, while the full coupled model accounts for about 5% more on average. These results suggest that coupling is not critical for describing the dynamics of the system, but it is still possible that the small amount of variance explained by coupling may predict system variables, such as relationship quality or health, better than the uncoupled model.
In addition to the model comparison results, we provide a function, “indivCloPlots”, to plot the observed state variables for each dyad, superimposed with their model-predicted trajectories. The function takes as arguments the name of the dataframe produced by “estDerivs” (“derivs$data” in this example), which of the variants of the oscillator model to plot the results for (“uncoupled” or “coupled”), and an argument called “idConvention”, which is the number that was added to the dist0 partner’s ID to get the ID number for the dist1 partner (see overview_data_prep vignette). The function also takes optional strings providing labels for the level-0 and level-1 of the distinguishing variable, as well as the name of the observed state variable (default is “observed”) and minimum/maximum values for the y-axis in terms of quantiles (default is the observed minimum and maximum of the state variable). These plots provide a visual tool for assessing model fit at the individual dyad level. The figures can be accessed from the returned list called “plots” and they are also automatically saved as a .pdf file in the working directory (this process takes a while). A call consistent with this description is sketched below; the figures that follow show examples of poor and good fit, respectively:
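The label arguments in this sketch mirror those we use later with “cloPredTraj” and are illustrative assumptions; verify the exact argument names against the indivCloPlots documentation:

figures <- indivCloPlots(derivs$data, whichModel="coupled", idConvention=500, dist0name="Men", dist1name="Women", obsName="Dial")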
The next step is to use the function “indivClo” to generate the dynamic parameter estimates for each dyad. The function takes the name of the dataframe produced by “estDerivs” (“derivs$data” in this example) and the name of the version of the oscillator model that you want the parameter estimates for (“uncoupled” or “coupled”). It returns: 1) a dataframe containing the parameter estimates (called “params”, for use in the latent profile analysis), and 2) a vector with the adjusted R^2 for the model fit to each dyad’s data (called “R2”; this information is identical to that returned by “indivCloCompare”).
clo <- indivClo(derivs$data, whichModel="coupled")
head(clo$params)
#> obs_0 d1_0 p_obs_0 p_d1_0 obs_1 d1_1 p_obs_1 p_d1_1
#> 1 -0.04125 -0.00061 0.00026 0.00826 -0.01833 0.00799 -0.00036 0.00587
#> 2 -0.01432 -0.00738 0.00417 -0.00784 -0.00404 0.00330 0.00237 -0.01225
#> 3 -0.00384 -0.01206 0.00031 -0.00977 -0.00352 0.00184 0.00065 0.01485
#> 4 -0.03866 -0.00317 0.00134 0.01105 -0.00268 0.00007 0.00108 -0.00298
#> 5 -0.00489 0.00837 0.00125 0.00239 -0.00190 -0.00843 0.00038 0.02249
#> 6 -0.02650 -0.03201 0.00778 0.00865 -0.00575 0.02650 0.00237 -0.03011
#> dyad
#> 1 2
#> 2 3
#> 3 5
#> 4 8
#> 5 10
#> 6 11

In the returned dataframe, the variables are: obs_0 = the frequency estimate for the person scored 0 (partner-0) on the distinguishing variable, obs_1 = the frequency estimate for the person scored 1 (partner-1) on the distinguishing variable, d1_0 = the damping/amplification estimate for partner-0, d1_1 = the damping/amplification estimate for partner-1, p_obs_0 = the coupling estimate with respect to frequency for partner-0, p_obs_1 = the coupling estimate with respect to frequency for partner-1, p_d1_0 = the coupling estimate with respect to damping/amplification for partner-0, and p_d1_1 = the coupling estimate with respect to damping/amplification for partner-1.
The next step is to use the data generated by “indivClo” as input to a latent profile analysis. Because the models in rties all represent non-linear dynamics, the behavior of the dyadic system cannot be intuited by examining individual parameter estimates from the model. Rather, they operate together as a set to produce potentially complex temporal trajectories of both partners’ state variables (emotional experience in this example). Thus when it comes to using the dynamic parameter estimates to predict, or be predicted by, system variables such as conversation quality, we want the parameters to operate as a set, not as isolated individual predictors or outcomes (as would be the case if we used them as predictors in a regression model). Instead, we use them as input to a latent profile analysis in order to estimate qualitatively distinct groups of dyads based on the dynamics assessed by the Coupled-Oscillator model. In other words, we wish to group the dyads based on their similarities and differences in terms of patterns of frequency, damping/amplification, and coupling.
Prior to conducting the LPA, it is a good idea to look at histograms of the parameter estimates to check that there is adequate variance across dyads for them to be meaningful as input for a latent profile analysis. In addition, LPA can be sensitive to non-normally distributed variables, so caution should be used if some of the parameters are clearly non-Gaussian. This issue is less problematic, however, if you choose the number of profiles a priori (as we recommend) rather than based purely on fit statistics. In this example, we see there is adequate variance for all parameters, but many are not normally distributed. We proceed with them anyway, but for a serious analysis it would be good to check how much the results change if the parameters were transformed to be more normally distributed.
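For example, one quick way to inspect all eight parameter distributions at once, using the column names from “clo$params” shown above:

# plot a histogram of each dynamic parameter estimate across dyads
par(mfrow=c(2, 4))
for (p in c("obs_0","d1_0","p_obs_0","p_d1_0","obs_1","d1_1","p_obs_1","p_d1_1")) {
  hist(clo$params[[p]], main=p, xlab="")
}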
The latent profile analysis makes use of the tidyLPA package and we recommend consulting their documentation for a full understanding. We focus here on its specific use in the context of rties. We first load the tidyLPA package (you will need to install it if you haven’t already) and use the dynamic parameter estimates in “clo$params” as input to the tidyLPA “compare_solutions” function. It takes the name of the dataframe we just created with the parameter estimates and the names of which parameter estimates to include in the analysis. Here we include all possible dynamic parameters. The result is a plot showing the BIC fit statistic for various models with various numbers of profiles (see tidyLPA documentation for a full interpretation).
library(tidyLPA)
#> Note that an update to tidyLPA is forthcoming; see vignette('introduction-to-major-changes') for details!
compare_solutions(clo$params, obs_0:p_d1_1)

The previous output can help to inform decisions about which model and how many profiles to use. We recommend reading some papers on latent profile analysis if this is new to you. In general, we recommend choosing based on a combination of how many dyads are included in each profile (e.g., we prefer solutions where there is at least 10% of the dyads in the smallest profile) and how meaningful the dynamics of each profile are, which can be established by inspecting the model predicted trajectories of the state variable for each profile (described below). We recommend only considering fit statistics to the extent that our chosen model has an adequate fit (e.g., we don’t care if it has the best fit, just an adequate one). Here we choose model 3 with 2 profiles based on a combination of the entropy estimates and the clearly distinguishable dynamics when we plot them later. The next step is to investigate the makeup of the estimated profiles using the “estimate_profiles” function.
lpaStep1 <- estimate_profiles(clo$params, obs_0:p_d1_1, n_profiles=2, variances="equal", covariances="equal")
#> Fit Equal variances and equal covariances (model 3) model with 2 profiles.
#> LogLik is 1958.263
#> BIC is 3689.863
#> Entropy is 1
plot_profiles(lpaStep1)

We see from the resulting plot that the profiles are primarily distinguished by different frequency parameters and coupling with respect to frequency. You could repeat the previous steps with different models and numbers of profiles. Once you have chosen the final combination you want to use moving forward, the next step is to run “estimate_profiles” two more times with different return options in order to collect the information needed for using the latent profiles as either predictors or outcomes of the system variable. Specifically, the object “lpaStep2” below is created using the return_orig_df=T argument, while the object “lpaStep3” uses the to_return=“mclust” argument. Those objects are then used as input to the rties function “makeLpaData”, along with the dataframe created by the “estDerivs” function and the name of the model that generated the estimates (either “inertCoord” or “clo”). The result (called “lpaVars” in this example) contains the needed data (called “profileData”) and parameter estimates (called “profileParams”) for all subsequent steps in an rties analysis.
lpaStep2 <- estimate_profiles(clo$params, obs_0, obs_1, d1_0, d1_1, p_obs_0, p_obs_1, p_d1_0, p_d1_1, n_profiles=2, variances="equal", covariances="equal", return_orig_df=T)
#> Fit Equal variances and equal covariances (model 3) model with 2 profiles.
#> LogLik is 1958.263
#> BIC is 3689.863
#> Entropy is 1
lpaStep3 <- estimate_profiles(clo$params, obs_0, obs_1, d1_0, d1_1, p_obs_0, p_obs_1, p_d1_0, p_d1_1, n_profiles=2, variances="equal", covariances="equal", to_return="mclust")
#> Fit Equal variances and equal covariances (model 3) model with 2 profiles.
#> LogLik is 1958.263
#> BIC is 3689.863
#> Entropy is 1
lpaVars <- makeLpaData(derivs$data, lpaStep2, lpaStep3, "clo")

As described above, to interpret the dynamic parameter estimates obtained from the Coupled-Oscillator model (or any other rties model), we need to plot the model-predicted temporal trajectories. The “cloPredTraj” function does this and takes as arguments the name of the processed dataframe (“data2” in this example), the 2 dataframes produced by the “makeLpaData” function (“profileData” and “profileParams”), the number of profiles, an optional “time_length” argument that specifies the number of time points to plot across (the default is 20), optional names for the levels of the distinguishing variable and the observed state variable, and an optional argument for the minimum/maximum values for the y-axis in terms of quantiles (default is the observed minimum and maximum of the state variable). We see that Profile 1 is characterized by a somewhat chaotic pattern with partners having similar amplitudes but different frequencies of oscillation, while Profile 2 is characterized by in-phase steady oscillations, with men having higher amplitude oscillations than women.
profilePlots <- cloPredTraj(origData=data2, lpaData=lpaVars$profileData, lpaParams=lpaVars$profileParams, n_profiles=2, dist0name="Men", dist1name="Women", obsName="Dial", minMax= c(.2, .8))
#> [[1]]
#>
#> [[2]]
The next step in the analysis is to use each dyad’s profile membership to predict the system variable using the “sysVarOut” function. The system variable can be either dyadic (sysVarType = “dyadic”), where both partners have the same score (e.g., relationship length) or individual (sysVarType = “indiv”), where the partners can have different scores (e.g., age). For dyadic system variables, the only predictor is profile membership and the model is a regular regression model since all variables are at the level of the dyad. If the system variable is individual then the model is a random-intercept dyadic model and 3 models are estimated: 1) the main effect of profile membership (“profile”), 2) main effects of profile membership and the distinguishing variable (“profilePlusDist”), and 3) the interaction of profile membership and the distinguishing variable (“profileByDist”). If the system variable is not normally distributed, any of the generalized linear models supported by glm (for dyadic system variables) or glmmPQL (for individual system variables) are available by specifying the “family” distribution.
For normally distributed system variables, the function returns a list including the lm or lme objects containing the full results for each model (called “models”). These results can be inspected using the usual “summary” function and the models can be compared using the “anova” function. Similarly, for non-normal system variables, the function returns a list of the glm or glmmPQL objects containing the full results for the models. By default, the function also displays histograms of the residuals and plots of the predicted values against observed values for each model, but these can be turned off by setting “printPlots=F”. In this example, we find evidence for an interaction, such that in Profile 1 men and women report higher efforts to influence each other than in Profile 2, but the difference in influence levels between profiles is larger for women.
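The summaries below come from a call along the following lines; the argument names are our assumptions based on the description above (“influence”, the self-reported influence measure, is the system variable in this example) and should be verified against the sysVarOut documentation:

sysOut <- sysVarOut(fullData=lpaVars$profileData, sysVar_name="influence", sysVarType="indiv", dist0name="Men", dist1name="Women")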
summary(sysOut$models$profile)
#> Linear mixed-effects model fit by maximum likelihood
#> Data: basedata
#> AIC BIC logLik
#> 501.2098 513.0891 -246.6049
#>
#> Random effects:
#> Formula: ~1 | dyad
#> (Intercept) Residual
#> StdDev: 0.7964596 1.128339
#>
#> Fixed effects: sysVar ~ profile
#> Value Std.Error DF t-value p-value
#> (Intercept) -0.1754386 0.2604472 72 -0.6736053 0.5027
#> profile2 -0.5289639 0.3035624 70 -1.7425207 0.0858
#> Correlation:
#> (Intr)
#> profile2 -0.858
#>
#> Standardized Within-Group Residuals:
#> Min Q1 Med Q3 Max
#> -2.12491128 -0.62091713 0.09099417 0.52441591 2.30638210
#>
#> Number of Observations: 144
#> Number of Groups: 72
summary(sysOut$models$profilePlusDist)
#> Linear mixed-effects model fit by maximum likelihood
#> Data: basedata
#> AIC BIC logLik
#> 502.5866 517.4356 -246.2933
#>
#> Random effects:
#> Formula: ~1 | dyad
#> (Intercept) Residual
#> StdDev: 0.7998968 1.123465
#>
#> Fixed effects: sysVar ~ profile + dist
#> Value Std.Error DF t-value p-value
#> (Intercept) -0.2495127 0.2779666 71 -0.8976356 0.3724
#> profile2 -0.5289639 0.3046370 70 -1.7363743 0.0869
#> distMen 0.1481482 0.1892257 71 0.7829177 0.4363
#> Correlation:
#> (Intr) profl2
#> profile2 -0.807
#> distMen -0.340 0.000
#>
#> Standardized Within-Group Residuals:
#> Min Q1 Med Q3 Max
#> -2.06898067 -0.65484735 0.08425187 0.52306913 2.24966771
#>
#> Number of Observations: 144
#> Number of Groups: 72
summary(sysOut$models$profileByDist)
#> Linear mixed-effects model fit by maximum likelihood
#> Data: basedata
#> AIC BIC logLik
#> 499.2454 517.0642 -243.6227
#>
#> Random effects:
#> Formula: ~1 | dyad
#> (Intercept) Residual
#> StdDev: 0.8276218 1.082557
#>
#> Fixed effects: sysVar ~ profile * dist
#> Value Std.Error DF t-value p-value
#> (Intercept) 0.1052631 0.3170540 70 0.3320037 0.7409
#> profile2 -1.0109235 0.3695401 70 -2.7356259 0.0079
#> distMen -0.5614035 0.3562101 70 -1.5760459 0.1195
#> profile2:distMen 0.9639193 0.4151783 70 2.3216998 0.0232
#> Correlation:
#> (Intr) profl2 distMn
#> profile2 -0.858
#> distMen -0.562 0.482
#> profile2:distMen 0.482 -0.562 -0.858
#>
#> Standardized Within-Group Residuals:
#> Min Q1 Med Q3 Max
#> -2.03638325 -0.57055719 0.02158333 0.55242404 2.21049091
#>
#> Number of Observations: 144
#> Number of Groups: 72
anova(sysOut$models$profilePlusDist, sysOut$models$profileByDist)
#> Model df AIC BIC logLik Test
#> sysOut$models$profilePlusDist 1 5 502.5866 517.4356 -246.2933
#> sysOut$models$profileByDist 2 6 499.2454 517.0642 -243.6227 1 vs 2
#> L.Ratio p-value
#> sysOut$models$profilePlusDist
#> sysOut$models$profileByDist 5.341191 0.0208

The last step in the analysis is to turn the direction of prediction around and use the system variable to predict couples’ profile membership. The function “sysVarIn” accomplishes this. It takes as arguments the name of the dataframe containing the profileData (created by the “makeLpaData” function), whether the system variable is “dyadic” or “indiv”, the number of profiles, and optional names for the levels of the distinguisher and the system variable. If there are 2 profiles, then binomial regression models are used. If there are more than 2 profiles, then multinomial regression is used. For dyadic system variables, a couple’s shared score is the only predictor of their profile membership (called “sysVar”). For individual system variables, two models are tested, one with the main effects of both partners’ system variable (“sysVarMain”) and one with the main effects and their interaction (“sysVarInteract”). In both cases an intercept-only model is included as a comparison point (called “base”).
The function returns a list of the full model results and by default produces plots of profile membership against the system variable(s), but these can be turned off by setting printPlots=F. The results below show evidence for men’s influence predicting profile membership, such that at higher influence there is less likelihood of the couple being in Profile 2. Exponentiating the parameter estimates translates them into odds-ratios (since this is an example of logistic regression) and shows that the odds-ratio for men’s influence is .56, which is a relatively large effect size.
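The summaries below were generated by a call along these lines (again, the argument names are assumptions to be checked against the sysVarIn documentation):

sysIn <- sysVarIn(fullData=lpaVars$profileData, sysVar_name="influence", sysVarType="indiv", n_profiles=2, dist0name="Men", dist1name="Women")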
summary(sysIn$models$sysVarMain)
#>
#> Call:
#> glm(formula = profileN ~ sysVar0 + sysVar1, family = "binomial",
#> data = basedata)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.1061 -0.9239 0.5704 0.7952 1.3153
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> Intercept 0.8920 0.2999 2.974 0.00294 **
#> Influence_Men -0.5845 0.2262 -2.584 0.00977 **
#> Influence_Women 0.2089 0.2287 0.913 0.36116
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 83.100 on 71 degrees of freedom
#> Residual deviance: 75.359 on 69 degrees of freedom
#> AIC: 81.359
#>
#> Number of Fisher Scoring iterations: 4
summary(sysIn$models$sysVarInteract)
#>
#> Call:
#> glm(formula = profileN ~ sysVar0 * sysVar1, family = "binomial",
#> data = basedata)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.1095 -0.9272 0.5708 0.7955 1.3088
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> Intercept 0.889727 0.315143 2.823 0.00475 **
#> Influence_Men -0.583277 0.231852 -2.516 0.01188 *
#> Influence_Women 0.208959 0.228698 0.914 0.36088
#> Influence_Men:Influence_Women 0.003103 0.132840 0.023 0.98137
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 83.100 on 71 degrees of freedom
#> Residual deviance: 75.358 on 68 degrees of freedom
#> AIC: 83.358
#>
#> Number of Fisher Scoring iterations: 4
exp(coef(sysIn$models$sysVarMain))
#> Intercept Influence_Men Influence_Women
#> 2.4400138 0.5573923 1.2322925

In summary, the results from this example provide some evidence that the dynamics represented by the Coupled-Oscillator model are associated with partners’ attempts to influence each other, such that a more chaotic pattern of emotional oscillations is associated with more influence attempts, especially when it is the women trying to influence the men.