We’re going to use the same dataset that you generated last week for this week’s exercise. Last week we looked at choosing a best model. This week we’re going to model average the coefficients and predictions from the models, and make “partial dependence plots”. As before, we’ll first do each step by hand, then with AICcmodavg. Below I’ve assumed you still have all the objects created in the week 3 lab. You can ensure this by simply using the same RStudio project you created for the week 3 lab. Then you need to include the code from Lab 3 in your Rmd file for Lab 4. There are three options, in increasing order of sophistication:
1. Open yourname_lab3.Rmd (or whatever you called it). Save it as yourname_lab4.Rmd. Continue working at the end of the file, making one big document. This is entirely fine.
2. Copy the code chunks out of your lab 3 Rmd file into a code chunk at the top of a new lab 4 Rmd file. You can leave out code that makes plots and tables. All you need is code that assigns results to objects using <-. This one is tedious and subject to making a lot of mistakes.
3. Include the following lines in your first code chunk, before you run anything else:
library(knitr)
source(knitr::purl("Lab-3_Towfiqur.Rmd", output = tempfile()))
##
## processing file: Lab-3_Towfiqur.Rmd
##
## output file: C:\Users\mrahman8\AppData\Local\Temp\RtmpCa15qe\fileb2e830cd7ae6
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.1 ✔ readr 2.1.4
## ✔ forcats 1.0.0 ✔ stringr 1.5.0
## ✔ ggplot2 3.4.1 ✔ tibble 3.2.1
## ✔ lubridate 1.9.2 ✔ tidyr 1.3.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
## Warning in `[.model.selection`(model.sel(fits), , -6): cannot recalculate
## "weights" on an incomplete object
The function knitr::purl() pulls all the code chunks out of the Rmd file and puts them in an R script; source() then runs that script. To be super clean, add include = FALSE, echo = FALSE to the code chunk options, which go at the start of the code chunk: change {r} to {r, include = FALSE, echo = FALSE}. That will keep both the code and its results out of the output document.
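For example, the first code chunk of your lab 4 file might look like this in the Rmd source (the chunk label import-lab3 is just an example name, not a requirement):

```{r import-lab3, include = FALSE, echo = FALSE}
library(knitr)
source(knitr::purl("Lab-3_Towfiqur.Rmd", output = tempfile()))
```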
As before, answer the numbered questions in the regular text section of your R markdown file.

Manually model average the intercept

First let’s look at the intercept. We need to get a vector of the estimated intercepts from each model:
library(broom) # for tidy
library(tidyverse) # for map_chr(), map_df(), filter(), mutate()
# old way
old_intercept <- sapply(fits, function(mm) coef(mm)[1])
old_intercept
## 1.(Intercept)
## -0.1964312
## Amount.(Intercept)
## -0.1975069
## Edge.(Intercept)
## -0.2028180
## MnPatch.(Intercept)
## -0.2431992
## Hetero.(Intercept)
## -0.1176347
## Amount + Edge.(Intercept)
## -0.1807710
## Amount + MnPatch.(Intercept)
## -0.2003092
## Amount + Hetero.(Intercept)
## -0.1160240
## Edge + MnPatch.(Intercept)
## -0.2409917
## Edge + Hetero.(Intercept)
## -0.1015220
## MnPatch + Hetero.(Intercept)
## -0.1220807
## Amount + Edge + Hetero.(Intercept)
## -0.1013848
## Amount + Edge + MnPatch.(Intercept)
## -0.1850920
## Amount + MnPatch + Hetero.(Intercept)
## -0.1262801
## Edge + MnPatch + Hetero.(Intercept)
## -0.1302164
## Amount + Edge + MnPatch + Hetero.(Intercept)
## -0.1128842
# new way: apply tidy() to each element of the list
intercepts <- map_df(fits, tidy) %>%
  filter(term == "(Intercept)") %>%
  mutate(models = modnames)
intercepts
## # A tibble: 16 × 6
## term estimate std.error statistic p.value models
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 (Intercept) -0.196 0.228 -0.860 0.390 1
## 2 (Intercept) -0.198 0.218 -0.906 0.366 Amount
## 3 (Intercept) -0.203 0.225 -0.901 0.368 Edge
## 4 (Intercept) -0.243 0.222 -1.09 0.275 MnPatch
## 5 (Intercept) -0.118 0.195 -0.604 0.546 Hetero
## 6 (Intercept) -0.181 0.213 -0.851 0.396 Amount + Edge
## 7 (Intercept) -0.200 0.219 -0.915 0.361 Amount + MnPatch
## 8 (Intercept) -0.116 0.195 -0.595 0.552 Amount + Hetero
## 9 (Intercept) -0.241 0.223 -1.08 0.280 Edge + MnPatch
## 10 (Intercept) -0.102 0.193 -0.526 0.600 Edge + Hetero
## 11 (Intercept) -0.122 0.195 -0.625 0.533 MnPatch + Hetero
## 12 (Intercept) -0.101 0.189 -0.535 0.593 Amount + Edge + Hetero
## 13 (Intercept) -0.185 0.213 -0.868 0.386 Amount + Edge + MnPatch
## 14 (Intercept) -0.126 0.196 -0.646 0.519 Amount + MnPatch + Hetero
## 15 (Intercept) -0.130 0.192 -0.678 0.498 Edge + MnPatch + Hetero
## 16 (Intercept) -0.113 0.190 -0.594 0.553 Amount + Edge + MnPatch + H…
The fitted models are stored in the list object fits, and we use the function tidy() to pull the estimated coefficients out into a dataframe. tidy() only works on a single model object, so we use map_df() to apply tidy() to each object in the list and bind all the results into one data.frame. Then we use filter() to keep just the rows with estimated intercepts. Explore this code a bit by using the individual functions and looking carefully at the returned results, including the new dataframe intercepts. Do not include these interactive explorations in your submitted R markdown file.
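For example, applied to a single fitted model, tidy() returns a small data frame with one row per coefficient. This is the kind of interactive exploration meant above (fits[[2]] is the Amount-only model in this model set):

tidy(fits[[2]])  # one row for (Intercept), one for Amount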
I left the “old way” of extracting these estimates in there for you to compare. Notice that the result is a single vector of the estimates, not a data.frame. Although the tidyverse approach looks more complex, with a couple of lines of code we’ve made several subsequent steps of the old way completely obsolete!
Now we need to calculate the weighted average of the intercepts, using the AIC weights. There are at least two ways to do this – but in the spirit of KISS we’ll use the brute force option:
mavg.intercept <- sum(weights * intercepts$estimate)
mavg.intercept
## [1] -0.1056922
Here we’ve multiplied each element of the vector weights by its corresponding element of intercepts$estimate and then used the function sum() to add up the results.
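An equivalent one-liner, for comparison; it assumes weights is the vector of AIC weights created in the lab 3 code, in the same model order as the rows of intercepts:

weighted.mean(intercepts$estimate, w = weights)  # same answer as sum(weights * estimate) because the weights sum to 1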
The estimated intercepts for the two top models, 12 and 16, are -0.1014 and -0.1129, respectively. Model 16, which includes Amount, Edge, MnPatch, and Hetero, has a slightly lower (more negative) intercept than Model 12, which includes only Amount, Edge, and Hetero. The weighted average of the intercepts, -0.1056922, falls between the two estimates, as expected because models 12 and 16 carry nearly all of the AIC weight.
Overall, after controlling for the effects of the other variables, the predicted average response (Y) is slightly lower under Model 16 than under Model 12, and the model-averaged value sits between the two.
Next we need to calculate the variance of our model-averaged estimate. Variances (unlike standard errors) can be added together, so first we square the standard errors to get the variance of each estimate.
intercepts <- intercepts %>% mutate(var_est = std.error^2)
#intercepts
Now we calculate a weighted average of the variances, but we also have to include a component measuring how different the estimates are between models (see page 111 in MBILS).
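Written out, the version of the unconditional variance formula used here (the calculation described on page 111 of MBILS) is

$$\widehat{\mathrm{var}}(\hat{\bar{\beta}}) = \sum_{i=1}^{R} w_i \left[\widehat{\mathrm{var}}(\hat{\beta}_i) + (\hat{\beta}_i - \hat{\bar{\beta}})^2\right],$$

where $w_i$ is the AIC weight of model $i$, $\hat{\beta}_i$ is that model’s estimate, $\hat{\bar{\beta}}$ is the model-averaged estimate, and $R$ is the number of models. Vectorized calculations are our friend here: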
sum(weights * (intercepts$var_est + (intercepts$estimate - mavg.intercept)^2))
## [1] 0.03599938
The unconditional variance calculated using model averaging is 0.03599938, which is between the variance from Model 12 (0.03588737) and from Model 16 (0.03607321).
Model averaging coefficients not in every model

The intercept was easy because it occurs in every model. What if we want to model average a parameter that does not occur in every model, such as Amount? There are two problems to solve: how to extract the coefficients from the models that have Amount, and what to do with the models Amount does not occur in. The first is actually pretty easy; we just change the filter() condition above. For convenience later on, I’m going to add the modnames to the list fits. Then map_df() will use the names of the list to create a variable in the result indicating which list item each row comes from.
names(fits) <- modnames
amount <- map_df(fits, tidy, .id = "modname") %>%
  filter(term == "Amount")
If you look at the data.frame amount you can see that it has fewer rows than the number of models. That is because the covariate Amount only appears in some of the models. We have two choices about how to deal with this fact. The first option uses only the weights from the models with Amount in them, re-normalizing those weights to add to 1. The other option is to assume the models without Amount have an estimated value of zero. We’ll try both.
Renormalizing weights to 1

We have to do two things: match the weights up to the smaller dataframe, and then rescale them to sum to one. The first task is solved handily by using a join operation. I will first add a modname column to the dataframe with all the AIC statistics, then join the two tables together.
# have to add modnames to aic.table too!
aic.table$modname <- modnames
# we want all the rows in Amount, and all columns in both
# that is a left join in database speak. see ?join in dplyr
# for all the options
amount <- left_join(amount, aic.table, by = "modname")
names(amount)
## [1] "modname" "term" "estimate" "std.error" "statistic" "p.value"
## [7] "model" "AIC" "k" "deltas" "weights"
As you can see, we have now added the aic.table columns to the amount data.frame. Manually compare a couple of rows of amount with the corresponding rows of aic.table to convince yourself this magic actually works.
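A quick programmatic version of that spot check, if you prefer (interactive exploration only; the column names are the ones listed above):

head(amount[, c("modname", "estimate", "weights")], 3)
aic.table[aic.table$modname %in% amount$modname[1:3], c("modname", "weights")]

Now we renormalize the weights so that they sum to 1: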
sum(amount$weights) # less than 1 but not by much
## [1] 0.989816
amount$weights <- amount$weights/sum(amount$weights)
and then do the model averaging calculations:
# use with() to make the code shorter; it looks for objects inside amount first
mavg.amount <- with(amount, sum(weights * estimate))
amount$var_est <- amount$std.error^2
mavg.var.amount <- with(amount, sum(weights * (var_est + (estimate - mavg.amount)^2)))
mavg.var.amount
## [1] 0.2475534
3. Compare the model-averaged coefficient and its variance with the estimates from models 12 and 16. Does the model averaging change your conclusion about the effect of this variable?
The variance of the model-averaged estimate of the Amount coefficient is 0.2475534, which lies between the variances of the estimates from Model 12 (0.21916310) and Model 16 (0.27150830).
The model averaging provides a more robust estimate of the effect of the Amount variable by accounting for the uncertainty associated with selecting a single model.
Use all models, missing estimates set to 0

The other way to do model averaging, which I prefer, is to assume the estimate is zero for those models in which the parameter doesn’t appear. This “shrinks” the estimate towards zero, by a greater amount if the models without Amount carry substantial weight. The trick is to use a different join operation, one that “fills in” with missing values the rows that are missing from the lefthand table.
amount <- map_df(fits, tidy, .id = "modname") %>%
  filter(term == "Amount")
amount_allrows <- full_join(amount, aic.table, by = "modname")
Notice that the last 8 rows have values from aic.table, but NA for all the columns that came from amount. We want to set the estimates and standard errors for those rows to zero.
pick <- is.na(amount_allrows$estimate)
amount_allrows[pick, c("estimate", "std.error")] <- 0
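If you prefer a tidyverse version of the same replacement, replace_na() from tidyr (loaded with the tidyverse) does exactly the same thing:

amount_allrows <- amount_allrows %>%
  mutate(estimate = replace_na(estimate, 0),
         std.error = replace_na(std.error, 0))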
And now we can use exactly the same code to get the model averaged estimates.
mavg.amount2 <- with(amount_allrows, sum(weights * estimate))
amount_allrows$var_est <- amount_allrows$std.error^2
mavg.var.amount2 <- with(amount_allrows, sum(weights * (var_est + (estimate - mavg.amount2)^2)))
mavg.var.amount2
## [1] 0.2750272
The variance of the model-averaged (shrinkage) estimate of the Amount coefficient is 0.2750272. For comparison, Model 12 gives an estimate of 1.79782031 with variance 0.21916310, and Model 16 gives an estimate of 1.59334446 with variance 0.27150830. The model-averaged estimate is closer to the estimate from Model 12 than to that from Model 16, and switching from the conditional average to the shrinkage average does not change the conclusion about the effect of Amount on the response.
In Model 12 the estimate for Amount is roughly 1.8, which is the closest to the true coefficient of 2.
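A quick way to check whether the shrinkage estimate is consistent with the true coefficient of 2 is a rough normal-theory 95% confidence interval; compare it with the unconditional interval that AICcmodavg reports below:

mavg.amount2 + c(-1.96, 1.96) * sqrt(mavg.var.amount2)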
Model average all the things

Ideally we want a table with the model-averaged coefficients for every parameter in the model set. We could copy and paste the calculations above for each parameter, then bind them all together into a data.frame. Or we could do all the calculations at once. We start by simply not filtering when we create our table of parameter estimates.
# get all the things
estimates <- map_df(fits, tidy, .id = "modname")
Start with the version that conditions the weights on the models containing each parameter. First we need to get the weight for each model attached to the right rows. Back to left_join():
estimates1 <- left_join(estimates, aic.table[, c("weights", "modname")], by = "modname")
head(estimates1)
## # A tibble: 6 × 7
## modname term estimate std.error statistic p.value weights
## <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 (Intercept) -0.196 0.228 -0.860 0.390 7.57e-29
## 2 Amount (Intercept) -0.198 0.218 -0.906 0.366 4.89e-22
## 3 Amount Amount 1.32 0.224 5.90 0.00000000864 4.89e-22
## 4 Edge (Intercept) -0.203 0.225 -0.901 0.368 6.05e-27
## 5 Edge Edge 0.760 0.231 3.30 0.00108 6.05e-27
## 6 MnPatch (Intercept) -0.243 0.222 -1.09 0.275 6.04e-25
We have multiple rows for each model, one for every parameter in the model. I restricted the columns of aic.table just for simplicity. Now we are going to repeat all the calculations above for each value of term, and summarize the result into a single data.frame.
estimates1 %>%
  group_by(term) %>%
  mutate(wnorm = weights / sum(weights), var_est = std.error^2) %>%
  summarize(avg_est = sum(estimate * wnorm),
            avg_var = sum(wnorm * (var_est + (estimate - avg_est)^2)))
## # A tibble: 5 × 3
## term avg_est avg_var
## <chr> <dbl> <dbl>
## 1 (Intercept) -0.106 0.0360
## 2 Amount 1.72 0.248
## 3 Edge -2.05 0.204
## 4 Hetero 2.19 0.0529
## 5 MnPatch 0.281 0.0956
Now that’s pretty nice! We did everything in about 7 lines of code.
Now we want to create a dataframe with the model-averaged estimate, the standard error, and a t-test comparing each estimate against the known true value, and then turn it into a nice output table with knitr::kable().
coef_table1 <- estimates1 %>%
  group_by(term) %>%
  mutate(wnorm = weights / sum(weights), var_est = std.error^2) %>%
  summarize(avg_est = sum(estimate * wnorm),
            avg_var = sum(wnorm * (var_est + (estimate - avg_est)^2)))
coef_table1$avg_SE <- sqrt(coef_table1$avg_var)
# known true parameter values, in the same (alphabetical) order as the terms
coef_table1$true_parms <- c(0, 2, -2, 2, 0)
# t-test of each model-averaged estimate against its true value
coef_table1$t_value <- (coef_table1$avg_est - coef_table1$true_parms) / coef_table1$avg_SE
coef_table1$p_value <- 2 * (1 - pt(abs(coef_table1$t_value), df = 350 - 5))
knitr::kable(coef_table1[, -3], digits = 2)
| term | avg_est | avg_SE | true_parms | t_value | p_value |
|---|---|---|---|---|---|
| (Intercept) | -0.11 | 0.19 | 0 | -0.56 | 0.58 |
| Amount | 1.72 | 0.50 | 2 | -0.55 | 0.58 |
| Edge | -2.05 | 0.45 | -2 | -0.11 | 0.91 |
| Hetero | 2.19 | 0.23 | 2 | 0.84 | 0.40 |
| MnPatch | 0.28 | 0.31 | 0 | 0.91 | 0.36 |
model12 <- tidy(fits[[12]])
knitr::kable(model12, digits = 2)
| term | estimate | std.error | statistic | p.value |
|---|---|---|---|---|
| (Intercept) | -0.10 | 0.19 | -0.54 | 0.59 |
| Amount | 1.80 | 0.47 | 3.84 | 0.00 |
| Edge | -2.06 | 0.44 | -4.67 | 0.00 |
| Hetero | 2.19 | 0.23 | 9.57 | 0.00 |
This table shows the estimated coefficients, standard errors, t-statistics, and p-values for each of Model 12’s parameters. The intercept has an estimated coefficient of -0.10, which is not significantly different from zero (p = 0.59). Amount and Edge have significant positive and negative effects, respectively, on the response variable, with estimated coefficients of 1.80 and -2.06 (both p < 0.001). Hetero also has a significant positive effect, with an estimated coefficient of 2.19 (p < 0.001).
Now let’s do it with the other option, assuming the estimates are zero when a parameter isn’t present in the model. We need to flesh out our dataframe estimates with the rows that are missing. A full join will do this, but first we need a table with all the coefficients and all the model names in it.
parms <- crossing(term = c("(Intercept)", "Amount", "Edge", "Hetero", "MnPatch"),
                  modname = modnames)
estimates2 <- full_join(estimates, parms)
## Joining with `by = join_by(modname, term)`
Now we have a table with 5 parameters x 16 models = 80 rows. We can use nearly the same code as before, just skipping the normalization of the weights.
estimates2 <- left_join(estimates2, aic.table[, c("weights", "modname")], by = "modname")
estimates2 %>%
  group_by(term) %>%
  mutate(estimate = if_else(is.na(estimate), 0, estimate),
         std.error = if_else(is.na(std.error), 0, std.error),
         var_est = std.error^2) %>%
  summarize(avg_est = sum(estimate * weights),
            avg_var = sum(weights * (var_est + (estimate - avg_est)^2)))
## # A tibble: 5 × 3
## term avg_est avg_var
## <chr> <dbl> <dbl>
## 1 (Intercept) -0.106 0.0360
## 2 Amount 1.71 0.275
## 3 Edge -2.05 0.205
## 4 Hetero 2.19 0.0529
## 5 MnPatch 0.102 0.0528
# the full estimates2 table; the rows added by the join still have NA estimates here
knitr::kable(estimates2, digits = 3)
| modname | term | estimate | std.error | statistic | p.value | weights |
|---|---|---|---|---|---|---|
| 1 | (Intercept) | -0.196 | 0.228 | -0.860 | 0.390 | 0.000 |
| Amount | (Intercept) | -0.198 | 0.218 | -0.906 | 0.366 | 0.000 |
| Amount | Amount | 1.321 | 0.224 | 5.900 | 0.000 | 0.000 |
| Edge | (Intercept) | -0.203 | 0.225 | -0.901 | 0.368 | 0.000 |
| Edge | Edge | 0.760 | 0.231 | 3.297 | 0.001 | 0.000 |
| MnPatch | (Intercept) | -0.243 | 0.222 | -1.093 | 0.275 | 0.000 |
| MnPatch | MnPatch | 1.042 | 0.231 | 4.520 | 0.000 | 0.000 |
| Hetero | (Intercept) | -0.118 | 0.195 | -0.604 | 0.546 | 0.000 |
| Hetero | Hetero | 2.174 | 0.189 | 11.520 | 0.000 | 0.000 |
| Amount + Edge | (Intercept) | -0.181 | 0.213 | -0.851 | 0.396 | 0.000 |
| Amount + Edge | Amount | 3.286 | 0.496 | 6.628 | 0.000 | 0.000 |
| Amount + Edge | Edge | -2.182 | 0.494 | -4.414 | 0.000 | 0.000 |
| Amount + MnPatch | (Intercept) | -0.200 | 0.219 | -0.915 | 0.361 | 0.000 |
| Amount + MnPatch | Amount | 1.273 | 0.346 | 3.684 | 0.000 | 0.000 |
| Amount + MnPatch | MnPatch | 0.063 | 0.349 | 0.181 | 0.856 | 0.000 |
| Amount + Hetero | (Intercept) | -0.116 | 0.195 | -0.595 | 0.552 | 0.000 |
| Amount + Hetero | Amount | -0.076 | 0.249 | -0.305 | 0.760 | 0.000 |
| Amount + Hetero | Hetero | 2.217 | 0.235 | 9.427 | 0.000 | 0.000 |
| Edge + MnPatch | (Intercept) | -0.241 | 0.223 | -1.081 | 0.280 | 0.000 |
| Edge + MnPatch | Edge | 0.096 | 0.315 | 0.305 | 0.760 | 0.000 |
| Edge + MnPatch | MnPatch | 0.975 | 0.319 | 3.057 | 0.002 | 0.000 |
| Edge + Hetero | (Intercept) | -0.102 | 0.193 | -0.526 | 0.600 | 0.001 |
| Edge + Hetero | Edge | -0.610 | 0.232 | -2.632 | 0.009 | 0.001 |
| Edge + Hetero | Hetero | 2.477 | 0.220 | 11.274 | 0.000 | 0.001 |
| MnPatch + Hetero | (Intercept) | -0.122 | 0.195 | -0.625 | 0.533 | 0.000 |
| MnPatch + Hetero | MnPatch | 0.075 | 0.223 | 0.337 | 0.736 | 0.000 |
| MnPatch + Hetero | Hetero | 2.145 | 0.208 | 10.291 | 0.000 | 0.000 |
| Amount + Edge + Hetero | (Intercept) | -0.101 | 0.189 | -0.535 | 0.593 | 0.638 |
| Amount + Edge + Hetero | Amount | 1.798 | 0.468 | 3.840 | 0.000 | 0.638 |
| Amount + Edge + Hetero | Edge | -2.058 | 0.440 | -4.674 | 0.000 | 0.638 |
| Amount + Edge + Hetero | Hetero | 2.186 | 0.228 | 9.565 | 0.000 | 0.638 |
| Amount + Edge + MnPatch | (Intercept) | -0.185 | 0.213 | -0.868 | 0.386 | 0.000 |
| Amount + Edge + MnPatch | Amount | 3.215 | 0.554 | 5.803 | 0.000 | 0.000 |
| Amount + Edge + MnPatch | Edge | -2.185 | 0.495 | -4.414 | 0.000 | 0.000 |
| Amount + Edge + MnPatch | MnPatch | 0.098 | 0.340 | 0.289 | 0.773 | 0.000 |
| Amount + MnPatch + Hetero | (Intercept) | -0.126 | 0.196 | -0.646 | 0.519 | 0.000 |
| Amount + MnPatch + Hetero | Amount | -0.264 | 0.349 | -0.758 | 0.449 | 0.000 |
| Amount + MnPatch + Hetero | MnPatch | 0.241 | 0.312 | 0.771 | 0.441 | 0.000 |
| Amount + MnPatch + Hetero | Hetero | 2.228 | 0.236 | 9.451 | 0.000 | 0.000 |
| Edge + MnPatch + Hetero | (Intercept) | -0.130 | 0.192 | -0.678 | 0.498 | 0.009 |
| Edge + MnPatch + Hetero | Edge | -1.043 | 0.290 | -3.599 | 0.000 | 0.009 |
| Edge + MnPatch + Hetero | MnPatch | 0.678 | 0.276 | 2.458 | 0.014 | 0.009 |
| Edge + MnPatch + Hetero | Hetero | 2.425 | 0.219 | 11.062 | 0.000 | 0.009 |
| Amount + Edge + MnPatch + Hetero | (Intercept) | -0.113 | 0.190 | -0.594 | 0.553 | 0.352 |
| Amount + Edge + MnPatch + Hetero | Amount | 1.593 | 0.521 | 3.058 | 0.002 | 0.352 |
| Amount + Edge + MnPatch + Hetero | Edge | -2.067 | 0.441 | -4.691 | 0.000 | 0.352 |
| Amount + Edge + MnPatch + Hetero | MnPatch | 0.271 | 0.303 | 0.895 | 0.372 | 0.352 |
| Amount + Edge + MnPatch + Hetero | Hetero | 2.198 | 0.229 | 9.599 | 0.000 | 0.352 |
| 1 | Amount | NA | NA | NA | NA | 0.000 |
| Edge | Amount | NA | NA | NA | NA | 0.000 |
| Edge + Hetero | Amount | NA | NA | NA | NA | 0.001 |
| Edge + MnPatch | Amount | NA | NA | NA | NA | 0.000 |
| Edge + MnPatch + Hetero | Amount | NA | NA | NA | NA | 0.009 |
| Hetero | Amount | NA | NA | NA | NA | 0.000 |
| MnPatch | Amount | NA | NA | NA | NA | 0.000 |
| MnPatch + Hetero | Amount | NA | NA | NA | NA | 0.000 |
| 1 | Edge | NA | NA | NA | NA | 0.000 |
| Amount | Edge | NA | NA | NA | NA | 0.000 |
| Amount + Hetero | Edge | NA | NA | NA | NA | 0.000 |
| Amount + MnPatch | Edge | NA | NA | NA | NA | 0.000 |
| Amount + MnPatch + Hetero | Edge | NA | NA | NA | NA | 0.000 |
| Hetero | Edge | NA | NA | NA | NA | 0.000 |
| MnPatch | Edge | NA | NA | NA | NA | 0.000 |
| MnPatch + Hetero | Edge | NA | NA | NA | NA | 0.000 |
| 1 | Hetero | NA | NA | NA | NA | 0.000 |
| Amount | Hetero | NA | NA | NA | NA | 0.000 |
| Amount + Edge | Hetero | NA | NA | NA | NA | 0.000 |
| Amount + Edge + MnPatch | Hetero | NA | NA | NA | NA | 0.000 |
| Amount + MnPatch | Hetero | NA | NA | NA | NA | 0.000 |
| Edge | Hetero | NA | NA | NA | NA | 0.000 |
| Edge + MnPatch | Hetero | NA | NA | NA | NA | 0.000 |
| MnPatch | Hetero | NA | NA | NA | NA | 0.000 |
| 1 | MnPatch | NA | NA | NA | NA | 0.000 |
| Amount | MnPatch | NA | NA | NA | NA | 0.000 |
| Amount + Edge | MnPatch | NA | NA | NA | NA | 0.000 |
| Amount + Edge + Hetero | MnPatch | NA | NA | NA | NA | 0.638 |
| Amount + Hetero | MnPatch | NA | NA | NA | NA | 0.000 |
| Edge | MnPatch | NA | NA | NA | NA | 0.000 |
| Edge + Hetero | MnPatch | NA | NA | NA | NA | 0.001 |
| Hetero | MnPatch | NA | NA | NA | NA | 0.000 |
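If you want the same kind of summary table, with the t-test against the known true values, for the shrinkage estimates, you can reuse the code from the conditional version. Here is a sketch; coef_table2 is just a name I have made up, and the true parameter values and degrees of freedom are assumed to be the same as before:

coef_table2 <- estimates2 %>%
  group_by(term) %>%
  mutate(estimate = if_else(is.na(estimate), 0, estimate),
         std.error = if_else(is.na(std.error), 0, std.error),
         var_est = std.error^2) %>%
  summarize(avg_est = sum(estimate * weights),
            avg_var = sum(weights * (var_est + (estimate - avg_est)^2)))
coef_table2$avg_SE <- sqrt(coef_table2$avg_var)
coef_table2$true_parms <- c(0, 2, -2, 2, 0)
coef_table2$t_value <- (coef_table2$avg_est - coef_table2$true_parms) / coef_table2$avg_SE
coef_table2$p_value <- 2 * (1 - pt(abs(coef_table2$t_value), df = 350 - 5))
knitr::kable(coef_table2[, -3], digits = 2)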
Now the easy way

All those calculations can be done for you by package AICcmodavg.
library(AICcmodavg)
##
## Attaching package: 'AICcmodavg'
## The following objects are masked from 'package:MuMIn':
##
## AICc, DIC, importance
modavg(fits, "Amount", modnames = modnames)
##
## Multimodel inference on "Amount" based on AICc
##
## AICc table used to obtain model-averaged estimate:
##
## K AICc Delta_AICc AICcWt Estimate SE
## Amount 3 1981.46 97.13 0.00 1.32 0.22
## Amount + Edge 4 1964.38 80.06 0.00 3.29 0.50
## Amount + MnPatch 4 1983.47 99.15 0.00 1.27 0.35
## Amount + Hetero 4 1903.69 19.37 0.00 -0.08 0.25
## Amount + Edge + Hetero 5 1884.32 0.00 0.65 1.80 0.47
## Amount + Edge + MnPatch 5 1966.35 82.03 0.00 3.22 0.55
## Amount + MnPatch + Hetero 5 1905.15 20.83 0.00 -0.26 0.35
## Amount + Edge + MnPatch + Hetero 6 1885.58 1.26 0.35 1.59 0.52
##
## Model-averaged estimate: 1.73
## Unconditional SE: 0.5
## 95% Unconditional confidence interval: 0.75, 2.7
modavgShrink(fits, "Amount", modnames = modnames)
##
## Multimodel inference on "Amount" based on AICc
##
## AICc table used to obtain model-averaged estimate with shrinkage:
##
## K AICc Delta_AICc AICcWt Estimate SE
## 1 2 2012.78 128.46 0.00 0.00 0.00
## Amount 3 1981.46 97.13 0.00 1.32 0.22
## Edge 3 2004.05 119.73 0.00 0.00 0.00
## MnPatch 3 1994.85 110.53 0.00 0.00 0.00
## Hetero 3 1901.74 17.42 0.00 0.00 0.00
## Amount + Edge 4 1964.38 80.06 0.00 3.29 0.50
## Amount + MnPatch 4 1983.47 99.15 0.00 1.27 0.35
## Amount + Hetero 4 1903.69 19.37 0.00 -0.08 0.25
## Edge + MnPatch 4 1996.80 112.48 0.00 0.00 0.00
## Edge + Hetero 4 1896.87 12.55 0.00 0.00 0.00
## MnPatch + Hetero 4 1903.67 19.35 0.00 0.00 0.00
## Amount + Edge + Hetero 5 1884.32 0.00 0.65 1.80 0.47
## Amount + Edge + MnPatch 5 1966.35 82.03 0.00 3.22 0.55
## Amount + MnPatch + Hetero 5 1905.15 20.83 0.00 -0.26 0.35
## Edge + MnPatch + Hetero 5 1892.87 8.55 0.01 0.00 0.00
## Amount + Edge + MnPatch + Hetero 6 1885.58 1.26 0.34 1.59 0.52
##
## Model-averaged estimate with shrinkage: 1.71
## Unconditional SE: 0.52
## 95% Unconditional confidence interval: 0.68, 2.74
So that was less code, but pulling it all out into a table for all parameters would be more painful than the dplyr-based manual code above: you have to call the function separately for each parameter.
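If you do want such a table from AICcmodavg, one approach is to loop over the parameter names and collect the pieces. This is only a sketch: it assumes the object returned by modavgShrink() stores the estimate and its unconditional SE in components named Mod.avg.beta and Uncond.SE (check str() on one result if in doubt), and it skips the intercept.

parm_names <- c("Amount", "Edge", "Hetero", "MnPatch")
shrink_table <- map_df(parm_names, function(p) {
  ma <- modavgShrink(fits, p, modnames = modnames)
  tibble(term = p, avg_est = ma$Mod.avg.beta, avg_SE = ma$Uncond.SE)  # assumed component names
})
knitr::kable(shrink_table, digits = 2)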
Finally, let’s make a plot. The design of the plot below was suggested by Shivani Jadeja a couple of years ago.
# scatterplot of the raw data, colored by Amount
gg1 <- ggplot(smithsim, aes(x = Edge, y = Y, color = Amount)) +
  geom_point()
# prediction grid: Edge varies; Amount at its min, median, and max; other covariates at their medians
nd <- expand.grid(Edge = -3:3,
                  Amount = quantile(smithsim$Amount, p = c(0, 0.5, 1)),
                  Hetero = median(smithsim$Hetero),
                  MnPatch = median(smithsim$MnPatch))
nd <- augment(fits[[12]], newdata = nd)  # fitted values from model 12 for each grid row
gg1 + geom_line(aes(y = .fitted, group = Amount), data = nd)
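You can make the analogous partial dependence plot for any other covariate by swapping which variable varies along the x-axis and which is held at fixed quantiles. Here is a sketch for Hetero, still using model 12; the choice of 25 grid points and of Edge as the conditioning variable is just one reasonable option:

gg2 <- ggplot(smithsim, aes(x = Hetero, y = Y, color = Edge)) +
  geom_point()
nd2 <- expand.grid(Hetero = seq(min(smithsim$Hetero), max(smithsim$Hetero), length.out = 25),
                   Edge = quantile(smithsim$Edge, p = c(0, 0.5, 1)),
                   Amount = median(smithsim$Amount),
                   MnPatch = median(smithsim$MnPatch))
nd2 <- augment(fits[[12]], newdata = nd2)
gg2 + geom_line(aes(y = .fitted, group = Edge), data = nd2)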