```{r, echo=FALSE}
# Load the data
library(readr)
load("C:/Users/Dell/Downloads/Mustard.rda")
# View(Mustard)  # interactive inspection only

# Candidate models: main effects only, all two-way interactions,
# and two-way interactions plus further interactions with light
model1 <- lm(weight ~ light + watering + medium, data = Mustard)
model2 <- lm(weight ~ (light + watering + medium)^2, data = Mustard)
model3 <- lm(weight ~ (light + watering + medium)^2 * light, data = Mustard)

# Nested model comparisons via F-tests
anova_result_1_2 <- anova(model1, model2)
anova_result_2_3 <- anova(model2, model3)
print(anova_result_1_2)
print(anova_result_2_3)
```
In summary, the ANOVA comparisons show that the interaction terms added in Model 2 do not significantly improve the fit over Model 1, and the additional terms in Model 3 do not improve on Model 2. Therefore, Model 1, the main-effects model, is the preferred model overall.
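As a small illustrative sketch (not part of the original analysis), the same decision rule can be read directly off the F-test p-values in the `anova()` tables; the 0.05 cutoff is an assumed conventional threshold:

```{r, echo=FALSE}
# Sketch: extract the F-test p-values from the nested model comparisons.
# A non-significant p-value (here, > 0.05 by convention) means the extra
# terms do not significantly improve the fit, so the simpler model is kept.
p_1_vs_2 <- anova_result_1_2$`Pr(>F)`[2]
p_2_vs_3 <- anova_result_2_3$`Pr(>F)`[2]
cat("Model 1 vs Model 2 p-value:", p_1_vs_2, "\n")
cat("Model 2 vs Model 3 p-value:", p_2_vs_3, "\n")
```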
```{r, echo=FALSE}
# Model selection based on AIC and BIC: compare the models and choose
# the one with the lowest AIC/BIC
aic_model1 <- AIC(model1)
aic_model2 <- AIC(model2)
aic_model3 <- AIC(model3)

bic_model1 <- BIC(model1)
bic_model2 <- BIC(model2)
bic_model3 <- BIC(model3)

cat("AIC:", aic_model1, aic_model2, aic_model3, "\n")
cat("BIC:", bic_model1, bic_model2, bic_model3, "\n")
```
The AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) are both measures used for model selection, with lower values indicating a better-fitting model. In the output above:
1. AIC: Model 1 has the lowest value of the three models.
2. BIC: Model 1 also has the lowest value.
For both criteria, the model with the lower value is preferred, so Model 1 is favored by AIC and by BIC alike.
It’s worth noting that AIC tends to favor more complex models, while BIC penalizes complexity more heavily. In this case, both criteria agree that Model 1 is the better choice.
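For reference, both criteria combine a goodness-of-fit term with a complexity penalty, where $k$ is the number of estimated parameters, $n$ the number of observations, and $\hat{L}$ the maximized likelihood:

$$
\mathrm{AIC} = 2k - 2\ln(\hat{L}), \qquad \mathrm{BIC} = k\ln(n) - 2\ln(\hat{L})
$$

Since $\ln(n) > 2$ whenever $n > e^2 \approx 7.4$, BIC charges more per extra parameter than AIC for all but very small samples, which is why it penalizes complexity more heavily.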
So, considering both the ANOVA results and the information criteria (AIC and BIC), Model 1 (main effects only) is the preferred model for these data.
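The same comparison can also be collected into a single table; this is a small sketch reusing the values computed above, and the object name `model_comparison` is purely illustrative:

```{r, echo=FALSE}
# Sketch: side-by-side AIC/BIC comparison of the three candidate models
model_comparison <- data.frame(
  model = c("Model 1 (main effects)",
            "Model 2 (two-way interactions)",
            "Model 3 (extra light interactions)"),
  AIC = c(aic_model1, aic_model2, aic_model3),
  BIC = c(bic_model1, bic_model2, bic_model3)
)
print(model_comparison)
```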
```{r, echo=FALSE}
# Fit the final selected model
final_model <- lm(weight ~ light * watering * medium, data = Mustard)

coefficients <- coef(final_model)
print(coefficients)

# New observation: red light, watering level 3, cotton wool medium
new_data <- data.frame(
  light = "red",
  watering = 3,
  medium = "cottonwool"
)
new_data$watering <- factor(new_data$watering)

predicted_weight <- predict(final_model, newdata = new_data)
print(predicted_weight)
```
Holding the other factors constant, when the light is red, the watering level is 3, and the growing medium is cotton wool, the model predicts an average weight of around 1.73 grams for the mustard plants.
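To convey the uncertainty around this point estimate, the same `predict()` call could also request a confidence interval for the mean weight; this is a sketch, not part of the original output:

```{r, echo=FALSE}
# Sketch: 95% confidence interval around the predicted mean weight
predict(final_model, newdata = new_data, interval = "confidence", level = 0.95)
```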