```{r, echo=FALSE}
library(readr)
load("C:/Users/Dell/Downloads/Mustard.rda")
View(Mustard)
```
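Before fitting any models, it is worth confirming how the variables are stored, since the factor coding of `light`, `watering`, and `medium` affects both the model formulas and the predictions later on. A quick check (a minimal sketch; it only assumes the `Mustard` data frame loaded above):

```{r}
# weight should be numeric; light, watering, and medium should be factors
str(Mustard)
summary(Mustard)
```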

```{r}
# Model 1: main effects only
model1 <- lm(weight ~ light + watering + medium, data = Mustard)

# Model 2: main effects + two-way interactions
model2 <- lm(weight ~ (light + watering + medium)^2, data = Mustard)

# Model 3: main effects + two-way interactions + three-way interaction
model3 <- lm(weight ~ light * watering * medium, data = Mustard)

# Compare Models 1 and 2
anova_result_1_2 <- anova(model1, model2)

# Compare Models 2 and 3
anova_result_2_3 <- anova(model2, model3)
```
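Each call to anova() on two nested models performs a partial F-test: a small p-value indicates that the additional interaction terms significantly improve the fit, while a large one favours the simpler model. One way to display the tests and pull out the p-values is sketched below (the `Pr(>F)` column is standard in anova() output; the second row holds the comparison):

```{r}
print(anova_result_1_2)
print(anova_result_2_3)

# Extract the p-values of the two comparisons
p_1_2 <- anova_result_1_2$`Pr(>F)`[2]
p_2_3 <- anova_result_2_3$`Pr(>F)`[2]
cat("Model 1 vs 2:", p_1_2, " Model 2 vs 3:", p_2_3, "\n")
```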

Model selection based on AIC and BIC

```{r, echo=FALSE}
# Use AIC() and BIC() to compare models and choose the one with the lowest value
aic_model1 <- AIC(model1)
aic_model2 <- AIC(model2)
aic_model3 <- AIC(model3)

bic_model1 <- BIC(model1)
bic_model2 <- BIC(model2)
bic_model3 <- BIC(model3)

# Compare AIC and BIC
cat("AIC:", aic_model1, aic_model2, aic_model3, "\n")
cat("BIC:", bic_model1, bic_model2, bic_model3, "\n")
```
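The same values can be gathered into a small comparison table, which makes the ranking easier to read; a minimal sketch reusing model1 to model3 from above:

```{r}
# One row per model; the row with the smallest AIC/BIC is preferred
model_list <- list(model1 = model1, model2 = model2, model3 = model3)
data.frame(
  AIC = sapply(model_list, AIC),
  BIC = sapply(model_list, BIC)
)
```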

The AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) are both measures used for model selection, with lower values indicating better-fitting models. In the output above, Model 1 has the lowest AIC and the lowest BIC of the three models.

For both criteria the model with the lower value is preferred, so Model 1 is preferable under AIC as well as under BIC.

It’s worth noting that AIC tends to favor more complex models, while BIC penalizes complexity more heavily. In this case, both criteria agree that Model 1 is the better choice.

So, considering the ANOVA results and the information criteria (AIC and BIC), Model 1 (main effects only) is the preferred model for these data.

```{r, echo=FALSE}
# Fit the final model and extract its coefficients
final_model <- lm(weight ~ light * watering * medium, data = Mustard)

coefficients <- coef(final_model)
print(coefficients)
```
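The coefficients alone do not convey their uncertainty. If standard errors, t-tests, and confidence intervals are of interest, they are available from summary() and confint(); a brief, optional check:

```{r}
summary(final_model)   # coefficients with standard errors and p-values
confint(final_model)   # 95% confidence intervals for the coefficients
```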

Predictions from the final model

Use the predict() function to estimate the mean weight for a specific combination of the experimental factors.

```{r, echo=FALSE}
new_data <- data.frame(
  light = "red",
  watering = 3,
  medium = "cottonwool"
)

# 'watering' is a factor in the original data, so convert it in the new
# data frame as well, using the same levels the model was fitted with
new_data$watering <- factor(new_data$watering, levels = levels(Mustard$watering))

# Predict the mean weight for this combination of levels
predicted_weight <- predict(final_model, newdata = new_data)
print(predicted_weight)
```

For a plant grown under red light, with watering level 3 and a cotton-wool medium, the model predicts a mean weight of about 1.73 grams for the mustard plants.
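The point prediction can also be supplemented with an interval; predict() supports this directly. A short sketch using the same new_data as above:

```{r}
# Confidence interval for the mean weight at these settings
predict(final_model, newdata = new_data, interval = "confidence")

# Prediction interval for an individual plant (wider, as it includes residual error)
predict(final_model, newdata = new_data, interval = "prediction")
```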