The model with the highest \(R^2\) and adjusted \(R^2\) is the preferable one among all candidate models; in that case, the quadratic model is preferred.
In other words, the coefficient of determination is the percentage of the variation explained by the regression model.
\(R^2\) is a measure of variation explained by regression.
The following coefficient has a natural interpretation as the proportion of variability in the data that is explained by the regression fit: \(R^2 = SSLR/SST = 1 - SSR/SST\), where SSLR is the sum of squares due to the linear regression, SSR is the sum of squares due to the residuals, and SST is the total sum of squares.
A similar interpretation is given to the adjusted coefficient \(R^2_{adj}\), which is given by \(R^2_{adj} = 1 - MSR/MST\), where MSR is the mean square due to the residuals and MST is the total mean square. Since \(MSR = SSR/(n-p-1)\) and \(MST = SST/(n-1)\) for \(n\) observations and \(p\) predictors, the adjustment accounts for the degrees of freedom used by the fitted model.
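For concreteness, the short sketch below computes \(R^2\) and \(R^2_{adj}\) directly from the sums of squares defined above. It is only a minimal illustration: the synthetic data, the variable names, and the use of Python with NumPy are assumptions of this sketch, not something given in the text.
\begin{verbatim}
import numpy as np

# Illustrative synthetic data (an assumption of this sketch, not from the text).
rng = np.random.default_rng(0)
n, p = 50, 1                          # n observations, p predictors
x = rng.uniform(0, 10, size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=2.0, size=n)

# Least-squares straight-line fit and its fitted values.
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = intercept + slope * x

# Sums of squares as defined in the text.
sst = np.sum((y - y.mean()) ** 2)     # total sum of squares (SST)
ssr = np.sum((y - y_hat) ** 2)        # residual sum of squares (SSR)
sslr = sst - ssr                      # regression sum of squares (SSLR)

r2 = sslr / sst                                        # R^2 = SSLR/SST
r2_adj = 1 - (ssr / (n - p - 1)) / (sst / (n - 1))     # adjusted R^2 = 1 - MSR/MST

print(f"R^2      = {r2:.3f}")
print(f"adj. R^2 = {r2_adj:.3f}")
\end{verbatim}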
%%----------------------------------------------------%%
Equivalently, \(R^2\) provides a measure of how well future outcomes are likely to be predicted by the model.
When there are just two variables (simple linear regression), \(R^2\) coincides with the square of the sample correlation coefficient between them.
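As a quick numerical check of the two-variable case (again on assumed synthetic data; neither the data nor the NumPy calls are taken from the text):
\begin{verbatim}
import numpy as np

# Assumed data: in simple linear regression, R^2 should equal the squared
# sample correlation between x and y.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 3.0 - 0.8 * x + rng.normal(scale=1.5, size=100)

slope, intercept = np.polyfit(x, y, deg=1)
y_hat = intercept + slope * x

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
r_xy = np.corrcoef(x, y)[0, 1]

print(f"R^2          = {r2:.6f}")
print(f"corr(x,y)^2  = {r_xy ** 2:.6f}")   # agrees with R^2 up to rounding
\end{verbatim}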
%%----------------------------------------------------%%
The adjusted \(R^2\) value is found in the summary output for a fitted model. It is called \emph{adjusted} because it takes into account the number of predictor variables being used. The law of parsimony states that the simplest model that adequately explains the outcomes is the best. The candidate model with the higher adjusted \(R^2\) is therefore considered preferable.
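To illustrate this comparison, the sketch below fits a linear and a quadratic candidate model and reads the adjusted \(R^2\) of each from the fitted results; the quadratic model would be preferred here because it has the higher adjusted \(R^2\). The synthetic data, the choice of Python with statsmodels, and the variable names are assumptions of this sketch rather than anything prescribed by the text.
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

# Assumed synthetic data with genuine curvature, so the quadratic term matters.
rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=80)
y = 1.0 + 0.5 * x + 2.0 * x**2 + rng.normal(scale=1.0, size=80)

# Candidate 1: straight line in x.
fit_lin = sm.OLS(y, sm.add_constant(x)).fit()

# Candidate 2: quadratic in x.
fit_quad = sm.OLS(y, sm.add_constant(np.column_stack([x, x**2]))).fit()

# Adjusted R^2 appears in the summary output and is also available directly.
print(f"linear    adj. R^2 = {fit_lin.rsquared_adj:.3f}")
print(f"quadratic adj. R^2 = {fit_quad.rsquared_adj:.3f}")
# print(fit_quad.summary())   # full summary table for the preferred model
\end{verbatim}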