This chapter contains articles describing statistical tests to use for comparing means. These tests include:
- T-test
- Wilcoxon test
- ANOVA test
- Kruskal-Wallis test
How is this chapter organized?
- Comparing a one-sample mean to a standard known mean:
  - One-Sample T-test (parametric)
  - One-Sample Wilcoxon Test (non-parametric)
- Comparing the means of two independent groups:
  - Unpaired Two-Samples T-test (parametric)
  - Unpaired Two-Samples Wilcoxon Test (non-parametric)
- Comparing the means of paired samples:
  - Paired Samples T-test (parametric)
  - Paired Samples Wilcoxon Test (non-parametric)
- Comparing the means of more than two groups:
  - Analysis of variance (ANOVA, parametric): One-Way ANOVA Test in R, Two-Way ANOVA Test in R, MANOVA Test in R (Multivariate Analysis of Variance)
  - Kruskal-Wallis Test in R (non-parametric alternative to one-way ANOVA)
What is the one-sample t-test?
The one-sample t-test is used to compare the mean of one sample to a known standard (or theoretical/hypothetical) mean (μ).
Note that the one-sample t-test can be used only when the data are normally distributed. This can be checked using the Shapiro-Wilk test.
In statistics, we can define the corresponding null hypothesis (H0) as follows:

1. H0: m = μ
2. H0: m ≤ μ
3. H0: m ≥ μ

The corresponding alternative hypotheses (Ha) are, respectively:

1. Ha: m ≠ μ (different)
2. Ha: m > μ (greater)
3. Ha: m < μ (less)
Note that:
- Hypothesis 1) is called a two-tailed test
- Hypotheses 2) and 3) are called one-tailed tests
Formula of one-sample t-test The t-statistic can be calculated as follows:

$$t = \frac{m - \mu}{s / \sqrt{n}}$$
where:

- m is the sample mean
- n is the sample size
- s is the sample standard deviation, with n − 1 degrees of freedom
- μ is the theoretical value
We can compute the p-value corresponding to the absolute value of the t-test statistic (|t|) for the degrees of freedom (df): df = n − 1.
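As a quick illustration, the statistic and the p-value can be computed by hand in R. Below is a minimal sketch using a small hypothetical vector x (not from this chapter's examples); t.test() performs these steps for you:

# Hypothetical data and theoretical mean (illustration only)
x <- c(18.2, 21.4, 19.9, 17.5, 20.3)
mu <- 20
n <- length(x)
# t = (m - mu) / (s / sqrt(n))
t_stat <- (mean(x) - mu) / (sd(x) / sqrt(n))
# Two-tailed p-value from the t distribution with n - 1 df
p_value <- 2 * pt(-abs(t_stat), df = n - 1)
t_stat
p_value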
How to interpret the results?
If the p-value is less than or equal to the significance level 0.05, we can reject the null hypothesis and accept the alternative hypothesis. In other words, we conclude that the sample mean is significantly different from the theoretical mean.
R function to compute one-sample t-test To perform the one-sample t-test, the R function t.test() can be used as follows:

t.test(x, mu = 0, alternative = "two.sided")

- x: a numeric vector containing your data values
- mu: the theoretical mean. Default is 0, but you can change it.
- alternative: the alternative hypothesis. Allowed value is one of "two.sided" (default), "greater" or "less".
set.seed(1234)
my_data <- data.frame(
name = paste0(rep("M_", 10), 1:10),
weight = round(rnorm(10, 20, 2), 1)
)
my_data
## name weight
## 1 M_1 17.6
## 2 M_2 20.6
## 3 M_3 22.2
## 4 M_4 15.3
## 5 M_5 20.9
## 6 M_6 21.0
## 7 M_7 18.9
## 8 M_8 18.9
## 9 M_9 18.9
## 10 M_10 18.2
#rep("M_", 10) creates a vector repeating the string "M_" ten times.
#1:10 generates a sequence of numbers from 1 to 10.
#paste0() is a function in R used to concatenate vectors after converting them to character strings.
#So, paste0(rep("M_", 10), 1:10) creates a vector of strings like "M_1", "M_2", ..., "M_10". Each string is formed by pasting "M_" with a number from 1 to 10.
#This vector becomes the name column of the my_data data frame.
#weight = round(rnorm(10, 20, 2), 1):
#rnorm(10, 20, 2) generates 10 random numbers from a normal distribution with mean 20 and standard deviation 2.
#round(..., 1) rounds each of these random numbers to one decimal place.
#This rounded vector becomes the weight column of the my_data data frame.
summary(my_data$weight)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 15.30 18.38 18.90 19.25 20.82 22.20
- Min.: the minimum value
- 1st Qu.: the first quartile. 25% of values are lower than this.
- Median: the median value. Half the values are lower; half are higher.
- 3rd Qu.: the third quartile. 75% of values are lower than this.
- Max.: the maximum value
Visualize your data using box plots
library(ggplot2)
library(ggpubr)
ggboxplot(my_data$weight,
ylab = "Weight (g)", xlab = FALSE,
ggtheme = theme_minimal())
Preliminary test to check one-sample t-test assumptions

- Is this a large sample? No, because n < 30.
- Since the sample size is not large enough (less than 30; central limit theorem), we need to check whether the data follow a normal distribution.
How to check the normality?
Briefly, it’s possible to use the Shapiro-Wilk normality test and to look at the normality plot.
Shapiro-Wilk test:

- Null hypothesis: the data are normally distributed
- Alternative hypothesis: the data are not normally distributed
shapiro.test(my_data$weight) # => p-value = 0.6993
##
## Shapiro-Wilk normality test
##
## data: my_data$weight
## W = 0.9526, p-value = 0.6993
From the output, the p-value is greater than the significance level 0.05, implying that the distribution of the data is not significantly different from a normal distribution. In other words, we can assume normality.
Visual inspection of the data normality using Q-Q plots (quantile-quantile plots). Q-Q plot draws the correlation between a given sample and the normal distribution.
library("ggpubr")
ggqqplot(my_data$weight, ylab = "Men's weight",
ggtheme = theme_minimal())
From the normality plot, we conclude that the data may come from a normal distribution.
Note that, if the data are not normally distributed, it's recommended to use the non-parametric one-sample Wilcoxon signed-rank test.
Compute one-sample t-test We want to know whether the average weight of the mice differs from 25 g (two-tailed test).
res <- t.test(my_data$weight, mu = 25, alternative = "two.sided", paired = FALSE)
res
##
## One Sample t-test
##
## data: my_data$weight
## t = -9.0783, df = 9, p-value = 7.953e-06
## alternative hypothesis: true mean is not equal to 25
## 95 percent confidence interval:
## 17.8172 20.6828
## sample estimates:
## mean of x
## 19.25
In the result above :
- t is the t-test statistic value (t = -9.078),
- df is the degrees of freedom (df = 9),
- p-value is the significance level of the t-test (p-value = 7.953 × 10^-6),
- conf.int is the confidence interval of the mean at 95% (conf.int = [17.8172, 20.6828]),
- sample estimates is the mean value of the sample (mean = 19.25).
Note that:
if you want to test whether the mean weight of mice is less than 25g (one-tailed test), type this:
t.test(my_data$weight, mu = 25,
alternative = "less")
##
## One Sample t-test
##
## data: my_data$weight
## t = -9.0783, df = 9, p-value = 3.977e-06
## alternative hypothesis: true mean is less than 25
## 95 percent confidence interval:
## -Inf 20.41105
## sample estimates:
## mean of x
## 19.25
Or, if you want to test whether the mean weight of mice is greater than 25g (one-tailed test), type this:
t.test(my_data$weight, mu = 25,
alternative = "greater")
##
## One Sample t-test
##
## data: my_data$weight
## t = -9.0783, df = 9, p-value = 1
## alternative hypothesis: true mean is greater than 25
## 95 percent confidence interval:
## 18.08895 Inf
## sample estimates:
## mean of x
## 19.25
Interpretation of the result The p-value of the test is 7.953 × 10^-6, which is less than the significance level alpha = 0.05. We can conclude that the mean weight of the mice is significantly different from 25 g (p-value = 7.953 × 10^-6).
Access to the values returned by t.test() function The result of t.test() function is a list containing the following components:
- statistic: the value of the t-test statistic
- parameter: the degrees of freedom for the t-test statistic
- p.value: the p-value for the test
- conf.int: a confidence interval for the mean appropriate to the specified alternative hypothesis
- estimate: the means of the two groups being compared (in the case of an independent t-test) or the difference in means (in the case of a paired t-test)
#printing the p-value
res$p.value
## [1] 7.953383e-06
# printing the mean
res$estimate
## mean of x
## 19.25
# printing the confidence interval
res$conf.int
## [1] 17.8172 20.6828
## attr(,"conf.level")
## [1] 0.95
What’s one-sample Wilcoxon signed rank test?
The one-sample Wilcoxon signed rank test is a non-parametric alternative to one-sample t-test when the data cannot be assumed to be normally distributed.
It’s used to determine whether the median of the sample is equal to a known standard value (i.e. theoretical value).
Note that, the data should be distributed symmetrically around the median. In other words, there should be roughly the same number of values above and below the median.
Typical research questions are:
- whether the median (m) of the sample is equal to the theoretical value (m0)?
- whether the median (m) of the sample is less than the theoretical value (m0)?
- whether the median (m) of the sample is greater than the theoretical value (m0)?
In statistics, we can define the corresponding null hypothesis (H0) as follows:

1. H0: m = m0
2. H0: m ≤ m0
3. H0: m ≥ m0

The corresponding alternative hypotheses (Ha) are, respectively:

1. Ha: m ≠ m0 (different)
2. Ha: m > m0 (greater)
3. Ha: m < m0 (less)

Note that:

- Hypothesis 1) is called a two-tailed test
- Hypotheses 2) and 3) are called one-tailed tests
R function to compute one-sample Wilcoxon test To perform a one-sample Wilcoxon test, the R function wilcox.test() can be used as follows:

wilcox.test(x, mu = 0, alternative = "two.sided")

- x: a numeric vector containing your data values
- mu: the theoretical mean/median value. Default is 0, but you can change it.
- alternative: the alternative hypothesis. Allowed value is one of "two.sided" (default), "greater" or "less".
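Note that wilcox.test() can also return a location estimate (the pseudomedian) and its confidence interval via conf.int = TRUE; a minimal sketch on the same mice data (exact = FALSE suppresses the tie warning discussed below):

# Request the pseudomedian and its confidence interval
wilcox.test(my_data$weight, mu = 25, conf.int = TRUE, exact = FALSE)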
library(ggpubr)
ggboxplot(my_data$weight,
ylab = "Weight (g)", xlab = FALSE,
ggtheme = theme_minimal())
Compute one-sample Wilcoxon test We want to know whether the median weight of the mice differs from 25 g (two-tailed test).
# One-sample wilcoxon test
res <- wilcox.test(my_data$weight, mu = 25)
## Warning in wilcox.test.default(my_data$weight, mu = 25): cannot compute exact
## p-value with ties
# Printing the results
res
##
## Wilcoxon signed rank test with continuity correction
##
## data: my_data$weight
## V = 0, p-value = 0.005793
## alternative hypothesis: true location is not equal to 25
# print only the p-value
res$p.value
## [1] 0.005793045
The p-value of the test is 0.005793, which is less than the significance level alpha = 0.05. We can reject the null hypothesis and conclude that the median weight of the mice is significantly different from 25 g (p-value = 0.005793).
Note that:
if you want to test whether the median weight of mice is less than 25g (one-tailed test), type this:
wilcox.test(my_data$weight, mu = 25,
alternative = "less")
## Warning in wilcox.test.default(my_data$weight, mu = 25, alternative = "less"):
## cannot compute exact p-value with ties
##
## Wilcoxon signed rank test with continuity correction
##
## data: my_data$weight
## V = 0, p-value = 0.002897
## alternative hypothesis: true location is less than 25
Or, if you want to test whether the median weight of mice is greater than 25g (one-tailed test), type this:
wilcox.test(my_data$weight, mu = 25,
alternative = "greater")
## Warning in wilcox.test.default(my_data$weight, mu = 25, alternative =
## "greater"): cannot compute exact p-value with ties
##
## Wilcoxon signed rank test with continuity correction
##
## data: my_data$weight
## V = 0, p-value = 0.9979
## alternative hypothesis: true location is greater than 25
Comparing the means of two independent groups

3.1 Unpaired two-samples t-test (parametric)

- What is the unpaired two-samples t-test?
- Research questions and statistical hypotheses
- Formula of unpaired two-samples t-test
- Visualize your data and compute unpaired two-samples t-test in R
- R function to compute unpaired two-samples t-test
- Visualize your data using box plots
- Preliminary test to check independent t-test assumptions
- Compute unpaired two-samples t-test
- Interpretation of the result
What is the unpaired two-samples t-test?
The unpaired two-samples t-test is used to compare the mean of two independent groups.
For example, suppose that we have measured the weight of 100 individuals: 50 women (group A) and 50 men (group B). We want to know if the mean weight of women (mA) is significantly different from that of men (mB).
In this case, we have two unrelated (i.e., independent or unpaired) groups of samples. Therefore, it’s possible to use an independent t-test to evaluate whether the means are different.
Note that the unpaired two-samples t-test can be used only under certain conditions:

- when the two groups of samples (A and B) being compared are normally distributed. This can be checked using the Shapiro-Wilk test.
- and when the variances of the two groups are equal. This can be checked using the F-test.

Typical research questions are:

- whether the mean of group A (mA) is equal to the mean of group B (mB)?
- whether the mean of group A (mA) is less than the mean of group B (mB)?
- whether the mean of group A (mA) is greater than the mean of group B (mB)?
In statistics, we can define the corresponding null hypothesis (H0) as follows:

1. H0: mA = mB
2. H0: mA ≤ mB
3. H0: mA ≥ mB

The corresponding alternative hypotheses (Ha) are, respectively:

1. Ha: mA ≠ mB (different)
2. Ha: mA > mB (greater)
3. Ha: mA < mB (less)

Note that:

- Hypothesis 1) is called a two-tailed test
- Hypotheses 2) and 3) are called one-tailed tests
Formula of unpaired two-samples t-test Classical t-test: If the variances of the two groups are equivalent (homoscedasticity), the t-test value comparing the two samples (A and B) can be calculated as follows:

$$t = \frac{m_A - m_B}{\sqrt{S^2 \left( \frac{1}{n_A} + \frac{1}{n_B} \right)}}$$

where:

- mA and mB represent the mean values of groups A and B, respectively,
- nA and nB represent the sizes of groups A and B, respectively,
- S² is an estimator of the pooled variance of the two groups:

$$S^2 = \frac{\sum{(x - m_A)^2} + \sum{(x - m_B)^2}}{n_A + n_B - 2}$$

with degrees of freedom (df): df = nA + nB − 2.
Welch t-statistic:

If the variances of the two groups being compared are different (heteroscedasticity), it's possible to use the Welch t-test, an adaptation of the Student t-test:

$$t = \frac{m_A - m_B}{\sqrt{\frac{S_A^2}{n_A} + \frac{S_B^2}{n_B}}}$$

Unlike the classic Student's t-test, the Welch t-test formula involves the variance of each of the two groups (S²A and S²B) being compared. In other words, it does not use the pooled variance S².

A p-value can be computed for the corresponding absolute value of the t-statistic (|t|).

Note that the Welch t-test is considered the safer option. Usually, the results of the classical t-test and the Welch t-test are very similar unless both the group sizes and the standard deviations are very different.
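To see the two variants side by side, here is a self-contained sketch on made-up vectors (a and b are illustrative, not data from this chapter):

# Made-up data: similar means, different spreads
set.seed(42)
a <- rnorm(20, mean = 10, sd = 1)
b <- rnorm(20, mean = 11, sd = 3)
# Classical (pooled-variance) Student t-test
t.test(a, b, var.equal = TRUE)
# Welch t-test (R's default, var.equal = FALSE)
t.test(a, b)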
How to interpret the results?
If the p-value is less than or equal to the significance level 0.05, we can reject the null hypothesis and accept the alternative hypothesis. In other words, we can conclude that the mean values of groups A and B are significantly different.
R function to compute unpaired two-samples t-test To perform a two-samples t-test comparing the means of two independent samples (x and y), the R function t.test() can be used as follows:

t.test(x, y, alternative = "two.sided", var.equal = FALSE)

- x, y: numeric vectors
- alternative: the alternative hypothesis. Allowed value is one of "two.sided" (default), "greater" or "less".
- var.equal: a logical variable indicating whether to treat the two variances as being equal. If TRUE, then the pooled variance is used to estimate the variance; otherwise, the Welch test is used.
#Data in two numeric vectors
women_weight <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)
men_weight <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4)
# Create a data frame
my_data <- data.frame(
group = rep(c("Woman", "Man"), each = 9),
weight = c(women_weight, men_weight)
)
We want to know whether the average women's weight differs from the average men's weight.
Check your data
# Print all data
print(my_data)
## group weight
## 1 Woman 38.9
## 2 Woman 61.2
## 3 Woman 73.3
## 4 Woman 21.8
## 5 Woman 63.4
## 6 Woman 64.6
## 7 Woman 48.4
## 8 Woman 48.8
## 9 Woman 48.5
## 10 Man 67.8
## 11 Man 60.0
## 12 Man 63.4
## 13 Man 76.0
## 14 Man 89.4
## 15 Man 73.3
## 16 Man 67.3
## 17 Man 61.3
## 18 Man 62.4
It’s possible to compute summary statistics (mean and sd) by groups. The dplyr package can be used.
To install dplyr package, type this:
#install.packages("dplyr")
Compute summary statistics by groups:
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
group_by(my_data, group) %>%
summarise(
count = n(),
mean = mean(weight, na.rm = TRUE),
sd = sd(weight, na.rm = TRUE)
)
## # A tibble: 2 × 4
## group count mean sd
## <chr> <int> <dbl> <dbl>
## 1 Man 9 69.0 9.38
## 2 Woman 9 52.1 15.6
#Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
color = "group", palette = c("#00AFBB", "#E7B800"),
ylab = "Weight", xlab = "Groups")
Preliminary test to check independent t-test assumptions

Assumption 1: Are the two samples independent? Yes, since the samples from men and women are not related.

Assumption 2: Do the data from each of the two groups follow a normal distribution? Use the Shapiro-Wilk normality test as described at: Normality Test in R.

- Null hypothesis: the data are normally distributed
- Alternative hypothesis: the data are not normally distributed
We’ll use the functions with() and shapiro.test() to compute Shapiro-Wilk test for each group of samples.
# Shapiro-Wilk normality test for Women's weights
with(my_data, shapiro.test(weight[group == "Woman"])) # p = 0.6
##
## Shapiro-Wilk normality test
##
## data: weight[group == "Woman"]
## W = 0.94266, p-value = 0.6101
#Shapiro-Wilk normality test for Men's weights
with(my_data, shapiro.test(weight[group == "Man"]))# p = 0.1
##
## Shapiro-Wilk normality test
##
## data: weight[group == "Man"]
## W = 0.86425, p-value = 0.1066
From the output, the two p-values are greater than the significance level 0.05, implying that the distributions of the data are not significantly different from the normal distribution. In other words, we can assume normality.
Note that, if the data are not normally distributed, it's recommended to use the non-parametric two-samples Wilcoxon rank-sum test.
Assumption 3: Do the two populations have the same variances? We'll use the F-test to check the homogeneity of variances:
res.ftest <- var.test(weight ~ group, data = my_data)
res.ftest
##
## F test to compare two variances
##
## data: weight by group
## F = 0.36134, num df = 8, denom df = 8, p-value = 0.1714
## alternative hypothesis: true ratio of variances is not equal to 1
## 95 percent confidence interval:
## 0.08150656 1.60191315
## sample estimates:
## ratio of variances
## 0.3613398
The p-value of the F-test is p = 0.1713596, which is greater than the significance level alpha = 0.05. In conclusion, there is no significant difference between the variances of the two sets of data. Therefore, we can use the classic t-test, which assumes equality of the two variances.
Compute unpaired two-samples t-test Question : Is there any significant difference between women and men weights?
# Compute t-test
res <- t.test(women_weight, men_weight, var.equal = TRUE)
res
##
## Two Sample t-test
##
## data: women_weight and men_weight
## t = -2.7842, df = 16, p-value = 0.01327
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -29.748019 -4.029759
## sample estimates:
## mean of x mean of y
## 52.10000 68.98889
# Compute t-test
res <- t.test(weight ~ group, data = my_data, var.equal = TRUE)
res
##
## Two Sample t-test
##
## data: weight by group
## t = 2.7842, df = 16, p-value = 0.01327
## alternative hypothesis: true difference in means between group Man and group Woman is not equal to 0
## 95 percent confidence interval:
## 4.029759 29.748019
## sample estimates:
## mean in group Man mean in group Woman
## 68.98889 52.10000
As you can see, the two methods give the same results.
In the result above :
- t is the t-test statistic value (t = 2.784),
- df is the degrees of freedom (df = 16),
- p-value is the significance level of the t-test (p-value = 0.01327),
- conf.int is the confidence interval of the difference in means at 95% (conf.int = [4.0298, 29.748]),
- sample estimates are the mean values of the two groups (mean = 68.99, 52.1).
Note that:
if you want to test whether the average men’s weight is less than the average women’s weight, type this:
t.test(weight ~ group, data = my_data,
var.equal = TRUE, alternative = "less")
##
## Two Sample t-test
##
## data: weight by group
## t = 2.7842, df = 16, p-value = 0.9934
## alternative hypothesis: true difference in means between group Man and group Woman is less than 0
## 95 percent confidence interval:
## -Inf 27.47924
## sample estimates:
## mean in group Man mean in group Woman
## 68.98889 52.10000
Or, if you want to test whether the average men’s weight is greater than the average women’s weight, type this
t.test(weight ~ group, data = my_data,
var.equal = TRUE, alternative = "greater")
##
## Two Sample t-test
##
## data: weight by group
## t = 2.7842, df = 16, p-value = 0.006633
## alternative hypothesis: true difference in means between group Man and group Woman is greater than 0
## 95 percent confidence interval:
## 6.298536 Inf
## sample estimates:
## mean in group Man mean in group Woman
## 68.98889 52.10000
Interpretation of the result The p-value of the test is 0.01327, which is less than the significance level alpha = 0.05. We can conclude that men’s average weight is significantly different from women’s average weight with a p-value = 0.01327.
Access to the values returned by t.test() function
The result of t.test() function is a list containing the following components:
- statistic: the value of the t-test statistic
- parameter: the degrees of freedom for the t-test statistic
- p.value: the p-value for the test
- conf.int: a confidence interval for the mean appropriate to the specified alternative hypothesis
- estimate: the means of the two groups being compared (in the case of an independent t-test) or the difference in means (in the case of a paired t-test)
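For instance, with the result saved above in res, each component can be extracted with the $ operator:

# Extract individual components of the test result
res$statistic # t-test statistic
res$parameter # degrees of freedom
res$p.value # p-value
res$conf.int # 95% confidence interval
res$estimate # group means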
Unpaired Two-Samples Wilcoxon Test in R
R function to compute Wilcoxon test To perform a two-samples Wilcoxon test comparing two independent samples (x and y), the R function wilcox.test() can be used as follows:

wilcox.test(x, y, alternative = "two.sided")

- x, y: numeric vectors
- alternative: the alternative hypothesis. Allowed value is one of "two.sided" (default), "greater" or "less".
Here, we’ll use an example data set, which contains the weight of 18 individuals (9 women and 9 men):
# Data in two numeric vectors
women_weight <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)
men_weight <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4)
# Create a data frame
my_data <- data.frame(
group = rep(c("Woman", "Man"), each = 9),
weight = c(women_weight, men_weight)
)
We want to know whether the median women's weight differs from the median men's weight.
#Check your data
print(my_data)
## group weight
## 1 Woman 38.9
## 2 Woman 61.2
## 3 Woman 73.3
## 4 Woman 21.8
## 5 Woman 63.4
## 6 Woman 64.6
## 7 Woman 48.4
## 8 Woman 48.8
## 9 Woman 48.5
## 10 Man 67.8
## 11 Man 60.0
## 12 Man 63.4
## 13 Man 76.0
## 14 Man 89.4
## 15 Man 73.3
## 16 Man 67.3
## 17 Man 61.3
## 18 Man 62.4
It’s possible to compute summary statistics (median and interquartile range (IQR)) by groups. The dplyr package can be used.
library(dplyr)
group_by(my_data, group) %>%
summarise(
count = n(),
median = median(weight, na.rm = TRUE),
IQR = IQR(weight, na.rm = TRUE)
)
## # A tibble: 2 × 4
## group count median IQR
## <chr> <int> <dbl> <dbl>
## 1 Man 9 67.3 10.9
## 2 Woman 9 48.8 15
Visualize your data using box plots
#Visualize your data:
# Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
color = "group", palette = c("#00AFBB", "#E7B800"),
ylab = "Weight", xlab = "Groups")
Compute unpaired two-samples Wilcoxon test Question : Is there any significant difference between women and men weights?
res <- wilcox.test(women_weight, men_weight)
## Warning in wilcox.test.default(women_weight, men_weight): cannot compute exact
## p-value with ties
res
##
## Wilcoxon rank sum test with continuity correction
##
## data: women_weight and men_weight
## W = 15, p-value = 0.02712
## alternative hypothesis: true location shift is not equal to 0
It will give a warning message saying that "cannot compute exact p-value with ties". This comes from the assumption of a Wilcoxon test that the responses are continuous. You can suppress this message by adding the argument exact = FALSE, but the result will be the same.
res <- wilcox.test(weight ~ group, data = my_data,
exact = FALSE)
res
##
## Wilcoxon rank sum test with continuity correction
##
## data: weight by group
## W = 66, p-value = 0.02712
## alternative hypothesis: true location shift is not equal to 0
As you can see, the two methods give the same results.
The p-value of the test is 0.02712, which is less than the significance level alpha = 0.05. We can conclude that men’s median weight is significantly different from women’s median weight with a p-value = 0.02712.
Note that:
if you want to test whether the median men’s weight is less than the median women’s weight, type this:
wilcox.test(weight ~ group, data = my_data,
exact = FALSE, alternative = "less")
##
## Wilcoxon rank sum test with continuity correction
##
## data: weight by group
## W = 66, p-value = 0.9892
## alternative hypothesis: true location shift is less than 0
Or, if you want to test whether the median men’s weight is greater than the median women’s weight, type this
wilcox.test(weight ~ group, data = my_data,
exact = FALSE, alternative = "greater")
##
## Wilcoxon rank sum test with continuity correction
##
## data: weight by group
## W = 66, p-value = 0.01356
## alternative hypothesis: true location shift is greater than 0
Paired Samples T-test in R
What is paired samples t-test?
The paired samples t-test is used to compare the means between two related groups of samples. In this case, you have two values (i.e., pair of values) for the same samples. This article describes how to compute paired samples t-test using R software.
As an example, 10 mice received a treatment X for 3 months. We want to know whether the treatment X has an impact on the weight of the mice.
To answer this question, the weight of the 10 mice has been measured before and after the treatment. This gives us 10 values before treatment and 10 values after treatment, obtained from measuring the weight of the same mice twice.
In such situations, the paired t-test can be used to compare the mean weights before and after treatment.
Paired t-test analysis is performed as follow:
Paired t-test analysis is performed as follows:

1. Calculate the difference (d) between each pair of values.
2. Compute the mean (m) and the standard deviation (s) of d.
3. Compare the average difference to 0. If there is any significant difference between the two pairs of samples, then the mean of d (m) is expected to be far from 0.

The paired t-test can be used only when the difference d is normally distributed. This can be checked using the Shapiro-Wilk test.
Typical research questions are:
- whether the mean difference (m) is equal to 0?
- whether the mean difference (m) is less than 0?
- whether the mean difference (m) is greater than 0?
In statistics, we can define the corresponding null hypothesis (H0) as follows:

1. H0: m = 0
2. H0: m ≤ 0
3. H0: m ≥ 0

The corresponding alternative hypotheses (Ha) are, respectively:

1. Ha: m ≠ 0 (different)
2. Ha: m > 0 (greater)
3. Ha: m < 0 (less)

Note that:

- Hypothesis 1) is called a two-tailed test
- Hypotheses 2) and 3) are called one-tailed tests
Formula of paired samples t-test

The t-statistic can be calculated as follows, where m and s are the mean and the standard deviation of the differences d, and n is the number of pairs:

$$t = \frac{m}{s / \sqrt{n}}$$

We can compute the p-value corresponding to the absolute value of the t-test statistic (|t|) for the degrees of freedom (df): df = n − 1.

If the p-value is less than or equal to 0.05, we can conclude that the difference between the two paired samples is significant.
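As a minimal sketch, the paired t-statistic can be reproduced by hand from the differences (assuming before and after are the paired vectors defined in the chunk below; t.test(..., paired = TRUE) does this internally):

# Paired t-statistic from the differences d (illustration)
d <- before - after
n <- length(d)
t_stat <- mean(d) / (sd(d) / sqrt(n))
p_value <- 2 * pt(-abs(t_stat), df = n - 1)
t_stat
p_value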
Visualize your data and compute paired t-test in R
R function to compute paired t-test
To perform a paired samples t-test comparing the means of two paired samples (x and y), the R function t.test() can be used as follows:

t.test(x, y, paired = TRUE, alternative = "two.sided")

- x, y: numeric vectors
- paired: a logical value specifying that we want to compute a paired t-test
- alternative: the alternative hypothesis. Allowed value is one of "two.sided" (default), "greater" or "less".
#Data in two numeric vectors
# ++++++++++++++++++++++++++
# Weight of the mice before treatment
before <-c(200.1, 190.9, 192.7, 213, 241.4, 196.9, 172.2, 185.5, 205.2, 193.7)
# Weight of the mice after treatment
after <-c(392.9, 393.2, 345.1, 393, 434, 427.9, 422, 383.9, 392.3, 352.2)
# Create a data frame
my_data <- data.frame(
group = rep(c("before", "after"), each = 10),
weight = c(before, after)
)
We want to know whether there is any significant difference in the mean weights before and after treatment.
Check your data
# Print all data
print(my_data)
## group weight
## 1 before 200.1
## 2 before 190.9
## 3 before 192.7
## 4 before 213.0
## 5 before 241.4
## 6 before 196.9
## 7 before 172.2
## 8 before 185.5
## 9 before 205.2
## 10 before 193.7
## 11 after 392.9
## 12 after 393.2
## 13 after 345.1
## 14 after 393.0
## 15 after 434.0
## 16 after 427.9
## 17 after 422.0
## 18 after 383.9
## 19 after 392.3
## 20 after 352.2
#compute summary statistics by groups:
library("dplyr")
group_by(my_data, group) %>%
summarise(
count = n(),
mean = mean(weight, na.rm = TRUE),
sd = sd(weight, na.rm = TRUE)
)
## # A tibble: 2 × 4
## group count mean sd
## <chr> <int> <dbl> <dbl>
## 1 after 10 394. 29.4
## 2 before 10 199. 18.5
Visualize your data using box plots
#Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
color = "group", palette = c("#00AFBB", "#E7B800"),
order = c("before", "after"),
ylab = "Weight", xlab = "Groups")
Box plots show you the increase, but lose the paired information. You can plot paired data ("before - after" plot) using the paired() and plot() functions [in the PairedData package].
Install the PairedData package:
#install.packages("PairedData")
Plot paired data:
# Subset weight data before treatment
before <- subset(my_data, group == "before", weight,
drop = TRUE)
# subset weight data after treatment
after <- subset(my_data, group == "after", weight,
drop = TRUE)
# Plot paired data
library(PairedData)
## Warning: package 'PairedData' was built under R version 4.3.2
## Loading required package: MASS
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
## Loading required package: gld
## Warning: package 'gld' was built under R version 4.3.2
## Loading required package: mvtnorm
## Warning: package 'mvtnorm' was built under R version 4.3.2
## Loading required package: lattice
##
## Attaching package: 'PairedData'
## The following object is masked from 'package:base':
##
## summary
pd <- paired(before, after)
plot(pd, type = "profile") + theme_bw()
Preliminary test to check paired t-test assumptions Assumption 1: Are the two samples paired?
Yes, since the data have been collected from measuring twice the weight of the same mice.
Assumption 2: Is this a large sample?
No, because n < 30. Since the sample size is not large enough (less than 30), we need to check whether the differences of the pairs follow a normal distribution.
How to check the normality?
Use Shapiro-Wilk normality test as described at: Normality Test in R.
Null hypothesis: the data are normally distributed Alternative hypothesis: the data are not normally distributed
# compute the difference
d <- with(my_data,weight[group == "before"] - weight[group == "after"])
# Shapiro-Wilk normality test for the differences
shapiro.test(d) # => p-value = 0.6141
##
## Shapiro-Wilk normality test
##
## data: d
## W = 0.94536, p-value = 0.6141
From the output, the p-value is greater than the significance level 0.05, implying that the distribution of the differences (d) is not significantly different from a normal distribution. In other words, we can assume normality.
Note that, if the data are not normally distributed, it's recommended to use the non-parametric paired-samples Wilcoxon test.
Compute paired samples t-test
Question: Are there any significant changes in the weights of mice after treatment?
# Compute t-test
res <- t.test(before, after, paired = TRUE)
res
##
## Paired t-test
##
## data: before and after
## t = -20.883, df = 9, p-value = 6.2e-09
## alternative hypothesis: true mean difference is not equal to 0
## 95 percent confidence interval:
## -215.5581 -173.4219
## sample estimates:
## mean difference
## -194.49
# Compute t-test
res <- t.test(weight ~ group, data = my_data, paired = TRUE)
res
##
## Paired t-test
##
## data: weight by group
## t = 20.883, df = 9, p-value = 6.2e-09
## alternative hypothesis: true mean difference is not equal to 0
## 95 percent confidence interval:
## 173.4219 215.5581
## sample estimates:
## mean difference
## 194.49
As you can see, the two methods give the same results.
In the result above :
- t is the t-test statistic value (t = 20.88),
- df is the degrees of freedom (df = 9),
- p-value is the significance level of the t-test (p-value = 6.2 × 10^-9),
- conf.int is the confidence interval of the mean differences at 95% (conf.int = [173.42, 215.56]),
- sample estimates is the mean difference between pairs (mean = 194.49).
Note that:
if you want to test whether the average weight before treatment is less than the average weight after treatment, type this:
t.test(weight ~ group, data = my_data, paired = TRUE,
alternative = "less")
##
## Paired t-test
##
## data: weight by group
## t = 20.883, df = 9, p-value = 1
## alternative hypothesis: true mean difference is less than 0
## 95 percent confidence interval:
## -Inf 211.5623
## sample estimates:
## mean difference
## 194.49
Or, if you want to test whether the average weight before treatment is greater than the average weight after treatment, type this
t.test(weight ~ group, data = my_data, paired = TRUE,
alternative = "greater")
##
## Paired t-test
##
## data: weight by group
## t = 20.883, df = 9, p-value = 3.1e-09
## alternative hypothesis: true mean difference is greater than 0
## 95 percent confidence interval:
## 177.4177 Inf
## sample estimates:
## mean difference
## 194.49
Interpretation of the result
The p-value of the test is 6.2 × 10^-9, which is less than the significance level alpha = 0.05. We can then reject the null hypothesis and conclude that the average weight of the mice before treatment is significantly different from the average weight after treatment (p-value = 6.2 × 10^-9).
Access to the values returned by t.test() function The result of t.test() function is a list containing the following components:

- statistic: the value of the t-test statistic
- parameter: the degrees of freedom for the t-test statistic
- p.value: the p-value for the test
- conf.int: a confidence interval for the mean appropriate to the specified alternative hypothesis
- estimate: the means of the two groups being compared (in the case of an independent t-test) or the difference in means (in the case of a paired t-test)
What is the one-way ANOVA test?
The one-way analysis of variance (ANOVA), also known as one-factor ANOVA, is an extension of the independent two-samples t-test for comparing means in a situation where there are more than two groups. In one-way ANOVA, the data are organized into several groups based on one single grouping variable (also called a factor variable).
ANOVA test hypotheses:
- Null hypothesis: the means of the different groups are the same
- Alternative hypothesis: at least one sample mean is not equal to the others
Note that if you have only two groups, you can use the t-test. In this case, the F-test and the t-test are equivalent.
Assumptions of ANOVA test
Here we describe the requirements for the ANOVA test. The ANOVA test can be applied only when:

- The observations are obtained independently and randomly from the populations defined by the factor levels.
- The data of each factor level are normally distributed.
- These normal populations have a common variance. (Levene's test can be used to check this.)
How does the one-way ANOVA test work? Assume that we have 3 groups (A, B, C) to compare:

1. Compute the common variance, which is called the variance within samples (S²within) or residual variance.
2. Compute the variance between sample means: compute the mean of each group, then compute the variance between the sample means (S²between).
3. Produce the F-statistic as the ratio F = S²between / S²within.

Note that a lower ratio (ratio < 1) indicates that there is no significant difference between the means of the samples being compared. However, a higher ratio implies that the variation among group means is significant.
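To make the ratio concrete, here is a small sketch that reproduces the F-statistic by hand on the PlantGrowth data used below; summary(res.aov) reports the same value:

# F-statistic by hand on PlantGrowth (illustration)
pg <- PlantGrowth
grand_mean <- mean(pg$weight)
k <- nlevels(pg$group) # number of groups
N <- nrow(pg) # total number of observations
# Between-group and within-group sums of squares
ss_between <- sum(tapply(pg$weight, pg$group,
                         function(x) length(x) * (mean(x) - grand_mean)^2))
ss_within <- sum(tapply(pg$weight, pg$group,
                        function(x) sum((x - mean(x))^2)))
F_stat <- (ss_between / (k - 1)) / (ss_within / (N - k))
F_stat # about 4.85, as in the ANOVA table below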
Visualize your data and compute one-way ANOVA in R
my_data <- PlantGrowth
Check your data
To have an idea of what the data look like, we use the function sample_n() [in the dplyr package]. The sample_n() function randomly picks a few of the observations in the data frame to print out:
# Show a random sample
set.seed(1234)
dplyr::sample_n(my_data, 10)
## weight group
## 1 6.15 trt2
## 2 3.83 trt1
## 3 5.29 trt2
## 4 5.12 trt2
## 5 4.50 ctrl
## 6 4.17 trt1
## 7 5.87 trt1
## 8 5.33 ctrl
## 9 5.26 trt2
## 10 4.61 ctrl
In R terminology, the column "group" is called a factor and the different categories ("ctrl", "trt1", "trt2") are named factor levels. The levels are ordered alphabetically.
# Show the levels
levels(my_data$group)
## [1] "ctrl" "trt1" "trt2"
If the levels are not automatically in the correct order, re-order them as follows:
my_data$group <- ordered(my_data$group,
levels = c("ctrl", "trt1", "trt2"))
It’s possible to compute summary statistics (mean and sd) by groups using the dplyr package.
Compute summary statistics by groups - count, mean, sd:
library(dplyr)
group_by(my_data, group) %>%
summarise(
count = n(),
mean = mean(weight, na.rm = TRUE),
sd = sd(weight, na.rm = TRUE)
)
## # A tibble: 3 × 4
## group count mean sd
## <ord> <int> <dbl> <dbl>
## 1 ctrl 10 5.03 0.583
## 2 trt1 10 4.66 0.794
## 3 trt2 10 5.53 0.443
# Box plots
# ++++++++++++++++++++
# Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
color = "group", palette = c("#00AFBB", "#E7B800", "#FC4E07"),
order = c("ctrl", "trt1", "trt2"),
ylab = "Weight", xlab = "Treatment")
# Mean plots
# ++++++++++++++++++++
# Plot weight by group
# Add error bars: mean_se
# (other values include: mean_sd, mean_ci, median_iqr, ....)
library("ggpubr")
ggline(my_data, x = "group", y = "weight",
add = c("mean_se", "jitter"),
order = c("ctrl", "trt1", "trt2"),
ylab = "Weight", xlab = "Treatment")
#install.packages("gplots")
# Box plot
boxplot(weight ~ group, data = my_data,
xlab = "Treatment", ylab = "Weight",
frame = FALSE, col = c("#00AFBB", "#E7B800", "#FC4E07"))
# plotmeans
library("gplots")
## Warning: package 'gplots' was built under R version 4.3.2
##
## Attaching package: 'gplots'
## The following object is masked from 'package:stats':
##
## lowess
plotmeans(weight ~ group, data = my_data, frame = FALSE,
xlab = "Treatment", ylab = "Weight",
main="Mean Plot with 95% CI")
## Warning in plot.xy(xy.coords(x, y), type = type, ...): "frame" is not a
## graphical parameter
## Warning in axis(1, at = 1:length(means), labels = legends, ...): "frame" is not
## a graphical parameter
## Warning in plot.xy(xy.coords(x, y), type = type, ...): "frame" is not a
## graphical parameter
Compute one-way ANOVA test We want to know if there is any significant difference between the average weights of plants in the 3 experimental conditions.
The R function aov() can be used to answer this question. The function summary.aov() is used to summarize the analysis of variance model.
# Compute the analysis of variance
res.aov <- aov(weight ~ group, data = my_data)
# Summary of the analysis
summary(res.aov)
## Df Sum Sq Mean Sq F value Pr(>F)
## group 2 3.766 1.8832 4.846 0.0159 *
## Residuals 27 10.492 0.3886
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The output includes the columns F value and Pr(>F) corresponding to the p-value of the test.
Interpret the result of one-way ANOVA tests As the p-value is less than the significance level 0.05, we can conclude that there are significant differences between the groups highlighted with “*” in the model summary.
Multiple pairwise-comparison between the means of groups
In one-way ANOVA test, a significant p-value indicates that some of the group means are different, but we don’t know which pairs of groups are different.
It's possible to perform multiple pairwise comparisons, to determine whether the mean difference between specific pairs of groups is statistically significant.
Tukey multiple pairwise-comparisons
As the ANOVA test is significant, we can compute Tukey HSD (Tukey Honest Significant Differences, R function: TukeyHSD()) for performing multiple pairwise-comparison between the means of groups.
The function TukeyHSD() takes the fitted ANOVA as an argument.
TukeyHSD(res.aov)
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = weight ~ group, data = my_data)
##
## $group
## diff lwr upr p adj
## trt1-ctrl -0.371 -1.0622161 0.3202161 0.3908711
## trt2-ctrl 0.494 -0.1972161 1.1852161 0.1979960
## trt2-trt1 0.865 0.1737839 1.5562161 0.0120064
- diff: difference between the means of the two groups
- lwr, upr: the lower and upper end points of the confidence interval at 95% (default)
- p adj: p-value after adjustment for the multiple comparisons
It can be seen from the output that only the difference between trt2 and trt1 is significant, with an adjusted p-value of 0.012.
Pairwise t-test
The function pairwise.t.test() can also be used to calculate pairwise comparisons between group levels, with corrections for multiple testing.
pairwise.t.test(my_data$weight, my_data$group,
p.adjust.method = "BH")
##
## Pairwise comparisons using t tests with pooled SD
##
## data: my_data$weight and my_data$group
##
## ctrl trt1
## trt1 0.194 -
## trt2 0.132 0.013
##
## P value adjustment method: BH
Check ANOVA assumptions: test validity
The ANOVA test assumes that the data are normally distributed and that the variance across groups is homogeneous. We can check that with some diagnostic plots.
Check the homogeneity of variance assumption The residuals versus fits plot can be used to check the homogeneity of variances.
In the plot below, there is no evident relationship between residuals and fitted values (the mean of each group), which is good. So, we can assume the homogeneity of variances.
# 1. Homogeneity of variances
plot(res.aov, 1)
Points 17, 15, 4 are detected as outliers, which can severely affect normality and homogeneity of variance. It can be useful to remove outliers to meet the test assumptions.
It’s also possible to use Bartlett’s test or Levene’s test to check the homogeneity of variances.
We recommend Levene’s test, which is less sensitive to departures from normal distribution. The function leveneTest() [in car package] will be used:
library(car)
## Warning: package 'car' was built under R version 4.3.2
## Loading required package: carData
## Warning: package 'carData' was built under R version 4.3.2
##
## Attaching package: 'car'
## The following object is masked from 'package:dplyr':
##
## recode
leveneTest(weight ~ group, data = my_data)
## Levene's Test for Homogeneity of Variance (center = median)
## Df F value Pr(>F)
## group 2 1.1192 0.3412
## 27
From the output above we can see that the p-value is not less than the significance level of 0.05. This means that there is no evidence to suggest that the variance across groups is statistically significantly different. Therefore, we can assume the homogeneity of variances in the different treatment groups.
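Bartlett's test, mentioned above, can be run in the same way; a one-line sketch:

# Bartlett's test for homogeneity of variances
bartlett.test(weight ~ group, data = my_data)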
Relaxing the homogeneity of variance assumption
The classical one-way ANOVA test requires an assumption of equal variances for all groups.
In our example, the homogeneity of variance assumption turned out to be fine: the Levene test is not significant.
How do we save our ANOVA test, in a situation where the homogeneity of variance assumption is violated?
An alternative procedure (i.e., the Welch one-way test), which does not require that assumption, has been implemented in the function oneway.test().
ANOVA test with no assumption of equal variances
oneway.test(weight ~ group, data = my_data)
##
## One-way analysis of means (not assuming equal variances)
##
## data: weight and group
## F = 5.181, num df = 2.000, denom df = 17.128, p-value = 0.01739
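In the same spirit, pairwise t-tests can be run without assuming a pooled standard deviation, via the pool.sd argument of pairwise.t.test(); a minimal sketch:

# Pairwise t-tests without pooled SD (Welch-style)
pairwise.t.test(my_data$weight, my_data$group,
                p.adjust.method = "BH", pool.sd = FALSE)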
Check the normality assumption
Normality plot of residuals. In the plot below, the quantiles of the residuals are plotted against the quantiles of the normal distribution. A 45-degree reference line is also plotted.
The normal probability plot of residuals is used to check the assumption that the residuals are normally distributed. It should approximately follow a straight line.
# 2. Normality
plot(res.aov, 2)
As all the points fall approximately along this reference line, we can assume normality.
The conclusion above is supported by the Shapiro-Wilk test on the ANOVA residuals (W = 0.97, p = 0.44), which finds no indication that normality is violated.
# Extract the residuals
aov_residuals <- residuals(object = res.aov )
# Run Shapiro-Wilk test
shapiro.test(x = aov_residuals )
##
## Shapiro-Wilk normality test
##
## data: aov_residuals
## W = 0.96607, p-value = 0.4379
Two-Way ANOVA Test in R
What is two-way ANOVA test?
Two-way ANOVA test is used to evaluate simultaneously the effect of two grouping variables (A and B) on a response variable.
The grouping variables are also known as factors. The different categories (groups) of a factor are called levels. The number of levels can vary between factors. The level combinations of factors are called cells.
When the sample sizes within cells are equal, we have the so-called balanced design. In this case the standard two-way ANOVA test can be applied.
When the sample sizes within each level of the independent variables are not the same (case of unbalanced designs), the ANOVA test should be handled differently.
Two-way ANOVA test hypotheses
The null hypotheses are:

1. There is no difference in the means of factor A.
2. There is no difference in the means of factor B.
3. There is no interaction between factors A and B.

The alternative hypothesis for cases 1 and 2 is: the means are not equal.

The alternative hypothesis for case 3 is: there is an interaction between A and B.
Assumptions of two-way ANOVA test
Two-way ANOVA, like all ANOVA tests, assumes that the observations within each cell are normally distributed and have equal variances. We’ll show you how to check these assumptions after fitting ANOVA.
Compute two-way ANOVA test in R: balanced designs
Balanced designs correspond to the situation where we have equal sample sizes within levels of our independent grouping levels.
Here, we'll use the built-in R data set named ToothGrowth. It contains data from a study evaluating the effect of vitamin C on tooth growth in guinea pigs. The experiment was performed on 60 guinea pigs, where each animal received one of three dose levels of vitamin C (0.5, 1, and 2 mg/day) by one of two delivery methods: orange juice (coded as OJ) or ascorbic acid, a form of vitamin C (coded as VC). Tooth length was measured, and a sample of the data is shown below.
# Store the data in the variable my_data
my_data <- ToothGrowth
Check your data To get an idea of what the data look like, we display a random sample of the data using the function sample_n() [in the dplyr package]. First, install dplyr if you don't have it:
install.packages("dplyr")
# Show a random sample
set.seed(1234)
dplyr::sample_n(my_data, 10)
## len supp dose
## 1 21.5 VC 2.0
## 2 17.3 VC 1.0
## 3 27.3 OJ 2.0
## 4 18.5 VC 2.0
## 5 8.2 OJ 0.5
## 6 26.4 OJ 1.0
## 7 25.8 OJ 1.0
## 8 5.2 VC 0.5
## 9 6.4 VC 0.5
## 10 9.4 OJ 0.5
# Check the structure
str(my_data)
## 'data.frame': 60 obs. of 3 variables:
## $ len : num 4.2 11.5 7.3 5.8 6.4 10 11.2 11.2 5.2 7 ...
## $ supp: Factor w/ 2 levels "OJ","VC": 2 2 2 2 2 2 2 2 2 2 ...
## $ dose: num 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 ...
From the output above, R considers "dose" as a numeric variable. We'll convert it to a factor variable (i.e., grouping variable) as follows.
# Convert dose as a factor and recode the levels
# as "D0.5", "D1", "D2"
my_data$dose <- factor(my_data$dose,
levels = c(0.5, 1, 2),
labels = c("D0.5", "D1", "D2"))
head(my_data)
## len supp dose
## 1 4.2 VC D0.5
## 2 11.5 VC D0.5
## 3 7.3 VC D0.5
## 4 5.8 VC D0.5
## 5 6.4 VC D0.5
## 6 10.0 VC D0.5
Question: We want to know if tooth length depends on supp and dose.
Generate frequency tables:
table(my_data$supp, my_data$dose)
##
## D0.5 D1 D2
## OJ 10 10 10
## VC 10 10 10
Visualize your data
Box plots and line plots can be used to visualize group differences:
- Box plot: plots the data grouped by the combinations of the levels of the two factors.
- Two-way interaction plot: plots the mean (or other summary) of the response for two-way combinations of factors, thereby illustrating possible interactions.
# Box plot with multiple groups
# +++++++++++++++++++++
# Plot tooth length ("len") by groups ("dose")
# Color box plot by a second group: "supp"
library("ggpubr")
ggboxplot(my_data, x = "dose", y = "len", color = "supp",
palette = c("#00AFBB", "#E7B800"))
# Line plots with multiple groups
# +++++++++++++++++++++++
# Plot tooth length ("len") by groups ("dose")
# Color box plot by a second group: "supp"
# Add error bars: mean_se
# (other values include: mean_sd, mean_ci, median_iqr, ....)
library("ggpubr")
ggline(my_data, x = "dose", y = "len", color = "supp",
add = c("mean_se", "dotplot"),
palette = c("#00AFBB", "#E7B800"))
## Bin width defaults to 1/30 of the range of the data. Pick better value with
## `binwidth`.
If you still want to use R base graphs, type the following scripts:
# Box plot with two factor variables
boxplot(len ~ supp * dose, data=my_data, frame = FALSE,
col = c("#00AFBB", "#E7B800"), ylab="Tooth Length")
# Two-way interaction plot
interaction.plot(x.factor = my_data$dose, trace.factor = my_data$supp,
response = my_data$len, fun = mean,
type = "b", legend = TRUE,
xlab = "Dose", ylab="Tooth Length",
pch=c(1,19), col = c("#00AFBB", "#E7B800"))
Arguments used for the function interaction.plot():
- x.factor: the factor to be plotted on the x axis.
- trace.factor: the factor to be plotted as lines.
- response: a numeric variable giving the response.
- type: the type of plot. Allowed values include "p" (points only), "l" (lines only) and "b" (both points and lines).
Compute two-way ANOVA test We want to know if tooth length depends on supp and dose.
The R function aov() can be used to answer this question. The function summary.aov() is used to summarize the analysis of variance model.
res.aov2 <- aov(len ~ supp + dose, data = my_data)
summary(res.aov2)
## Df Sum Sq Mean Sq F value Pr(>F)
## supp 1 205.4 205.4 14.02 0.000429 ***
## dose 2 2426.4 1213.2 82.81 < 2e-16 ***
## Residuals 56 820.4 14.7
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The output includes the columns F value and Pr(>F) corresponding to the p-value of the test.
From the ANOVA table we can conclude that both supp and dose are statistically significant; dose is the most significant factor variable. These results would lead us to believe that changing the delivery method (supp) or the dose of vitamin C will significantly impact the mean tooth length.
Note that the above fitted model is called an additive model. It makes the assumption that the two factor variables are independent. If you think that these two variables might interact to create a synergistic effect, replace the plus symbol (+) with an asterisk (*), as follows.
# Two-way ANOVA with interaction effect
# These two calls are equivalent
res.aov3 <- aov(len ~ supp * dose, data = my_data)
res.aov3 <- aov(len ~ supp + dose + supp:dose, data = my_data)
summary(res.aov3)
## Df Sum Sq Mean Sq F value Pr(>F)
## supp 1 205.4 205.4 15.572 0.000231 ***
## dose 2 2426.4 1213.2 92.000 < 2e-16 ***
## supp:dose 2 108.3 54.2 4.107 0.021860 *
## Residuals 54 712.1 13.2
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
It can be seen that the two main effects (supp and dose) are statistically significant, as well as their interaction.
Note that, in the situation where the interaction is not significant you should use the additive model.
Interpret the results
From the ANOVA results, you can conclude the following, based on the p-values and a significance level of 0.05:
- The p-value of supp is 0.000231 (significant), which indicates that the levels of supp are associated with significantly different tooth lengths.
- The p-value of dose is < 2e-16 (significant), which indicates that the levels of dose are associated with significantly different tooth lengths.
- The p-value for the interaction supp:dose is 0.02 (significant), which indicates that the relationship between dose and tooth length depends on the delivery method (supp).
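As with one-way ANOVA, Tukey multiple comparisons can follow up the significant effects; a sketch restricted to the dose factor (the which argument selects the term to compare):

# Tukey multiple comparisons for the dose factor
TukeyHSD(res.aov3, which = "dose")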
Kruskal-Wallis Test in R
What is Kruskal-Wallis test?
The Kruskal-Wallis test by rank is a non-parametric alternative to the one-way ANOVA test, which extends the two-samples Wilcoxon test to situations where there are more than two groups. It's recommended when the assumptions of the one-way ANOVA test are not met.
my_data <- PlantGrowth
Check your data
# print the head of the file
head(my_data)
## weight group
## 1 4.17 ctrl
## 2 5.58 ctrl
## 3 5.18 ctrl
## 4 6.11 ctrl
## 5 4.50 ctrl
## 6 4.61 ctrl
In R terminology, the column "group" is called a factor and the different categories ("ctrl", "trt1", "trt2") are named factor levels. The levels are ordered alphabetically.
# Show the group levels
levels(my_data$group)
## [1] "ctrl" "trt1" "trt2"
If the levels are not automatically in the correct order, re-order them as follows:
my_data$group <- ordered(my_data$group,
levels = c("ctrl", "trt1", "trt2"))
Compute summary statistics by groups:
library(dplyr)
group_by(my_data, group) %>%
summarise(
count = n(),
mean = mean(weight, na.rm = TRUE),
sd = sd(weight, na.rm = TRUE),
median = median(weight, na.rm = TRUE),
IQR = IQR(weight, na.rm = TRUE)
)
## # A tibble: 3 × 6
## group count mean sd median IQR
## <ord> <int> <dbl> <dbl> <dbl> <dbl>
## 1 ctrl 10 5.03 0.583 5.15 0.743
## 2 trt1 10 4.66 0.794 4.55 0.662
## 3 trt2 10 5.53 0.443 5.44 0.467
Visualize your data with ggpubr:
# Box plots
# ++++++++++++++++++++
# Plot weight by group and color by group
library("ggpubr")
ggboxplot(my_data, x = "group", y = "weight",
color = "group", palette = c("#00AFBB", "#E7B800", "#FC4E07"),
order = c("ctrl", "trt1", "trt2"),
ylab = "Weight", xlab = "Treatment")
# Mean plots
# ++++++++++++++++++++
# Plot weight by group
# Add error bars: mean_se
# (other values include: mean_sd, mean_ci, median_iqr, ....)
library("ggpubr")
ggline(my_data, x = "group", y = "weight",
add = c("mean_se", "jitter"),
order = c("ctrl", "trt1", "trt2"),
ylab = "Weight", xlab = "Treatment")
Compute Kruskal-Wallis test
We want to know if there is any significant difference between the average weights of plants in the 3 experimental conditions.
The test can be performed using the function kruskal.test() as follow:
kruskal.test(weight ~ group, data = my_data)
##
## Kruskal-Wallis rank sum test
##
## data: weight by group
## Kruskal-Wallis chi-squared = 7.9882, df = 2, p-value = 0.01842
Interpretation: As the p-value is less than the significance level 0.05, we can conclude that there are significant differences between the treatment groups.
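Since the test is significant, a natural follow-up is pairwise comparisons between group levels; a sketch using pairwise.wilcox.test() with Benjamini-Hochberg correction for multiple testing:

# Pairwise Wilcoxon tests between groups (may warn about ties)
pairwise.wilcox.test(PlantGrowth$weight, PlantGrowth$group,
                     p.adjust.method = "BH")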