Overview

Statistical tests to use for comparing means include:

  • T-test
  • Wilcoxon test
  • ANOVA test and
  • Kruskal-Wallis test
  1. Comparing one-sample mean to a standard known mean

    • One-Sample T-test (parametric)
    • One-Sample Wilcoxon Test (non-parametric)
  2. Comparing the means of two independent groups:

    • Unpaired Two Samples T-test (parametric)
    • Unpaired Two-Samples Wilcoxon Test (non-parametric)
  3. Comparing the means of paired samples:

    • Paired Samples T-test (parametric)
    • Paired Samples Wilcoxon Test (non-parametric)
  4. Comparing the means of more than two groups

    • Analysis of variance (ANOVA, parametric):
      • One-Way ANOVA Test in R
      • Two-Way ANOVA Test in R
  5. MANOVA Test in R: Multivariate Analysis of Variance

  6. Kruskal-Wallis Test in R (non-parametric alternative to one-way ANOVA)

1. Comparing one-sample mean to a standard known mean

1.1 One-Sample T-test (parametric)

What is one-sample t-test?

The one-sample t-test is used to compare the mean of one sample to a known standard (or theoretical/hypothetical) mean (μ).

Generally, the theoretical mean comes from:

  • A previous experiment. For example, compare whether the mean weight of mice differs from 200 mg, a value determined in a previous study.
  • An experiment in which you have control and treatment conditions. If you express your data as "percent of control", you can test whether the average value of the treatment condition differs significantly from 100.

Note that the one-sample t-test can be used only when the data are normally distributed. This can be checked using the Shapiro-Wilk test.

Research questions and statistical hypotheses

Typical research questions are:

  1. whether the mean (m) of the sample is equal to the theoretical mean (μ)?
  2. whether the mean (m) of the sample is less than the theoretical mean (μ)?
  3. whether the mean (m) of the sample is greater than the theoretical mean (μ)?

In statistics, we can define the corresponding null hypothesis (H0) as follows:

  1. H0:m=μ
  2. H0:m≤μ
  3. H0:m≥μ

The corresponding alternative hypotheses (Ha) are as follows:

  1. Ha:m≠μ (different)
  2. Ha:m>μ (greater)
  3. Ha:m<μ (less)

Note that:

  • Hypotheses 1) are called two-tailed tests
  • Hypotheses 2) and 3) are called one-tailed tests

Formula of one-sample t-test

The t-statistic can be calculated as follow:

t=(m−μ)/(s/√n)

where,

  • m is the sample mean
  • n is the sample size
  • s is the sample standard deviation with n−1 degrees of freedom
  • μ is the theoretical value

We can compute the p-value corresponding to the absolute value of the t-test statistics (|t|) for the degrees of freedom (df): df=n−1.
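
To make the formula concrete, here is a minimal base-R sketch that computes t and the two-sided p-value by hand; the vector x and the value of μ are made up for illustration, and the result should agree with t.test(x, mu = 20).

# Hypothetical data for illustration only
x  <- c(18.2, 20.1, 19.5, 21.3, 17.9, 20.8, 19.0, 18.6)
mu <- 20                             # theoretical mean
n  <- length(x)
m  <- mean(x)                        # sample mean
s  <- sd(x)                          # sample standard deviation (n - 1 df)
t_stat  <- (m - mu) / (s / sqrt(n))  # t = (m - mu)/(s/sqrt(n))
df      <- n - 1
p_value <- 2 * pt(-abs(t_stat), df)  # two-sided p-value
# Compare with: t.test(x, mu = 20)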

How to interpret the results?

If the p-value is less than or equal to the significance level 0.05, we can reject the null hypothesis and accept the alternative hypothesis. In other words, we conclude that the sample mean is significantly different from the theoretical mean.

Visualize your data and compute one-sample t-test in R

To perform a one-sample t-test, the R function t.test() can be used as follows:

t.test(x, mu = 0, alternative = "two.sided")

  • x: a numeric vector containing your data values

  • mu: the theoretical mean. Default is 0 but you can change it.

  • alternative: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

Here, we’ll use an example data set containing the weight of 10 mice.

We want to know whether the average weight of the mice differs from 25 g.

set.seed(1234)
my_data <- data.frame(
  name = paste0(rep("M_", 10), 1:10),
  weight = round(rnorm(10, 20, 2), 1)
)

summary(my_data$weight)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   15.30   18.38   18.90   19.25   20.82   22.20
ggboxplot(my_data$weight, 
          ylab = "Weight (g)", xlab = FALSE,
          ggtheme = theme_minimal())

Preliminary test to check one-sample t-test assumptions

  • Is this a large sample? - No, because n < 30.

  • Since the sample size is not large enough (less than 30, central limit theorem), we need to check whether the data follow a normal distribution.

How to check the normality?

Briefly, it’s possible to use the Shapiro-Wilk normality test and to look at the normality plot.

Shapiro-Wilk test:

  • Null hypothesis: the data are normally distributed
  • Alternative hypothesis: the data are not normally distributed

shapiro.test(my_data$weight)
## 
##  Shapiro-Wilk normality test
## 
## data:  my_data$weight
## W = 0.9526, p-value = 0.6993

From the output, the p-value is greater than the significance level 0.05, implying that the distribution of the data is not significantly different from the normal distribution. In other words, we can assume normality.

Visual inspection of the data normality can be done using Q-Q plots (quantile-quantile plots). A Q-Q plot draws the correlation between a given sample and the normal distribution.

ggqqplot(my_data$weight, ylab = "Weight (g)",
         ggtheme = theme_minimal())

From the normality plots, we conclude that the data may come from normal distributions.

Note that, if the data are not normally distributed, it's recommended to use the non-parametric one-sample Wilcoxon rank test.

  • Compute one-sample t-test

We want to know whether the average weight of the mice differs from 25 g (two-tailed test).

# One-sample t-test
res <- t.test(my_data$weight, mu = 25)
# Printing the results
res 
## 
##  One Sample t-test
## 
## data:  my_data$weight
## t = -9.0783, df = 9, p-value = 0.000007953
## alternative hypothesis: true mean is not equal to 25
## 95 percent confidence interval:
##  17.8172 20.6828
## sample estimates:
## mean of x 
##     19.25

In the result above:

  • t is the t-test statistic value (t = -9.078),

  • df is the degrees of freedom (df= 9),

  • p-value is the significance level of the t-test (p-value = 7.953 × 10^{-6}).

  • conf.int is the confidence interval of the mean at 95% (conf.int = [17.8172, 20.6828]);

  • sample estimates is the mean value of the sample (mean = 19.25).

Note that:

if you want to test whether the mean weight of mice is less than 25g (one-tailed test), type this:

t.test(my_data$weight, mu = 25, alternative = "less")

Or, if you want to test whether the mean weight of mice is greater than 25g (one-tailed test), type this:

t.test(my_data$weight, mu = 25, alternative = "greater")

Interpretation of the result

The p-value of the test is 7.953 × 10^{-6}, which is less than the significance level alpha = 0.05. We can conclude that the mean weight of the mice is significantly different from 25 g, with a p-value = 7.953 × 10^{-6}.

Access to the values returned by the t.test() function

The result of the t.test() function is a list containing the following components:

  • statistic: the value of the t test statistics
  • parameter: the degrees of freedom for the t test statistics
  • p.value: the p-value for the test
  • conf.int: a confidence interval for the mean appropriate to the specified alternative hypothesis.
  • estimate: the means of the two groups being compared (in the case of independent t test) or difference in means (in the case of paired t test).

The format of the R code to use for getting these values is as follows:

# printing the p-value
res$p.value
## [1] 0.000007953383
# printing the mean
res$estimate
## mean of x 
##     19.25
# printing the confidence interval
res$conf.int
## [1] 17.8172 20.6828
## attr(,"conf.level")
## [1] 0.95

1.2 One-Sample Wilcoxon Test (non-parametric)

What’s one-sample Wilcoxon signed rank test?

The one-sample Wilcoxon signed rank test is a non-parametric alternative to one-sample t-test when the data cannot be assumed to be normally distributed. It’s used to determine whether the median of the sample is equal to a known standard value (i.e. theoretical value).

Note that, the data should be distributed symmetrically around the median. In other words, there should be roughly the same number of values above and below the median.

Research questions and statistical hypotheses

Typical research questions are:

  1. whether the median (m) of the sample is equal to the theoretical value (m0)?

  2. whether the median (m) of the sample is less than the theoretical value (m0)?

  3. whether the median (m) of the sample is greater than the theoretical value (m0)?

In statistics, we can define the corresponding null hypothesis (H0) as follows:

  1. H0:m=m0
  2. H0:m≤m0
  3. H0:m≥m0

The corresponding alternative hypotheses (Ha) are as follows:

  1. Ha:m≠m0 (different)
  2. Ha:m>m0 (greater)
  3. Ha:m<m0 (less)

Note that:

  • Hypotheses 1) are called two-tailed tests
  • Hypotheses 2) and 3) are called one-tailed tests

Practical Examples in R

To perform the one-sample Wilcoxon test, the R function wilcox.test() can be used as follows:

wilcox.test(x, mu = 0, alternative = "two.sided")

  • x: a numeric vector containing your data values
  • mu: the theoretical mean/median value. Default is 0 but you can change it.
  • alternative: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

Here, we’ll use an example data set containing the weight of 10 mice.

We want to know whether the median weight of the mice differs from 25 g.

set.seed(1234)
my_data <- data.frame(
  name = paste0(rep("M_", 10), 1:10),
  weight = round(rnorm(10, 20, 2), 1)
)

# Statistical summaries of weight
summary(my_data$weight)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   15.30   18.38   18.90   19.25   20.82   22.20
ggboxplot(my_data$weight, 
          ylab = "Weight (g)", xlab = FALSE,
          ggtheme = theme_minimal())

  • Compute one-sample Wilcoxon test

We want to know whether the median weight of the mice differs from 25 g (two-tailed test).

# One-sample wilcoxon test
res <- wilcox.test(my_data$weight, mu = 25)
## Warning in wilcox.test.default(my_data$weight, mu = 25): cannot compute exact p-
## value with ties
# Printing the results
res 
## 
##  Wilcoxon signed rank test with continuity correction
## 
## data:  my_data$weight
## V = 0, p-value = 0.005793
## alternative hypothesis: true location is not equal to 25

Interpret

The p-value of the test is 0.005793, which is less than the significance level alpha = 0.05. We can reject the null hypothesis and conclude that the median weight of the mice is significantly different from 25 g, with a p-value = 0.005793.

Note that:

  • if you want to test whether the median weight of mice is less than 25g (one-tailed test), type this:

wilcox.test(my_data$weight, mu = 25, alternative = "less")

  • Or, if you want to test whether the median weight of mice is greater than 25g (one-tailed test), type this:

wilcox.test(my_data$weight, mu = 25, alternative = "greater")

2. Comparing the means of two independent groups:

2.1 Unpaired Two Samples T-test (parametric)

What is unpaired two-samples t-test?

The unpaired two-samples t-test is used to compare the mean of two independent groups.

For example, suppose that we have measured the weight of 100 individuals: 50 women (group A) and 50 men (group B). We want to know if the mean weight of women (mA) is significantly different from that of men (mB).

In this case, we have two unrelated (i.e., independent or unpaired) groups of samples. Therefore, it’s possible to use an independent t-test to evaluate whether the means are different.

Note that the unpaired two-samples t-test can be used only under certain conditions:

  • when the two groups of samples (A and B) being compared are normally distributed. This can be checked using the Shapiro-Wilk test;

  • and when the variances of the two groups are equal. This can be checked using an F-test.

Research questions and statistical hypotheses

Typical research questions are:

  1. whether the mean of group A (mA) is equal to the mean of group B (mB)?
  2. whether the mean of group A (mA) is less than the mean of group B (mB)?
  3. whether the mean of group A (mA) is greater than the mean of group B (mB)?

In statistics, we can define the corresponding null hypothesis (H0) as follows:

  1. H0:mA=mB
  2. H0:mA≤mB
  3. H0:mA≥mB

The corresponding alternative hypotheses (Ha) are as follows:

  1. Ha:mA≠mB (different)
  2. Ha:mA>mB (greater)
  3. Ha:mA<mB (less)

Note that:

  • Hypotheses 1) are called two-tailed tests
  • Hypotheses 2) and 3) are called one-tailed tests

Formula of unpaired two-samples t-test

  1. Classical t-test:

If the variances of the two groups are equivalent (homoscedasticity), the t-statistic comparing the two samples (A and B) can be calculated as follows:

t = (mA − mB) / √(S²/nA + S²/nB)

where,

  • mA and mB represent the mean values of groups A and B, respectively.
  • nA and nB represent the sizes of groups A and B, respectively.
  • S² is an estimator of the pooled variance of the two groups. It can be calculated as follows:

S² = [∑(x − mA)² + ∑(x − mB)²] / (nA + nB − 2)

with degrees of freedom (df): df=nA+nB−2.
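
To illustrate the pooled-variance formula, here is a minimal base-R sketch; the vectors A and B are made-up examples, and the result should agree with t.test(A, B, var.equal = TRUE).

# Hypothetical groups for illustration only
A <- c(52, 48, 55, 60, 47, 51)
B <- c(63, 58, 66, 61, 70, 64)
nA <- length(A); nB <- length(B)
mA <- mean(A);   mB <- mean(B)
# Pooled variance: S2 = [sum((x - mA)^2) + sum((x - mB)^2)] / (nA + nB - 2)
S2 <- (sum((A - mA)^2) + sum((B - mB)^2)) / (nA + nB - 2)
t_stat  <- (mA - mB) / sqrt(S2/nA + S2/nB)
df      <- nA + nB - 2
p_value <- 2 * pt(-abs(t_stat), df)
# Compare with: t.test(A, B, var.equal = TRUE)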

  2. Welch t-statistic:

If the variances of the two groups being compared are different (heteroscedasticity), it's possible to use the Welch t-test, an adaptation of the Student t-test.

The Welch t-statistic is calculated as follows:

t = (mA − mB) / √(SA²/nA + SB²/nB)

where SA and SB are the standard deviations of the two groups A and B, respectively.

Unlike the classic Student's t-test, the Welch t-test formula involves the variance of each of the two groups (SA² and SB²) being compared. In other words, it does not use the pooled variance S².

The degrees of freedom of the Welch t-test are estimated as follows:

df = (SA²/nA + SB²/nB)² / [ (SA²/nA)²/(nA − 1) + (SB²/nB)²/(nB − 1) ]

A p-value can be computed for the corresponding absolute value of t-statistic (|t|).

Note that the Welch t-test is considered the safer one. Usually, the results of the classical t-test and the Welch t-test are very similar unless both the group sizes and the standard deviations are very different.
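
The same can be done for the Welch statistic; this minimal sketch reuses the hypothetical vectors A and B from the sketch above, and should agree with t.test(A, B), since the Welch test is the default in t.test() (var.equal = FALSE).

# Welch t-statistic and Welch-Satterthwaite degrees of freedom
vA <- var(A); vB <- var(B)                  # per-group variances (SA^2, SB^2)
t_welch  <- (mA - mB) / sqrt(vA/nA + vB/nB)
df_welch <- (vA/nA + vB/nB)^2 /
            ((vA/nA)^2 / (nA - 1) + (vB/nB)^2 / (nB - 1))
p_welch  <- 2 * pt(-abs(t_welch), df_welch)
# Compare with: t.test(A, B)   # Welch test by default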

How to interpret the results?

If the p-value is less than or equal to the significance level 0.05, we can reject the null hypothesis and accept the alternative hypothesis. In other words, we can conclude that the mean values of groups A and B are significantly different.

Visualize your data and compute unpaired two-samples t-test in R

  • R function to compute unpaired two-samples t-test

To perform a two-samples t-test comparing the means of two independent samples (x & y), the R function t.test() can be used as follows:

t.test(x, y, alternative = "two.sided", var.equal = FALSE)

  • x,y: numeric vectors

  • alternative: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

  • var.equal: a logical variable indicating whether to treat the two variances as being equal. If TRUE, the pooled variance is used to estimate the variance; otherwise the Welch test is used.

  • Visualize your data using box plots

We want to know whether the average women's weight differs from the average men's weight.

# Data in two numeric vectors
women_weight <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)
men_weight <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4) 
# Create a data frame
my_data <- data.frame( 
                group = rep(c("Woman", "Man"), each = 9),
                weight = c(women_weight,  men_weight)
                )
my_data %>% group_by(group) %>%
  summarise(
    count = n(),
    mean = mean(weight, na.rm = TRUE),
    sd = sd(weight, na.rm = TRUE)
  )
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 2 x 4
##   group count  mean    sd
##   <fct> <int> <dbl> <dbl>
## 1 Man       9  69.0  9.38
## 2 Woman     9  52.1 15.6
# Plot weight by group and color by group
ggboxplot(my_data, x = "group", y = "weight", 
          color = "group", palette = c("#00AFBB", "#E7B800"),
        ylab = "Weight", xlab = "Groups")

  • Preliminary test to check independent t-test assumptions

    • Assumption 1: Are the two samples independent?

    Yes, since the samples from men and women are not related.

    • Assumption 2: Do the data from each of the 2 groups follow a normal distribution?

    Use the Shapiro-Wilk normality test as described at: Normality Test in R.
      - Null hypothesis: the data are normally distributed
      - Alternative hypothesis: the data are not normally distributed

We’ll use the functions with() and shapiro.test() to compute Shapiro-Wilk test for each group of samples.

# Shapiro-Wilk normality test for Men's weights
with(my_data, shapiro.test(weight[group == "Man"]))# p = 0.1
## 
##  Shapiro-Wilk normality test
## 
## data:  weight[group == "Man"]
## W = 0.86425, p-value = 0.1066
# Shapiro-Wilk normality test for Women's weights
with(my_data, shapiro.test(weight[group == "Woman"])) # p = 0.6
## 
##  Shapiro-Wilk normality test
## 
## data:  weight[group == "Woman"]
## W = 0.94266, p-value = 0.6101

From the output, the two p-values are greater than the significance level 0.05, implying that the distribution of the data is not significantly different from the normal distribution. In other words, we can assume normality.

Note that, if the data are not normally distributed, it's recommended to use the non-parametric two-samples Wilcoxon rank test.

  • Assumption 3: Do the two populations have the same variances?

We’ll use F-test to test for homogeneity in variances. This can be performed with the function var.test() as follow:

res.ftest <- var.test(weight ~ group, data = my_data)
res.ftest
## 
##  F test to compare two variances
## 
## data:  weight by group
## F = 0.36134, num df = 8, denom df = 8, p-value = 0.1714
## alternative hypothesis: true ratio of variances is not equal to 1
## 95 percent confidence interval:
##  0.08150656 1.60191315
## sample estimates:
## ratio of variances 
##          0.3613398

The p-value of the F-test is p = 0.1713596. It's greater than the significance level alpha = 0.05. In conclusion, there is no significant difference between the variances of the two sets of data. Therefore, we can use the classic t-test, which assumes equality of the two variances.

  • Compute unpaired two-samples t-test

Question : Is there any significant difference between women and men weights?

  • Compute independent t-test - Method 1: The data are saved in two different numeric vectors.
# Compute t-test
res <- t.test(women_weight, men_weight, var.equal = TRUE)
res
## 
##  Two Sample t-test
## 
## data:  women_weight and men_weight
## t = -2.7842, df = 16, p-value = 0.01327
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -29.748019  -4.029759
## sample estimates:
## mean of x mean of y 
##  52.10000  68.98889
  • Compute independent t-test - Method 2: The data are saved in a data frame.
# Compute t-test
res <- t.test(weight ~ group, data = my_data, var.equal = TRUE)
res
## 
##  Two Sample t-test
## 
## data:  weight by group
## t = 2.7842, df = 16, p-value = 0.01327
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##   4.029759 29.748019
## sample estimates:
##   mean in group Man mean in group Woman 
##            68.98889            52.10000

As you can see, the two methods give the same results.

In the result above:

  • t is the t-test statistic value (t = 2.784),
  • df is the degrees of freedom (df = 16),
  • p-value is the significance level of the t-test (p-value = 0.01327).
  • conf.int is the confidence interval of the difference in means at 95% (conf.int = [4.0298, 29.748]);
  • sample estimates is the mean value of each group (68.99 for men and 52.10 for women).

Note that:

  • if you want to test whether the average men's weight is less than the average women's weight, type this:

t.test(weight ~ group, data = my_data, var.equal = TRUE, alternative = "less")

  • Or, if you want to test whether the average men's weight is greater than the average women's weight, type this:

t.test(weight ~ group, data = my_data, var.equal = TRUE, alternative = "greater")

Interpretation of the result

The p-value of the test is 0.01327, which is less than the significance level alpha = 0.05. We can conclude that men’s average weight is significantly different from women’s average weight with a p-value = 0.01327.

Access to the values returned by the t.test() function

The result of the t.test() function is a list containing the following components:

  • statistic: the value of the t test statistics
  • parameter: the degrees of freedom for the t test statistics
  • p.value: the p-value for the test
  • conf.int: a confidence interval for the mean appropriate to the specified alternative hypothesis.
  • estimate: the means of the two groups being compared (in the case of independent t test) or difference in means (in the case of paired t test).

The format of the R code to use for getting these values is as follows:

# printing the p-value
res$p.value
## [1] 0.0132656
# printing the mean
res$estimate
##   mean in group Man mean in group Woman 
##            68.98889            52.10000
# printing the confidence interval
res$conf.int
## [1]  4.029759 29.748019
## attr(,"conf.level")
## [1] 0.95

2.2 Unpaired Two-Samples Wilcoxon Test (non-parametric)

Definition

The unpaired two-samples Wilcoxon test (also known as Wilcoxon rank sum test or Mann-Whitney test) is a non-parametric alternative to the unpaired two-samples t-test, which can be used to compare two independent groups of samples. It’s used when your data are not normally distributed.

R function to compute Wilcoxon test

To perform a two-samples Wilcoxon test comparing the means of two independent samples (x & y), the R function wilcox.test() can be used as follows:

wilcox.test(x, y, alternative = "two.sided")

  • x,y: numeric vectors
  • alternative: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

Visualize your data using box plots

Here, we’ll use an example data set, which contains the weight of 18 individuals (9 women and 9 men):

# Data in two numeric vectors
women_weight <- c(38.9, 61.2, 73.3, 21.8, 63.4, 64.6, 48.4, 48.8, 48.5)
men_weight <- c(67.8, 60, 63.4, 76, 89.4, 73.3, 67.3, 61.3, 62.4) 
# Create a data frame
my_data <- data.frame( 
                group = rep(c("Woman", "Man"), each = 9),
                weight = c(women_weight,  men_weight)
                )
group_by(my_data, group) %>%
  summarise(
    count = n(),
    median = median(weight, na.rm = TRUE),
    IQR = IQR(weight, na.rm = TRUE)
  )
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 2 x 4
##   group count median   IQR
##   <fct> <int>  <dbl> <dbl>
## 1 Man       9   67.3  10.9
## 2 Woman     9   48.8  15

We want to know whether the median women's weight differs from the median men's weight.

ggboxplot(my_data, x = "group", y = "weight", 
          color = "group", palette = c("#00AFBB", "#E7B800"),
          ylab = "Weight", xlab = "Groups")

Compute unpaired two-samples Wilcoxon test

Question : Is there any significant difference between women and men weights?

  1. Compute two-samples Wilcoxon test - Method 1: The data are saved in two different numeric vectors.
res <- wilcox.test(women_weight, men_weight)
## Warning in wilcox.test.default(women_weight, men_weight): cannot compute exact
## p-value with ties
res
## 
##  Wilcoxon rank sum test with continuity correction
## 
## data:  women_weight and men_weight
## W = 15, p-value = 0.02712
## alternative hypothesis: true location shift is not equal to 0

It will give a warning message, saying that "cannot compute exact p-value with ties". It comes from the assumption of a Wilcoxon test that the responses are continuous. You can suppress this message by adding another argument, exact = FALSE, but the result will be the same.
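
For example, the first call above can be rewritten as follows; the p-value is unchanged, only the warning disappears:

wilcox.test(women_weight, men_weight, exact = FALSE)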

  2. Compute two-samples Wilcoxon test - Method 2: The data are saved in a data frame.
res <- wilcox.test(weight ~ group, data = my_data,
                   exact = FALSE)
res
## 
##  Wilcoxon rank sum test with continuity correction
## 
## data:  weight by group
## W = 66, p-value = 0.02712
## alternative hypothesis: true location shift is not equal to 0
# Print the p-value only
res$p.value
## [1] 0.02711657

As you can see, the two methods give the same results.

The p-value of the test is 0.02712, which is less than the significance level alpha = 0.05. We can conclude that men’s median weight is significantly different from women’s median weight with a p-value = 0.02712.

Note that:

  • if you want to test whether the median men’s weight is less than the median women’s weight, type this:

wilcox.test(weight ~ group, data = my_data, exact = FALSE, alternative = "less")

  • Or, if you want to test whether the median men's weight is greater than the median women's weight, type this:

wilcox.test(weight ~ group, data = my_data, exact = FALSE, alternative = "greater")

3. Comparing the means of paired samples:

3.1 Paired Samples T-test (parametric)

Definition

The paired samples t-test is used to compare the means between two related groups of samples. In this case, you have two values (i.e., pair of values) for the same samples. This article describes how to compute paired samples t-test using R software.

As an example of data, 20 mice received a treatment X during 3 months. We want to know whether the treatment X has an impact on the weight of the mice.

To answer this question, the weight of the 20 mice has been measured before and after the treatment. This gives us 20 values before treatment and 20 values after treatment, obtained by measuring the weight of the same mice twice.

In such situations, paired t-test can be used to compare the mean weights before and after treatment.

The paired t-test analysis is performed as follows:

  1. Calculate the difference (d) between each pair of values
  2. Compute the mean (m) and the standard deviation (s) of d
  3. Compare the average difference to 0. If there is a significant difference between the two paired samples, then the mean of d (m) is expected to be far from 0.

Paired t-test can be used only when the difference d is normally distributed. This can be checked using Shapiro-Wilk test.

Research Questions and statistical hypotheses

Typical research questions are:

  1. whether the mean difference (m) is equal to 0?
  2. whether the mean difference (m) is less than 0?
  3. whether the mean difference (m) is greater than 0?

In statistics, we can define the corresponding null hypothesis (H0) as follows:

  1. H0:m=0
  2. H0:m≤0
  3. H0:m≥0

The corresponding alternative hypotheses (Ha) are as follows:

  1. Ha:m≠0 (different)
  2. Ha:m>0 (greater)
  3. Ha:m<0 (less)

Note that:

  • Hypotheses 1) are called two-tailed tests
  • Hypotheses 2) and 3) are called one-tailed tests

Paired t-test Formula

The t-statistic value can be calculated using the following formula:

t = m / (s/√n)

where,

  • m is the mean of the differences
  • n is the sample size (i.e., size of d).
  • s is the standard deviation of d

We can compute the p-value corresponding to the absolute value of the t-test statistics (|t|) for the degrees of freedom (df): df=n−1.

If the p-value is less than or equal to 0.05, we can conclude that the mean difference between the two paired samples is significantly different from 0.
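
Tying these steps to the formula, here is a minimal base-R sketch with made-up before/after values (illustration only); the result should agree with t.test(x_before, x_after, paired = TRUE).

# Hypothetical paired measurements for illustration only
x_before <- c(200.1, 190.9, 192.7, 213.0, 241.4)
x_after  <- c(392.9, 393.2, 345.1, 393.0, 434.0)
d <- x_before - x_after             # 1. difference for each pair
m <- mean(d)                        # 2. mean of the differences
s <- sd(d)                          #    standard deviation of the differences
n <- length(d)
t_stat  <- m / (s / sqrt(n))        # 3. t = m/(s/sqrt(n))
p_value <- 2 * pt(-abs(t_stat), df = n - 1)
# Compare with: t.test(x_before, x_after, paired = TRUE)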

Practical Examples in R

To perform a paired samples t-test comparing the means of two paired samples (x & y), the R function t.test() can be used as follows:

t.test(x, y, paired = TRUE, alternative = "two.sided")

  • x,y: numeric vectors
  • paired: a logical value specifying that we want to compute a paired t-test
  • alternative: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.
# Data in two numeric vectors
# Weight of the mice before treatment
before <-c(200.1, 190.9, 192.7, 213, 241.4, 196.9, 172.2, 185.5, 205.2, 193.7)
# Weight of the mice after treatment
after <-c(392.9, 393.2, 345.1, 393, 434, 427.9, 422, 383.9, 392.3, 352.2)
# Create a data frame
my_data <- data.frame( 
                group = rep(c("before", "after"), each = 10),
                weight = c(before,  after)
                )

We want to know whether there is any significant difference in the mean weights before and after treatment.

my_data %>% sample_n(5)
##    group weight
## 1  after  392.3
## 2  after  393.0
## 3 before  192.7
## 4 before  193.7
## 5 before  190.9
group_by(my_data, group) %>%
  summarise(
    count = n(),
    mean = mean(weight, na.rm = TRUE),
    sd = sd(weight, na.rm = TRUE)
  )
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 2 x 4
##   group  count  mean    sd
##   <fct>  <int> <dbl> <dbl>
## 1 after     10  394.  29.4
## 2 before    10  199.  18.5
ggboxplot(my_data, x = "group", y = "weight", 
          color = "group", palette = c("#00AFBB", "#E7B800"),
          order = c("before", "after"),
          ylab = "Weight", xlab = "Groups")

Box plots show you the increase, but lose the paired information. You can use the function plot.paired() [in the PairedData package] to plot paired data ("before - after" plot).

# Subset weight data before treatment
before <- subset(my_data,  group == "before", weight,
                 drop = TRUE)
# subset weight data after treatment
after <- subset(my_data,  group == "after", weight,
                 drop = TRUE)
# Plot paired data
library(PairedData)
pd <- paired(before, after)
plot(pd, type = "profile") + theme_bw()
  • Preliminary test to check paired t-test assumptions
  1. Assumption 1: Are the two samples paired?

Yes, since the data have been collected from measuring twice the weight of the same mice.

  2. Assumption 2: Is this a large sample?

No, because n < 30. Since the sample size is not large enough (less than 30), we need to check whether the differences of the pairs follow a normal distribution.

  3. How to check the normality?

Use the Shapiro-Wilk normality test as described at: Normality Test in R.
  - Null hypothesis: the data are normally distributed
  - Alternative hypothesis: the data are not normally distributed

# compute the difference
d <- with(my_data, 
        weight[group == "before"] - weight[group == "after"])
# Shapiro-Wilk normality test for the differences
shapiro.test(d) # => p-value = 0.6141
## 
##  Shapiro-Wilk normality test
## 
## data:  d
## W = 0.94536, p-value = 0.6141

From the output, the p-value is greater than the significance level 0.05, implying that the distribution of the differences (d) is not significantly different from the normal distribution. In other words, we can assume normality.

Note that, if the data are not normally distributed, it's recommended to use the non-parametric paired two-samples Wilcoxon test.

  • Compute paired samples t-test

Question: Are there any significant changes in the weights of mice after treatment?
  1. Compute paired t-test - Method 1: The data are saved in two different numeric vectors.
# Compute t-test
res <- t.test(before, after, paired = TRUE)
res
## 
##  Paired t-test
## 
## data:  before and after
## t = -20.883, df = 9, p-value = 0.0000000062
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -215.5581 -173.4219
## sample estimates:
## mean of the differences 
##                 -194.49
  2. Compute paired t-test - Method 2: The data are saved in a data frame.
# Compute t-test
res <- t.test(weight ~ group, data = my_data, paired = TRUE)
res
## 
##  Paired t-test
## 
## data:  weight by group
## t = 20.883, df = 9, p-value = 0.0000000062
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  173.4219 215.5581
## sample estimates:
## mean of the differences 
##                  194.49

As you can see, the two methods give the same results.

In the result above:

  • t is the t-test statistic value (t = 20.88),
  • df is the degrees of freedom (df = 9),
  • p-value is the significance level of the t-test (p-value = 6.2 × 10^{-9}).
  • conf.int is the confidence interval of the mean differences at 95% (conf.int = [173.42, 215.56]);
  • sample estimates is the mean difference between pairs (mean = 194.49).

Note that:

  • if you want to test whether the average weight before treatment is less than the average weight after treatment, type this:

t.test(weight ~ group, data = my_data, paired = TRUE, alternative = "less")

  • Or, if you want to test whether the average weight before treatment is greater than the average weight after treatment, type this:

t.test(weight ~ group, data = my_data, paired = TRUE, alternative = "greater")

Interpretation

The p-value of the test is 6.2 × 10^{-9}, which is less than the significance level alpha = 0.05. We can then reject the null hypothesis and conclude that the average weight of the mice before treatment is significantly different from the average weight after treatment, with a p-value = 6.2 × 10^{-9}.

Access to the values returned by the t.test() function

The result of the t.test() function is a list containing the following components:

  • statistic: the value of the t test statistics
  • parameter: the degrees of freedom for the t test statistics
  • p.value: the p-value for the test
  • conf.int: a confidence interval for the mean appropriate to the specified alternative hypothesis.
  • estimate: the means of the two groups being compared (in the case of independent t test) or difference in means (in the case of paired t test).

The format of the R code to use for getting these values is as follows:

# printing the p-value
res$p.value
## [1] 0.000000006200298
# printing the mean
res$estimate
## mean of the differences 
##                  194.49
# printing the confidence interval
res$conf.int
## [1] 173.4219 215.5581
## attr(,"conf.level")
## [1] 0.95

3.2 Paired Samples Wilcoxon Test (non-parametric)

Definition

The paired samples Wilcoxon test (also known as Wilcoxon signed-rank test) is a non-parametric alternative to paired t-test used to compare paired data. It’s used when your data are not normally distributed. This tutorial describes how to compute paired samples Wilcoxon test in R.

Differences between paired samples should be distributed symmetrically around the median.

Research Questions

Same as above paired t-test.

Practical Examples in R

The R function wilcox.test() can be used as follows:

wilcox.test(x, y, paired = TRUE, alternative = "two.sided")

  • x,y: numeric vectors
  • paired: a logical value specifying that we want to compute a paired Wilcoxon test
  • alternative: the alternative hypothesis. Allowed value is one of “two.sided” (default), “greater” or “less”.

Here, we’ll use an example data set, which contains the weight of 10 mice before and after the treatment.

# Data in two numeric vectors
# ++++++++++++++++++++++++++
# Weight of the mice before treatment
before <-c(200.1, 190.9, 192.7, 213, 241.4, 196.9, 172.2, 185.5, 205.2, 193.7)
# Weight of the mice after treatment
after <-c(392.9, 393.2, 345.1, 393, 434, 427.9, 422, 383.9, 392.3, 352.2)
# Create a data frame
my_data <- data.frame( 
                group = rep(c("before", "after"), each = 10),
                weight = c(before,  after)
                )

We want to know whether there is any significant difference in the median weights before and after treatment.

group_by(my_data, group) %>%
  summarise(
    count = n(),
    median = median(weight, na.rm = TRUE),
    IQR = IQR(weight, na.rm = TRUE)
  )
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 2 x 4
##   group  count median   IQR
##   <fct>  <int>  <dbl> <dbl>
## 1 after     10   393.  28.8
## 2 before    10   195.  12.6
ggboxplot(my_data, x = "group", y = "weight", 
          color = "group", palette = c("#00AFBB", "#E7B800"),
          order = c("before", "after"),
          ylab = "Weight", xlab = "Groups")

Box plots show you the increase, but lose the paired information. You can use the function plot.paired() [in the PairedData package] to plot paired data ("before - after" plot).

  • plot paired data:
# Subset weight data before treatment
before <- subset(my_data,  group == "before", weight,
                 drop = TRUE)
# subset weight data after treatment
after <- subset(my_data,  group == "after", weight,
                 drop = TRUE)
# Plot paired data
library(PairedData)
pd <- paired(before, after)
plot(pd, type = "profile") + theme_bw()
  • Compute paired-sample Wilcoxon test

Question: Are there any significant changes in the weights of mice before and after treatment?
  1. Compute paired Wilcoxon test - Method 1: The data are saved in two different numeric vectors.
res <- wilcox.test(before, after, paired = TRUE)
res
## 
##  Wilcoxon signed rank test
## 
## data:  before and after
## V = 0, p-value = 0.001953
## alternative hypothesis: true location shift is not equal to 0
  2. Compute paired Wilcoxon test - Method 2: The data are saved in a data frame.
# Compute paired Wilcoxon test
res <- wilcox.test(weight ~ group, data = my_data, paired = TRUE)
res
## 
##  Wilcoxon signed rank test
## 
## data:  weight by group
## V = 55, p-value = 0.001953
## alternative hypothesis: true location shift is not equal to 0

Interpretation

# print only the p-value
res$p.value
## [1] 0.001953125

As you can see, the two methods give the same results.

The p-value of the test is 0.001953, which is less than the significance level alpha = 0.05. We can conclude that the median weight of the mice before treatment is significantly different from the median weight after treatment with a p-value = 0.001953.

Note that:

  • if you want to test whether the median weight before treatment is less than the median weight after treatment, type this:

wilcox.test(weight ~ group, data = my_data, paired = TRUE, alternative = "less")

  • Or, if you want to test whether the median weight before treatment is greater than the median weight after treatment, type this:

wilcox.test(weight ~ group, data = my_data, paired = TRUE, alternative = "greater")

4. Comparing the means of more than two groups

Analysis of variance (ANOVA, parametric)

4.1 One-Way ANOVA Test in R

An extension of independent two-samples t-test for comparing means in a situation where there are more than two groups.

What is one-way ANOVA test?

The one-way analysis of variance (ANOVA), also known as one-factor ANOVA, is an extension of the independent two-samples t-test for comparing means in a situation where there are more than two groups. In one-way ANOVA, the data are organized into several groups based on a single grouping variable (also called a factor variable). This tutorial describes the basic principle of the one-way ANOVA test and provides practical ANOVA test examples in R software.

ANOVA test hypotheses:

  • Null hypothesis: the means of the different groups are the same
  • Alternative hypothesis: At least one sample mean is not equal to the others.

Note that, if you have only two groups, you can use the t-test. In this case, the F-test and the t-test are equivalent.
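
As a quick, hedged illustration of this equivalence, the following sketch uses two simulated groups (made-up data): the one-way ANOVA F value equals the square of the pooled two-sample t statistic, and the two p-values match.

# Simulated two-group data for illustration only
set.seed(123)
dat <- data.frame(
  group = rep(c("A", "B"), each = 15),
  y     = c(rnorm(15, mean = 10, sd = 2), rnorm(15, mean = 12, sd = 2))
)
t_res <- t.test(y ~ group, data = dat, var.equal = TRUE)
unname(t_res$statistic^2)           # equals the ANOVA F value
t_res$p.value                       # equals Pr(>F) in the ANOVA table
summary(aov(y ~ group, data = dat))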

Assumptions of ANOVA test

Here we describe the requirements of the ANOVA test. The ANOVA test can be applied only when:

  • The observations are obtained independently and randomly from the population defined by the factor levels
  • The data of each factor level are normally distributed.
  • These normal populations have a common variance. (Levene’s test can be used to check this.)

How does the one-way ANOVA test work?

Assume that we have 3 groups (A, B, C) to compare:

  1. Compute the common variance, which is called the variance within samples (S²within) or residual variance.
  2. Compute the variance between sample means as follows:
    • Compute the mean of each group
    • Compute the variance between sample means (S²between)
  3. Produce the F-statistic as the ratio S²between/S²within.

Note that a lower ratio (ratio < 1) indicates that there is no significant difference between the means of the samples being compared. However, a higher ratio implies that the variation among group means is significant.
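
To make the ratio concrete, here is a minimal base-R sketch that computes S²within, S²between and F by hand for the built-in PlantGrowth data used below; it should reproduce the F value reported by summary(aov(weight ~ group, data = PlantGrowth)).

# Manual computation of the one-way ANOVA F statistic (illustration only)
grand_mean  <- mean(PlantGrowth$weight)
group_means <- tapply(PlantGrowth$weight, PlantGrowth$group, mean)
group_sizes <- tapply(PlantGrowth$weight, PlantGrowth$group, length)
k <- length(group_means)            # number of groups
N <- nrow(PlantGrowth)              # total number of observations
# Variance between sample means (mean square between groups)
S2_between <- sum(group_sizes * (group_means - grand_mean)^2) / (k - 1)
# Variance within samples (residual mean square)
S2_within <- sum(tapply(PlantGrowth$weight, PlantGrowth$group,
                        function(x) sum((x - mean(x))^2))) / (N - k)
F_stat  <- S2_between / S2_within
p_value <- pf(F_stat, df1 = k - 1, df2 = N - k, lower.tail = FALSE)
# Compare with: summary(aov(weight ~ group, data = PlantGrowth))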

Visualize your data and compute one-way ANOVA in R

  • Visualize your data Here, we’ll use the built-in R data set named PlantGrowth. It contains the weight of plants obtained under a control and two different treatment conditions.
my_data <- PlantGrowth
my_data %>% sample_n(10)
##    weight group
## 1    4.32  trt1
## 2    3.59  trt1
## 3    5.18  ctrl
## 4    5.14  ctrl
## 5    4.89  trt1
## 6    5.12  trt2
## 7    4.81  trt1
## 8    4.50  ctrl
## 9    4.69  trt1
## 10   5.80  trt2

In R terminology, the column "group" is called a factor and the different categories ("ctrl", "trt1", "trt2") are named factor levels. The levels are ordered alphabetically.

levels(my_data$group)
## [1] "ctrl" "trt1" "trt2"

If the levels are not automatically in the correct order, re-order them as follows:

my_data$group <- ordered(my_data$group,
                         levels = c("ctrl", "trt1", "trt2"))
group_by(my_data, group) %>%
  summarise(
    count = n(),
    mean = mean(weight, na.rm = TRUE),
    sd = sd(weight, na.rm = TRUE)
  )
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 3 x 4
##   group count  mean    sd
##   <ord> <int> <dbl> <dbl>
## 1 ctrl     10  5.03 0.583
## 2 trt1     10  4.66 0.794
## 3 trt2     10  5.53 0.443
# Plot weight by group and color by group
ggboxplot(my_data, x = "group", y = "weight", 
          color = "group", palette = c("#00AFBB", "#E7B800", "#FC4E07"),
          order = c("ctrl", "trt1", "trt2"),
          ylab = "Weight", xlab = "Treatment")

# Mean plots
# ++++++++++++++++++++
# Plot weight by group
# Add error bars: mean_se
# (other values include: mean_sd, mean_ci, median_iqr, ....)

ggline(my_data, x = "group", y = "weight", 
       add = c("mean_se", "jitter"), 
       order = c("ctrl", "trt1", "trt2"),
       ylab = "Weight", xlab = "Treatment")

If you still want to use R base graphs, type the following scripts:

# Box plot
boxplot(weight ~ group, data = my_data,
        xlab = "Treatment", ylab = "Weight",
        frame = FALSE, col = c("#00AFBB", "#E7B800", "#FC4E07"))
# plotmeans
library("gplots")
plotmeans(weight ~ group, data = my_data, frame = FALSE,
          xlab = "Treatment", ylab = "Weight",
          main="Mean Plot with 95% CI") 
  • Compute one-way ANOVA test

We want to know if there is any significant difference between the average weights of plants in the 3 experimental conditions.

The R function aov() can be used to answer this question. The function summary.aov() is used to summarize the analysis of variance model.

# Compute the analysis of variance
res.aov <- aov(weight ~ group, data = my_data)
# Summary of the analysis
summary(res.aov)
##             Df Sum Sq Mean Sq F value Pr(>F)  
## group        2  3.766  1.8832   4.846 0.0159 *
## Residuals   27 10.492  0.3886                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The output includes the columns F value and Pr(>F) corresponding to the p-value of the test.

  • Interpret the result of one-way ANOVA tests

As the p-value is less than the significance level 0.05, we can conclude that there are significant differences between the groups highlighted with "*" in the model summary.

  • Multiple pairwise-comparison between the means of groups

In one-way ANOVA test, a significant p-value indicates that some of the group means are different, but we don’t know which pairs of groups are different.

It’s possible to perform multiple pairwise-comparison, to determine if the mean difference between specific pairs of group are statistically significant.

- Tukey multiple pairwise-comparisons

As the ANOVA test is significant, we can compute Tukey HSD (Tukey Honest Significant Differences, R function: TukeyHSD()) for performing multiple pairwise-comparison between the means of groups.

The function TukeyHSD() takes the fitted ANOVA as an argument.
TukeyHSD(res.aov)
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = weight ~ group, data = my_data)
## 
## $group
##             diff        lwr       upr     p adj
## trt1-ctrl -0.371 -1.0622161 0.3202161 0.3908711
## trt2-ctrl  0.494 -0.1972161 1.1852161 0.1979960
## trt2-trt1  0.865  0.1737839 1.5562161 0.0120064
  • diff: difference between means of the two groups
  • lwr, upr: the lower and the upper end point of the confidence interval at 95% (default)
  • p adj: p-value after adjustment for the multiple comparisons.

It can be seen from the output that only the difference between trt2 and trt1 is significant, with an adjusted p-value of 0.012.


- Multiple comparisons using the multcomp package

It's possible to use the function glht() [in the multcomp package] to perform multiple comparison procedures for an ANOVA. glht stands for general linear hypothesis tests. The simplified format is as follows:

```glht(model, linfct)```

- model: a fitted model, for example an object returned by aov().
- linfct: a specification of the linear hypotheses to be tested. Multiple comparisons in ANOVA models are specified by objects returned from the function mcp().

Use glht() to perform multiple pairwise-comparisons for a one-way ANOVA:

library(multcomp)
## Warning: package 'multcomp' was built under R version 3.6.2
## Loading required package: mvtnorm
## Warning: package 'mvtnorm' was built under R version 3.6.2
## Loading required package: survival
## Loading required package: TH.data
## Loading required package: MASS
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
## 
## Attaching package: 'TH.data'
## The following object is masked from 'package:MASS':
## 
##     geyser
summary(glht(res.aov, linfct = mcp(group = "Tukey")))
## 
##   Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: aov(formula = weight ~ group, data = my_data)
## 
## Linear Hypotheses:
##                  Estimate Std. Error t value Pr(>|t|)  
## trt1 - ctrl == 0  -0.3710     0.2788  -1.331   0.3908  
## trt2 - ctrl == 0   0.4940     0.2788   1.772   0.1979  
## trt2 - trt1 == 0   0.8650     0.2788   3.103   0.0121 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)
- Pairwise t-test
The function pairwise.t.test() can also be used to calculate pairwise comparisons between group levels with corrections for multiple testing.
pairwise.t.test(my_data$weight, my_data$group,
                 p.adjust.method = "BH")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  my_data$weight and my_data$group 
## 
##      ctrl  trt1 
## trt1 0.194 -    
## trt2 0.132 0.013
## 
## P value adjustment method: BH
The result is a table of p-values for the pairwise comparisons. Here, the p-values have been adjusted by the Benjamini-Hochberg method.
  • Check ANOVA assumptions: test validity?

The ANOVA test assumes that the data are normally distributed and that the variance across groups is homogeneous. We can check that with some diagnostic plots.

    • Check the homogeneity of variance assumption

    The residuals versus fits plot can be used to check the homogeneity of variances.

    In the plot below, there is no evident relationship between residuals and fitted values (the mean of each group), which is good. So, we can assume the homogeneity of variances.

# 1. Homogeneity of variances
plot(res.aov, 1)

Points 17, 15, 4 are detected as outliers, which can severely affect normality and homogeneity of variance. It can be useful to remove outliers to meet the test assumptions.

It’s also possible to use Bartlett’s test or Levene’s test to check the homogeneity of variances.

We recommend Levene’s test, which is less sensitive to departures from normal distribution. The     function leveneTest() [in car package] will be used:
library(car)
## Warning: package 'car' was built under R version 3.6.2
## Loading required package: carData
## Warning: package 'carData' was built under R version 3.6.2
## 
## Attaching package: 'car'
## The following object is masked from 'package:dplyr':
## 
##     recode
## The following object is masked from 'package:purrr':
## 
##     some
leveneTest(weight ~ group, data = my_data)
## Levene's Test for Homogeneity of Variance (center = median)
##       Df F value Pr(>F)
## group  2  1.1192 0.3412
##       27
From the output above we can see that the p-value is not less than the significance level of 0.05. This means that there is no evidence to suggest that the variance across groups is statistically significantly different. Therefore, we can assume the homogeneity of variances in the different treatment groups.

- Relaxing the homogeneity of variance assumption

The classical one-way ANOVA test requires an assumption of equal variances for all groups. In our example, the homogeneity of variance assumption turned out to be fine: the Levene test is not significant.

How do we save our ANOVA test, in a situation where the homogeneity of variance assumption is violated?

An alternative procedure (i.e., the Welch one-way test), which does not require this assumption, has been implemented in the function oneway.test().

  • ANOVA test with no assumption of equal variances:

```oneway.test(weight ~ group, data = my_data)```

  • Pairwise t-tests with no assumption of equal variances:

```pairwise.t.test(my_data$weight, my_data$group, p.adjust.method = "BH", pool.sd = FALSE)```

- Check the normality assumption

Normality plot of residuals. In the plot below, the quantiles of the residuals are plotted against the quantiles of the normal distribution. A 45-degree reference line is also plotted.

The normal probability plot of residuals is used to check the assumption that the residuals are normally distributed. It should approximately follow a straight line.
# 2. Normality
plot(res.aov, 2)

As all the points fall approximately along this reference line, we can assume normality.

The conclusion above is supported by the Shapiro-Wilk test on the ANOVA residuals (W = 0.97, p = 0.44), which finds no indication that normality is violated.
# Extract the residuals
aov_residuals <- residuals(object = res.aov )
# Run Shapiro-Wilk test
shapiro.test(x = aov_residuals )
## 
##  Shapiro-Wilk normality test
## 
## data:  aov_residuals
## W = 0.96607, p-value = 0.4379
  • Non-parametric alternative to one-way ANOVA test

Note that a non-parametric alternative to one-way ANOVA is the Kruskal-Wallis rank sum test, which can be used when ANOVA assumptions are not met.

kruskal.test(weight ~ group, data = my_data)
## 
##  Kruskal-Wallis rank sum test
## 
## data:  weight by group
## Kruskal-Wallis chi-squared = 7.9882, df = 2, p-value = 0.01842

4.2 Two-Way ANOVA Test in R

What is two-way ANOVA test?

Two-way ANOVA test is used to evaluate simultaneously the effect of two grouping variables (A and B) on a response variable.

The grouping variables are also known as factors. The different categories (groups) of a factor are called levels. The number of levels can vary between factors. The level combinations of factors are called cells.

  • When the sample sizes within cells are equal, we have the so-called balanced design. In this case the standard two-way ANOVA test can be applied.

  • When the sample sizes within each level of the independent variables are not the same (case of unbalanced designs), the ANOVA test should be handled differently.

Two-way ANOVA test hypotheses

  1. There is no difference in the means of factor A
  2. There is no difference in means of factor B
  3. There is no interaction between factors A and B

The alternative hypothesis for cases 1 and 2 is: the means are not equal.

The alternative hypothesis for case 3 is: there is an interaction between A and B.

Assumptions of two-way ANOVA test

Two-way ANOVA, like all ANOVA tests, assumes that the observations within each cell are normally distributed and have equal variances. We’ll show you how to check these assumptions after fitting ANOVA.

Compute two-way ANOVA test in R: balanced designs

Balanced designs correspond to the situation where we have equal sample sizes within the levels of our independent grouping variables.

- Visualize your data

Here, we’ll use the built-in R data set named ToothGrowth. It contains data from a study evaluating the effect of vitamin C on tooth growth in Guinea pigs. The experiment has been performed on 60 pigs, where each animal received one of three dose levels of vitamin C (0.5, 1, and 2 mg/day) by one of two delivery methods, (orange juice or ascorbic acid (a form of vitamin C and coded as VC). Tooth length was measured and a sample of the data is shown below.
my_data <- ToothGrowth
dplyr::sample_n(my_data, 10)
##     len supp dose
## 1  25.8   OJ  1.0
## 2  32.5   VC  2.0
## 3  15.2   OJ  0.5
## 4  17.3   VC  1.0
## 5  26.4   OJ  2.0
## 6  29.5   VC  2.0
## 7  10.0   VC  0.5
## 8  23.6   OJ  1.0
## 9  11.2   VC  0.5
## 10 18.5   VC  2.0
# Check the structure
str(my_data)
## 'data.frame':    60 obs. of  3 variables:
##  $ len : num  4.2 11.5 7.3 5.8 6.4 10 11.2 11.2 5.2 7 ...
##  $ supp: Factor w/ 2 levels "OJ","VC": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dose: num  0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 ...

From the output above, R considers "dose" as a numeric variable. We'll convert it to a factor variable (i.e., grouping variable) as follows.

# Convert dose as a factor and recode the levels
# as "D0.5", "D1", "D2"
my_data$dose <- factor(my_data$dose, 
                  levels = c(0.5, 1, 2),
                  labels = c("D0.5", "D1", "D2"))
head(my_data)
##    len supp dose
## 1  4.2   VC D0.5
## 2 11.5   VC D0.5
## 3  7.3   VC D0.5
## 4  5.8   VC D0.5
## 5  6.4   VC D0.5
## 6 10.0   VC D0.5

Question: We want to know if tooth length depends on supp and dose.

table(my_data$supp, my_data$dose)
##     
##      D0.5 D1 D2
##   OJ   10 10 10
##   VC   10 10 10

We have a 2×3 design with the factors supp and dose and 10 subjects in each cell. Here, we have a balanced design. In the next sections I'll describe how to analyse data from balanced designs, since this is the simplest case.

Box plots and line plots can be used to visualize group differences:

  • Box plot to plot the data grouped by the combinations of the levels of the two factors.
  • Two-way interaction plot, which plots the mean (or other summary) of the response for two-way combinations of factors, thereby illustrating possible interactions.

# Box plot with multiple groups
# Plot tooth length ("len") by groups ("dose")
# Color box plot by a second group: "supp"

ggboxplot(my_data, x = "dose", y = "len", color = "supp",
          palette = c("#00AFBB", "#E7B800"))

# Line plots with multiple groups
# +++++++++++++++++++++++
# Plot tooth length ("len") by groups ("dose")
# Color box plot by a second group: "supp"
# Add error bars: mean_se
# (other values include: mean_sd, mean_ci, median_iqr, ....)
ggline(my_data, x = "dose", y = "len", color = "supp",
       add = c("mean_se", "dotplot"),
       palette = c("#00AFBB", "#E7B800"))
## `stat_bindot()` using `bins = 30`. Pick better value with `binwidth`.

If you still want to use R base graphs, type the following scripts:

# Box plot with two factor variables
boxplot(len ~ supp * dose, data=my_data, frame = FALSE, 
        col = c("#00AFBB", "#E7B800"), ylab="Tooth Length")
# Two-way interaction plot
interaction.plot(x.factor = my_data$dose, trace.factor = my_data$supp, 
                 response = my_data$len, fun = mean, 
                 type = "b", legend = TRUE, 
                 xlab = "Dose", ylab="Tooth Length",
                 pch=c(1,19), col = c("#00AFBB", "#E7B800"))

Arguments used for the function interaction.plot():

  • x.factor: the factor to be plotted on the x axis.
  • trace.factor: the factor to be plotted as lines
  • response: a numeric variable giving the response
  • type: the type of plot. Allowed values include p (for point only), l (for line only) and b (for both point and line).

- Compute two-way ANOVA test
We want to know if tooth length depends on supp and dose.

The R function aov() can be used to answer this question. The function summary.aov() is used to summarize the analysis of variance model.
res.aov2 <- aov(len ~ supp + dose, data = my_data)
summary(res.aov2)
##             Df Sum Sq Mean Sq F value               Pr(>F)    
## supp         1  205.4   205.4   14.02             0.000429 ***
## dose         2 2426.4  1213.2   82.81 < 0.0000000000000002 ***
## Residuals   56  820.4    14.7                                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The output includes the columns F value and Pr(>F) corresponding to the p-value of the test.

From the ANOVA table we can conclude that both supp and dose are statistically significant. dose is the most significant factor variable. These results would lead us to believe that changing the delivery method (supp) or the dose of vitamin C will significantly impact the mean tooth length.

Note that the fitted model above is called an additive model. It assumes that the two factor variables are independent. If you think that these two variables might interact to create a synergistic effect, replace the plus symbol (+) with an asterisk (*), as follows.

# Two-way ANOVA with interaction effect
# These two calls are equivalent
res.aov3 <- aov(len ~ supp * dose, data = my_data)
res.aov3 <- aov(len ~ supp + dose + supp:dose, data = my_data)
summary(res.aov3)
##             Df Sum Sq Mean Sq F value               Pr(>F)    
## supp         1  205.4   205.4  15.572             0.000231 ***
## dose         2 2426.4  1213.2  92.000 < 0.0000000000000002 ***
## supp:dose    2  108.3    54.2   4.107             0.021860 *  
## Residuals   54  712.1    13.2                                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

It can be seen that the two main effects (supp and dose) are statistically significant, as well as their interaction.

Note that, when the interaction is not significant, you should use the simpler additive model.
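
If you are unsure whether the interaction term is needed, a quick way to decide is to compare the two fitted models with an F-test; this is a minimal sketch using the objects res.aov2 (additive) and res.aov3 (with interaction) defined above.

# Compare the additive model with the interaction model.
# A small p-value suggests the interaction term improves the fit.
anova(res.aov2, res.aov3)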

  • Interpret the results

From the ANOVA results, you can conclude the following, based on the p-values and a significance level of 0.05:

  • the p-value of supp is 0.000429 (significant), which indicates that the levels of supp are associated with significantly different tooth lengths.

  • the p-value of dose is < 2e-16 (significant), which indicates that the levels of dose are associated with significantly different tooth lengths.

  • the p-value for the supp:dose interaction is 0.02 (significant), which indicates that the relationship between dose and tooth length depends on the supp method.

  • Compute some summary statistics

    Compute the mean and SD by groups using the dplyr R package:
group_by(my_data, supp, dose) %>%
  summarise(
    count = n(),
    mean = mean(len, na.rm = TRUE),
    sd = sd(len, na.rm = TRUE)
  )
## `summarise()` regrouping output by 'supp' (override with `.groups` argument)
## # A tibble: 6 x 5
## # Groups:   supp [2]
##   supp  dose  count  mean    sd
##   <fct> <fct> <int> <dbl> <dbl>
## 1 OJ    D0.5     10 13.2   4.46
## 2 OJ    D1       10 22.7   3.91
## 3 OJ    D2       10 26.1   2.66
## 4 VC    D0.5     10  7.98  2.75
## 5 VC    D1       10 16.8   2.52
## 6 VC    D2       10 26.1   4.80
  • It’s also possible to use the function model.tables() as follows:

model.tables(res.aov3, type = "means", se = TRUE)

  • Multiple pairwise-comparison between the means of groups

In an ANOVA test, a significant p-value indicates that some of the group means are different, but we don’t know which pairs of groups are different.

It’s possible to perform multiple pairwise-comparisons to determine whether the mean differences between specific pairs of groups are statistically significant.

  • Tukey multiple pairwise-comparisons

As the ANOVA test is significant, we can compute Tukey HSD (Tukey Honest Significant Differences, R function: TukeyHSD()) to perform multiple pairwise-comparisons between the group means. The function TukeyHSD() takes the fitted ANOVA as an argument.

We don’t need to perform the test for the “supp” variable because it has only two levels, which have already been shown to be significantly different by the ANOVA test. Therefore, the Tukey HSD test will be done only for the factor variable “dose”.

TukeyHSD(res.aov3, which = "dose")
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = len ~ supp + dose + supp:dose, data = my_data)
## 
## $dose
##           diff       lwr       upr     p adj
## D1-D0.5  9.130  6.362488 11.897512 0.0000000
## D2-D0.5 15.495 12.727488 18.262512 0.0000000
## D2-D1    6.365  3.597488  9.132512 0.0000027
  • diff: difference between means of the two groups
  • lwr, upr: the lower and the upper end point of the confidence interval at 95% (default)
  • p adj: p-value after adjustment for the multiple comparisons.

It can be seen from the output that all pairwise comparisons are significant, with an adjusted p-value < 0.05.
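
As a quick visual check, base R can also plot these Tukey confidence intervals; any interval that does not cross zero corresponds to a significant pairwise difference.

# Plot the 95% family-wise confidence intervals for the dose comparisons
plot(TukeyHSD(res.aov3, which = "dose"))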

  • Multiple comparisons using the multcomp package

It’s possible to use the function glht() [in the multcomp package] to perform multiple comparison procedures for an ANOVA. glht stands for general linear hypothesis tests. The simplified format is as follows:

glht(model, linfct)

  • model: a fitted model, for example an object returned by aov().

  • linfct: a specification of the linear hypotheses to be tested. Multiple comparisons in ANOVA models are specified by objects returned from the function mcp().

Use glht() to perform multiple pairwise-comparisons:

library(multcomp)

summary(glht(res.aov2, linfct = mcp(dose = "Tukey")))
## 
##   Simultaneous Tests for General Linear Hypotheses
## 
## Multiple Comparisons of Means: Tukey Contrasts
## 
## 
## Fit: aov(formula = len ~ supp + dose, data = my_data)
## 
## Linear Hypotheses:
##                Estimate Std. Error t value  Pr(>|t|)    
## D1 - D0.5 == 0    9.130      1.210   7.543 < 0.00001 ***
## D2 - D0.5 == 0   15.495      1.210  12.802 < 0.00001 ***
## D2 - D1 == 0      6.365      1.210   5.259 0.0000105 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)
  • Pairwise t-test

The function pairwise.t.test() can also be used to calculate pairwise comparisons between group levels with corrections for multiple testing.

pairwise.t.test(my_data$len, my_data$dose,
                p.adjust.method = "BH")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  my_data$len and my_data$dose 
## 
##    D0.5                D1                 
## D1 0.00000001004587510 -                  
## D2 0.00000000000000044 0.00001442603487380
## 
## P value adjustment method: BH
  • Check ANOVA assumptions: test validity

ANOVA assumes that the data are normally distributed and that the variance across groups is homogeneous. We can check this with some diagnostic plots.

  • Check the homogeneity of variance assumption

The residuals versus fits plot is used to check the homogeneity of variances. In the plot below, there is no evident relationship between residuals and fitted values (the mean of each group), which is good. So, we can assume homogeneity of variances.
# 1. Homogeneity of variances
plot(res.aov3, 1)

Points 32 and 23 are detected as outliers, which can severely affect normality and homogeneity of variance. It can be useful to remove outliers to meet the test assumptions.
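
As an illustration only (whether to drop observations is a judgment call), the model could be refitted without the flagged rows and the diagnostic plot re-examined; the object name res.aov3_noout below is hypothetical.

# Refit the two-way ANOVA without observations 23 and 32 (the flagged outliers)
res.aov3_noout <- aov(len ~ supp * dose, data = my_data[-c(23, 32), ])
plot(res.aov3_noout, 1)   # re-check the residuals vs fitted plot
summary(res.aov3_noout)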

Use Levene’s test to check the homogeneity of variances. The function leveneTest() [in the car package] will be used:

library(car)
leveneTest(len ~ supp*dose, data = my_data)
## Levene's Test for Homogeneity of Variance (center = median)
##       Df F value Pr(>F)
## group  5  1.7086 0.1484
##       54

From the output above we can see that the p-value is not less than the significance level of 0.05. This means that there is no evidence to suggest that the variance across groups is statistically significantly different. Therefore, we can assume the homogeneity of variances in the different treatment groups.

  • Check the normality assumption

Normality plot of the residuals: in the plot below, the quantiles of the residuals are plotted against the quantiles of the normal distribution, with a 45-degree reference line. This normal probability plot is used to verify the assumption that the residuals are normally distributed; the points should approximately follow the reference line.

# 2. Normality
plot(res.aov3, 2)

As all the points fall approximately along this reference line, we can assume normality. This conclusion is supported by the Shapiro-Wilk test on the ANOVA residuals (W = 0.98, p = 0.67), which finds no indication that normality is violated.

# Extract the residuals
aov_residuals <- residuals(object = res.aov3)
# Run Shapiro-Wilk test
shapiro.test(x = aov_residuals )
## 
##  Shapiro-Wilk normality test
## 
## data:  aov_residuals
## W = 0.98499, p-value = 0.6694

Compute two-way ANOVA test in R for unbalanced designs

An unbalanced design has unequal numbers of subjects in each group.

There are three fundamentally different ways to run an ANOVA on an unbalanced design, known as Type-I, Type-II and Type-III sums of squares. To keep things simple, note that the recommended method is the Type-III sums of squares.

The three methods give the same result when the design is balanced. However, when the design is unbalanced, they don’t give the same results.

The function Anova() [in car package] can be used to compute two-way ANOVA test for unbalanced designs.

library(car)
my_anova <- aov(len ~ supp * dose, data = my_data)
Anova(my_anova, type = "III")
## Anova Table (Type III tests)
## 
## Response: len
##              Sum Sq Df F value                Pr(>F)    
## (Intercept) 1750.33  1 132.730 0.0000000000000003603 ***
## supp         137.81  1  10.450              0.002092 ** 
## dose         885.26  2  33.565 0.0000000003363189827 ***
## supp:dose    108.32  2   4.107              0.021860 *  
## Residuals    712.11 54                                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
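
One caveat that is often raised: Type-III tests are conventionally computed with sum-to-zero contrasts, and with R’s default treatment contrasts the main-effect rows of a Type-III table can be hard to interpret. A minimal sketch (the object name my_anova2 is hypothetical; car is assumed to be loaded as above):

# Use sum-to-zero contrasts before fitting, then request Type-III tests
options(contrasts = c("contr.sum", "contr.poly"))
my_anova2 <- aov(len ~ supp * dose, data = my_data)
Anova(my_anova2, type = "III")
# Restore R's default contrasts afterwards
options(contrasts = c("contr.treatment", "contr.poly"))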

5. MANOVA Test in R: Multivariate Analysis of Variance

What is MANOVA test?

In the situation where there are multiple response variables, you can test them simultaneously using a multivariate analysis of variance (MANOVA). This section describes how to compute MANOVA in R.

For example, we may conduct an experiment where we give two treatments (A and B) to two groups of mice, and we are interested in the weight and height of mice. In that case, the weight and height of mice are two dependent variables, and our hypothesis is that both together are affected by the difference in treatment. A multivariate analysis of variance could be used to test this hypothesis.

Assumptions of MANOVA

MANOVA can be used in certain conditions:

  • The dependent variables should be normally distributed within groups. The R function mshapiro.test() [in the mvnormtest package] can be used to perform the Shapiro-Wilk test for multivariate normality. This is useful for MANOVA, which assumes multivariate normality (see the sketch after this list).

  • Homogeneity of variances across the range of predictors.

  • Linearity between all pairs of dependent variables, all pairs of covariates, and all dependent variable-covariate pairs in each cell
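
As an illustration of the first assumption, here is a minimal sketch using the iris data analysed below (assuming the mvnormtest package is installed; note that mshapiro.test() expects a matrix with variables in rows, hence the transpose):

# Shapiro-Wilk test for multivariate normality of the two response variables
# (strictly, the assumption concerns normality within each group / of the residuals)
library(mvnormtest)
resp <- as.matrix(iris[, c("Sepal.Length", "Petal.Length")])
mshapiro.test(t(resp))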

Interpretation of MANOVA

If the global multivariate test is significant, we conclude that the corresponding effect (treatment) is significant. In that case, the next question is to determine if the treatment affects only the weight, only the height or both. In other words, we want to identify the specific dependent variables that contributed to the significant global effect.

To answer this question, we can use one-way ANOVA (i.e., univariate ANOVA) to examine each dependent variable separately.

Compute MANOVA in R

Here, we’ll use iris data set:

# Store the data in the variable my_data
my_data <- iris

my_data %>% sample_n(10)
##    Sepal.Length Sepal.Width Petal.Length Petal.Width    Species
## 1           5.5         2.5          4.0         1.3 versicolor
## 2           5.2         2.7          3.9         1.4 versicolor
## 3           5.7         4.4          1.5         0.4     setosa
## 4           6.4         3.2          5.3         2.3  virginica
## 5           5.0         2.3          3.3         1.0 versicolor
## 6           5.4         3.9          1.7         0.4     setosa
## 7           6.0         3.4          4.5         1.6 versicolor
## 8           6.8         3.2          5.9         2.3  virginica
## 9           4.4         3.0          1.3         0.2     setosa
## 10          7.7         3.8          6.7         2.2  virginica

Question: We want to know if there is any significant difference, in sepal and petal length, between the different species.

The function manova() can be used as follows:

sepl <- iris$Sepal.Length
petl <- iris$Petal.Length
# MANOVA test
res.man <- manova(cbind(Sepal.Length, Petal.Length) ~ Species, data = iris)
summary(res.man)
##            Df Pillai approx F num Df den Df                Pr(>F)    
## Species     2 0.9885   71.829      4    294 < 0.00000000000000022 ***
## Residuals 147                                                        
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Look to see which differ
summary.aov(res.man)
##  Response Sepal.Length :
##              Df Sum Sq Mean Sq F value                Pr(>F)    
## Species       2 63.212  31.606  119.26 < 0.00000000000000022 ***
## Residuals   147 38.956   0.265                                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response Petal.Length :
##              Df Sum Sq Mean Sq F value                Pr(>F)    
## Species       2 437.10 218.551  1180.2 < 0.00000000000000022 ***
## Residuals   147  27.22   0.185                                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

From the output above, it can be seen that the two variables are highly significantly different among Species.
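
Note that summary() reports Pillai’s trace by default; other classical MANOVA statistics can be requested through the test argument, for example:

# Request Wilks' lambda instead of Pillai's trace
# (other options: "Hotelling-Lawley", "Roy")
summary(res.man, test = "Wilks")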

6. Kruskal-Wallis Test in R (non parametric alternative to one-way ANOVA)

What is Kruskal-Wallis test?

The Kruskal-Wallis test by rank is a non-parametric alternative to the one-way ANOVA test, extending the two-samples Wilcoxon test to situations with more than two groups. It’s recommended when the assumptions of the one-way ANOVA test are not met.

Visualize your data and compute Kruskal-Wallis test in R

Here, we’ll use the built-in R data set named PlantGrowth. It contains the weight of plants obtained under a control and two different treatment conditions.

my_data <- PlantGrowth

In R terminology, the column “group” is called a factor and the different categories (“ctrl”, “trt1”, “trt2”) are called factor levels. The levels are ordered alphabetically.

# Show the group levels
levels(my_data$group)
## [1] "ctrl" "trt1" "trt2"

If the levels are not automatically in the correct order, re-order them as follows:

my_data$group <- ordered(my_data$group,
                         levels = c("ctrl", "trt1", "trt2"))

group_by(my_data, group) %>%
  summarise(
    count = n(),
    mean = mean(weight, na.rm = TRUE),
    sd = sd(weight, na.rm = TRUE),
    median = median(weight, na.rm = TRUE),
    IQR = IQR(weight, na.rm = TRUE)
  )
## `summarise()` ungrouping output (override with `.groups` argument)
## # A tibble: 3 x 6
##   group count  mean    sd median   IQR
##   <ord> <int> <dbl> <dbl>  <dbl> <dbl>
## 1 ctrl     10  5.03 0.583   5.15 0.743
## 2 trt1     10  4.66 0.794   4.55 0.662
## 3 trt2     10  5.53 0.443   5.44 0.467
  • Visualize the data using box plots
# Box plots
# ++++++++++++++++++++
# Plot weight by group and color by group

ggboxplot(my_data, x = "group", y = "weight", 
          color = "group", palette = c("#00AFBB", "#E7B800", "#FC4E07"),
          order = c("ctrl", "trt1", "trt2"),
          ylab = "Weight", xlab = "Treatment")

# Mean plots
# ++++++++++++++++++++
# Plot weight by group
# Add error bars: mean_se
# (other values include: mean_sd, mean_ci, median_iqr, ....)

ggline(my_data, x = "group", y = "weight", 
       add = c("mean_se", "jitter"), 
       order = c("ctrl", "trt1", "trt2"),
       ylab = "Weight", xlab = "Treatment")

  • Compute Kruskal-Wallis test

We want to know if there is any significant difference between the average weights of plants in the 3 experimental conditions.

The test can be performed using the function kruskal.test() as follows:

kruskal.test(weight ~ group, data = my_data)
## 
##  Kruskal-Wallis rank sum test
## 
## data:  weight by group
## Kruskal-Wallis chi-squared = 7.9882, df = 2, p-value = 0.01842
  • Interpret

As the p-value is less than the significance level 0.05, we can conclude that there are significant differences between the treatment groups.

  • Multiple pairwise-comparison between groups

From the output of the Kruskal-Wallis test, we know that there is a significant difference between groups, but we don’t know which pairs of groups are different.

It’s possible to use the function pairwise.wilcox.test() to calculate pairwise comparisons between group levels with corrections for multiple testing.

pairwise.wilcox.test(PlantGrowth$weight, PlantGrowth$group,
                 p.adjust.method = "BH")
## Warning in wilcox.test.default(xi, xj, paired = paired, ...): cannot compute
## exact p-value with ties
## 
##  Pairwise comparisons using Wilcoxon rank sum test 
## 
## data:  PlantGrowth$weight and PlantGrowth$group 
## 
##      ctrl  trt1 
## trt1 0.199 -    
## trt2 0.095 0.027
## 
## P value adjustment method: BH

The pairwise comparison shows that only trt1 and trt2 are significantly different (p < 0.05).
