The tables above provide the distributions of respondents by sex and section. There are 52 females and 48 males, and respondents are evenly distributed across sections, with 20 students in each section.
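As a minimal sketch, counts like these could be reproduced with dplyr, assuming the data frame is named Data and has Sex and Section columns (names inferred from the output that follows):

library(dplyr)

# Frequency of respondents by sex and by section
Data %>% count(Sex)
Data %>% count(Section)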
Call:
lm(formula = `Students Engagement` ~ `Teaching Method` + `Teaching Quality`,
data = Data)
Coefficients:
       (Intercept)   `Teaching Method`  `Teaching Quality`
           0.42896             0.09961             0.77676
From this, we may deduce that the data fail to satisfy two assumptions: linearity and homogeneity of variance.
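A hedged sketch of how the model above and the diagnostic plots behind this conclusion might be produced; the data frame name Data and the backticked column names are taken from the Call shown above, while the choice of diagnostic plots is an assumption:

# Multiple linear regression of engagement on method and quality
fit <- lm(`Students Engagement` ~ `Teaching Method` + `Teaching Quality`, data = Data)

# Diagnostic plots used to judge linearity and homogeneity of variance
plot(fit, which = 1)  # Residuals vs Fitted (linearity)
plot(fit, which = 3)  # Scale-Location (constant variance)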
The means for males and females are 2.938 and 3.062, respectively.
The graph above plots the data by sex, with two groups: male and female.
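A sketch of how the grouped means and the plot could be obtained, assuming the outcome in this first comparison is `Teaching Method` (inferred from the interpretation that follows):

library(dplyr)
library(ggplot2)

# Mean score by sex
Data %>%
  group_by(Sex) %>%
  summarise(mean_score = mean(`Teaching Method`), .groups = "drop")

# Distribution of scores by sex
ggplot(Data, aes(x = Sex, y = `Teaching Method`, fill = Sex)) +
  geom_boxplot()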
The plot suggests a difference in the impact of collaborative learning, in terms of teaching method, between male and female respondents.
As seen above, the histogram does not resemble a bell curve, which indicates that the residuals do not follow a normal distribution. Moreover, the points in the QQ-plot do not follow the straight line, with the majority of them falling outside the confidence bands.
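A sketch of how these diagnostics might be generated; the object name res_aov comes from the Shapiro-Wilk output below, while the one-way formula is an assumption:

library(car)

# One-way ANOVA of Teaching Method scores by sex (formula assumed)
res_aov <- aov(`Teaching Method` ~ Sex, data = Data)

# Histogram of residuals and QQ-plot with confidence bands
hist(res_aov$residuals, main = "Histogram of residuals")
qqPlot(res_aov$residuals, id = FALSE)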
Shapiro-Wilk normality test
data: res_aov$residuals
W = 0.82675, p-value = 1.795e-09
The Shapiro-Wilk p-value on the residuals (1.795e-09) is less than the usual 0.05 significance level. Thus, we reject the hypothesis that the residuals follow a normal distribution.
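The corresponding call, using the res_aov object sketched above, would be:

# Shapiro-Wilk test of normality on the ANOVA residuals
shapiro.test(res_aov$residuals)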
Levene's Test for Homogeneity of Variance (center = median)
      Df F value  Pr(>F)
group  1  2.8699 0.09343 .
      98
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The p-value (0.09343) is greater than the 0.05 level of significance; thus, the homogeneity-of-variance assumption is met.
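The test above matches the median-centered version of car::leveneTest; a sketch under the same column-name assumptions:

library(car)

# Levene (Brown-Forsythe) test of equal variances across the two sexes
leveneTest(`Teaching Method` ~ Sex, data = Data, center = median)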
Wilcoxon rank sum test
data: a and b
W = 1117, p-value = 0.3383
alternative hypothesis: true location shift is not equal to 0
Since the p-value (0.3383) is greater than 0.05, we fail to reject the null hypothesis; that is, there is no significant difference in the impact of collaborative learning, in terms of teaching method, when respondents are grouped by sex.
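A sketch of the test call; the vectors a and b named in the output presumably hold the scores of the two sexes (the subsetting below is an assumption):

# Teaching Method scores split by sex (vector names taken from the output above)
a <- Data$`Teaching Method`[Data$Sex == "Female"]
b <- Data$`Teaching Method`[Data$Sex == "Male"]

# Wilcoxon rank sum (Mann-Whitney) test, used because normality was rejected
wilcox.test(a, b)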
# A tibble: 19 × 3
# Groups: Sex [2]
Sex `Teaching Quality` count
<fct> <dbl> <int>
1 Female 2.4 1
2 Female 2.6 4
3 Female 2.8 5
4 Female 3 28
5 Female 3.2 8
6 Female 3.4 3
7 Female 3.6 2
8 Female 3.8 1
9 Male 1.6 1
10 Male 2 1
11 Male 2.4 2
12 Male 2.6 1
13 Male 2.8 6
14 Male 3 20
15 Male 3.2 6
16 Male 3.4 3
17 Male 3.6 5
18 Male 3.8 1
19 Male 4 2
The means for males and females are 3.062 and 3.031, respectively.
The graph above plots the data by sex, with two groups: male and female.
As seen above, the histogram does not resemble a bell curve, which indicates that the residuals do not follow a normal distribution. Moreover, the points in the QQ-plot do not follow the straight line, with the majority of them falling outside the confidence bands.
Shapiro-Wilk normality test
data: res_aov$residuals
W = 0.89161, p-value = 5.819e-07
The Shapiro-Wilk p-value on the residuals (5.819e-07) is less than the usual 0.05 significance level. Thus, we reject the hypothesis that the residuals follow a normal distribution.
Levene's Test for Homogeneity of Variance (center = median)
      Df F value  Pr(>F)
group  1   5.036 0.02708 *
      98
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The p-value (0.02708) is less than the 0.05 level of significance; thus, the homogeneity-of-variance assumption is not met.
Wilcoxon rank sum test with continuity correction
data: c and d
W = 1329, p-value = 0.5549
alternative hypothesis: true location shift is not equal to 0
Since the p-value (0.5549) is greater than 0.05, we fail to reject the null hypothesis. Hence, there is no significant difference in the impact of collaborative learning, in terms of teaching quality, when respondents are grouped by sex.
# A tibble: 20 × 3
# Groups: Sex [2]
Sex `Students Engagement` count
<fct> <dbl> <int>
1 Female 1.2 1
2 Female 2.4 1
3 Female 2.6 3
4 Female 2.8 5
5 Female 3 19
6 Female 3.2 8
7 Female 3.4 8
8 Female 3.6 2
9 Female 3.8 4
10 Female 4 1
11 Male 1.4 2
12 Male 2.2 1
13 Male 2.6 2
14 Male 2.8 4
15 Male 3 20
16 Male 3.2 3
17 Male 3.4 9
18 Male 3.6 4
19 Male 3.8 1
20 Male 4 2
The means for males and females are 3.079 and 3.108, respectively.
The graph above plots the data by sex, with two groups: male and female.
As seen above, the histogram does not resemble a bell curve, which suggests that the residuals do not follow a normal distribution. The points in the QQ-plot, however, roughly follow the straight line, with the majority falling within the confidence bands. Since the two plots point in opposite directions, neither alone settles the question of normality; it is safer to consider both together with a formal test.
Shapiro-Wilk normality test
data: res_aov$residuals
W = 0.86068, p-value = 3.004e-08
The Shapiro-Wilk p-value on the residuals (3.004e-08) is less than the usual 0.05 significance level. Thus, we reject the hypothesis that the residuals follow a normal distribution.
Levene's Test for Homogeneity of Variance (center = median)
      Df F value Pr(>F)
group  1  0.1447 0.7044
      98
The p-value (0.7044) is greater than the 0.05 level of significance; thus, the homogeneity-of-variance assumption is met.
Wilcoxon rank sum test with continuity correction
data: e and f
W = 1234, p-value = 0.9232
alternative hypothesis: true location shift is not equal to 0
Since the p-value (0.9232) is greater than 0.05, we fail to reject the null hypothesis. Hence, there is no significant difference in the impact of collaborative learning, in terms of students' engagement, when respondents are grouped by sex.
Shapiro-Wilk normality test
data: Data$`Teaching Method`
W = 0.78362, p-value = 8.014e-11
Since the p-value = 8.014e-11 < 0.05, we reject the null hypothesis; that is, we cannot assume normality.
Levene's Test for Homogeneity of Variance (center = median)
      Df F value Pr(>F)
group  4  0.5702 0.6849
      95
The p-value (0.6849) is greater than the 0.05 level of significance; thus, the homogeneity-of-variance assumption is met.
# A tibble: 5 × 11
Section variable n min max median iqr mean sd se ci
<fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Humility Teaching Meth… 20 2.8 4 3.1 0.25 3.14 0.291 0.065 0.136
2 Chastity Teaching Meth… 20 1.2 3.8 3 0.05 2.94 0.495 0.111 0.231
3 Modesty Teaching Meth… 20 2.6 3.4 3 0.2 2.97 0.254 0.057 0.119
4 Loyalty Teaching Meth… 20 1 3.8 3 0.2 2.83 0.603 0.135 0.282
5 Honesty Teaching Meth… 20 2.2 3.8 3 0.25 3.13 0.333 0.074 0.156
The means for Humility, Chastity, Modesty, Loyalty, and Honesty are 3.14, 2.94, 2.97, 2.83, and 3.13, respectively.
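The table above has the column layout of rstatix::get_summary_stats; a sketch, assuming the grouping column is named Section:

library(dplyr)
library(rstatix)

# Descriptive statistics of Teaching Method per section (n, min, max, median, iqr, mean, sd, se, ci)
Data %>%
  group_by(Section) %>%
  get_summary_stats(`Teaching Method`, type = "common")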
# A tibble: 1 × 6
.y. n statistic df p method
* <chr> <int> <dbl> <int> <dbl> <chr>
1 Teaching Method 100 8.25 4 0.0828 Kruskal-Wallis
Based on the p-value (0.0828 > 0.05), no significant difference was observed among the groups.
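The result above is in the format returned by rstatix::kruskal_test; a sketch:

library(rstatix)

# Kruskal-Wallis rank sum test of Teaching Method across the five sections
kruskal_test(Data, `Teaching Method` ~ Section)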
# A tibble: 10 × 9
.y. group1 group2 n1 n2 statistic p p.adj p.adj.signif
* <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl> <chr>
1 Teaching Method Humil… Chast… 20 20 -1.31 0.189 1 ns
2 Teaching Method Humil… Modes… 20 20 -1.78 0.0753 0.753 ns
3 Teaching Method Humil… Loyal… 20 20 -2.07 0.0382 0.382 ns
4 Teaching Method Humil… Hones… 20 20 0.127 0.899 1 ns
5 Teaching Method Chast… Modes… 20 20 -0.465 0.642 1 ns
6 Teaching Method Chast… Loyal… 20 20 -0.759 0.448 1 ns
7 Teaching Method Chast… Hones… 20 20 1.44 0.150 1 ns
8 Teaching Method Modes… Loyal… 20 20 -0.294 0.768 1 ns
9 Teaching Method Modes… Hones… 20 20 1.91 0.0567 0.567 ns
10 Teaching Method Loyal… Hones… 20 20 2.20 0.0278 0.278 ns
Based on the unadjusted p-values, differences are suggested between Humility and Loyalty (p = 0.0382) and between Loyalty and Honesty (p = 0.0278); after adjustment, however, none of the pairwise comparisons is significant (all p.adj > 0.05).
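The pairwise table matches the columns produced by rstatix::dunn_test; the exact post-hoc procedure and adjustment method are not shown, so the following call is a guess (the tenfold gap between p and p.adj suggests a Bonferroni correction over the ten comparisons):

library(rstatix)

# Pairwise Dunn tests between sections with Bonferroni-adjusted p-values (assumed)
dunn_test(Data, `Teaching Method` ~ Section, p.adjust.method = "bonferroni")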
Shapiro-Wilk normality test
data: Data$`Teaching Quality`
W = 0.87803, p-value = 1.499e-07
Since the p-value = 1.499e-07 < 0.05, we reject the null hypothesis; that is, we cannot assume normality.
Levene's Test for Homogeneity of Variance (center = median)
      Df F value Pr(>F)
group  4  0.1143 0.9772
      95
The p-value (0.9772) is greater than the 0.05 level of significance; thus, the homogeneity-of-variance assumption is met.
# A tibble: 5 × 11
Section variable n min max median iqr mean sd se ci
<fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Humility Teaching Qual… 20 2.4 3.8 3 0.25 3.05 0.324 0.072 0.151
2 Chastity Teaching Qual… 20 1.6 3.6 3 0.2 2.99 0.418 0.093 0.196
3 Modesty Teaching Qual… 20 2.6 3.6 3 0.25 3.11 0.263 0.059 0.123
4 Loyalty Teaching Qual… 20 2 4 3 0.05 3.01 0.428 0.096 0.2
5 Honesty Teaching Qual… 20 2.6 4 3 0.05 3.07 0.333 0.074 0.156
The means for Humility, Chastity, Modesty, Loyalty, and Honesty are 3.05, 2.99, 3.11, 3.01, and 3.07, respectively.
# A tibble: 1 × 6
.y. n statistic df p method
* <chr> <int> <dbl> <int> <dbl> <chr>
1 Teaching Quality 100 2.07 4 0.723 Kruskal-Wallis
Based on the p-value (0.723 > 0.05), no significant difference was observed among the groups.
# A tibble: 10 × 9
.y. group1 group2 n1 n2 statistic p p.adj p.adj.signif
* <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl> <chr>
1 Teaching Quality Humil… Chast… 20 20 -0.397 0.691 1 ns
2 Teaching Quality Humil… Modes… 20 20 0.382 0.702 1 ns
3 Teaching Quality Humil… Loyal… 20 20 -0.976 0.329 1 ns
4 Teaching Quality Humil… Hones… 20 20 -0.414 0.679 1 ns
5 Teaching Quality Chast… Modes… 20 20 0.779 0.436 1 ns
6 Teaching Quality Chast… Loyal… 20 20 -0.579 0.562 1 ns
7 Teaching Quality Chast… Hones… 20 20 -0.0174 0.986 1 ns
8 Teaching Quality Modes… Loyal… 20 20 -1.36 0.174 1 ns
9 Teaching Quality Modes… Hones… 20 20 -0.797 0.426 1 ns
10 Teaching Quality Loyal… Hones… 20 20 0.562 0.574 1 ns
No significant differences were found in any of the pairwise comparisons.
Shapiro-Wilk normality test
data: Data$`Students Engagement`
W = 0.85083, p-value = 1.273e-08
Since the p-value = 1.273e-08 < 0.05, we reject the null hypothesis; that is, we cannot assume normality.
Levene's Test for Homogeneity of Variance (center = median)
      Df F value Pr(>F)
group  4   0.923  0.454
      95
The p-value (0.454) is greater than the 0.05 level of significance; thus, the homogeneity-of-variance assumption is met.
# A tibble: 5 × 11
Section variable n min max median iqr mean sd se ci
<fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Humility Students Enga… 20 1.2 3.8 3.2 0.4 3.14 0.555 0.124 0.26
2 Chastity Students Enga… 20 1.4 3.8 3 0.45 3.05 0.527 0.118 0.246
3 Modesty Students Enga… 20 2.4 3.6 3 0.2 3.08 0.255 0.057 0.119
4 Loyalty Students Enga… 20 1.4 4 3 0.3 2.95 0.519 0.116 0.243
5 Honesty Students Enga… 20 2.8 4 3.1 0.4 3.25 0.378 0.084 0.177
The means for Humility, Chastity, Modesty, Loyalty, and Honesty are 3.14, 3.05, 3.08, 2.95, and 3.25, respectively.
# A tibble: 1 × 6
.y. n statistic df p method
* <chr> <int> <dbl> <int> <dbl> <chr>
1 Students Engagement 100 5.79 4 0.215 Kruskal-Wallis
Based on the p-value (0.215 > 0.05), no significant difference was observed among the groups.
# A tibble: 10 × 9
.y. group1 group2 n1 n2 statistic p p.adj p.adj.signif
* <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl> <chr>
1 Students Engag… Humil… Chast… 20 20 -1.02 0.310 1 ns
2 Students Engag… Humil… Modes… 20 20 -1.08 0.280 1 ns
3 Students Engag… Humil… Loyal… 20 20 -2.02 0.0431 0.431 ns
4 Students Engag… Humil… Hones… 20 20 0 1 1 ns
5 Students Engag… Chast… Modes… 20 20 -0.0649 0.948 1 ns
6 Students Engag… Chast… Loyal… 20 20 -1.01 0.314 1 ns
7 Students Engag… Chast… Hones… 20 20 1.02 0.310 1 ns
8 Students Engag… Modes… Loyal… 20 20 -0.942 0.346 1 ns
9 Students Engag… Modes… Hones… 20 20 1.08 0.280 1 ns
10 Students Engag… Loyal… Hones… 20 20 2.02 0.0431 0.431 ns
Based on the unadjusted p-values, differences are suggested between Humility and Loyalty and between Loyalty and Honesty (both p = 0.0431); after adjustment, however, no pairwise comparison is significant (both p.adj = 0.431).
Shapiro-Wilk normality test
data: Data1$`Scores in terms of the impact of collaborative learning`
W = 0.83758, p-value < 2.2e-16
Since the p-value is below 2.2e-16 and therefore less than 0.05, we reject the null hypothesis; that is, we cannot assume normality.
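The long-format data frame Data1 tested here (a Variables column and a single scores column, 300 rows) could be built from the wide data with tidyr, for example; the column names below are taken from the output:

library(dplyr)
library(tidyr)

# Stack the three measures into one long data frame
Data1 <- Data %>%
  pivot_longer(
    cols = c(`Teaching Method`, `Teaching Quality`, `Students Engagement`),
    names_to = "Variables",
    values_to = "Scores in terms of the impact of collaborative learning"
  )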
Levene's Test for Homogeneity of Variance (center = median)
       Df F value Pr(>F)
group   2  1.7026  0.184
      297
The p-value (0.184) is greater than the 0.05 level of significance; thus, the homogeneity-of-variance assumption is met.
# A tibble: 3 × 11
Variables variable n min max median iqr mean sd se ci
<fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Teaching Meth… Scores … 100 1 4 3 0.2 3.00 0.425 0.042 0.084
2 Teaching Qual… Scores … 100 1.6 4 3 0.2 3.05 0.354 0.035 0.07
3 Students Enga… Scores … 100 1.2 4 3 0.4 3.09 0.462 0.046 0.092
The means of teaching method, teaching quality, and students' engagement are 3.002, 3.046, and 3.094, respectively.
# A tibble: 1 × 6
.y. n statistic df p method
* <chr> <int> <dbl> <int> <dbl> <chr>
1 Scores in terms of the impact of collabora… 300 3.53 2 0.171 Krusk…
Based on the p-value (0.171 > 0.05), no significant difference was observed among the groups.
# A tibble: 3 × 9
.y. group1 group2 n1 n2 statistic p p.adj p.adj.signif
* <chr> <chr> <chr> <int> <int> <dbl> <dbl> <dbl> <chr>
1 Scores in terms… Teach… Teach… 100 100 0.250 0.803 1 ns
2 Scores in terms… Teach… Stude… 100 100 1.74 0.0823 0.247 ns
3 Scores in terms… Teach… Stude… 100 100 1.49 0.137 0.410 ns
Pairwise, there is no significant difference.
Based on the output above, the impact of collaborative learning appears greatest on students' engagement, which has the highest mean score (3.094), although the differences among teaching method, teaching quality, and students' engagement are not statistically significant.