Summary

Based on this analysis of 2023 TA Orientation Anytime (spring, summer, fall) and in-person (spring, fall):

  1. TAO Anytime shows engagement similar to TAO in-person.

  2. Although many students register but never start TAO Anytime, most students who do start follow the instructions, completing at least 3 modules with 80%+ performance on quizzes.

  3. Students who completed self-assessments before and after TAO Anytime report: a) much greater knowledge, b) moderately more confidence, c) slightly greater ability to implement various teaching practices, and d) moderately more positive attitudes towards various teaching practices.

  4. The gains in specific teaching practices (3c-d) are not limited to students who completed the module targeting that practice; even students who did not complete the related module reported such gains. As a result, the gains could reflect either re-test effects (e.g., students inflating their post-test scores because it may be clear to them that we expect an increase, or self-justifying the time they spent on the course) or global gains (e.g., high overlap or transfer between modules).

Context

Datasets analyzed:

  • TA Orientation (TAO) Anytime spring 2023, summer 2023, fall 2023

  • TA Orientation (TAO) in-person spring 2023, fall 2023

Analysis:

  • Conducted in R by Marianna Zhang

Data, code, plots, and the code report are all available on Google Drive.

All datasets were downloaded/exported from their respective Canvas sites on 2023-11-30; students who took any of the courses after this date are not included in the datasets.

Data issues are documented in data_issues.xlsx.

  • In the spring 2023 gradebook, “FINAL.ASSESSMENT..Teaching.practices.to.foster.inclusion..453226.” is mislabelled as “ACTIVITY..Final.Assessment..453226.”. (Confirmed to be the same column in Canvas and in subsequent exports.)

  • Attitude subquestions 4 and 6 were corrupted in the final surveys in all datasets. (A sketch of how these issues could be handled at load time follows this list.)
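
A minimal sketch of how these issues could be handled when loading the data, assuming the gradebook was exported as a CSV and read with readr; the file name is hypothetical:

```r
library(readr)
library(dplyr)

# Hypothetical file name for the spring 2023 gradebook export.
# Relabel the mislabelled column to its verified identity; the corrupted
# attitude subquestions (4 and 6) are simply excluded from the final-survey
# analyses below.
gradebook_spring <- read_csv("tao_anytime_spring2023_gradebook.csv") %>%
  rename(
    `FINAL.ASSESSMENT..Teaching.practices.to.foster.inclusion..453226.` =
      `ACTIVITY..Final.Assessment..453226.`
  )
```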

Visualization & analysis

1. How does TAO Anytime compare with TAO in-person?

TAO Anytime shows engagement similar to TAO in-person.

Registrations by format

TAO Anytime is roughly as popular as TAO in-person in terms of students registered.

Attrition by format

TAO in-person fall 2023 has unusually low attrition (129 of 135 registered students attended). Otherwise, attrition appears largely consistent between formats.

Attrition by format

term         format     registered  attended
2023 spring  anytime            65        21
2023 spring  in-person         112        48
2023 summer  anytime            20         8
2023 summer  in-person           -         -
2023 fall    anytime           131        51
2023 fall    in-person         135       129
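
For reference, the implied attendance rates (attended / registered), computed in R from the counts in the table above:

```r
library(tibble)
library(dplyr)

# Counts transcribed from the "Attrition by format" table above
attrition_by_format <- tribble(
  ~term,         ~format,     ~registered, ~attended,
  "2023 spring", "anytime",            65,        21,
  "2023 spring", "in-person",         112,        48,
  "2023 summer", "anytime",            20,         8,
  "2023 fall",   "anytime",           131,        51,
  "2023 fall",   "in-person",         135,       129
)

attrition_by_format %>%
  mutate(attendance_rate = round(attended / registered, 2))
# in-person fall 2023 stands out at 0.96; every other row falls between 0.32 and 0.43
```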

2. How are students engaging with TAO Anytime?

Although many students register but never start TAO Anytime, most students who do start follow the instructions, completing at least 3 modules with 80%+ performance on quizzes.

Attrition by term

Attrition patterns are largely consistent term by term.

Attrition by term
(pre/final = completed the pre/final survey; n+ = completed at least n modules)

term         enrolled  pre  1+  2+  3+  4+  5+  6+   7  final
2023 spring        65   31  25  24  21  12   7   5   3     18
2023 summer        20   11  11   9   8    2   0   0   0      6
2023 fall         131   81  57  53  51   25  15  12  11     43

Attrition

Many who register do not even start the course. But most who start complete the instructed 3+ modules.

Attrition (collapsed across terms)

enrolled  pre  1+  2+  3+  4+  5+  6+   7  final
     216  123  93  86  80  39  22  17  14     67

Module interest

Overall, TAO Anytime modules in order of popularity were:

Module interest

module                                                  students
Grading problem sets labs and exams                           65
Office hours and review sessions                              64
Teaching practices to foster inclusion                        56
Collecting student feedback for inclusion and equity          50
Leading sections in science and engineering                   40
Mental health and well being in learning environments         36
Discussions in humanities and social sciences                 32
Writing to engage students                                    20

Module interest is largely consistent across terms.

module                                                  2023 fall  2023 spring  2023 summer
Grading problem sets labs and exams                            40           18            7
Office hours and review sessions                               40           19            5
Collecting student feedback for inclusion and equity           34           12            4
Teaching practices to foster inclusion                         33           15            8
Leading sections in science and engineering                    27            9            4
Mental health and well being in learning environments          24           11            1
Discussions in humanities and social sciences                  21           10            1
Writing to engage students                                     15            5            -

Module grades

Most modules show average performance of at least 80% on their post-module quiz, as expected: we defined 80% as the passing grade, and students could retake quizzes as many times as they wanted.

The one exception, “Leading sections in science and engineering”, had its maximum performance effectively capped due to an open-ended question that was left ungraded (see e.g., Leading sections in science and engineering quiz in TAO Anytime spring 2023).

Module grades

module                                                  average  sd  students
Grading problem sets labs and exams                         97%   5%       65
Office hours and review sessions                            97%   7%       64
Teaching practices to foster inclusion                      93%   7%       56
Collecting student feedback for inclusion and equity        90%  12%       50
Leading sections in science and engineering                 70%  12%       40
Mental health and well being in learning environments       93%  10%       36
Discussions in humanities and social sciences               86%  13%       32
Writing to engage students                                  79%  16%       20

3. Did students report any gains over time?

Comparing students’ self-assessments before and after TAO Anytime, students reported: a) much greater knowledge, b) moderately more confidence, c) slightly greater ability to implement various teaching practices, and d) moderately more positive attitudes towards various teaching practices.

Caveats

Students were asked a series of questions at the start of the course (pre) and after the course (final).

Not all students who completed the pre-test completed the post-test, so there are potential attrition effects (e.g., if students who did not improve as much dropped out and did not do the post-test).

In addition, there are potential re-test effects (e.g., students inflating their post-test scores to satisfy us since it may be clear to them we are expecting an increase, students self-justifying the time they spent on the course, etc.).

Knowledge

Students reported greater knowledge of teaching strategies to implement in their context after versus before TAO Anytime (t(74.2)=10.29, p<.001).

The main effect of survey time is statistically significant based on a linear mixed effects model of knowledge as a function of survey time (pre vs post), with random intercepts per student.
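
For reference, the model call corresponding to the output below; the formula and data frame name are taken from the printed summary, and lmerTest supplies the Satterthwaite t-tests. The confidence model and the per-practice models later in this report follow the same pattern, swapping in survey_confidence or a filtered subset of survey_ability / survey_attitude.

```r
library(lmerTest)  # loads lme4 and adds Satterthwaite t-tests to summary()

m_knowledge <- lmer(as.numeric(response) ~ time + (1 | name),
                    data = survey_knowledge)
summary(m_knowledge)
```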

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: survey_knowledge
## 
## REML criterion at convergence: 426.4
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.44339 -0.45896  0.03504  0.54659  1.96179 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.2673   0.5170  
##  Residual             0.3182   0.5641  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   2.96363    0.06910 158.36030   42.89  < 2e-16 ***
## timefinal     0.94026    0.09138  74.21852   10.29 6.18e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.413

Confidence

Students reported greater confidence in their teaching abilities after versus before TAO Anytime (t(78.3)=4.99, p<.001).

This main effect of survey time is statistically significant, based on a linear mixed effects model of confidence as a function of survey time (pre vs post), with random intercepts per student.

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: survey_confidence
## 
## REML criterion at convergence: 430.5
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.23669 -0.46750 -0.07142  0.58743  1.96053 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3383   0.5817  
##  Residual             0.2851   0.5339  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##             Estimate Std. Error       df t value Pr(>|t|)    
## (Intercept)   3.1736     0.0713 154.8214  44.511  < 2e-16 ***
## timefinal     0.4367     0.0875  78.3215   4.991 3.55e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.375

Ability to implement specific skills

Overall

Across the 6 surveyed teaching skills, students reported greater ability to implement these skills after TAO Anytime (t(1086)=4.62, p<.001).

Ability to implement (collapsed across 6 skills)

time   avg    sd    n
pre    2.93  0.83  738
final  3.08  0.68  402

This main effect of time is statistically significant, based on a linear mixed effects model, modeling students’ reported ability on each skill with fixed effects for survey time (pre vs post-TAO Anytime), as well as random intercepts per student and skill.
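
The corresponding model call (formula and data frame name from the summary below), with crossed random intercepts for student and skill:

```r
library(lmerTest)

m_ability <- lmer(
  as.numeric(response) ~ time + (1 | name) + (1 | subquestion),
  data = survey_ability
)
summary(m_ability)
```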

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name) + (1 | subquestion)
##    Data: survey_ability
## 
## REML criterion at convergence: 2244.2
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.2810 -0.5254  0.0602  0.5780  3.2607 
## 
## Random effects:
##  Groups      Name        Variance Std.Dev.
##  name        (Intercept) 0.24476  0.4947  
##  subquestion (Intercept) 0.04512  0.2124  
##  Residual                0.33041  0.5748  
## Number of obs: 1140, groups:  name, 123; subquestion, 6
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept) 2.923e+00  9.981e-02 8.106e+00  29.284 1.64e-09 ***
## timefinal   1.841e-01  3.983e-02 1.086e+03   4.621 4.28e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.114

By practice

Specifically, students reported statistically significant gains in their ability to implement the following practices:

  • Use student feedback to improve my teaching practice

  • Craft discussion questions to stimulate active discussion

  • Use scoring tools like checklists and rubrics to increase grading equity

Students did not report statistically significant gains in their ability to implement the following practices:

  • Co-create classroom norms with students

  • Develop in-class writing activities for students

  • Build a learning community that supports well-being

Co-create classroom norms with students

Students did not report greater ability to implement “Co-create classroom norms with students” after TAO Anytime (t(70.8)=1.96, p=.054).

This difference is marginal and not statistically significant, based on a linear mixed effects model, modeling students’ reported ability about “co-create classroom norms with students” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Co-create classroom norms with students")
## 
## REML criterion at convergence: 423.1
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.3500 -0.5137  0.1832  0.3293  1.6761 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.4482   0.6695  
##  Residual             0.2111   0.4595  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   2.73714    0.07331 141.18208  37.339   <2e-16 ***
## timefinal     0.15017    0.07665  70.78094   1.959    0.054 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.309

Use student feedback to improve my teaching practice

Students reported greater ability to implement “Use student feedback to improve my teaching practice” after TAO Anytime (t(85.5)=2.00, p=.049).

This difference is statistically significant, based on a linear mixed effects model, modeling students’ reported ability to “Use student feedback to improve my teaching practice” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Use student feedback to improve my teaching practice")
## 
## REML criterion at convergence: 361.4
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.6297 -0.3654 -0.0413  0.7151  2.4666 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.1997   0.4469  
##  Residual             0.2183   0.4673  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   3.19316    0.05839 162.70811  54.689   <2e-16 ***
## timefinal     0.15143    0.07591  85.51915   1.995   0.0493 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.404

Craft discussion questions to stimulate active discussion

Students reported greater ability to implement “Craft discussion questions to stimulate active discussion” after TAO Anytime (t(78.7)=2.27, p=.026).

This difference is statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Craft discussion questions to stimulate active discussion” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Craft discussion questions to stimulate active discussion")
## 
## REML criterion at convergence: 397.5
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -1.89736 -0.56839  0.05865  0.53770  1.86667 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.2929   0.5412  
##  Residual             0.2339   0.4836  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   2.93612    0.06554 154.21765  44.802   <2e-16 ***
## timefinal     0.17978    0.07939  78.70865   2.265   0.0263 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.369

Develop in-class writing activities for students

Students did not report greater ability to implement “Develop in-class writing activities for students” after TAO Anytime (t(78.5)=1.55, p=.13).

This difference is not statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Develop in-class writing activities for students” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Develop in-class writing activities for students")
## 
## REML criterion at convergence: 477.8
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.2925 -0.5102  0.1316  0.3350  2.0770 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3466   0.5887  
##  Residual             0.4216   0.6493  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   2.60364    0.07915 161.04245  32.894   <2e-16 ***
## timefinal     0.16269    0.10512  78.54430   1.548    0.126    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.415

Use scoring tools like checklists and rubrics to increase grading equity

Students reported greater ability to implement “Use scoring tools like checklists and rubrics to increase grading equity” after TAO Anytime (t(83.6)=3.57, p<.001).

This difference is statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Use scoring tools like checklists and rubrics to increase grading equity” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Use scoring tools like checklists and rubrics to increase grading equity")
## 
## REML criterion at convergence: 418.8
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.89566 -0.35904  0.01538  0.39825  1.74432 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3612   0.6010  
##  Residual             0.2437   0.4937  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   2.98116    0.07023 154.70273  42.450   <2e-16 ***
## timefinal     0.29067    0.08148  83.55015   3.567    6e-04 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.350

Build a learning community that supports well-being

Students did not report greater ability to implement “Build a learning community that supports well-being” after TAO Anytime (t(74.4)=1.58, p=.12).

This difference is not statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Build a learning community that supports well-being” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Build a learning community that supports well-being")
## 
## REML criterion at convergence: 374.4
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.91287 -0.19906  0.06548  0.51602  1.87822 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3608   0.6006  
##  Residual             0.1565   0.3956  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   3.09592    0.06493 143.03443  47.681   <2e-16 ***
## timefinal     0.10465    0.06615  74.43186   1.582    0.118    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.300

Attitude towards module practices

Overall

Across the 6 surveyed teaching practices, students reported more positive attitudes towards these practices after TAO Anytime (t(945.4)=9.23, p<.001).

Attitude (collapsed across 6 skills)

time   avg    sd    n
pre    2.93  0.83  738
final  3.08  0.68  402

This main effect of time is statistically significant, based on a linear mixed effects model, modeling students’ reported attitude on each skill with fixed effects for survey time (pre vs post-TAO Anytime), as well as random intercepts per student and skill.

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name) + (1 | subquestion)
##    Data: survey_attitude
## 
## REML criterion at convergence: 2422.7
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0472 -0.5881  0.0879  0.6636  2.5065 
## 
## Random effects:
##  Groups      Name        Variance Std.Dev.
##  name        (Intercept) 0.2728   0.5223  
##  subquestion (Intercept) 0.2252   0.4745  
##  Residual                0.5178   0.7196  
## Number of obs: 1006, groups:  name, 123; subquestion, 6
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   2.99864    0.20113   5.60965  14.909    1e-05 ***
## timefinal     0.54187    0.05869 945.37708   9.233   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.061

By practice

Specifically, students reported statistically significantly more positive attitudes towards the following practices:

  • Co-create classroom norms with students

  • Use student feedback to improve my teaching practice

  • Craft discussion questions to stimulate active discussion

  • Use scoring tools like checklists and rubrics to increase grading equity

Data about students’ final attitudes towards the following practices was corrupted, so no gains can be assessed:

  • Develop in-class writing activities for students

  • Build a learning community that supports well-being

Co-create classroom norms with students

Students reported more positive attitudes towards “Co-create classroom norms with students” after TAO Anytime (t(83.4)=6.64, p<.001).

This difference is statistically significant, based on a linear mixed effects model, modeling students’ reported attitude about “co-create classroom norms with students” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_attitude %>% filter(subquestion == "Co-create classroom norms with students")
## 
## REML criterion at convergence: 467.7
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.78116 -0.59958 -0.01659  0.51087  1.66144 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3992   0.6318  
##  Residual             0.3551   0.5959  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   2.75988    0.07843 158.54889  35.190  < 2e-16 ***
## timefinal     0.64750    0.09749  83.43332   6.642 2.96e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.381

Use student feedback to improve my teaching practice

Students reported more positive attitudes towards “Use student feedback to improve my teaching practice” after TAO Anytime (t(74.1)=3.75, p<.001).

This difference is statistically significant, based on a linear mixed effects model, modeling students’ reported attitude about “Use student feedback to improve my teaching practice” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_attitude %>% filter(subquestion == "Use student feedback to improve my teaching practice")
## 
## REML criterion at convergence: 397.8
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.0737 -0.4010  0.4198  0.5574  1.2305 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.2487   0.4987  
##  Residual             0.2610   0.5109  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   3.46984    0.06447 156.00760  53.823  < 2e-16 ***
## timefinal     0.31201    0.08311  74.09442   3.754 0.000344 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.399

Craft discussion questions to stimulate active discussion

Students reported more positive attitudes towards “Craft discussion questions to stimulate active discussion” after TAO Anytime (t(89.3)=4.67, p<.001).

This difference is statistically significant, based on a linear mixed effects model, modeling students’ reported attitude on “Craft discussion questions to stimulate active discussion” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_attitude %>% filter(subquestion == "Craft discussion questions to stimulate active discussion")
## 
## REML criterion at convergence: 492.7
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.30138 -0.54835  0.00319  0.76004  1.69792 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3833   0.6191  
##  Residual             0.4508   0.6714  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   3.05401    0.08247 165.46934   37.03  < 2e-16 ***
## timefinal     0.50815    0.10882  89.28165    4.67 1.06e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.412

Develop in-class writing activities for students

Since we only have pre-test data for this practice (final data is corrupted), we cannot conduct any analysis comparing changes pre to final.

Use scoring tools like checklists and rubrics to increase grading equity

Students reported more positive attitudes towards “Use scoring tools like checklists and rubrics to increase grading equity” after TAO Anytime (t(82.3)=5.91, p<.001).

This difference is statistically significant, based on a linear mixed effects model, modeling students’ reported attitude on “Use scoring tools like checklists and rubrics to increase grading equity” with fixed effects for survey time (pre vs post-TAO Anytime), and random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time + (1 | name)
##    Data: 
## survey_attitude %>% filter(subquestion == "Use scoring tools like checklists and rubrics to increase grading equity")
## 
## REML criterion at convergence: 511.5
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.0850 -0.6826 -0.1305  0.7678  1.1657 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.4082   0.6389  
##  Residual             0.5085   0.7131  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##              Estimate Std. Error        df t value Pr(>|t|)    
## (Intercept)   3.01300    0.08646 163.26060  34.848  < 2e-16 ***
## timefinal     0.68202    0.11535  82.30633   5.913 7.37e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##           (Intr)
## timefinal -0.418

Build a learning community that supports well-being

Since we only have pre-test data for this practice (final data is corrupted), we cannot conduct any analysis comparing changes pre to final.

4. Are the gains in individual skills specific to the students who took the module targeting that skill?

Gains in ability/attitude towards specific teaching practices generally are not specific to students who took modules targeting those practices.

As a result, we cannot rule out the possibility that gains are due to re-test effects (e.g., students inflating their post-test scores to satisfy us since it may be clear to them we are expecting an increase, students self-justifying the time they spent on the course, etc.).

There could also be other possibilities at play, such as global course effects (e.g., high overlap/transfer between modules, such that taking one module boosts ability/attitude towards practices targeted by another untaken module).

However, we cannot definitively rule out or confirm either possibility.

(To me, the re-test possibility seems more likely, especially for the "ability to implement" questions, since the modules do not seem very overlapping/transferable.)

Ability

Overall

Across all 6 practices, students who took a related module reported only marginally greater (not statistically significant) gains in their ability to implement the practice than students who did not take the related module (t(1047)=1.92, p=.055).

Ability to implement (collapsed across 6 skills), by module completion

group                 time   avg    sd    n
after taking module   pre    2.93  0.80  249
                      final  3.16  0.60  210
did not take module   pre    2.93  0.84  489
                      final  3.00  0.76  192

This interaction of time and completed related module is marginal but not statistically significant, based on a linear mixed effects model, modeling students’ reported ability on each skill with fixed effects for survey time (pre vs post-TAO Anytime), whether they completed the related module, and their interaction, as well as random intercepts per student and skill.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).
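
The interaction model behind the output below, per the printed formula; completed_module is coerced to numeric (0/1) there, so the same coding is used here:

```r
library(lmerTest)

m_ability_by_module <- lmer(
  as.numeric(response) ~ time * as.numeric(completed_module) +
    (1 | name) + (1 | subquestion),
  data = survey_ability
)
summary(m_ability_by_module)
```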

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name) + (1 | subquestion)
##    Data: survey_ability
## 
## REML criterion at convergence: 2243
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.2357 -0.5381  0.0745  0.5911  3.2412 
## 
## Random effects:
##  Groups      Name        Variance Std.Dev.
##  name        (Intercept) 0.24858  0.4986  
##  subquestion (Intercept) 0.04193  0.2048  
##  Residual                0.32779  0.5725  
## Number of obs: 1140, groups:  name, 123; subquestion, 6
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                            2.906e+00  9.905e-02 9.037e+00  29.339
## timefinal                              1.072e-01  5.514e-02 1.095e+03   1.944
## as.numeric(completed_module)           4.699e-02  5.527e-02 1.130e+03   0.850
## timefinal:as.numeric(completed_module) 1.459e-01  7.604e-02 1.047e+03   1.919
##                                        Pr(>|t|)    
## (Intercept)                            2.83e-10 ***
## timefinal                                0.0521 .  
## as.numeric(completed_module)             0.3954    
## timefinal:as.numeric(completed_module)   0.0552 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.150              
## as.nmrc(c_) -0.190  0.361       
## tmfnl:s.(_)  0.104 -0.694 -0.554

By practice

Ability to implement, by module completion

                        time   avg    sd    n
Co-create classroom norms with students
  after taking module   pre    2.72  0.81   54
                        final  2.95  0.62   42
  did not take module   pre    2.78  0.89   69
                        final  2.84  0.75   25
Use student feedback to improve my teaching practice
  after taking module   pre    3.20  0.74   49
                        final  3.34  0.53   41
  did not take module   pre    3.19  0.66   74
                        final  3.35  0.63   26
Craft discussion questions to stimulate active discussion
  after taking module   pre    3.00  0.64   30
                        final  3.12  0.52   26
  did not take module   pre    2.92  0.82   93
                        final  3.12  0.64   41
Develop in-class writing activities for students
  after taking module   pre    2.53  0.84   19
                        final  3.06  0.43   17
  did not take module   pre    2.63  0.92  104
                        final  2.62  0.85   50
Use scoring tools like checklists and rubrics to increase grading equity
  after taking module   pre    2.89  0.87   62
                        final  3.19  0.71   53
  did not take module   pre    3.08  0.76   61
                        final  3.29  0.73   14
Build a learning community that supports well-being
  after taking module   pre    3.09  0.74   35
                        final  3.26  0.51   31
  did not take module   pre    3.10  0.79   88
                        final  3.14  0.64   36

Co-create classroom norms with students

Students who took the Teaching practices to foster inclusion module did not report greater changes in their ability to “Co-create classroom norms with students” after TAO Anytime, versus students who did not take the module (t(70.1)=0.83, p=.41).

This interaction of time and completed related module is not statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Co-create classroom norms with students” with fixed effects for survey time (pre vs post-TAO Anytime), whether they completed the related module, and their interaction, as well as random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Co-create classroom norms with students")
## 
## REML criterion at convergence: 426.3
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.4234 -0.4846  0.1598  0.3207  1.7594 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.4516   0.6720  
##  Residual             0.2123   0.4608  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                              2.76747    0.09865 138.73768  28.054
## timefinal                                0.07250    0.12295  73.72165   0.590
## as.numeric(completed_module)            -0.06921    0.14807 140.31764  -0.467
## timefinal:as.numeric(completed_module)   0.13179    0.15819  70.09230   0.833
##                                        Pr(>|t|)    
## (Intercept)                              <2e-16 ***
## timefinal                                 0.557    
## as.numeric(completed_module)              0.641    
## timefinal:as.numeric(completed_module)    0.408    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.252              
## as.nmrc(c_) -0.666  0.168       
## tmfnl:s.(_)  0.196 -0.777 -0.301

Use student feedback to improve my teaching practice

Students who took the Collecting student feedback for inclusion and equity module did not report greater changes in their ability to “Use student feedback to improve my teaching practice” after TAO Anytime, versus students who did not take the module (t(84.3)=0.12, p=.90).

This interaction of time and completed related module is not statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Use student feedback to improve my teaching practice” with fixed effects for survey time (pre vs post-TAO Anytime), whether they completed the related module, and their interaction, as well as random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Use student feedback to improve my teaching practice")
## 
## REML criterion at convergence: 365.7
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.6211 -0.3827 -0.0511  0.7018  2.4343 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.2016   0.4490  
##  Residual             0.2206   0.4697  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                              3.18154    0.07585 159.99900  41.946
## timefinal                                0.13648    0.11836  93.42196   1.153
## as.numeric(completed_module)             0.02872    0.11971 161.86020   0.240
## timefinal:as.numeric(completed_module)   0.01927    0.15629  84.27036   0.123
##                                        Pr(>|t|)    
## (Intercept)                              <2e-16 ***
## timefinal                                 0.252    
## as.numeric(completed_module)              0.811    
## timefinal:as.numeric(completed_module)    0.902    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.330              
## as.nmrc(c_) -0.634  0.209       
## tmfnl:s.(_)  0.250 -0.757 -0.402

Craft discussion questions to stimulate active discussion

Students who took the Discussions in humanities and social sciences module did not report greater changes in their ability to “Craft discussion questions to stimulate active discussion” after TAO Anytime, versus students who did not take the module (t(72.7)=-0.55, p=.58).

This interaction of time and completed related module is not statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Craft discussion questions to stimulate active discussion” with fixed effects for survey time (pre vs post-TAO Anytime), whether they completed the related module, and their interaction, as well as random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Craft discussion questions to stimulate active discussion")
## 
## REML criterion at convergence: 400.9
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -1.8555 -0.6189  0.0767  0.5076  1.8309 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.2953   0.5434  
##  Residual             0.2359   0.4857  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                              2.91612    0.07585 151.78170  38.447
## timefinal                                0.20960    0.10016  80.87030   2.093
## as.numeric(completed_module)             0.08174    0.15259 155.06814   0.536
## timefinal:as.numeric(completed_module)  -0.09184    0.16737  72.72481  -0.549
##                                        Pr(>|t|)    
## (Intercept)                              <2e-16 ***
## timefinal                                0.0395 *  
## as.numeric(completed_module)             0.5929    
## timefinal:as.numeric(completed_module)   0.5849    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.333              
## as.nmrc(c_) -0.497  0.166       
## tmfnl:s.(_)  0.199 -0.598 -0.414

Develop in-class writing activities for students

Students who took the Writing to engage students module reported greater changes in their ability to “Develop in-class writing activities for students” after TAO Anytime, versus students who did not take the module (t(65.8)=2.31, p=.024).

This interaction of time and completed related module is statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Develop in-class writing activities for students” with fixed effects for survey time (pre vs post-TAO Anytime), whether they completed the related module, and their interaction, as well as random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Develop in-class writing activities for students")
## 
## REML criterion at convergence: 474.6
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.1007 -0.5061  0.1804  0.3698  1.8217 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3714   0.6094  
##  Residual             0.3929   0.6268  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                              2.61716    0.08588 156.39376  30.473
## timefinal                                0.02882    0.11697  79.44707   0.246
## as.numeric(completed_module)            -0.09085    0.21818 156.31925  -0.416
## timefinal:as.numeric(completed_module)   0.55935    0.24219  65.80166   2.310
##                                        Pr(>|t|)    
## (Intercept)                              <2e-16 ***
## timefinal                                0.8060    
## as.numeric(completed_module)             0.6777    
## timefinal:as.numeric(completed_module)   0.0241 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.380              
## as.nmrc(c_) -0.394  0.150       
## tmfnl:s.(_)  0.184 -0.483 -0.464

Use scoring tools like checklists and rubrics to increase grading equity

Students who took the Grading problem sets labs and exams module did not report greater changes in their ability to “Use scoring tools like checklists and rubrics to increase grading equity” after TAO Anytime, versus students who did not take the module (t(87.4)=0.97, p=.34).

This interaction of time and completed related module is not statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Use scoring tools like checklists and rubrics to increase grading equity” with fixed effects for survey time (pre vs post-TAO Anytime), whether they completed the related module, and their interaction, as well as random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Use scoring tools like checklists and rubrics to increase grading equity")
## 
## REML criterion at convergence: 420.4
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.93016 -0.39225  0.08698  0.41759  1.84140 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3629   0.6024  
##  Residual             0.2429   0.4928  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                        Estimate Std. Error       df t value
## (Intercept)                              3.0702     0.1002 151.4977  30.628
## timefinal                                0.1628     0.1701  92.5037   0.957
## as.numeric(completed_module)            -0.1758     0.1406 152.8382  -1.251
## timefinal:as.numeric(completed_module)   0.1886     0.1948  87.3713   0.969
##                                        Pr(>|t|)    
## (Intercept)                              <2e-16 ***
## timefinal                                 0.341    
## as.numeric(completed_module)              0.213    
## timefinal:as.numeric(completed_module)    0.335    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.229              
## as.nmrc(c_) -0.713  0.163       
## tmfnl:s.(_)  0.200 -0.873 -0.287

Build a learning community that supports well-being

Students who took the Mental health and well being in learning environments module did not report greater changes in their ability to “Build a learning community that supports well-being” after TAO Anytime, versus students who did not take the module (t(70.7)=0.89, p=.38).

This interaction of time and completed related module is not statistically significant, based on a linear mixed effects model, modeling students’ reported ability on “Build a learning community that supports well-being” with fixed effects for survey time (pre vs post-TAO Anytime), whether they completed the related module, and their interaction, as well as random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_ability %>% filter(subquestion == "Build a learning community that supports well-being")
## 
## REML criterion at convergence: 377.9
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.9612 -0.2960  0.0100  0.4582  1.9586 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3636   0.6030  
##  Residual             0.1571   0.3964  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                            3.093e+00  7.727e-02 1.408e+02  40.035
## timefinal                              5.001e-02  8.887e-02 7.600e+01   0.563
## as.numeric(completed_module)           6.859e-03  1.437e-01 1.432e+02   0.048
## timefinal:as.numeric(completed_module) 1.190e-01  1.345e-01 7.070e+01   0.885
##                                        Pr(>|t|)    
## (Intercept)                              <2e-16 ***
## timefinal                                 0.575    
## as.numeric(completed_module)              0.962    
## timefinal:as.numeric(completed_module)    0.379    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.259              
## as.nmrc(c_) -0.538  0.139       
## tmfnl:s.(_)  0.171 -0.661 -0.330

Attitude

Overall

Attitude (collapsed across 6 skills), by module completion

group                 time   avg    sd    n
after taking module   pre    3.08  1.03  249
                      final    NA    NA  210
did not take module   pre    2.97  1.04  489
                      final    NA    NA  192

Students who took related modules did not report greater gains in their attitudes towards the practice targeted by the module after TAO Anytime, compared to students who did not take the related module (t(913.3)=1.32, p=.19).

This interaction of time and completed related module is not statistically significant, based on a linear mixed effects model, modeling students’ reported attitude towards each practice with fixed effects for survey time (pre vs post-TAO Anytime), whether they completed the related module, and their interaction, as well as random intercepts per student and practice.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name) + (1 | subquestion)
##    Data: survey_attitude
## 
## REML criterion at convergence: 2417.5
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -3.10585 -0.59509  0.06473  0.67126  2.50445 
## 
## Random effects:
##  Groups      Name        Variance Std.Dev.
##  name        (Intercept) 0.2839   0.5328  
##  subquestion (Intercept) 0.2179   0.4668  
##  Residual                0.5103   0.7143  
## Number of obs: 1006, groups:  name, 123; subquestion, 6
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                              2.94847    0.19970   5.81258  14.764
## timefinal                                0.44491    0.08659 948.56036   5.138
## as.numeric(completed_module)             0.14465    0.06904 999.25239   2.095
## timefinal:as.numeric(completed_module)   0.14620    0.11043 913.28328   1.324
##                                        Pr(>|t|)    
## (Intercept)                            7.90e-06 ***
## timefinal                              3.37e-07 ***
## as.numeric(completed_module)             0.0364 *  
## timefinal:as.numeric(completed_module)   0.1859    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.077              
## as.nmrc(c_) -0.118  0.320       
## tmfnl:s.(_)  0.056 -0.738 -0.488

By practice

Attitude, by module completion

                        time   avg    sd    n
Co-create classroom norms with students
  after taking module   pre    2.85  0.88   54
                        final  3.50  0.74   42
  did not take module   pre    2.71  0.93   69
                        final  3.20  0.87   25
Use student feedback to improve my teaching practice
  after taking module   pre    3.55  0.74   49
                        final  3.80  0.51   41
  did not take module   pre    3.42  0.83   74
                        final  3.81  0.49   26
Craft discussion questions to stimulate active discussion
  after taking module   pre    2.93  1.05   30
                        final  3.46  0.95   26
  did not take module   pre    3.10  0.91   93
                        final  3.54  0.81   41
Develop in-class writing activities for students
  after taking module   pre    2.00  1.11   19
                        final    NA    NA   17
  did not take module   pre    2.22  1.02  104
                        final    NA    NA   50
Use scoring tools like checklists and rubrics to increase grading equity
  after taking module   pre    3.03  1.09   62
                        final  3.74  0.62   53
  did not take module   pre    3.00  1.06   61
                        final  3.50  0.85   14
Build a learning community that supports well-being
  after taking module   pre    3.54  0.89   35
                        final    NA    NA   31
  did not take module   pre    3.52  0.84   88
                        final    NA    NA   36

Co-create classroom norms with students

Students who took the Teaching practices to foster inclusion module did not report greater gains in their attitude towards “Co-create classroom norms with students” after TAO Anytime, versus students who did not take the module (t(82.8)=0.56, p=.58).

This difference is not statistically significant, based on a linear mixed effects model, modeling students’ reported attitude about “Co-create classroom norms with students” with fixed effects for survey time (pre vs post-TAO Anytime) and whether they completed the related module, and random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_attitude %>% filter(subquestion == "Co-create classroom norms with students")
## 
## REML criterion at convergence: 469.4
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.84512 -0.55775 -0.08619  0.63587  1.60850 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3907   0.6251  
##  Residual             0.3612   0.6010  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                        Estimate Std. Error       df t value
## (Intercept)                              2.6978     0.1049 156.2199  25.723
## timefinal                                0.5644     0.1559  89.3820   3.621
## as.numeric(completed_module)             0.1395     0.1577 157.9269   0.885
## timefinal:as.numeric(completed_module)   0.1121     0.2019  82.8260   0.555
##                                        Pr(>|t|)    
## (Intercept)                             < 2e-16 ***
## timefinal                              0.000486 ***
## as.numeric(completed_module)           0.377647    
## timefinal:as.numeric(completed_module) 0.580356    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.318              
## as.nmrc(c_) -0.665  0.212       
## tmfnl:s.(_)  0.246 -0.772 -0.376

Use student feedback to improve my teaching practice

Students who took the Collecting student feedback for inclusion and equity module did not report greater gains in their attitude towards “Use student feedback to improve my teaching practice” after TAO Anytime, versus students who did not take the module (t(73.5)=-0.49, p=.63).

This difference is not statistically significant, based on a linear mixed effects model, modeling students’ reported attitude about “Use student feedback to improve my teaching practice” with fixed effects for survey time (pre vs post-TAO Anytime) and whether they completed the related module, and random intercepts per student.

There is a caveat for potential selection effects (e.g., students who chose the module versus those who did not may have shown different changes) and attrition effects (e.g., students who dropped out before completing the post-test may have shown different changes).

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_attitude %>% filter(subquestion == "Use student feedback to improve my teaching practice")
## 
## REML criterion at convergence: 400.7
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.1537 -0.3649  0.4492  0.5885  1.2525 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.2475   0.4975  
##  Residual             0.2637   0.5135  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                              3.41423    0.08346 153.76413  40.908
## timefinal                                0.34749    0.12956  82.26805   2.682
## as.numeric(completed_module)             0.13860    0.13172 155.95484   1.052
## timefinal:as.numeric(completed_module)  -0.08334    0.17102  73.50183  -0.487
##                                        Pr(>|t|)    
## (Intercept)                             < 2e-16 ***
## timefinal                               0.00884 ** 
## as.numeric(completed_module)            0.29432    
## timefinal:as.numeric(completed_module)  0.62749    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.328              
## as.nmrc(c_) -0.634  0.208       
## tmfnl:s.(_)  0.248 -0.758 -0.399
Craft discussion questions to stimulate active discussion

Students who took the Discussions in humanities and social sciences module did not report greater gains in their attitude toward “Craft discussion questions to stimulate active discussion” after TAO Anytime than students who did not take the module (t(81.5)=0.60, p=.55).

This difference is not statistically significant, based on a linear mixed effects model of students’ reported attitude about “Craft discussion questions to stimulate active discussion”, with fixed effects for survey time (pre vs. post-TAO Anytime), whether they completed the related module, and their interaction, plus random intercepts per student; the interaction term provides the test reported above.

One caveat: this comparison is subject to potential selection effects (e.g., students who chose the module versus those who did not may have shown different change) and to attrition effects (e.g., students who dropped out before completing the post-test may have shown different change).
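
A quick way to eyeball this null result is to plot mean pre vs. final ratings by module completion; a sketch assuming the same `survey_attitude` columns as in the output below:

```r
library(dplyr)
library(ggplot2)

# Sketch: mean attitude rating at pre and final, split by whether the student
# completed the related module. Illustrative only.
survey_attitude %>%
  filter(subquestion == "Craft discussion questions to stimulate active discussion") %>%
  group_by(completed_module, time) %>%
  summarise(mean_response = mean(as.numeric(response), na.rm = TRUE), .groups = "drop") %>%
  ggplot(aes(x = time, y = mean_response,
             colour = factor(completed_module), group = factor(completed_module))) +
  geom_point() +
  geom_line() +
  labs(x = "Survey", y = "Mean attitude rating", colour = "Completed related module")
```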

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_attitude %>% filter(subquestion == "Craft discussion questions to stimulate active discussion")
## 
## REML criterion at convergence: 494.7
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.33693 -0.51870  0.02923  0.72451  1.65275 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.3876   0.6226  
##  Residual             0.4527   0.6728  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                              3.09093    0.09536 162.51330  32.414
## timefinal                                0.46099    0.13669  91.66308   3.372
## as.numeric(completed_module)            -0.15076    0.19213 165.37273  -0.785
## timefinal:as.numeric(completed_module)   0.15440    0.22987  80.98405   0.672
##                                        Pr(>|t|)    
## (Intercept)                             < 2e-16 ***
## timefinal                               0.00109 ** 
## as.numeric(completed_module)            0.43375    
## timefinal:as.numeric(completed_module)  0.50369    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.373              
## as.nmrc(c_) -0.496  0.185       
## tmfnl:s.(_)  0.222 -0.595 -0.458
Develop in-class writing activities for students

Since we only have pre-test data for this practice (the final-survey data is corrupted), we cannot compare pre- and post-TAO Anytime responses.

Use scoring tools like checklists and rubrics to increase grading equity

Students who took the Grading problem sets labs and exams module did not report greater changes in their attitude toward “Use scoring tools like checklists and rubrics to increase grading equity” after TAO Anytime than students who did not take the module (t(88.4)=1.10, p=.28).

This difference is not statistically significant, based on a linear mixed effects model of students’ reported attitude about “Use scoring tools like checklists and rubrics to increase grading equity”, with fixed effects for survey time (pre vs. post-TAO Anytime), whether they completed the related module, and their interaction, plus random intercepts per student; the interaction term provides the test reported above.

One caveat: this comparison is subject to potential selection effects (e.g., students who chose the module versus those who did not may have shown different change) and to attrition effects (e.g., students who dropped out before completing the post-test may have shown different change).
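
Because the same model is fit item by item throughout this section, the fits could also be batched; a sketch with purrr, using the item and column names that appear in these outputs (illustrative, not the original analysis code):

```r
library(dplyr)
library(purrr)
library(lmerTest)

# Sketch: fit the same attitude model for several items and collect the
# interaction (differential-change) row from each.
items <- c(
  "Co-create classroom norms with students",
  "Use student feedback to improve my teaching practice",
  "Craft discussion questions to stimulate active discussion",
  "Use scoring tools like checklists and rubrics to increase grading equity"
)

fits <- set_names(items) %>%
  map(function(item) {
    lmer(
      as.numeric(response) ~ time * as.numeric(completed_module) + (1 | name),
      data = survey_attitude %>% filter(subquestion == item)
    )
  })

map(fits, ~ coef(summary(.x))["timefinal:as.numeric(completed_module)", ])
```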

## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: as.numeric(response) ~ time * as.numeric(completed_module) +  
##     (1 | name)
##    Data: 
## survey_attitude %>% filter(subquestion == "Use scoring tools like checklists and rubrics to increase grading equity")
## 
## REML criterion at convergence: 512.2
## 
## Scaled residuals: 
##      Min       1Q   Median       3Q      Max 
## -2.07322 -0.73804  0.00782  0.77187  1.12644 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  name     (Intercept) 0.4256   0.6524  
##  Residual             0.4978   0.7056  
## Number of obs: 190, groups:  name, 123
## 
## Fixed effects:
##                                         Estimate Std. Error        df t value
## (Intercept)                              2.98976    0.12364 158.76108  24.181
## timefinal                                0.42697    0.23615  97.28023   1.808
## as.numeric(completed_module)             0.04416    0.17358 160.34776   0.254
## timefinal:as.numeric(completed_module)   0.31549    0.27197  89.07903   1.160
##                                        Pr(>|t|)    
## (Intercept)                              <2e-16 ***
## timefinal                                0.0737 .  
## as.numeric(completed_module)             0.7995    
## timefinal:as.numeric(completed_module)   0.2491    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) timfnl as.(_)
## timefinal   -0.275              
## as.nmrc(c_) -0.712  0.196       
## tmfnl:s.(_)  0.239 -0.868 -0.342
Build a learning community that supports well-being

Since we only have pre-test data for this practice (the final-survey data is corrupted), we cannot compare pre- and post-TAO Anytime responses.