Team ID: Team 2
Chen Zhang (Model result & diagnostics, additional EDA, reproducibility)
Kenneth Lee (EDA, model setup, R Shiny, reproducibility)
Rong Duan (Causal Statement, extension and code cleanup)
Xialin Sang (Hypothesis testing, additional EDA, and regression model)
Github repo: https://github.com/ZCheryl/STA-207
The effect of class size on student achievement is an important topic for policymakers in the American K-12 education system. To study the effect of class size on student achievement in the primary grades, the State Department of Education in Tennessee launched a four-year longitudinal randomized class-size study from 1985 to 1989 called the Student/Teacher Achievement Ratio (STAR) project. Over 7000 students in 79 schools participated. We highlight the key features of the experimental design below.
All participating schools had to agree to the random assignment of teachers and students to different class conditions: small class (13 to 17 students per teacher), regular class (22 to 25 students per teacher), and regular-with-aide class (22 to 25 students with a full-time teacher’s aide) [5].
The assignments of various class types were initiated as the students entered school in kindergarten and continued through third grade [5].
To participate in Project STAR, each school had to enroll enough kindergarten students to be assigned to all three class types [5].
Student achievement was measured annually via Stanford Achievement Tests (SATs) during the spring term, on testing dates specified by the state of Tennessee [5].
Students moving from a school involved in STAR to another participating school were assigned to the same type of class as they had participated in previously. Also, a regular class could become as small as a small class as students moved out of the participating schools [5].
Besides class size and teacher aides, there were no other experimental changes involved in the study [5].
Three schools withdrew from Project STAR at the end of kindergarten, leaving 76 schools at the 1st-grade level [5].
Our primary scientific question of interest is whether there is a treatment effect of the class type assignment on the average math scaled scores at the 1st-grade class level. We carry out exploratory data analysis, fit a two-way ANOVA model, run model diagnostics, and conduct hypothesis tests. In the end, we discuss the causal statements that could possibly be made given our analysis and assumptions, and the differences between a student-level and a class-level analysis of the STAR dataset.
This work shows that a treatment effect of class type does exist at the class level for this dataset. We also show that it is possible to make causal statements based on our analysis.
The original dataset has 11601 observations and 379 attributes. It describes the demographics of the students and teachers, the class type assignment, the participating school and class identifiers, and test scores. We first examine whether there are any missing values in the data. Then, we explore the data with the teacher as the unit for the 1st-grade students’ math scaled scores. We summarize some of the findings below:
High level of missing values for teacher identifiers: Before exploring the data with the teacher as the unit, we found 4772 observations with a missing teacher ID. As shown in Figure 1, above 40% of all observations contain no information related to teachers. We drop all observations without a teacher ID, as this identifier can hardly be imputed, leaving 6829 observations.
Missing class type information in some schools: We also found four schools (ID: 244728, 244796, 244736, 244839) that do not have at least one observation per class type, which contradicts the experimental design. We drop the observations from these schools to ensure at least one observation per class type in each school.
Small class types have higher average math scores: After dropping the student-level observations that lack math scaled scores, we average the math scaled scores over the remaining 6334 observations, leaving 325 observations with the teacher as the unit. The distribution of the averaged math scores by class type is shown in Figure 2; the averaged math scores are generally higher for small classes. These cleaning and aggregation steps are sketched in code below.
Figure 1: Missing Proportion Plot. It shows the variables that have missing values in descending order. The x-axis shows the percentage of the missing values based on the overall number of observations.
Figure 2: Distribution plots. Left panel: the distribution of average math scaled scores by class type. The class type is described on the x-axis (1: small class; 2: regular class; 3: regular class with aide). Both the highest and the lowest averaged math scores belong to the small class type. Right panel: the distribution of the averaged math scaled scores, which seems roughly normal.
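As a rough illustration, the following R sketch reproduces these cleaning and aggregation steps with dplyr. The column names (`g1tchid`, `g1classtype`, `g1schid`, `g1tmathss`) are our assumed names for the 1st-grade teacher ID, class type, school ID, and math scaled score; adjust them to match the actual dataset.

```r
# A minimal sketch of the cleaning steps above (hypothetical column names).
library(dplyr)

class_level <- star %>%
  # Step 1: drop observations with a missing teacher ID
  filter(!is.na(g1tchid)) %>%
  # Step 2: keep only schools with at least one class of every type
  group_by(g1schid) %>%
  filter(n_distinct(g1classtype) == 3) %>%
  ungroup() %>%
  # Step 3: drop missing math scores, then average within each class
  filter(!is.na(g1tmathss)) %>%
  group_by(g1schid, g1tchid, g1classtype) %>%
  summarise(avg_math = mean(g1tmathss)) %>%
  ungroup()
```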
To see whether there is a treatment effect of the class type assignment in a class level, we will use a two-way ANOVA model. Our two-way ANOVA model is an additive model as specified below:
\(Y_{ijk} = \mu_{..} + \alpha_{i} + \beta_{j} + \epsilon_{ijk}\) where \(i = 1,...,3\), \(j = 1,...,72\) and \(k = 1,...,n_{ij}\)
Explanation of the notation
\(Y_{ijk}\) denotes the average math scaled score for the \(k\)th teacher in the \(i\)th class type and the \(j\)th school.
\(\mu_{..}\) denotes the overall average of math scaled scores in the population across class types and schools; it is an unknown parameter that we estimate.
\(\alpha_{i}\) denotes the main effect of the \(i\)th class type.
\(\beta_{j}\) denotes the main effect of the \(j\)th school.
\(\epsilon_{ijk}\) denotes the random error in the \(i\)th class type, \(j\)th school for the \(k\)th teacher. This is an unobserved random variable.
The index \(i\) represents the levels of class type: small (\(i=1\)), regular (\(i=2\)), regular with aide (\(i=3\)).
The index \(j\) represents the levels of the school indicator. Since we have only 72 schools in the data, we have \(j=1,...,72\).
\(n_{ij}\) denotes the number of teachers in the \(j\)th school assigned to the \(i\)th class type.
The assumptions of the two-way ANOVA model
The random errors are assumed to be identically and independently distributed from a normal distribution with mean \(0\) and variance \(\sigma^{2}\).
The outcomes are independent normal random variables with a common variance, whose means \(\mu_{..}+\alpha_{i}+\beta_{j}\) are determined additively by the class-type and school effects.
The interaction effects are absent.
Justification for the model choice
We do not include an interaction term in our two-way ANOVA model because the interaction is not of primary interest, and we are also concerned about model complexity: our exploratory data analysis shows a limited number of observations within each combination of class type and school, so we may not have enough information to estimate the extra parameters. We also introduce a blocking factor \(\beta_{j}\) to control a known source of variability; that is, we want to eliminate the effects of variation among schools while determining the effects of the class type assignments. This design gives greater accuracy of the model estimates [7].
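As a sketch, the additive model can be fit in base R as follows, assuming the class-level data frame `class_level` from Section 1 with the hypothetical column names used earlier:

```r
# Fit the additive two-way ANOVA; both explanatory variables must be factors.
class_level$g1classtype <- factor(class_level$g1classtype,
                                  labels = c("small", "regular", "regular+aide"))
class_level$g1schid <- factor(class_level$g1schid)

fit <- aov(avg_math ~ g1classtype + g1schid, data = class_level)
summary(fit)  # produces an ANOVA table in the layout of Table 1
```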
Next, our model diagnostics will confirm whether the assumptions of our two-way ANOVA model hold such that this model is appropriate for the problem setting.
For the normality assumption, we will use a normal Q-Q plot and conduct a Shapiro–Wilk test to see if the residuals follow the normal distribution.
For equal variance assumption, we will use a residual vs. fitted value plot to see if the residuals (the differences between the actual test scores and predicted test scores) have mean zero and equal variance.
The independence assumption will not be tested. Since the design is a randomized block design, not only are students randomly assigned, but teachers within each school are also randomly assigned to a class type. As a result, we take independence to be satisfied.
Furthermore, we would like to confirm our assumption of an absent or ignorable interaction effect between the two factors by running Tukey’s test for additivity, as sketched below.
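A sketch of these diagnostics in R, using base graphics plus `car::leveneTest`; for Tukey’s one-degree-of-freedom test for additivity, one common computational form is to augment the additive model with its squared fitted values and test that single extra term:

```r
res <- residuals(fit)

# Normality: Q-Q plot and Shapiro-Wilk test
qqnorm(res); qqline(res)
shapiro.test(res)

# Equal variance: residuals vs. fitted values, and Levene's test by class type
plot(fitted(fit), res, xlab = "Fitted values", ylab = "Residuals")
abline(h = 0)
car::leveneTest(avg_math ~ g1classtype, data = class_level)

# Tukey's one-df test for additivity: add the squared fitted values of the
# additive model as an extra regressor and test that single term.
fit_aug <- aov(avg_math ~ g1classtype + g1schid + I(fitted(fit)^2),
               data = class_level)
anova(fit, fit_aug)
```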
As our primary interest is to determine whether there is a treatment effect of class type on the math scaled scores at the class level, we use the F-test for a main effect across class types. For our hypothesis testing, the significance level is set to 0.05. The null hypothesis is \(H_{0}: \alpha_{i} = 0\) for all \(i\) (no main effects across class types), with the alternative hypothesis \(H_{a}:\) not all \(\alpha_{i}\) are zero (main effects exist for some class types). Upon rejection of the null hypothesis, we will investigate the nature of the differences among the averaged test scores of the class types.
If the overall F-test for the main effect of class types is significant, we can proceed to locate the differences by comparing the averages of the mean test scores across class types in 1st grade. We use a multiple comparison analysis to determine where the differences among these averages occur. Since the class types do not have equal numbers of observations, Tukey’s procedure is recommended: it gives a more precise estimate of the difference between the averaged mean test scores of two class types, with a narrower confidence interval [6].
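Tukey’s procedure is available in base R as `TukeyHSD` on the fitted `aov` object; a minimal sketch:

```r
# Family-wise 95% confidence intervals for all pairwise class-type differences
comparisons <- TukeyHSD(fit, which = "g1classtype", conf.level = 0.95)
comparisons
plot(comparisons)  # produces an interval plot like Figure 5
```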
Table 1 shows a summary of our ANOVA model. The sum of squares for schools (128366) measures the variability of the averaged test scores across schools; the more similar the school means are, the smaller this sum of squares tends to be. For class types, the F statistic of 22.6 compares the variability among the class-type means with the error variance (mean squared error 290.2); the larger this ratio, the stronger the evidence of a class-type effect. The p-values tell us whether the differences among the main effects are real or plausibly due to chance. We leave the detailed discussion of hypothesis testing to Section 3.3, which also uses the information in Table 1.
Table 1: ANOVA Table
| Source of Variation | Degrees of Freedom | Sum of Squares | Mean Square | F | p-value |
|---|---|---|---|---|---|
| Class Type | 2 | 13118 | 6559 | 22.6 | 3.35e-06 |
| School | 71 | 128366 | 1808 | 6.2 | < 2e-16 |
| Error/Residual | 251 | 73131 | 290.2 | | |
Equal Variance Assumption: According to Figure 3, the residuals are scattered roughly between +50 and -50 points. Visually, there seem to be more residuals at the positive extreme than at the negative extreme; other patterns are hard to detect from this graph. We therefore conducted Levene’s test for homogeneity of variance. The p-value is less than 0.05, indicating strong evidence that the variance differs across groups. This is not surprising, since each residual corresponds to an individual class rather than a group average. Figure 4 plots the residuals against each factor, so we can examine the equal-variance assumption with respect to class type and school ID. The residual variance is not constant across the levels of either factor and seems to be affected by the factor level.
Normality Assumption: In Figure 3, the Q-Q plot shows heavy tails on both ends, so the normality assumption is questionable. To investigate further, we conducted the Shapiro-Wilk normality test. Since the p-value is below the chosen significance level of 0.05, we reject the null hypothesis and conclude that the residuals are unlikely to come from a normally distributed population.
In search of a remedy for the non-normality issue, we considered a Box-Cox transformation of the response variable (the math score). However, the Shapiro-Wilk test on the transformed response still rejects the null hypothesis at the 0.05 significance level. Taking into account that the Box-Cox transformation also undermines the interpretability of the data, we decided not to transform the response variable.
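A sketch of this check with `MASS::boxcox`, which profiles the transformation parameter \(\lambda\) over a grid:

```r
library(MASS)

# Profile the Box-Cox likelihood and pick the lambda that maximizes it
bc <- boxcox(avg_math ~ g1classtype + g1schid, data = class_level,
             lambda = seq(-2, 2, by = 0.1))
lambda_hat <- bc$x[which.max(bc$y)]

# Refit on the transformed response and re-test normality of the residuals
fit_bc <- aov((avg_math^lambda_hat - 1) / lambda_hat ~ g1classtype + g1schid,
              data = class_level)
shapiro.test(residuals(fit_bc))  # still rejected at the 0.05 level in our data
```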
Interaction Effects: Lastly, we conducted Tukey’s test for additivity to check the assumption of an ignorable interaction, since this test is appropriate for a randomized block design. The result is more satisfactory: with a p-value of 0.42, we cannot reject the null hypothesis at the 0.05 significance level, so it is plausible that there is no interaction between the two factors in the ANOVA model.
Figure 3: Model diagnostic plots. Left panel: residual versus fitted values. Right panel: QQ plot with residuals.
Figure 4: More residual plots. Left panel: residuals vs. class type. Right panel: residuals vs. school ID.
To conduct an F-test for the main effects of class types, we state the null hypothesis \(H_{0}\): \(\alpha_{1}=\alpha_{2}=\alpha_{3}=0\), where \(\alpha_{i}\) denotes the main effect of the \(i\)th class type, against the alternative \(H_{a}\): not all \(\alpha_{i}\) are zero. From Table 1, the p-value is less than 0.05: data at least this extreme would occur less than 5% of the time if the class types had equal main effects. We therefore reject the null hypothesis at the 0.05 significance level and conclude that there is a main effect of the class types.
Then, we use Tukey’s procedure to find out where the differences come from. Figure 5 shows confidence intervals for the differences in means for all pairs of class types. For example, the interval at the bottom of Figure 5 compares regular classes with regular-with-aide classes; since this 95% confidence interval includes zero, there is no statistically significant difference between those two treatments. The other two intervals in Figure 5 show that the main effect can be attributed to the differences between the small class type and the other two class types.
Figure 5: Confidence intervals from Tukey’s multiple comparisons of the class-type means (1: small class, 2: regular class, 3: regular class with aide).
We would now like to make a causal statement based on the above analysis. If the following assumptions hold in this study, we can draw causal inferences from the analysis. The assumptions are summarized below.
Stable unit treatment value assumption (SUTVA): SUTVA has two implications. 1) No interference: the class type assignment of one teacher does not affect the potential outcomes of others, so a class-level average math score does not depend on which class types other teachers are assigned to. For example, if a teacher in a regular class were discouraged by the assignment and did not work as hard as usual, that could interfere with the experiment’s outcome. 2) Consistent treatments: there is no different version of each treatment level [1]. In STAR, this means all teachers in the same class type should teach in a similar way, so that the treatment is stable and teaching quality is not mistaken for the cause. This assumption excludes unstable-treatment cases from causal inference; however, enforcing the same teaching method seems very difficult in practice.
Randomization: With the double randomization of students and teachers, we can fairly assume that teachers and students are independent of the three class types. With a large sample size, variation in the performance of individual students and teachers is unlikely to interfere with our outcome. Also, with the randomization of student assignment, we can ignore the potential influences of other factors (such as gender).
Exchangeability: A randomized experiment is expected to produce exchangeability [2]. Exchangeability means the potential outcomes are independent of treatment assignment. In our case, the class type a teacher is assigned to is independent of that class’s potential average math score; the assignment itself does not predetermine the class’s performance. Note that a potential outcome is not the observed outcome. With exchangeability, we can conclude that the differences in scores among the three class types would exist in the whole population, and thus use the observed association to draw causal inferences.
Positivity: Positivity is the assumption that every unit has a positive probability of receiving each level of the treatment. In STAR, we check whether every school has all three class types and drop the schools that do not. This also corresponds to the randomized block design of our analysis.
Double-blindness: To satisfy the double-blind assumption, teachers and students should not have any preconception that there is a treatment effect among the class types; otherwise, this information could disturb their educational performance. In the same way, if the scientists believed the class types had treatment effects, their analysis could be biased by this preconception.
As long as the above assumptions hold and our analysis confirms that the main effects of the class types are significant, we can quantify the average causal effects of class type. However, many aspects of the experiment could undermine these assumptions. For example, missing values could invalidate the randomization: if values are missing not at random but for some systematic reason, the exchangeability assumption would not hold, making it difficult to draw a causal statement.
In Project 1, we used a one-way ANOVA model to test whether there is a treatment effect of class type on the math scaled scores in a student-level analysis. The causal statement was made possible by assuming a completely randomized design. In fact, the stable unit treatment value assumption (SUTVA) is not plausible in the student-level study, because students are prone to peer influences. Violations of SUTVA complicate the regression and imputation approaches considerably, so we focus on the teacher as the unit in order to draw causal inferences [3]. In this project, with teachers as the unit, the observed values are the averages of test scores within each class, which ensures that the class type assignment of one teacher does not affect the potential outcomes of others.
In addition, we implemented a randomized block design in Project 2 by adding school identity as a second factor. This is necessary because different schools have different pre-treatment levels, which would interfere with the scores under each class type. Under the randomized block design, subjects within each block are randomly assigned to different treatments. Compared with the completely randomized design of Project 1, this design reduces within-class-type variability and potential confounding, producing better estimates of the treatment effects [4]. In the two-way ANOVA model, the school blocking factor accounts for an additional source of variability: the previous within-treatment variability is split into block variability and a reduced error variability. As a result, the treatment variability becomes larger relative to the error variability, so class type provides a better explanation of the differences in scores.
At the beginning of the re-exploration, we examine why school is chosen as the blocking factor. We believe location is an important source of heterogeneity among schools. The left panel of Figure 6 shows a scatter plot of math scores ordered by school ID and colored by location. There is a clear pattern: a larger portion of the schools are in rural areas (blue), and their students generally perform better on the math tests. In contrast, the red inner-city schools are clustered to the left; collectively, students attending inner-city schools have lower math scores than those in other locations. This strong association between inner-city schools and lower math scores is confirmed by the boxplot in the right panel of Figure 6.
According to the project description, schools in which more than half of the students received free or reduced-cost lunch were tentatively classified as inner-city, which is indicative of a low socioeconomic background. Inner-city and suburban schools are all located in metropolitan areas; perhaps this is one cause of the lower scores. School is designed as the blocking factor not only because it is easy and natural to implement, but also because it reduces the effect of such nuisance factors.
Figure 6: Math Test Scores by School ID, School Area and Class Types. Left panel: the distribution of student math scores in all schools colored by school locations. Right panel: the boxplot of class averaged math scores grouped by school locations.
Before exploring the data in a longitudinal fashion, we examined the missing values. In Figure 7, blacked-out areas indicate missing data. The pattern suggests that students drop out of the program and new students sign up every year. Because we do not have enough information on these students, we focus on the students who participated throughout the course of the program.
Figure 7: Missing Data Plot. The blackout areas indicate missing entries, and the gray areas represent entries that are present. More than half of the data are not complete.
Within these students, we are further interested in those who continuously stayed in the same class type (small or regular). An alluvial diagram helps us track the flow of students (Figure 8). Students from regular classes are the most active in switching to other class types, and the total number of students shrinks over time; visibly, two streams of students leave the regular classes for other class types every year. We use the students who stay in the same class type for the next plot (Figure 9).
Figure 8: Alluvial Plot. This plot displays the class types of students in each grade. Some students transfer to other class types as they rise in grades.
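As an illustration, such a diagram can be drawn with the ggalluvial package (our assumption; the data frame `flows`, with one row per student per grade and columns `grade`, `class_type`, and `student_id`, is hypothetical):

```r
library(ggplot2)
library(ggalluvial)

ggplot(flows,
       aes(x = grade, stratum = class_type, alluvium = student_id,
           fill = class_type)) +
  geom_flow() +     # streams of students moving between class types
  geom_stratum() +  # class-type blocks within each grade
  labs(x = "Grade", y = "Number of students", fill = "Class type")
```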
Figure 9 shows the average scores on a longitudinal scale. Students in the small class type perform better on the standardized math test in almost all grades and locations. However, the average panel on the right shows that the gap between small classes and the other types narrows in higher grades. This result might challenge the perception that a small class size always has a significant grade-boosting effect even as students move to higher grades. In other words, the effect of small classes might only be significant in the lower grades; as students move up, the advantage of a small class size might fade away.
Figure 9: Comparison Scores in Different Locations and Grades. In each subplot, small classes almost always have the highest score.
Motivation
Although teachers were randomly assigned to class types in this experiment, there are missing values, as presented in Section 1.2. By dropping these entries, we lose some information, and if the values are not missing at random, we cannot be confident that the remaining dataset is completely randomized.
Regression Model
Considering each teacher as the individual unit, we want to examine whether the demographic features of teachers affect 1st-grade math performance beyond class type. We propose a regression model with school fixed effects.
\(Y_{jk}=\beta X_{jk}+\sum_{t=1}^{6} \gamma_{t}T_{t,jk}+\alpha_{j}+\epsilon_{jk}\), where \(j=1,\ldots,72\) and \(k=1,\ldots,n_{j}\)
\(Y_{jk}\) denotes the mean math score of teacher \(k\) at school \(j\).
\(X_{jk}\) is a categorical variable of our primary interest, class type.
\(T_{t,jk}\) denotes the \(t\)th teacher covariate: teacher gender, teacher race, teacher career ladder level, teacher highest degree, and years of total teaching experience.
\(\alpha_{j}\) is the school fixed effect, which controls for unobservable differences in math performance across schools. It varies across schools but is constant across teachers within a school.
\(\epsilon_{jk}\) denotes the error term.
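Since the school fixed effects enter as a full set of school dummies, the model can be fit with `lm` in base R; a sketch with hypothetical covariate column names:

```r
# School fixed effects via factor(g1schid); the teacher covariates are
# assumed to be columns experience, gender, race, ladder, and degree.
fit_fe <- lm(avg_math ~ g1classtype + experience + gender + race +
               ladder + degree + factor(g1schid),
             data = class_level)
anova(fit_fe)  # sequential ANOVA table comparable to Table 2
```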
Result
Table 2 shows a summary of our regression model. All p-values for the teacher-characteristic variables fall above our specified \(\alpha=0.05\), providing evidence that teacher characteristics explain relatively little of the variation in the averaged 1st-grade math test scores.
Table 2: ANOVA Table for Regression Model
| Source of Variation | Degrees of Freedom | Sum of Squares | F | p-value |
|---|---|---|---|---|
| Class Type | 2 | 9533 | 17.76 | 6.45e-08 |
| Teaching Experience(year) | 1 | 212 | 0.79 | 0.38 |
| Teacher Gender | 1 | 105 | 0.39 | 0.53 |
| Teacher Race | 1 | 99 | 0.37 | 0.54 |
| Teacher Career Ladder Level | 3 | 1220 | 1.51 | 0.21 |
| Teacher Highest Degree | 3 | 90 | 0.11 | 0.95 |
| Error/Residual | 239 | 64151 | | |
Conclusion
As explored above, teachers’ features do not have a significant influence on student scores, which also suggests that the missing values do not follow a specific pattern, so the remaining dataset can be regarded as randomized. Combining this with our earlier results, the small class type not only has a statistically significant effect on math scores but, under our causal assumptions, also causes higher math scores. Hence, we conclude that assignment to a small class can cause a higher math score.
[1] Marin Vlastelica Pogančić (2019). “Causal vs. Statistical Inference.” Max Planck Institute for Intelligent Systems. https://towardsdatascience.com/causal-vs-statistical-inference-3f2c3e617220
[2] Miguel A. Hernán and James M. Robins (2018). Causal Inference: What If. ISBN 1420076167.
[3] Guido W. Imbens and Donald B. Rubin. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction, Chapter 9. ISBN 978-0-521-88588-1.
[4] Eric A. Hanushek (1999). “Some Findings from an Independent Investigation of the Tennessee STAR Experiment and from Other Investigations of Class Size Effects.” Educational Evaluation and Policy Analysis 21(2): 143-163.
[5] C.M. Achilles, Helen Pate Bain, Fred Bellott, Jayne Boyd-Zaharias, Jeremy Finn, John Folger, John Johnston, and Elizabeth Word (2008). “Tennessee’s Student Teacher Achievement Ratio (STAR) Project.” Harvard Dataverse, V1, UNF:3:Ji2Q+9HCCZAbw3csOdMNdA== [fileUNF]. https://doi.org/10.7910/DVN/SIWH9F
[6] Michael H. Kutner et al. (2005). Applied Linear Statistical Models. 5th ed. New York: McGraw-Hill Irwin.
[7] “Randomized Block Design” (2008). In The Concise Encyclopedia of Statistics. Springer, New York, NY.
Original report of Project 2: https://github.com/kenneth-lee-ch/STA-207/tree/master/project2
Final report: https://github.com/ZCheryl/STA-207
## ─ Session info ───────────────────────────────────────────────────────────────
## setting value
## version R version 3.6.0 (2019-04-26)
## os Ubuntu 16.04.6 LTS
## system x86_64, linux-gnu
## ui X11
## language (EN)
## collate C.UTF-8
## ctype C.UTF-8
## tz Etc/UTC
## date 2020-03-16
##
## ─ Packages ───────────────────────────────────────────────────────────────────
## package * version date lib source
## abind 1.4-5 2016-07-21 [1] RSPM (R 3.6.0)
## AER * 1.2-9 2020-02-06 [1] RSPM (R 3.6.0)
## assertthat 0.2.1 2019-03-21 [1] RSPM (R 3.6.0)
## backports 1.1.5 2019-10-02 [1] RSPM (R 3.6.0)
## callr 3.4.2 2020-02-12 [1] RSPM (R 3.6.0)
## car * 3.0-7 2020-03-11 [1] RSPM (R 3.6.0)
## carData * 3.0-3 2019-11-16 [1] RSPM (R 3.6.0)
## cellranger 1.1.0 2016-07-27 [1] RSPM (R 3.6.0)
## cli 2.0.2 2020-02-28 [1] RSPM (R 3.6.0)
## colorspace 1.4-1 2019-03-18 [1] RSPM (R 3.6.0)
## crayon 1.3.4 2017-09-16 [1] RSPM (R 3.6.0)
## crosstalk 1.1.0.1 2020-03-13 [1] CRAN (R 3.6.0)
## curl 4.3 2019-12-02 [1] RSPM (R 3.6.0)
## data.table * 1.12.8 2019-12-09 [1] RSPM (R 3.6.0)
## desc 1.2.0 2018-05-01 [1] RSPM (R 3.6.0)
## devtools * 2.2.2 2020-02-17 [1] RSPM (R 3.6.0)
## digest 0.6.25 2020-02-23 [1] RSPM (R 3.6.0)
## dplyr * 0.8.5 2020-03-07 [1] RSPM (R 3.6.0)
## ellipsis 0.3.0 2019-09-20 [1] RSPM (R 3.6.0)
## evaluate 0.14 2019-05-28 [1] RSPM (R 3.6.0)
## fansi 0.4.1 2020-01-08 [1] RSPM (R 3.6.0)
## farver 2.0.3 2020-01-16 [1] RSPM (R 3.6.0)
## forcats 0.5.0 2020-03-01 [1] RSPM (R 3.6.0)
## foreign 0.8-71 2018-07-20 [2] CRAN (R 3.6.0)
## Formula 1.2-3 2018-05-03 [1] RSPM (R 3.6.0)
## fs 1.3.2 2020-03-05 [1] RSPM (R 3.6.0)
## ggplot2 * 3.3.0 2020-03-05 [1] RSPM (R 3.6.0)
## glue 1.3.1 2019-03-12 [1] RSPM (R 3.6.0)
## gridExtra * 2.3 2017-09-09 [1] RSPM (R 3.6.0)
## gtable 0.3.0 2019-03-25 [1] RSPM (R 3.6.0)
## haven 2.2.0 2019-11-08 [1] RSPM (R 3.6.0)
## hms 0.5.3 2020-01-08 [1] RSPM (R 3.6.0)
## htmltools 0.4.0 2019-10-04 [1] RSPM (R 3.6.0)
## htmlwidgets 1.5.1 2019-10-08 [1] RSPM (R 3.6.0)
## httr 1.4.1 2019-08-05 [1] RSPM (R 3.6.0)
## jsonlite 1.6.1 2020-02-02 [1] RSPM (R 3.6.0)
## knitr 1.28 2020-02-06 [1] RSPM (R 3.6.0)
## labeling 0.3 2014-08-23 [1] RSPM (R 3.6.0)
## lattice 0.20-38 2018-11-04 [2] CRAN (R 3.6.0)
## lazyeval 0.2.2 2019-03-15 [1] RSPM (R 3.6.0)
## lifecycle 0.2.0 2020-03-06 [1] RSPM (R 3.6.0)
## lmtest * 0.9-37 2019-04-30 [1] RSPM (R 3.6.0)
## magrittr 1.5 2014-11-22 [1] RSPM (R 3.6.0)
## MASS * 7.3-51.4 2019-03-31 [2] CRAN (R 3.6.0)
## Matrix 1.2-17 2019-03-22 [2] CRAN (R 3.6.0)
## memoise 1.1.0 2017-04-21 [1] RSPM (R 3.6.0)
## multcompView * 0.1-8 2019-12-19 [1] RSPM (R 3.6.0)
## munsell 0.5.0 2018-06-12 [1] RSPM (R 3.6.0)
## naniar * 0.5.0 2020-02-28 [1] RSPM (R 3.6.0)
## nortest * 1.0-4 2015-07-30 [1] RSPM (R 3.6.0)
## openxlsx 4.1.4 2019-12-06 [1] RSPM (R 3.6.0)
## pillar 1.4.3 2019-12-20 [1] RSPM (R 3.6.0)
## pkgbuild 1.0.6 2019-10-09 [1] RSPM (R 3.6.0)
## pkgconfig 2.0.3 2019-09-22 [1] RSPM (R 3.6.0)
## pkgload 1.0.2 2018-10-29 [1] RSPM (R 3.6.0)
## plotly * 4.9.2 2020-02-12 [1] RSPM (R 3.6.0)
## prettyunits 1.1.1 2020-01-24 [1] RSPM (R 3.6.0)
## processx 3.4.2 2020-02-09 [1] RSPM (R 3.6.0)
## ps 1.3.2 2020-02-13 [1] RSPM (R 3.6.0)
## purrr 0.3.3 2019-10-18 [1] RSPM (R 3.6.0)
## R6 2.4.1 2019-11-12 [1] RSPM (R 3.6.0)
## Rcpp 1.0.3 2019-11-08 [1] RSPM (R 3.6.0)
## readxl 1.3.1 2019-03-13 [1] RSPM (R 3.6.0)
## remotes 2.1.1 2020-02-15 [1] RSPM (R 3.6.0)
## rio 0.5.16 2018-11-26 [1] RSPM (R 3.6.0)
## rlang 0.4.5 2020-03-01 [1] RSPM (R 3.6.0)
## rmarkdown 2.1 2020-01-20 [1] RSPM (R 3.6.0)
## rprojroot 1.3-2 2018-01-03 [1] RSPM (R 3.6.0)
## sandwich * 2.5-1 2019-04-06 [1] RSPM (R 3.6.0)
## scales 1.1.0 2019-11-18 [1] RSPM (R 3.6.0)
## sessioninfo 1.1.1 2018-11-05 [1] RSPM (R 3.6.0)
## stringi 1.4.6 2020-02-17 [1] RSPM (R 3.6.0)
## stringr 1.4.0 2019-02-10 [1] RSPM (R 3.6.0)
## survival * 2.44-1.1 2019-04-01 [2] CRAN (R 3.6.0)
## testthat 2.3.2 2020-03-02 [1] RSPM (R 3.6.0)
## tibble 2.1.3 2019-06-06 [1] RSPM (R 3.6.0)
## tidyr 1.0.2 2020-01-24 [1] RSPM (R 3.6.0)
## tidyselect 1.0.0 2020-01-27 [1] RSPM (R 3.6.0)
## usethis * 1.5.1 2019-07-04 [1] RSPM (R 3.6.0)
## vctrs 0.2.4 2020-03-10 [1] RSPM (R 3.6.0)
## viridisLite 0.3.0 2018-02-01 [1] RSPM (R 3.6.0)
## visdat * 0.5.3 2019-02-15 [1] RSPM (R 3.6.0)
## withr 2.1.2 2018-03-15 [1] RSPM (R 3.6.0)
## xfun 0.12 2020-01-13 [1] RSPM (R 3.6.0)
## yaml 2.2.1 2020-02-01 [1] RSPM (R 3.6.0)
## zip 2.0.4 2019-09-01 [1] RSPM (R 3.6.0)
## zoo * 1.8-7 2020-01-10 [1] RSPM (R 3.6.0)
##
## [1] /home/rstudio-user/R/x86_64-pc-linux-gnu-library/3.6
## [2] /opt/R/3.6.0/lib/R/library