This assessment report was created at no cost using R, RStudio, and Plotly. Although many other tools are available, R is well suited to documentation, and reusing these scripts should save time on future assessments. Selected results and code are included for documentation purposes; readers who only want the findings can focus on the graphs and their descriptions.
To explore the dashboard interactively, you are highly recommended to visit https://rpubs.com/utjimmyx/assessment_self.
A total of 218 student papers from seven courses (Biology 4928, Chemistry 1908 and Chemistry 4948, KINE 1018, KINE 4868, Political Science 4908, NURS 4908) offered in Fall 2022 and Spring 2023 were collected.
A team of six faculty members (five from Modern Languages and one from Business Administration) used the rubric found in the assessment folder to score the artifacts. Each faculty member scored 40 artifacts and was also asked to review about 15 shared submissions so that inter-rater reliability could be evaluated.
The 15 shared artifacts were examined for potential rater bias using a Bland–Altman agreement analysis. The findings indicate minimal bias among the assessors, ranging from 0.14 to 0.43. After this evaluation, 203 artifacts were included in the final analysis. The subject codes used in the subsequent analysis correspond to the courses listed above.
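For reference, the bias statistic in a Bland–Altman check is simply the mean difference between two raters' scores on the shared artifacts. Below is a minimal sketch assuming hypothetical vectors rater1 and rater2 (the actual rating data are not reproduced here):
# Hypothetical paired scores for the 15 shared artifacts (illustrative only)
rater1 <- c(3, 4, 3, 2, 4, 3, 3, 4, 2, 3, 4, 3, 3, 2, 4)
rater2 <- c(3, 4, 2, 2, 4, 3, 4, 4, 2, 3, 4, 3, 3, 3, 4)
d <- rater1 - rater2
bias <- mean(d)                       # Bland-Altman bias (mean difference)
loa <- bias + c(-1.96, 1.96) * sd(d)  # 95% limits of agreement
bias; loa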
The average overall score for the sample was 3.02 ± 0.61 out of a possible 4. Figure 1 offers a general overview of the evaluations for each rubric category (Self Assessment, Strategy Development, and Implementation of Strategy).
86.7% of students in the sample met or exceeded expectations for Self Assessment.
79.3% of students in the sample met or exceeded expectations for Strategy Development.
71.4% of students in the sample met or exceeded expectations for Implementation.
On average, 69% of students in the sample met or exceeded expectations for the combined rubric items.
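These combined figures follow directly from the "exceeds" and "meets" proportions entered in the code below; for Self Assessment, for example:
# Met-or-exceeded = proportion exceeding + proportion meeting
0.2709 + 0.59605  # = 0.86695, i.e. the 86.7% reported for Self Assessment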
library(tidyr)
library(dplyr)
library(ggplot2)
library(plotly)
# Enter the proportion of students exceeding expectations in each rubric category
percentage1 <- data.frame(Category = c("Self_assessment","Strategy_development","Implementation"),
proportion = c(0.2709, 0.2414, 0.2364))
head(percentage1)
## Category proportion
## 1 Self_assessment 0.2709
## 2 Strategy_development 0.2414
## 3 Implementation 0.2364
# Enter the proportion of students meeting expectations in each rubric category
percentage2 <- data.frame(Category = c("Self_assessment","Strategy_development","Implementation"),
proportion = c(0.59605, 0.5517, 0.4778))
head(percentage2)
## Category proportion
## 1 Self_assessment 0.59605
## 2 Strategy_development 0.55170
## 3 Implementation 0.47780
updated <- merge(percentage1, percentage2, by = "Category")
print(updated)
## Category proportion.x proportion.y
## 1 Implementation 0.2364 0.47780
## 2 Self_assessment 0.2709 0.59605
## 3 Strategy_development 0.2414 0.55170
library(tidyr)
# Convert the merged result to long format for easier plotting (tip from ChatGPT)
long <- updated %>%
pivot_longer(cols=c('proportion.x', 'proportion.y'), names_to = "Variable", values_to = "Proportion")
# Optional cosmetic step (not strictly necessary)
# Replace the strings 'proportion.x' and 'proportion.y' with descriptive labels
## ref: https://sparkbyexamples.com/r-programming/replace-string-with-another-string-in-r/
long$Variable[long$Variable == 'proportion.x'] <- 'Exceeds Expectations'
long$Variable[long$Variable == 'proportion.y'] <- 'Meets Expectations'
long
## # A tibble: 6 × 3
## Category Variable Proportion
## <chr> <chr> <dbl>
## 1 Implementation Exceeds Expectations 0.236
## 2 Implementation Meets Expectations 0.478
## 3 Self_assessment Exceeds Expectations 0.271
## 4 Self_assessment Meets Expectations 0.596
## 5 Strategy_development Exceeds Expectations 0.241
## 6 Strategy_development Meets Expectations 0.552
colnames(long)[2] <- "Rubric_items"
plot <- ggplot(data=long, aes(x=Category, y=Proportion,
fill = Rubric_items)) +
geom_bar(stat = "identity") +
scale_y_continuous(labels = scales::percent) +
ylab("Proportion (%)") +
xlab("Rubric Categories") +
ggtitle("Proportion of Submissions Exceeding or Meeting Expectations") +
theme(plot.title = element_text(hjust = 0.1)) +
theme(axis.text.x=element_text(size=6))
ggplotly(plot)
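ggplotly() converts the static ggplot object into an interactive widget. As an optional refinement, the hover text can be limited with its tooltip argument:
# Optional: show only the category and the stacked proportion on hover
ggplotly(plot, tooltip = c("x", "y"))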
# Load the artifact-level scores
df <- read.csv("whole3.csv")
sum1 <- df %>%
group_by(Subject) %>%
summarize(Average_Self_Assessment = mean(Self_assessment, na.rm = TRUE),
SD_Self_Assessment = sd(Self_assessment, na.rm = TRUE)) %>%
ungroup()
sum2 <- df %>%
group_by(Subject) %>%
summarize(Average_Strategy_development = mean(Strategy_development, na.rm = TRUE),
SD_Strategy_development = sd(Strategy_development, na.rm = TRUE)) %>%
ungroup()
sum3 <- df %>%
group_by(Subject) %>%
summarize(Average_Implementation = mean(Implementation, na.rm = TRUE),
SD_Implementation = sd(Implementation, na.rm = TRUE)) %>%
ungroup()
Sum_subject <- left_join(sum1, sum2, by = "Subject") %>%
left_join(sum3, by = "Subject")
head(Sum_subject)
## # A tibble: 6 × 7
## Subject Average_Self_Assessment SD_Self_Assessment Average_Strategy_developm…¹
## <chr> <dbl> <dbl> <dbl>
## 1 BP 3.12 0.641 2.75
## 2 DS 3.21 0.426 3.14
## 3 GC 3.47 0.513 3.37
## 4 JG 3.21 0.626 3.06
## 5 JM 3.13 0.612 3.11
## 6 PH 2.90 0.693 2.71
## # ℹ abbreviated name: ¹Average_Strategy_development
## # ℹ 3 more variables: SD_Strategy_development <dbl>,
## # Average_Implementation <dbl>, SD_Implementation <dbl>
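As a side note, the three grouped summaries above can also be produced in a single pass with dplyr's across(); a compact sketch (the generated column names follow the .names pattern and may differ slightly in capitalization from those above):
# One-pass alternative to building and joining sum1, sum2, and sum3
Sum_subject_alt <- df %>%
  group_by(Subject) %>%
  summarize(across(c(Self_assessment, Strategy_development, Implementation),
                   list(Average = ~ mean(.x, na.rm = TRUE),
                        SD = ~ sd(.x, na.rm = TRUE)),
                   .names = "{.fn}_{.col}"))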
# Select certain columns and create a new dataset
Averages <- Sum_subject %>%
select(Subject, Average_Self_Assessment,
Average_Strategy_development, Average_Implementation)
# Print the new dataset
print(Averages)
## # A tibble: 6 × 4
## Subject Average_Self_Assessment Average_Strategy_deve…¹ Average_Implementation
## <chr> <dbl> <dbl> <dbl>
## 1 BP 3.12 2.75 2.75
## 2 DS 3.21 3.14 2.79
## 3 GC 3.47 3.37 3.21
## 4 JG 3.21 3.06 3.03
## 5 JM 3.13 3.11 3.06
## 6 PH 2.90 2.71 2.58
## # ℹ abbreviated name: ¹Average_Strategy_development
library(tidyr)
# Reshape the data to long format (pivot_longer() supersedes gather())
your_data_long <- pivot_longer(Averages, cols = -Subject,
                               names_to = "Variable", values_to = "Value")
library(dplyr)
# Create a scatter plot of the average scores for each subject
# (geom_smooth(method = "lm") is dropped: with a single point per
# subject in each facet, a per-group linear fit cannot be estimated)
plot <- ggplot(your_data_long, aes(x = Subject, y = Value, color = Subject)) +
  geom_point() +
labs(title = "Average Scores (1-4) for each Subject",
x = "Subject (see page 1 for details)",
y = "Scores") +
facet_wrap(~Variable, scales = "free_y", ncol = 1) +
theme_minimal()
ggplotly(plot)
library(dplyr)
# Select certain columns and create a new dataset
SDs <- Sum_subject %>%
select(Subject, SD_Self_Assessment,
SD_Strategy_development, SD_Implementation)
# Print the new dataset
print(SDs)
## # A tibble: 6 × 4
## Subject SD_Self_Assessment SD_Strategy_development SD_Implementation
## <chr> <dbl> <dbl> <dbl>
## 1 BP 0.641 0.463 0.463
## 2 DS 0.426 0.535 0.699
## 3 GC 0.513 0.496 0.631
## 4 JG 0.626 0.716 0.740
## 5 JM 0.612 0.787 0.870
## 6 PH 0.693 0.776 0.825
library(tidyr)
# Reshape the data to long format
your_data_long <- pivot_longer(SDs, cols = -Subject,
                               names_to = "Variable", values_to = "Value")
# Create a scatter plot of the standard deviations for each subject
# (geom_smooth is dropped for the same reason as above)
plot <- ggplot(your_data_long, aes(x = Subject, y = Value, color = Subject)) +
  geom_point() +
labs(title = "Standard Deviations for each Subject",
x = "Category",
y = "Values") +
facet_wrap(~Variable, scales = "free_y", ncol = 1) +
theme_minimal()
ggplotly(plot)
The following three dashboards show the distribution of assessment scores for each discipline interactively, displaying the quartiles and averages and revealing any skewness.
The results indicate that the following three programs (JM - KINE 4868, JG - KINE 1018, GC - Political Science 4908) outperform most other majors, suggesting a need to identify interventions for enhancing all remaining programs.
# Load the artifact-level scores for the box plots
data <- read.csv("assessment1.csv", stringsAsFactors = FALSE)
library(plotly)
p <- plot_ly(data,
x = ~Self_assessment,
color = ~Subject, type = "box")
p
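Plotly box traces show the median and quartiles by default. Because the averages are also of interest here, the mean can optionally be overlaid via the box trace's boxmean attribute:
# Optional: overlay the mean on each box ("sd" would also add one
# standard deviation around it)
plot_ly(data, x = ~Self_assessment, color = ~Subject,
        type = "box", boxmean = TRUE)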
p1 <- plot_ly(data,
x = ~Strategy_development,
color = ~Subject, type = "box")
p1
p2 <- plot_ly(data,
x = ~Implementation,
color = ~Subject, type = "box")
p2
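The three box plots could also be combined into a single dashboard with plotly's subplot(); this is an optional presentation choice, not part of the original report:
# Stack the three box-plot widgets into one combined view
subplot(p, p1, p2, nrows = 3)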
The findings indicate promising overall scores; nevertheless, the average scores for BP (Biology 4928) and PH (NURS 4908) lag behind those of the other subjects across all three categories. It is important to acknowledge that the sample sizes for subjects such as BP (Biology 4928) and DS (Chemistry 1908 and Chemistry 4948) are generally smaller than those for other subjects; consequently, outliers may have more impact on the final results for these subjects.
In addition, it is recommended that this assessment be repeated and that, beforehand, faculty come together to develop an assignment prompt that better matches the goal of this assessment.
The final step involves some experimenting with new ideas in reporting and assessment. Dynamic reporting lets readers explore results interactively without rerunning the calculations or data analysis. One of the most popular dynamic reporting tools is plotly, which offers a range of free and accessible features. By automatically embedding front-end techniques (such as HTML, CSS, and JavaScript) into R output to build dynamic, polished formatting, educational assessment can be reported more efficiently. We can now share the assessment results online or view them locally in a web browser. Below are three simple examples. This page includes all of the dashboards created using R, RStudio, and plotly; you can also visit each interactive dashboard by clicking its link.
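As one way to share results, an interactive figure can be exported to a standalone HTML file with the htmlwidgets package; a minimal sketch (the file name is illustrative):
library(htmlwidgets)
# Save the last interactive plot as a self-contained HTML file that can
# be opened in any browser or uploaded for online sharing
saveWidget(ggplotly(plot), "assessment_dashboard.html", selfcontained = TRUE)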
We could leverage R and RStudio to develop functional dashboards that Excel cannot handle, both for internal reporting and for prototyping an online analytical application. The professional plan can authenticate users with password-protected access for privacy and security.
Thank you everyone for your time! Any feedback is appreciated!