---
title: "Lab 4: Text Classification in Open Learning Resourses"
output: 
  html_document:
    toc: true
    toc_depth: 3
    toc_float: yes
    code_folding: show
    code_download: TRUE
editor_options: 
  markdown: 
    wrap: 72
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

## 0. INTRODUCTION

Text classification is a form of supervised machine learning for text
analysis. We used text classification to assess how learners demonstrate
data literacy in an open online learning network. The dataset consists of
1,431 student comments drawn from six activities on the NY Times Learning
Network: [Climate
Threats](https://www.nytimes.com/2020/10/15/learning/whats-going-on-in-this-graph-climate-threats.html),
[Air
Pollution](https://www.nytimes.com/2021/02/11/learning/whats-going-on-in-this-graph-world-cities-air-pollution.html),
[U.S.
Well-Being](https://www.nytimes.com/2020/12/03/learning/whats-going-on-in-this-graph-us-well-being-compared-internationally.html),
[Covid Herd
Immunity](https://www.nytimes.com/2021/03/04/learning/whats-going-on-in-this-graph-covid-herd-immunity.html),
[Vaccination
Roadblocks](https://www.nytimes.com/2021/03/18/learning/whats-going-on-in-this-graph-vaccination-roadblocks.html),
and [First
Vaccinated](https://www.nytimes.com/2020/12/28/learning/whats-going-on-in-this-graph-first-vaccinated.html).

**Data source**. To create the dataset, we first coded students'
comments along the following dimensions: match, tension, more
information, connection, challenge, affection, and science. These
dimensions capture critical data literacy skills. For instance,
"challenge" refers to the capacity to challenge data collection methods,
ways of representing data, interpretations of the data, and so on. We
scored each comment on each dimension using a binary scale (0 or 1).
Taking "match" as an example, if learners explained that their personal
experiences matched the data trends (e.g., "My community is one of the
areas under high water stress. This is true because it gets really hot
in the summer and we need water."), their comment was scored as 1.
Otherwise, the comment was scored as 0. Inter-rater reliability was
calculated with Cohen's kappa on 10% of the comments from each activity;
kappa was above 0.8 for every dimension, indicating good reliability.

In this walkthrough, we will use the "more Information" dimension as an
example to address the following research question: **How can machine
learning be used to assess learners' critical data literacy in open
online learning environments?** The label of "more information" relates
to the extent to which the student demonstrates curiosity about other
data that the featured visualization did not show. This could be a
future trend or when the student sought more contexts to interpret data
trends.

### Text classification workflow

[![Figure source: https://monkeylearn.com/text-classification](img/tcflow.png "A flowchart of text classification."){width="500"}](https://monkeylearn.com/text-classification/)

This workflow shows that, to build a text classification model, we first
turn the text column into numeric features during data wrangling and
then set up training and testing datasets for model development.
Specifically, before any analysis we take a quick look at the data and
the context in which it was collected. We then convert the text column
into numeric features for modeling. Finally, we split the data into two
sets and use only the training set for model development.

------------------------------------------------------------------------

## 1. PREPARE

First, let's load the following packages that we'll be needing for this
walkthrough:

```{r load-packages, message=FALSE}
library(tidyverse)
library(tidymodels)
library(textrecipes)
library(discrim) # the naive_Bayes() model specification for tidymodels comes from the discrim package
```
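
If any of these packages are not yet installed, you can install them once before loading them. The naivebayes package provides the engine we use for the naive Bayes model later on, and kernlab (used for the SVM in the comprehension check) can be installed the same way. A quick sketch, not evaluated when knitting:

```{r install-packages, eval=FALSE}
# run once in the console if needed
install.packages(c("tidyverse", "tidymodels", "textrecipes",
                   "discrim", "naivebayes"))
```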

------------------------------------------------------------------------

## 2. WRANGLE

To get started, we need to import, or "read", our data into R. The
function used to import your data will depend on the file format of the
data you are trying to import. First, however, you'll need to do the
following:

1.  Download the `comments.csv` file we'll be using.
2.  Create a folder in the directory on your computer where you stored
    your R Project and name it "data".
3.  Add the file to your data folder.
4.  Check your Files tab in RStudio to verify that your file is indeed
    in your data folder.

Now let's read our data into our Environment using the `read_csv()`
function and assign it to a variable name so we can work with it like
any other object in R. Then let's take a quick `glimpse()` at the data
to see what we have to work with.

```{r read-csv}
comments <- read_csv("data/comments.csv")

glimpse(comments)
```

We can see that the dataset includes five columns: `activity` (which
activity the comment came from), `user_display_name` (the user's display
name), `user_location` (the user's location), `comment` (the user's
comment), and `more_info` (whether the comment demonstrates data
literacy in the dimension of seeking more information).
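
Since we will stratify our data split on `more_info` and interpret model accuracy against it, it is worth checking how balanced this outcome is before modeling. A quick sketch:

```{r}
# class balance of the outcome we want to predict
comments %>%
  count(more_info) %>%
  mutate(prop = n / sum(n))
```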

Let's look at the data! Here are the first six comments:

```{r}
head(comments$comment)
```

## 3. MODEL

Now, let's build a classification model with two columns in the dataset:
`comment` and `more_info`. We need to split the data into training and
testing datasets. We can use the `initial_split()` function from the
rsample package (loaded with tidymodels) to create this binary split of
the data. The `strata` argument ensures that the distribution of
`more_info` is similar in the training set and the testing set. Since
the split uses random sampling, we set a seed so we can reproduce our
results.

```{r}
set.seed(123)

# convert the outcome to a factor, as required for classification models
comments <- comments %>%
  mutate(more_info = as.factor(more_info))

comments_split <- initial_split(comments, strata = more_info)

comments_train <- training(comments_split)
comments_test <- testing(comments_split)
```

We can check the dimensions of the two splits with the `dim()` function.

```{r}
dim(comments_train)

dim(comments_test)
```
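
By default, `initial_split()` reserves one quarter of the rows for the testing set, which is why the testing set is about a third of the size of the training set. If you wanted a different proportion, you could set the `prop` argument; a sketch (not evaluated, so the split above stays as is):

```{r, eval=FALSE}
# an 80/20 split instead of the default 75/25 (illustrative object name)
comments_split_80 <- initial_split(comments, prop = 0.8, strata = more_info)
```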

Next we need to preprocess this data (specifically, the `comment`
column) to prepare it for modeling: we have text data, and we need to
build numeric features for machine learning from that text. We first
initialize our set of preprocessing transformations with the `recipe()`
function, using a formula to specify the variables, our outcome
(`more_info`) and our predictor (`comment`), along with the data set
(`comments_train`). The `comments_test` data set is reserved for the
final testing step and should not be used while building models.

```{r}
comments_rec <-
  recipe(more_info ~ comment, data = comments_train)
```

Now we add steps to process the `comment` variable. First we tokenize
the text into words with `step_tokenize()`. Before computing tf-idf
(term frequency-inverse document frequency, which weights each token by
how often it appears in a comment and how rare it is across comments),
we use `step_tokenfilter()` to keep only the 1,000 most frequent tokens,
to avoid creating too many variables in our model. Finally, we use
`step_tfidf()` to compute the tf-idf features.

```{r preprocessing comment variable}
comments_rec <- comments_rec %>%
  step_tokenize(comment) %>%
  step_tokenfilter(comment, max_tokens = 1e3) %>%
  step_tfidf(comment)
```
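
To see what these steps actually produce, we can `prep()` the recipe and `bake()` it on the training data. This is only for inspection; the workflow we build below applies the recipe for us. A quick sketch:

```{r}
# one row per training comment, one tf-idf column per retained token (plus the outcome)
comments_features <- comments_rec %>%
  prep() %>%
  bake(new_data = NULL)

dim(comments_features)
```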

Let's create 10-fold cross-validation sets and use these resampled sets
for performance estimates. In each resample, 90% of the training data is
used to fit the model and the remaining 10% is held out for evaluation.

```{r}
set.seed(234)
comments_folds <- vfold_cv(comments_train)

comments_folds
```
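
Each element of `splits` is an rsplit object; if you want to inspect a single resample, `analysis()` and `assessment()` extract its two pieces. A quick sketch:

```{r}
first_fold <- comments_folds$splits[[1]]

dim(analysis(first_fold))    # roughly 90% of the training data
dim(assessment(first_fold))  # the held-out 10%
```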

Now let's set up a naive Bayes model.

```{r}
nb_spec <- naive_Bayes() %>%
  set_mode("classification") %>%
  set_engine("naivebayes")

nb_spec
```
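
The `naive_Bayes()` specification also exposes `smoothness` and `Laplace` arguments. If the sparse tf-idf features lead to zero-probability problems, you could set a Laplace correction explicitly; a sketch, not tuned:

```{r}
# illustrative alternative specification with a Laplace correction
nb_spec_laplace <- naive_Bayes(Laplace = 1) %>%
  set_mode("classification") %>%
  set_engine("naivebayes")
```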

Now that we have a full preprocessing recipe and a model specification,
we can build a tidymodels `workflow()` to bundle our modeling components
together.

```{r}
nb_wf <- workflow() %>%
  add_recipe(comments_rec) %>%
  add_model(nb_spec)

nb_wf
```

Rather than fitting once to the training data as a whole, let's fit the
model many times, once to each of the resampled folds, and evaluate on
the held-out part of each fold. This gives us a more honest estimate of
how well the model performs.

```{r}
nb_rs <- fit_resamples(
  nb_wf,
  comments_folds,
  control = control_resamples(save_pred = TRUE)
)
```

We can use `collect_metrics()` to get evaluation information.

```{r}
nb_rs_metrics <- collect_metrics(nb_rs)

nb_rs_metrics
```
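
By default, `fit_resamples()` reports standard classification metrics such as accuracy and ROC AUC. If we want to choose the metrics ourselves, for example adding sensitivity and specificity, we can pass a `metric_set()`; a sketch (this refits the model on every fold, so it takes a moment):

```{r}
nb_rs_more_metrics <- fit_resamples(
  nb_wf,
  comments_folds,
  metrics = metric_set(accuracy, roc_auc, sens, spec),
  control = control_resamples(save_pred = TRUE)
)

collect_metrics(nb_rs_more_metrics)
```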

Another way to evaluate our model is to examine the confusion matrix. A
confusion matrix tabulates a model's false positives and false negatives
for each class. The function `conf_mat()` computes a cross-tabulation of
observed and predicted classes. This allows us to visualize how well the
model performs and helps us identify problems and think about ways to
address them.

```{r}
# held-out predictions from the first resampled fold
cm <- conf_mat(nb_rs$.predictions[[1]], truth = more_info,
               estimate = .pred_class)

autoplot(cm, type = "heatmap")
```
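
The confusion matrix above uses only the first fold's held-out predictions. Because we saved predictions for every fold, we can also pool them across all ten folds with `collect_predictions()`; a quick sketch:

```{r}
nb_cm_all <- collect_predictions(nb_rs) %>%
  conf_mat(truth = more_info, estimate = .pred_class)

autoplot(nb_cm_all, type = "heatmap")
```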

##### ✅ Comprehension Check

What do you notice in the confusion matrix?

Replace the naive Bayes model with an SVM (support vector machine) model
and compare the results from these two models:

1.  Does the SVM model work better than the naive Bayes model?
2.  Why does the SVM model work better (or worse)?

```{r}
svm_spec <- svm_poly(degree = 1) %>% # specify a degree-1 (linear) polynomial SVM
  set_mode("classification") %>%
  set_engine("kernlab", scaled = FALSE)

svm_wf <- workflow() %>% # bundle the recipe and the SVM model into a workflow
  add_recipe(comments_rec) %>%
  add_model(svm_spec)
```

```{r}
install.packages("kernlab")
library(kernlab)

nb_rs2 <- fit_resamples(
   nb_wf2,
   comments_folds,
   control = control_resamples(save_pred = TRUE)
)
```

```{r}
svm_rs_metrics <- collect_metrics(svm_rs)
svm_rs_metrics

# held-out predictions from the first resampled fold
svm_cm <- conf_mat(svm_rs$.predictions[[1]], truth = more_info,
                   estimate = .pred_class)
autoplot(svm_cm, type = "heatmap")
```
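
To compare the two models directly, one option is to bind their resampled metrics together; a quick sketch:

```{r}
bind_rows(
  collect_metrics(nb_rs) %>% mutate(model = "naive Bayes"),
  collect_metrics(svm_rs) %>% mutate(model = "SVM")
) %>%
  select(model, .metric, mean, std_err)
```

Finally, once you have settled on a model, the held-out test set gives a last, unbiased estimate of performance. A minimal sketch with `last_fit()`, assuming you keep the naive Bayes workflow as your final model:

```{r}
final_fit <- last_fit(nb_wf, comments_split)

collect_metrics(final_fit)

collect_predictions(final_fit) %>%
  conf_mat(truth = more_info, estimate = .pred_class)
```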
