As a reminder, to earn a badge for each lab, you are required to respond to a set of prompts for two parts:
In Part I, you will reflect on your understanding of key concepts and begin to think about potential next steps for your own study.
In Part II, you will create a simple data product in R that demonstrates your ability to apply an analytic technique introduced in this learning lab.
Part A:
Part B: One last time, use your institutional library (e.g., the NCSU Library), Google Scholar, or a search engine to locate a research article, presentation, or resource that applies unsupervised machine learning to an educational context aligned with your research interests. More specifically, locate a study that involves Latent Profile Analysis or a similar method. You may find the published papers that have used LPA helpful in this respect; those can be browsed here.
Provide an APA citation for your selected study.
Rosenberg, J. M., & Krist, C. (2021). Combining machine learning and qualitative methods to elaborate students’ ideas about the generality of their model-based explanations. Journal of Science Education and Technology, 30, 255-267.
What research questions were the authors of this study trying to address using Latent Profile Analysis or a similar method?
How can an approach that integrates ML methods and interpretive qualitative coding be used to elaborate students’ consideration of generality as a means of assessing students’ participation in science practices?
The authors illustrated how unsupervised machine learning methods, when coupled with qualitative, interpretive coding, were used to revise their construct map for generality in a way that allowed for a more nuanced evaluation that was closely tied to empirical patterns in the data.
According to the researchers, they adopted a CGT approach with the aim of grounded conceptual development: to elaborate and iteratively revise a construct map that represented how they characterized students’ consideration of the epistemic criterion of generality when constructing model-based scientific explanations.
Like the last data product, this one may be a challenge, too. Here, estimate latent profiles using your own data. If you do not have ready access to appropriate data (for LPA, continuous/numeric data), choose any of the data sets in the data folder of this repository.
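If you plan to use your own data, here is a minimal sketch of the basic workflow; the file name my_data.csv and the columns var1 through var3 are placeholders for whatever continuous variables you choose.
library(tidyverse) # these packages are also loaded below
library(tidyLPA)
my_lpa_data <- read_csv("data/my_data.csv") # hypothetical file; point this at your own data
my_lpa_data %>%
  select(var1, var2, var3) %>% # keep only the continuous/numeric indicators
  estimate_profiles(n_profiles = 3) %>% # estimate a three-profile solution with mclust
  plot_profiles(add_line = TRUE) # inspect the estimated profile means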
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.2 ✔ readr 2.1.4
## ✔ forcats 1.0.0 ✔ stringr 1.5.0
## ✔ ggplot2 3.4.3 ✔ tibble 3.2.1
## ✔ lubridate 1.9.2 ✔ tidyr 1.3.0
## ✔ purrr 1.0.2
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(tidyLPA)
## You can use the function citation('tidyLPA') to create a citation for the use of {tidyLPA}.
## Mplus is not installed. Use only package = 'mclust' when calling estimate_profiles().
library(tidytext)
library(textdata)
transcript <- read_csv("data/r-processed-transcript.csv")
## Rows: 2927 Columns: 18
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (2): group, transcript_text
## dbl (5): index, nwords, duration_seconds, words_per_second, liwc_certitude
## lgl (8): question_mark, elipses, word_present_maybe, word_present_sort_of, ...
## time (3): start, end, duration
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
transcript %>%
glimpse()
## Rows: 2,927
## Columns: 18
## $ group <chr> "orange", "orange", "orange", "orange", "orange…
## $ index <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, …
## $ start <time> 00:03:18, 00:03:20, 00:03:28, 00:03:34, 00:03:…
## $ end <time> 00:03:20, 00:03:21, 00:03:30, 00:03:38, 00:03:…
## $ duration <time> 00:00:02, 00:00:01, 00:00:02, 00:00:04, 00:00:…
## $ transcript_text <chr> "It's table 6, right?", "I think so.", "Always …
## $ question_mark <lgl> TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE,…
## $ elipses <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE…
## $ nwords <dbl> 4, 3, 5, 8, 1, 1, 5, 11, 8, 7, 1, 4, 5, 12, 3, …
## $ duration_seconds <dbl> 2.314815, 1.157407, 2.314815, 4.629630, 1.15740…
## $ words_per_second <dbl> 1.728, 2.592, 2.160, 1.728, 0.864, 0.864, 2.160…
## $ word_present_maybe <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE…
## $ word_present_sort_of <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE…
## $ word_present_unsure <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE…
## $ word_present_dont_know <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE…
## $ word_present_confused <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE…
## $ word_present_possibly <lgl> FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE…
## $ liwc_certitude <dbl> 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00,…
nrc <- get_sentiments("nrc") # this accesses the NRC sentiment lexicon
transcript <- transcript %>%
select(group, index, start, end, duration, transcript_text) # select the variables we'll be using
transcript %>%
unnest_tokens(word, transcript_text) %>% # this changes the data to be in "long" form, with each row consisting of individual words in the transcript data
left_join(nrc, relationship = "many-to-many") %>% # ignore warnings
count(sentiment)
## Joining with `by = join_by(word)`
## # A tibble: 11 × 2
## sentiment n
## <chr> <int>
## 1 anger 98
## 2 anticipation 268
## 3 disgust 72
## 4 fear 134
## 5 joy 199
## 6 negative 296
## 7 positive 407
## 8 sadness 142
## 9 surprise 104
## 10 trust 282
## 11 <NA> 17294
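The large NA count reflects words in the transcript that do not appear in the NRC lexicon. If you want to see only the matched words, one option (a quick aside, not a required step) is to filter those rows out before counting:
transcript %>%
  unnest_tokens(word, transcript_text) %>% # one word per row
  left_join(nrc, relationship = "many-to-many") %>% # join the sentiment labels
  filter(!is.na(sentiment)) %>% # keep only words matched to the lexicon
  count(sentiment, sort = TRUE) # counts, sorted from most to least frequent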
chunk_size <- 45 # chunk duration in seconds
start_point <- transcript$start %>% as.integer() %>% pluck(1) # find the starting point of the time stamps
end_point <- transcript$end %>% as.integer() %>% pluck(nrow(transcript)) # find the ending point of the time stamps
transcript$start <- as.integer(transcript$start)
transcript$end <- as.integer(transcript$end)
# Create a new variable for the chunks
transcript$segment_id <- cut(transcript$start, breaks = seq(from = start_point, to = end_point, by = chunk_size))
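To make the chunking step concrete, here is a small toy illustration of how cut() with seq() assigns second-marks to 45-second bins; the values below are made up for demonstration only.
toy_seconds <- c(10, 30, 44, 50, 90, 130) # made-up time stamps, in seconds
cut(toy_seconds, breaks = seq(from = 0, to = 135, by = 45)) # each value is labeled with its 45-second interval, e.g. (0,45] or (45,90]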
number_of_words_per_segment <- transcript %>%
unnest_tokens(word, transcript_text) %>%
count(segment_id) %>%
rename(words_per_segment = n)
data_for_lpa <- transcript %>%
unnest_tokens(word, transcript_text) %>% # create a one-word-per-row structure
left_join(nrc, relationship = "many-to-many") %>% # join the sentiment data
count(segment_id, sentiment) %>% # count the number of words assigned to each emotional expression
spread(sentiment, n) %>% # change the data to be in wide form
janitor::clean_names() %>% # make the names easier to type
left_join(number_of_words_per_segment) %>% # join the number of words per segment
reframe(pct_fear = fear / words_per_segment, # create summary variables, dividing each sentiment score by the number of words in each segment
pct_joy = joy / words_per_segment,
pct_anticipation = anticipation / words_per_segment,
pct_disgust = disgust / words_per_segment,
pct_sadness = sadness / words_per_segment,
pct_surprise = surprise / words_per_segment,
pct_trust = trust / words_per_segment) %>%
mutate_if(is.numeric, replace_na, 0) # replace NA values with 0s
## Joining with `by = join_by(word)`
## Joining with `by = join_by(segment_id)`
data_for_lpa %>%
estimate_profiles(n_profiles = 3)
## tidyLPA analysis using mclust:
##
## Model Classes AIC BIC Entropy prob_min prob_max n_min n_max BLRT_p
## 1 3 -6379.76 -6291.51 0.93 0.96 0.98 0.16 0.62 0.01
data_for_lpa %>%
estimate_profiles(n_profiles = 3) %>%
plot_profiles(add_line = TRUE)
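If you want the numbers behind this plot rather than the picture, tidyLPA also provides get_estimates(), which returns the estimated parameters as a tibble; a quick sketch:
data_for_lpa %>%
  estimate_profiles(n_profiles = 3) %>%
  get_estimates() # estimated parameters (e.g., profile means and variances) for each profile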
data_for_lpa %>%
estimate_profiles(1:7) %>%
compare_solutions()
## Warning: The solution with the maximum number of classes under consideration
## was considered to be the best solution according to one or more fit indices.
## Examine your results with care and consider estimating more classes.
## Compare tidyLPA solutions:
##
## Model Classes BIC
## 1 1 -6012.392
## 1 2 -6175.140
## 1 3 -6291.511
## 1 4 -6288.654
## 1 5 -6250.156
## 1 6 -6270.330
## 1 7 -6240.073
##
## Best model according to BIC is Model 1 with 3 classes.
##
## An analytic hierarchy process, based on the fit indices AIC, AWE, BIC, CLC, and KIC (Akogul & Erisoglu, 2017), suggests the best solution is Model 1 with 1 classes.
data_for_lpa %>%
estimate_profiles(1:4) %>%
get_fit()
## # A tibble: 4 × 18
## Model Classes LogLik AIC AWE BIC CAIC CLC KIC SABIC ICL
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 1 1 3041. -6054. -5903. -6012. -5998. -6080. -6037. -6057. 6012.
## 2 1 2 3142. -6240. -6002. -6175. -6153. -6282. -6215. -6245. 6164.
## 3 1 3 3220. -6380. -6055. -6292. -6262. -6438. -6347. -6386. 6282.
## 4 1 4 3238. -6400. -5989. -6289. -6251. -6475. -6359. -6409. 6274.
## # ℹ 7 more variables: Entropy <dbl>, prob_min <dbl>, prob_max <dbl>,
## # n_min <dbl>, n_max <dbl>, BLRT_val <dbl>, BLRT_p <dbl>
We can also plot several possible solutions; again, please replace compare_solutions(), this time with plot_profiles(add_line = TRUE):
data_for_lpa %>%
estimate_profiles(1:4) %>%
plot_profiles(add_line = TRUE)
To do so, run estimate_profiles(), specifying two profiles and assigning the output the name two_profile_solution.
two_profile_solution <- estimate_profiles(data_for_lpa, n_profiles = 2)
Let’s plot this solution using plot_profiles(add_line = TRUE):
plot_profiles(two_profile_solution, add_line = TRUE)
data_for_two_profile_solution <-
get_data(two_profile_solution) %>% # get the classes for each row of data
select(Class) %>% # let's select just the class (profile) variable
mutate(segment_id = unique(transcript$segment_id)) # this assigns the segment IDs back to the data, so we can join the transcript data later on
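As a quick check (not a required step), you can count how many 45-second segments were assigned to each profile:
data_for_two_profile_solution %>%
  count(Class) # number of segments assigned to each of the two profiles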
Then, let’s bind together the profiles assigned to each chunk with the original data. Please use bind_cols(), providing both data_for_lpa and data_for_two_profile_solution together and assigning the output the name combined_data.
combined_data <- bind_cols(data_for_lpa, data_for_two_profile_solution)
Now, let’s take a look at the data by using View() on the data frame we create next, data_to_view. Don’t write this in a code chunk; instead, just view the data frame you just created by typing the code into the console (as View() can cause issues when it comes time to knit, unless it is commented out!).
data_to_view <- left_join(transcript, combined_data) # this joins the transcript and combined data, so we can see which segment is associated with which profile, or class
## Joining with `by = join_by(segment_id)`
# View(data_to_view)
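To support the interpretation below, one option is to average the sentiment proportions within each profile; a minimal sketch using the combined data:
combined_data %>%
  group_by(Class) %>% # group segments by their assigned profile
  summarise(across(starts_with("pct_"), mean)) # mean sentiment proportion for each profile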
Please interpret the results of your analysis below. What did you find? How interpretable and useful are the profiles? And what next steps - including those involving qualitative analysis - might you take to deepen this analysis?
There are two main latent profiles. A useful next step would be to interpret the qualitative data more carefully and compare those interpretations with the results of the unsupervised modeling.
Complete the following steps to knit and publish your work:
First, change the name of the author: in the YAML header at the very top of this document to your name. The YAML header controls the style and feel of the knitted document but doesn’t actually display in the final output.
Next, click the knit button in the toolbar above to “knit” your R Markdown document to an HTML file that will be saved in your R Project folder. You should see a formatted webpage appear in your Viewer tab in the lower right pane or in a new browser window. Let us know if you run into any issues with knitting.
Finally, publish your webpage on Posit Cloud by clicking the “Publish” button located in the Viewer Pane after you knit your document. See screenshot below.

To receive credit for this assignment and earn your fourth ML badge, share the link to your published webpage under the next incomplete badge artifact column on the 2023 LASER Scholar Information and Documents spreadsheet: https://go.ncsu.edu/laser-sheet.
Once your instructor has checked your link, you will be provided a physical version of the badge below!