The final activity for each learning lab provides space to work with data and to reflect on how the concepts and techniques introduced in each lab might apply to your own research.

To earn a badge for each lab, you are required to respond to a set of prompts in two parts:

Part I: Reflect and Plan

Use your institutional library (e.g., NCSU Libraries), Google Scholar, or a search engine to locate a research article, presentation, or resource that applies learning analytics to an educational context or topic of interest. More specifically, locate a study that makes use of one of the data structures we learned about today. You are also welcome to select one of your own research papers.

  1. Provide an APA citation for your selected study.

    • Croxton, R. A., & Moore, A. C. (2020). Quantifying library engagement: Aligning library, institutional, and student success data. College & Research Libraries, 81(3), 399–434. https://doi.org/10.5860/crl.81.3.399
  2. What types of data are associated with LA?

    • Structured data include student ID, the number of library visits, the number of library resources checked out, and whether students visited campus resource and service centers (see the hypothetical sketch after this list).
  3. What type of data structures are analyzed in the educational context?

    • Structured data, such as whether students engaged in campus activities (e.g., visiting the library) and their grade point average (GPA).
  4. How might this article be used to better understand a dataset or educational context of personal or professional interest to you?

    • Being able to identify the types of data that can be collected across campus and to collect them to measure the library’s value for students’ academic success (e.g., GPA).
  5. Finally, how do these processes compare with what teachers and educational organizations already do to support and assess student learning?

    • Some institutions already carry out these processes, depending on their data privacy and protection policies. While teachers and educational organizations may limit their analysis to the class level rather than the campus level, this study analyzed data at the campus level.
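
As a concrete illustration of the structured data described in the responses above, such records might be organized in a simple data frame like the hypothetical sketch below; the column names and values are invented for illustration only, not taken from the study.

# Hypothetical sketch of structured library-engagement data; all names and
# values are invented for illustration, not drawn from the study cited above.
library(tibble)

engagement <- tibble(
  student_id           = c("S001", "S002", "S003"),
  library_visits       = c(12, 0, 5),            # number of library visits
  items_checked_out    = c(4, 0, 2),             # number of resources checked out
  used_campus_services = c(TRUE, FALSE, TRUE),   # visited campus service centers
  gpa                  = c(3.6, 2.9, 3.2)        # grade point average
)

engagement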

Draft a research question guided by techniques and data sources that you are potentially interested in exploring in more depth.

  1. What data source(s) should be analyzed or discussed?

    • Text data that are automatically recorded through a system (e.g., online chat transcripts).
  2. What is the purpose of your article?

    • To explore what types of questions are asked through online chat text.
  3. Explain the analytical level at which these data would need to be collected and analyzed.

    • These text data could be analyzed with text mining, specifically topic modeling (a hypothetical sketch follows this list).
  4. How, if at all, will your article touch upon the application(s) of LA to “understand and improve learning and the contexts in which learning occurs?”

    • Exploring how students seek information and what types of questions they ask is critical because the findings from the article can identify common questions, for which answers can then be developed for training purposes.
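
A minimal, hypothetical sketch of that topic modeling idea is shown below, using the tidytext and topicmodels packages; the chat_df object, its columns, the example messages, and the number of topics are placeholders made up for illustration.

# Hypothetical sketch: topic modeling on chat text with tidytext + topicmodels.
# The data, column names, and number of topics (k) are invented placeholders.
library(dplyr)
library(tidytext)
library(topicmodels)

chat_df <- tibble::tibble(
  chat_id = 1:4,
  text = c(
    "How do I renew a book online?",
    "Where can I find peer reviewed articles?",
    "Can I renew my books from home?",
    "How do I cite an article in APA style?"
  )
)

# Tokenize messages, remove stop words, and count words within each chat
word_counts <- chat_df %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word") %>%
  count(chat_id, word)

# Cast the counts to a document-term matrix and fit a small LDA topic model
chat_dtm <- cast_dtm(word_counts, chat_id, word, n)
chat_lda <- LDA(chat_dtm, k = 2, control = list(seed = 1234))

# Inspect the top terms associated with each topic
terms(chat_lda, 5)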

Part II: Data Product

After you finish the script file for lab1_badge, add the code for each problem into the correct code chunks below, making sure not to change the code chunk names.

# YOUR FINAL CODE HERE
# Create character vectors of students and the foods they chose
Students <- c("Thor", "Rogue", "Electra", "Electra", "Wolverine")
Foods <- c("Bread", "Orange", "Chocolate", "Carrots", "Milk")

# Combine the vectors into a data frame
df <- data.frame(Students, Foods)
# YOUR FINAL CODE HERE
# Cross-tabulate students by the foods they chose
table(df)
##            Foods
## Students    Bread Carrots Chocolate Milk Orange
##   Electra       0       1         1    0      0
##   Rogue         0       0         0    0      1
##   Thor          1       0         0    0      0
##   Wolverine     0       0         0    1      0
# YOUR FINAL CODE HERE

c(1, 2, 3, 4, 5)
## [1] 1 2 3 4 5
# YOUR FINAL CODE HERE
# Store the vector and sum its elements
vec <- c(1, 2, 3, 4, 5)
sum(vec)
## [1] 15
# YOUR FINAL CODE HERE
library(tidyverse)
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
## ✔ ggplot2 3.3.6     ✔ purrr   0.3.4
## ✔ tibble  3.1.7     ✔ dplyr   1.0.9
## ✔ tidyr   1.2.0     ✔ stringr 1.4.0
## ✔ readr   2.1.2     ✔ forcats 0.5.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
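
As an optional illustration (not part of the badge requirements), the tidyverse loaded above could express the same cross-tabulation produced by table(df) earlier in tidy, long form, assuming the df object created in the first code chunk:

# Optional sketch: a tidy (long-format) version of the table(df) cross-tab,
# using dplyr's count() on the df object created in the first code chunk.
df %>%
  count(Students, Foods)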

Knit & Submit

Congratulations, you’ve completed your Data Sources Badge!

Complete the following steps to submit your work for review:

  1. Change the name of the author: in the YAML header at the very top of this document to your name. As noted in Reproducible Research in R, the YAML header controls the style and feel of the knitted document but doesn’t actually display in the final output.

  2. Click the yarn icon above to “knit” your data product to an HTML file that will be saved in your R Project folder.

  3. Commit your changes in GitHub Desktop and push them to your online GitHub repository.

  4. Publish your HTML page to the web using one of the following publishing methods:

    • Publish on RPubs by clicking the “Publish” button located in the Viewer Pane when you knit your document. Note, you will need to quickly create an RPubs account.
    • Publish on GitHub using either GitHub Pages or the HTML previewer.

  5. Post a new discussion on GitHub to our Foundations Badges forum. In your post, include a link to your published web page and write a short reflection highlighting one thing you learned from this lab and one thing you’d like to explore further.