Now that I’ve finished going through the coding tutorials, the rest of the term’s goals will be about the group project. Here were my goals for this week:
This was successful! Felt quite proud of myself for this one since I found a lot of the resources through Google and Stack Overflow.
Basically, I had to find a package, install it, and make sure it worked. And luckily for us, the first package I found worked.
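The install itself was a one-off, so it looked roughly like this (I'm reconstructing the GitHub repo path from memory, so double-check it before copying):

install.packages("tidyverse") #one-off install from CRAN
#readspss didn't seem to be on CRAN, so remotes can pull it from GitHub:
#install.packages("remotes")
remotes::install_github("JanMarvin/readspss") #best guess at the repo path

And here is the code I used for loading the packages and reading the data.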
library(readspss)
library(tidyverse) #remember to load this or we can't use the pipe (%>%)
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
## ✓ ggplot2 3.3.4 ✓ purrr 0.3.4
## ✓ tibble 3.1.2 ✓ dplyr 1.0.6
## ✓ tidyr 1.1.3 ✓ stringr 1.4.0
## ✓ readr 1.4.0 ✓ forcats 0.5.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
data <- read.sav("Humiston & Wamsley 2019 data.sav")
It worked really well and looks quite nice!
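A quick way to double-check an import like this is glimpse(), which comes with the tidyverse and prints each column's name, type, and first few values:

glimpse(data) #handy sanity check: column names, types, and sample values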
Now this was a lot more difficult. In some ways, it feels like I don’t know enough coding yet to be able to do this. And if I were on my own, I certainly wouldn’t have been able to. But luckily, in our group meeting, we worked on this together.
This is how we calculated the average age. We applied the same approach to the ESS (Epworth Sleepiness Scale).
cleandata <- data %>% #remove excluded participants
  filter(exclude == "no")
# Calculate average age
averageage <- cleandata %>%
  summarise(averageage = mean(General_1_Age),
            agesd = sd(General_1_Age))
print(averageage)
## averageage agesd
## 1 19.54839 1.233929
# Calculate ESS (Epworth Sleepiness Scale) mean and SD
ESS <- cleandata %>%
  select(Epworth_total) %>%
  summarise(averageESS = mean(Epworth_total),
            sdESS = sd(Epworth_total))
print(ESS)
## averageESS sdESS
## 1 15.29032 2.830707
The main challenge came when I tried to remove the excluded participants: my first attempt got rid of all the observations. The code I initially used was:
cleandata <- data %>%
  filter(exclude == 0)
For some reason, this didn't work for me even though it worked for two other group members. We all downloaded our data the same way, so I'm not sure what went wrong; my best guess is that on my machine the exclude column came through as the text labels ("no"/"yes") rather than the numeric codes, so comparing against 0 matched nothing. Luckily, Jade figured it out and told us.
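If it happens again, a quick way to see what R actually stored is to inspect the column directly (just base R, nothing fancy):

str(data$exclude) #shows the column's type (character, factor, numeric, ...)
table(data$exclude, useNA = "ifany") #lists the actual values and their counts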
We also came across some other challenges since the data was a bit vague. For example, the journal article tells us that one of the means we need to calculate is the Stanford Sleepiness Scale (SSS). But we could not see a variable in the data with that label. Eventually, we worked out that it was the "Alertness" variable. But more problems came after that. Initially, we tried to do this:
SSSgroup <- cleandata %>%
  select(AlertTest_1_Feel, AlertTest_2_Feel, AlertTest_3_Feel, AlertTest_4_Feel) %>%
  drop_na() %>% #helps get rid of NA values
  summarise(averageSSS = mean(rbind(AlertTest_1_Feel, AlertTest_2_Feel,
                                    AlertTest_3_Feel, AlertTest_4_Feel)),
            sdSSS = sd(rbind(AlertTest_1_Feel, AlertTest_2_Feel,
                             AlertTest_3_Feel, AlertTest_4_Feel)))
SSS <- SSSgroup %>%
  summarise(averageSSS = mean(SSSgroup))
print(SSS)
But this kept returning NA values despite the drop_na(). Looking back, I think the real problem was that the alert test variables don't come in as numbers, so mean() just warns and returns NA.
Luckily, Jade worked out how to clean it up and make it work.
cleandata <- cleandata %>%
  mutate(
    SSSvalue = as.numeric( #as.numeric() on a factor returns its level number (1-5), which we can average
      factor(
        AlertTest_1_Feel, #change the alert test from labelled text to a factor
        levels = c("1 - Feeling active, vital alert, or wide awake",
                   "2 - Functioning at high levels, but not at peak; able to concentrate",
                   "3 - Awake, but relaxed; responsive but not fully alert",
                   "4 - Somewhat foggy, let down",
                   "5 - Foggy; losing interest in remaining awake; slowed down"),
        exclude = NA #keeps NA out of the levels so missing stays missing
      )
    )
  )
SSS <- cleandata %>%
  select(SSSvalue) %>%
  summarise(averageSSS = mean(SSSvalue),
            sdSSS = sd(SSSvalue))
print(SSS)
## averageSSS sdSSS
## 1 2.806452 0.7491931
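Since the study measured alertness at four time points, we will eventually want the same conversion for the other three alert tests. Here is a sketch of how across() could handle all four columns at once; I haven't actually run this on our data yet, and it assumes all four columns use the same five labels.

sss_labels <- c("1 - Feeling active, vital alert, or wide awake",
                "2 - Functioning at high levels, but not at peak; able to concentrate",
                "3 - Awake, but relaxed; responsive but not fully alert",
                "4 - Somewhat foggy, let down",
                "5 - Foggy; losing interest in remaining awake; slowed down")
cleandata <- cleandata %>%
  mutate(across(starts_with("AlertTest"), #applies the same conversion to every alert test column
                ~ as.numeric(factor(.x, levels = sss_labels)),
                .names = "{.col}_num")) #keeps the originals and adds numeric _num copies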
We also calculated implicit bias averages. These were pretty easy since the labels were clear.
#Calculate baseline implicit bias
BIB <- cleandata %>%
  select(base_IAT_gen, base_IAT_race) %>%
  summarise(averageBIB = mean(rbind(base_IAT_gen, base_IAT_race)), #rbind pools both columns so mean() runs over every value in the pair
            sdBIB = sd(rbind(base_IAT_gen, base_IAT_race))) #same pooling for the SD
print(BIB)
## averageBIB sdBIB
## 1 0.5565373 0.4058619
#Calculate pre-nap implicit bias
PreNBIB <- cleandata %>%
  select(pre_IAT_gen, pre_IAT_race) %>%
  summarise(averagePreNBIB = mean(rbind(pre_IAT_gen, pre_IAT_race)),
            sdPreNBIB = sd(rbind(pre_IAT_gen, pre_IAT_race)))
print(PreNBIB)
## averagePreNBIB sdPreNBIB
## 1 0.2566674 0.4776418
#Calculate post-nap implicit bias
PostNBIB <- cleandata %>%
  select(post_IAT_gen, post_IAT_race) %>%
  summarise(averagePostNBIB = mean(rbind(post_IAT_gen, post_IAT_race)),
            sdPostNBIB = sd(rbind(post_IAT_gen, post_IAT_race)))
print(PostNBIB)
## averagePostNBIB sdPostNBIB
## 1 0.2776836 0.4585372
#Calculate one-week delay implicit bias
WeekBIB <- cleandata %>%
  select(week_IAT_gen, week_IAT_race) %>%
  summarise(averageWeekBIB = mean(rbind(week_IAT_gen, week_IAT_race)),
            sdWeekBIB = sd(rbind(week_IAT_gen, week_IAT_race)))
print(WeekBIB)
## averageWeekBIB sdWeekBIB
## 1 0.3994186 0.4254629
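Those four blocks are nearly identical, so a helper function could collapse them into one. Here is a rough sketch using dplyr's curly-curly ({{ }}) embracing; the function and argument names are my own invention, and c() gives the same result as rbind() here since both just pool all the values.

#hypothetical helper: summarise any pair of IAT columns in one call
iat_summary <- function(df, gen_col, race_col) {
  df %>%
    summarise(average = mean(c({{ gen_col }}, {{ race_col }})), #pool both columns into one vector, then average
              sd = sd(c({{ gen_col }}, {{ race_col }})))
}
iat_summary(cleandata, base_IAT_gen, base_IAT_race) #should match BIB above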
Working out the sex breakdown was a bit trickier because sex is a factor, so instead of taking a mean we counted the males and converted the count to a proportion.
Male <- cleandata %>%
  select(General_1_Sex) %>%
  tally(General_1_Sex == "Male") #count how many participants are labelled 'Male'
Male_percentage <- Male/31 #divide by 31 since n = 31
print(Male_percentage) #this is a proportion; multiply by 100 for a percentage
## n
## 1 0.483871
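A small thing I want to remember for next time: the mean of a logical vector is already the proportion of TRUEs, and nrow() avoids hard-coding n = 31. A sketch of the same calculation:

mean(cleandata$General_1_Sex == "Male") #TRUE counts as 1, so the mean is the proportion of males
#equivalently: sum(cleandata$General_1_Sex == "Male") / nrow(cleandata)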
Calculating the number of people who had the cue played during the nap was the same process:
Napcue <- cleandata %>%
  select(Cue_condition) %>%
  tally(Cue_condition == "race cue played") #count participants in the race-cue condition
racialcue_percentage <- Napcue/31
print(racialcue_percentage)
## n
## 1 0.5483871
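For a breakdown of both cue conditions at once, count() plus a mutate() would also work. A sketch (not in our script yet):

cleandata %>%
  count(Cue_condition) %>% #one row per condition, with its n
  mutate(proportion = n / sum(n)) #each condition's share of the 31 participants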
Unfortunately, I have not made much progress on this yet, as I have been busy with work for other classes. I am lucky that one of my group mates has worked something out, but this is definitely an area I will need to catch up on in my own time.
We have to keep going with our project, so from next week I am hoping to: