Millions of people develop some form of heart disease every year, and heart disease is the biggest killer of both men and women in the United States and around the world. Statistical analysis has identified many risk factors for heart disease, such as age, blood pressure, total cholesterol, diabetes, hypertension, family history of heart disease, obesity, and lack of physical exercise. In this notebook, we’re going to run statistical tests and regression models on the Cleveland heart disease dataset to assess one particular factor: the maximum heart rate one can achieve during exercise, and whether it is associated with a higher likelihood of heart disease.
# Read the dataset Cleveland_hd.csv into hd_data
hd_data <- read.csv("datasets/Cleveland_hd.csv")
# take a look at the first 5 rows of hd_data
head(hd_data, 5)
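Beyond the first few rows, it can also help to check how read.csv parsed each column, since the recoding steps below depend on the column types (an optional check, not part of the original workflow):
# inspect column types; read.csv imports these columns as numeric/integer,
# which is why we recode class and sex below
str(hd_data)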
We notice that the outcome variable class has more than two levels. According to the codebook, any non-zero value can be coded as an “event.” Let’s create a new variable called hd to represent a binary 1/0 outcome.
There are a few other categorical/discrete variables in the dataset. Let’s also convert sex into a ‘factor’ type for the analyses that follow; otherwise, R will treat it as continuous by default.
The full data dictionary is available from the UCI Machine Learning Repository, where the dataset originates.
# load the tidyverse (dplyr for data manipulation, ggplot2 for plotting)
library(tidyverse)
# Use the 'mutate' function from dplyr to recode our data
hd_data <- hd_data %>%
mutate(hd = ifelse(class > 0, 1, 0))
# recode sex using mutate function and save as hd_data
hd_data <- hd_data %>%
mutate(sex = factor(sex,
levels = 0:1,
labels = c("Female", "Male")
)
)
head(hd_data)
Now, let’s use statistical tests to see which of these variables are related to heart disease. We can explore the association for each variable in the dataset. Depending on the type of data (i.e., continuous or categorical), we use a t-test or a chi-squared test to calculate the p-values.
Recall that a t-test is used to determine whether there is a significant difference between the means of two groups (e.g., is the mean age in group A different from the mean age in group B?). A chi-squared test of independence compares two categorical variables in a contingency table to see if they are related; in a more general sense, it tests whether the distributions of categorical variables differ from one another.
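For instance, the contingency table that the chi-squared test operates on can be inspected directly (an illustrative check):
# cross-tabulate sex against the binary outcome hd
table(hd_data$sex, hd_data$hd)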
# Does sex have an effect? Sex is a binary variable in this dataset,
# so the appropriate test is chi-squared test
hd_sex <- chisq.test(hd_data$sex, hd_data$hd)
# sex and hd are both factors
# Does age have an effect? Age is continuous, so we use a t-test
hd_age <- t.test(hd_data$age ~ hd_data$hd)
# age is the continuous vector; hd is the grouping variable
# What about thalach? Thalach is continuous, so we use a t-test
hd_heartrate <- t.test(hd_data$thalach ~ hd_data$hd)
# Print the results to see if p<0.05.
print(hd_sex)
print(hd_age)
print(hd_heartrate)
##
## Pearson's Chi-squared test with Yates' continuity correction
##
## data: hd_data$sex and hd_data$hd
## X-squared = 22.043, df = 1, p-value = 0.000002667
##
## Welch Two Sample t-test
##
## data: hd_data$age by hd_data$hd
## t = -4.0303, df = 300.93, p-value = 0.00007061
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -6.013385 -2.067682
## sample estimates:
## mean in group 0 mean in group 1
## 52.58537 56.62590
##
## Welch Two Sample t-test
##
## data: hd_data$thalach by hd_data$hd
## t = 7.8579, df = 272.27, p-value = 0.00000000000009106
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 14.32900 23.90912
## sample estimates:
## mean in group 0 mean in group 1
## 158.378 139.259
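Each of the test objects above is an htest list, so the p-values can also be pulled out programmatically instead of being read off the printouts:
# extract the p-value stored in each htest object
hd_sex$p.value
hd_age$p.value
hd_heartrate$p.value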
A good picture is worth a thousand words. In addition to p-values from statistical tests, we can plot the age, sex, and maximum heart rate distributions with respect to our outcome variable. This will give us a sense of both the direction and magnitude of the relationship.
First, let’s plot age using a boxplot since it is a continuous variable.
# Recode hd to be labelled
hd_data <- hd_data %>%
mutate(hd_labelled = ifelse(hd == 0,
"No Disease",
"Disease"))
# age vs hd
ggplot(data = hd_data, aes(x = hd_labelled, y = age)) +
geom_boxplot() +
labs(title = "Age Distribution by Heart Disease Status",
x = "Disease Status",
y = "Age") +
theme_test()
Next, let’s plot sex using a barplot since it is a binary variable in this dataset.
# sex vs hd
ggplot(data = hd_data, aes(x = hd_labelled, fill = sex)) +
geom_bar(position = "fill") +
labs(title = "Maximum Heart Rate Distributions According to Sex",
x = "Label",
y = "Sex %") +
theme_test()
And finally, let’s plot thalach using a boxplot since it is a continuous variable.
# max heart rate vs hd
ggplot(data = hd_data, aes(x = hd_labelled, y = thalach)) +
geom_boxplot() +
labs(title = "Maximum Heart Rate Distributions According to Maximum Heart Rate Achieved",
x = "Label",
y = "Thalach (Max Heart Rate Achieved)") +
theme_test()
The plots and the statistical tests both confirm that all three variables are significantly associated with our outcome (p < 0.001 for all tests).
In general, we use multiple logistic regression when we have one binary outcome variable and two or more predictor variables. The binary variable is the dependent (Y) variable; we study the effect that the independent (X) variables have on the probability of obtaining a particular value of the dependent variable. For example, we might want to know the effect that maximum heart rate, age, and sex have on the probability that a person has heart disease. The model will also tell us what the remaining effect of maximum heart rate is after we control, or adjust, for the effects of the other two predictors.
The glm() function fits generalized linear models (regressions) to binary outcome data, count data, probability data, proportion data, and many other data types. In our case, the outcome is binary and follows a binomial distribution.
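Concretely, the model we are about to fit has the form log(p / (1 − p)) = β0 + β1·age + β2·sexMale + β3·thalach, where p is the probability of having heart disease. This log-odds form is why, later on, we exponentiate the coefficients to obtain odds ratios.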
# use glm function from base R and specify the family argument as binomial
model <- glm(data = hd_data, hd ~ age + sex + thalach, family = "binomial")
# extract the model summary
summary(model)
##
## Call:
## glm(formula = hd ~ age + sex + thalach, family = "binomial",
## data = hd_data)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2250 -0.8486 -0.4570 0.9043 2.1156
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 3.111610 1.607466 1.936 0.0529 .
## age 0.031886 0.016440 1.940 0.0524 .
## sexMale 1.491902 0.307193 4.857 0.00000119437 ***
## thalach -0.040541 0.007073 -5.732 0.00000000993 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 417.98 on 302 degrees of freedom
## Residual deviance: 332.85 on 299 degrees of freedom
## AIC: 340.85
##
## Number of Fisher Scoring iterations: 4
It’s common practice in medical research to report the Odds Ratio (OR) to quantify how strongly the presence or absence of property A is associated with the presence or absence of outcome B. When the OR is greater than 1, we say A is positively associated with B (it increases the odds of having B); when the OR is less than 1, A is negatively associated with B (it decreases the odds of having B).
The coefficients in R’s raw glm output are on the log(OR) scale. Therefore, when reporting results from a logistic regression, we need to convert the estimates back to the original OR scale and calculate the corresponding 95% Confidence Interval (CI) for each estimated OR.
# tidy the model output into a data frame using the broom package
library(broom)
tidy_m <- tidy(model)
# calculate OR
tidy_m$OR <- exp(tidy_m$estimate)
# calculate 95% CI and save as lower_CI and upper_CI
tidy_m$lower_CI <- exp(tidy_m$estimate - 1.96 * tidy_m$std.error)
tidy_m$upper_CI <- exp(tidy_m$estimate + 1.96 * tidy_m$std.error)
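As a quick sanity check, the ORs can also be computed by hand from the coefficients printed in the model summary above:
# OR for sexMale: men have roughly 4.4 times the odds of heart disease
exp(1.491902)
# OR for thalach: each extra beat/min of max heart rate lowers the odds by ~4%
exp(-0.040541)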
So far, we have built a logistic regression model and examined the model coefficients/ORs. We may wonder how we can use this model to predict a person’s likelihood of having heart disease given their age, sex, and maximum heart rate. Furthermore, we’d like to translate the predicted probability into a decision rule for clinical use by defining a cutoff value on the probability scale. In practice, when an individual comes in for a health check-up, the doctor would like to know the predicted probability of heart disease for specific values of the predictors: say, a 45-year-old female with a maximum heart rate of 150. To do that, we create a data frame called newdata, in which we include the desired values for our prediction.
# get the predicted probability in our dataset using the predict() function
pred_prob <- predict(model, hd_data, type = "response")
# create a decision rule using probability 0.5 as cutoff and save the predicted decision into the main data frame
hd_data$pred_hd <- ifelse(pred_prob >= 0.5, 1, 0)
# create a newdata data frame to save a new case information
newdata <- data.frame(age = 45, sex = "Female", thalach = 150)
# predict probability for this new case and print out the predicted value
p_new <- predict(model, newdata, type = "response")
p_new
## 1
## 0.1773002
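This prediction can be verified by hand from the fitted coefficients (a manual check, not part of the original workflow). With Female as the reference level of sex, the linear predictor is 3.1116 + 0.031886 × 45 − 0.040541 × 150 ≈ −1.53, and applying the inverse logit gives the probability:
# linear predictor for a 45-year-old female with thalach = 150
eta <- 3.111610 + 0.031886 * 45 - 0.040541 * 150
# inverse logit converts log-odds to a probability (~0.177, matching predict())
1 / (1 + exp(-eta))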
Are the predictions accurate? How well does the model fit our data? We are going to use some common metrics to evaluate model performance. The most straightforward one is Accuracy, the proportion of the total number of predictions that were correct. Conversely, the classification error rate is 1 − accuracy. However, accuracy can be misleading when the response is rare (i.e., an imbalanced response). Another popular metric, the Area Under the ROC Curve (AUC), has the advantage of being independent of changes in the proportion of responders. AUC ranges from 0 to 1; the closer it gets to 1, the better the model performance. Lastly, a confusion matrix is an N × N matrix, where N is the number of outcome levels. For the problem at hand, N = 2, so we get a 2 × 2 matrix that cross-tabulates the predicted outcome levels against the true outcome levels.
After these metrics are calculated, we’ll see (from the logistic regression OR table) that older age, being male, and having a lower maximum heart rate are all risk factors for heart disease. We can also apply our model to predict the probability of having heart disease. For a 45-year-old female with a maximum heart rate of 150, our model generated a heart disease probability of 0.177, indicating a low risk of heart disease. Although our model has an overall accuracy of 0.71, there are cases that were misclassified, as shown in the confusion matrix. One way to improve the current model would be to include other relevant predictors from the dataset, but that’s a task for another day!
# load Metrics package
library(Metrics)
# calculate AUC, accuracy, classification error
auc <- auc(hd_data$hd, hd_data$pred_hd)
accuracy <- accuracy(hd_data$hd, hd_data$pred_hd)
classification_error <- ce(hd_data$hd, hd_data$pred_hd)
# print the metrics
print(paste("AUC=", auc))
print(paste("Accuracy=", accuracy))
print(paste("Classification Error=", classification_error))
# confusion matrix: true status vs. predicted status
table(hd_data$hd, hd_data$pred_hd, dnn = c("True Status", "Predicted Status"))
## [1] "AUC= 0.706483593612915"
## [1] "Accuracy= 0.70957095709571"
## [1] "Classification Error= 0.29042904290429"
## Predicted Status
## True Status 0 1
## 0 122 42
## 1 46 93
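One note on the AUC above: it was computed from the thresholded 0/1 predictions, so it reflects a single cutoff. A common refinement (sketched here; its value is not part of the output above) is to pass the predicted probabilities instead, which evaluates the model across all possible cutoffs:
# AUC based on predicted probabilities rather than hard 0/1 calls
auc(hd_data$hd, pred_prob)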