Descriptive Statistics

Bivariate Analysis

Karol Flisikowski

2022-05-09

Introduction

This is our first lab in which we consider two dimensions: instead of calculating univariate statistics by groups (or factors) of another variable, we will measure the relationship between two variables using covariance and correlation coefficients.

*Please be very careful when choosing the measure of correlation! If the variables are measured on different scales, we have to recode the variable on the stronger scale into the weaker scale.

It would be nice to add some additional plots in the background. Feel free to add your own sections and use external packages.

Data

This time we are going to use a typical credit scoring dataset with a predefined “default” variable and personal demographic and income data. Please take a closer look at the headers and descriptions of each variable.
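A minimal sketch of loading the data (the file name bankloan.csv is an assumption; the column names used below, such as age, income, default and educ, are taken from the output shown later in this document):

# load the credit scoring data; the file name is an assumption
bank <- read.csv("bankloan.csv")
str(bank)   # variable names, types and coding
head(bank)  # preview of the first rows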

Scatterplots

First let’s visualize our quantitative relationships using scatterplots.
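For instance, a basic scatterplot of income against age with a smoother could look like this (assuming the variables are called age and income in the bank data frame):

library(ggplot2)

# scatterplot of income vs. age with a loess smoother
ggplot(bank, aes(x = age, y = income)) +
  geom_point() +
  geom_smooth(method = "loess")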

## `geom_smooth()` using formula 'y ~ x'

You can also normalize the skewed distribution of incomes using log:
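A possible sketch, creating a log_income column (base-10 logarithm, as used later in the document) and plotting it against age:

# log10-transform the right-skewed income variable
bank$log_income <- log(bank$income, base = 10)

ggplot(bank, aes(x = age, y = log_income)) +
  geom_point() +
  geom_smooth(method = "loess")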

We can add an estimated linear regression line:
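For example, with geom_smooth(method = "lm"):

# scatterplot with an estimated linear regression line
ggplot(bank, aes(x = age, y = log_income)) +
  geom_point() +
  geom_smooth(method = "lm")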

Scatterplots by groups

We can finally see whether there are any differences between the risk statuses:
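One way to do this is to map the default status to colour (the factor-coded column def is an assumption based on the ggcorr() warning shown further below):

# separate point clouds and regression lines for each risk group
ggplot(bank, aes(x = age, y = log_income, colour = def)) +
  geom_point(alpha = 0.6) +
  geom_smooth(method = "lm")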

## `geom_smooth()` using formula 'y ~ x'

We can also look more closely at whether there are any differences between the two distributions by adding their estimated density plots:
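For example, with geom_density() and the groups mapped to fill (again assuming def holds the risk status):

# estimated densities of log income for the two risk groups
ggplot(bank, aes(x = log_income, fill = def)) +
  geom_density(alpha = 0.4)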


We can also put those plots together:
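One option (an assumption; other packages such as patchwork work equally well) is gridExtra::grid.arrange():

library(gridExtra)

p_scatter <- ggplot(bank, aes(x = age, y = log_income, colour = def)) +
  geom_point() +
  geom_smooth(method = "lm")

p_density <- ggplot(bank, aes(x = log_income, fill = def)) +
  geom_density(alpha = 0.4)

# place the scatterplot and the density plot side by side
grid.arrange(p_scatter, p_density, ncol = 2)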


Scatterplots with density curves

We can also look more closely at whether there are any differences between the two distributions by adding their estimated density curves to the scatterplot:
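A sketch with ggExtra::ggMarginal(), which attaches marginal density curves to a scatterplot (the package choice is an assumption):

library(ggExtra)

p <- ggplot(bank, aes(x = age, y = log_income, colour = def)) +
  geom_point()

# add marginal density curves for both axes, split by group
ggMarginal(p, type = "density", groupColour = TRUE, groupFill = TRUE)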

Correlation coefficients - Pearson’s linear correlation

OK, let’s move on to some calculations. In R we can use the cor() function, which takes the two variables and the method as arguments: cor(x, y, method). For two quantitative variables, with all assumptions met, we can calculate Pearson’s coefficient of linear correlation:
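Assuming the pair of interest is age and the log-transformed income:

# Pearson's linear correlation between age and log income
cor(bank$age, bank$log_income, method = "pearson")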

## [1] 0.574346

Ok, what about the percentage of the explained variability?
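The coefficient of determination is simply the squared correlation, here expressed as a percentage:

# r squared, as a percentage of explained variability
round(cor(bank$age, bank$log_income)^2 * 100)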

## [1] 33

So, as we can see, about 33% of the total variability of log income is explained by differences in age. The rest (67%) is presumably explained by other factors.

Partial and semipartial correlation

The partial and semi-partial (also known as part) correlations are used to express the specific portion of variance explained by eliminating the effect of other variables when assessing the correlation between two variables.

Partial correlation holds constant one variable when computing the relations to others. Suppose we want to know the correlation between X and Y holding Z constant for both X and Y. That would be the partial correlation between X and Y controlling for Z.

Semipartial correlation holds Z constant for either X or Y, but not both, so if we wanted to control X for Z, we could compute the semipartial correlation between X and Y holding Z constant for X.

Suppose we want to know the correlation between the log of income and age controlling for years of employment. How highly correlated are these after controlling for tenure?

*There can be more than one control variable.
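The output below matches the format of the ppcor package, so one way to reproduce it is pcor.test() and spcor.test(); the choice of employ (years with the current employer) as the control variable is an assumption:

library(ppcor)

# partial correlation: employ held constant for both log_income and age
pcor.test(bank$log_income, bank$age, bank$employ, method = "pearson")

# semipartial (part) correlation: employ partialled out of one variable only
spcor.test(bank$log_income, bank$age, bank$employ, method = "pearson")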

##    estimate      p.value statistic   n gp  Method
## 1 0.3194263 4.805085e-18  8.899323 700  1 pearson
##    estimate      p.value statistic   n gp  Method
## 1 0.2203711 3.899134e-09  5.964597 700  1 pearson

Rank correlation

For two variables on different scales - for example income vs. education level - we cannot use Pearson’s coefficient. The only option is to rank the incomes as well… and lose some of the detailed information about them.

First, let’s see boxplots of income by education levels.
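For example (the education column educ is taken from the ggcorr() warning shown later in this document):

# boxplots of income across education levels
ggplot(bank, aes(x = educ, y = income)) +
  geom_boxplot() +
  labs(x = "education level", y = "income")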

Now, let’s see Kendall’s coefficient of rank correlation (robust to ties).
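A sketch, converting the ordered education levels to numeric ranks first:

# Kendall's tau between income and education level
cor(bank$income, as.numeric(bank$educ), method = "kendall")

# the same coefficient with a significance test
cor.test(bank$income, as.numeric(bank$educ), method = "kendall")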

Point-biserial correlation

Let’s try to verify if there is a significant relationship between incomes and risk status. First, let’s take a look at the boxplot:
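For instance:

# log income by default status
ggplot(bank, aes(x = def, y = log_income)) +
  geom_boxplot()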

If you would like to compare one quantitative variable (income) and one dichotomous variable (default status, binary), you can use the point-biserial coefficient:
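Since the point-biserial coefficient is simply Pearson’s r computed with the 0/1 variable, cor.test() does the job (the variable names below are confirmed by the output):

# point-biserial correlation between log income and the binary default status
cor.test(bank$log_income, bank$default)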

## 
##  Pearson's product-moment correlation
## 
## data:  bank$log_income and bank$default
## t = -3.6057, df = 698, p-value = 0.0003334
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  -0.20725185 -0.06174165
## sample estimates:
##        cor 
## -0.1352258

Nonlinear correlation - eta coefficient

If you would like to check whether there are any nonlinearities between two variables, the only possibility (besides transformations and linear analysis) is to calculate the “eta” coefficient (correlation ratio) and compare it with Pearson’s linear coefficient.
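There is no built-in eta in base R, so a minimal sketch is shown below: the quantitative predictor is binned into classes, a one-way ANOVA is fitted over the bins, and eta is sqrt(SS_between / SS_total). The helper name eta_coef and the variable pair are illustrative assumptions, not necessarily the pair used for the output below:

# hypothetical helper: correlation ratio (eta) of y on a binned x
eta_coef <- function(x, y, breaks = 10) {
  x_class <- cut(x, breaks = breaks)   # bin the predictor into classes
  fit <- aov(y ~ x_class)              # one-way ANOVA over the bins
  ss <- summary(fit)[[1]][["Sum Sq"]]
  sqrt(ss[1] / sum(ss))                # sqrt(SS_between / SS_total)
}

# compare the linear measure with eta (illustrative variable pair)
cor(bank$age, bank$creddebt)
eta_coef(bank$age, bank$creddebt)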

## [1] 0.4290611
## [1] 0.6374379

Correlation matrix

We can also prepare the correlation matrix for all quantitative variables stored in our data frame.

We can use the ggcorr() function from the GGally package:
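The warning below indicates the call was simply ggcorr(bank); GGally must be loaded first:

library(GGally)

# correlation heat map for all numeric columns of the data frame
ggcorr(bank)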

## Warning in ggcorr(bank): data in column(s) 'def', 'educ' are not numeric and
## were ignored

As you can see, the default correlation matrix is not the best idea when the variables are on different measurement scales (including the binary variable “default”).

That’s why we can now perform our bivariate analysis with ggpairs() with grouping.
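A sketch of the call (the selected columns are an assumption; def is used for grouping):

# pairwise plots, densities and correlations grouped by default status
ggpairs(bank,
        columns = c("age", "log_income", "employ", "debtinc"),
        mapping = aes(colour = def))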

Correlation matrix with scatterplots

Here is what we are about to calculate:

- the correlation matrix between the age, log_income, employ, address, debtinc, creddebt, and othdebt variables, grouped by whether the person has a default status or not (a sketch follows this list);
- the distribution of each variable by group;
- the scatter plot with the trend by group.
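The rounded correlation matrix printed below can be reproduced along these lines (the variable names match the row labels of the output; the object name vars is an assumption):

# correlation matrix of the quantitative variables, rounded to 2 decimals
vars <- data.frame(bank$age, log(bank$income, base = 10), bank$employ,
                   bank$address, bank$debtinc, bank$creddebt, bank$othdebt)
round(cor(vars), 2)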

##                             bank.age log.bank.income..base...10. bank.employ
## bank.age                        1.00                        0.57        0.54
## log.bank.income..base...10.     0.57                        1.00        0.72
## bank.employ                     0.54                        0.72        1.00
## bank.address                    0.60                        0.37        0.32
## bank.debtinc                    0.02                       -0.01       -0.03
## bank.creddebt                   0.30                        0.53        0.40
## bank.othdebt                    0.34                        0.60        0.41
##                             bank.address bank.debtinc bank.creddebt
## bank.age                            0.60         0.02          0.30
## log.bank.income..base...10.         0.37        -0.01          0.53
## bank.employ                         0.32        -0.03          0.40
## bank.address                        1.00         0.01          0.21
## bank.debtinc                        0.01         1.00          0.50
## bank.creddebt                       0.21         0.50          1.00
## bank.othdebt                        0.23         0.58          0.63
##                             bank.othdebt
## bank.age                            0.34
## log.bank.income..base...10.         0.60
## bank.employ                         0.41
## bank.address                        0.23
## bank.debtinc                        0.58
## bank.creddebt                       0.63
## bank.othdebt                        1.00

Qualitative data

In the case of two variables measured on a nominal scale (or one nominal and one ordinal), we have to organize a so-called contingency table of frequencies and calculate a correlation coefficient based on it. This is called contingency analysis.

Let’s consider one example based on our data: verify, if there is any significant correlation between education level and credit risk.
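A possible sketch, using the factor columns educ and def mentioned in the ggcorr() warning above:

# contingency table of education level by default status
tab <- table(bank$educ, bank$def)
tab

# chi-square test of independence
chisq.test(tab)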

Exercise 1. Contingency analysis.

Do you believe in the Afterlife? https://nationalpost.com/news/canada/millennials-do-you-believe-in-life-after-life A survey was conducted and a random sample of 1091 questionnaires is given in the form of the following contingency table:
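One way to enter the table into R (the object name dane is taken from the test output below):

# 2x2 contingency table of belief in the afterlife by gender
dane <- as.table(rbind(c(435, 375), c(147, 134)))
dimnames(dane) <- list(Gender = c("Female", "Male"),
                       Believe = c("Yes", "No"))
dane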

##         Believe
## Gender   Yes  No
##   Female 435 375
##   Male   147 134

Our task is to check if there is a significant relationship between the belief in the afterlife and gender. We can perform this procedure with the simple chi-square statistic and a chosen qualitative correlation coefficient (two-way 2x2 table).
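A sketch that reproduces the output below: chisq.test() (Yates’ correction is applied by default for 2x2 tables), the table of joint proportions, and the phi coefficient based on the uncorrected statistic:

# chi-square test of independence
chisq.test(dane)

# joint relative frequencies
prop.table(dane)

# phi coefficient (equal to Cramer's V for a 2x2 table)
sqrt(unname(chisq.test(dane, correct = FALSE)$statistic) / sum(dane))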

## 
##  Pearson's Chi-squared test with Yates' continuity correction
## 
## data:  dane
## X-squared = 0.11103, df = 1, p-value = 0.739
##         Believe
## Gender         Yes        No
##   Female 0.3987168 0.3437214
##   Male   0.1347388 0.1228231
## [1] 0.01218871

As you can see, we can calculate the chi-square statistic really quickly for two-way or larger tables. Now we can standardize this contingency measure to assess the strength of the relationship.
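For tables larger than 2x2 the usual standardization is Cramér’s V; a minimal helper is sketched below (the name cramers_v is illustrative; packages such as DescTools also provide a ready-made CramerV() function):

# Cramer's V: chi-square standardized by n and the table dimensions
cramers_v <- function(tab) {
  chi2 <- unname(chisq.test(tab, correct = FALSE)$statistic)
  sqrt(chi2 / (sum(tab) * (min(dim(tab)) - 1)))
}
cramers_v(dane)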

## [1] 0.01218871

Exercise 2. Contingency analysis for the ‘Titanic’ data.

Let’s consider the titanic dataset, which contains a complete list of passengers and crew members on the RMS Titanic. It includes a variable indicating whether a person survived the sinking of the RMS Titanic on April 15, 1912. The data frame contains 2456 observations on 14 variables.

#titanic2<- titanic[Status=="Survivor"| Status =="Victim"]

library(ggstatsplot)
## Warning: package 'ggstatsplot' was built under R version 4.1.3
## Registered S3 method overwritten by 'parameters':
##   method                         from      
##   format.parameters_distribution datawizard
## You can cite this package as:
##      Patil, I. (2021). Visualizations with statistical details: The 'ggstatsplot' approach.
##      Journal of Open Source Software, 6(61), 3167, doi:10.21105/joss.03167
titanic %>% 
  filter(Status == "Survivor" | Status == "Victim") %>%
  ggbarstats(
    x = Status,
    y = Gender
  )

The website http://www.encyclopedia-titanica.org/ offers detailed information about passengers and crew members on the RMS Titanic. According to the website, 1317 passengers and 890 crew members were aboard.

Eight musicians and nine employees of the shipyard company are listed as passengers but travelled with a free ticket, which is why they have NA values in fare. In addition to that, fare is truly missing for a few regular passengers.

data("mpg")
attach(mpg)
?mpg
ggplot(
  data=mpg,
  aes(x=cty, y=hwy)
) + 
  geom_point()

#linear correlation
cor(cty, hwy)
## [1] 0.9559159