About

This worksheet includes three main tasks: data modeling (a key step in understanding the data), basic steps to compute a simple signal-to-noise ratio, and data exploration to identify trends and patterns using Watson Analytics.

Setup

Remember to always set your working directory to the source file location: go to ‘Session’, scroll down to ‘Set Working Directory’, and click ‘To Source File Location’. Read the instructions below carefully, complete the tasks, and answer any questions. Submit your work to RPubs as detailed in previous notes.
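
If you prefer to set the working directory in code rather than through the menu, below is a minimal sketch; the path is a placeholder to replace with your own folder.

#Set the working directory in code (placeholder path -- replace with your own)
#setwd("~/my-lab-folder")

#Confirm the current working directory
getwd()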

Note

For your assignment you may be using different data sets than the ones included here. Always read the instructions on Sakai carefully. For clarity, tasks/questions to be completed/answered are highlighted in red (visible in the preview) and numbered according to their placement in the task section. Quite often you will need to add your own code chunk.

Execute all code chunks, preview, publish, and submit the link on Sakai.


Task 1: Data Modeling

To begin the lab, examine the contents of the CSV file ‘creditrisk.csv’ by opening the file in RStudio. You can view the file separately in Excel or use File -> Import Dataset in RStudio for that purpose.

##### 1A) Create a simple star relational schema in the ERDPlus standalone tool (https://erdplus.com/#/standalone), take a screenshot of the image, and add it below. Consider using one fact table for the loan, one dimension table for the customer profile, and one dimension table for the credit risk.

To add a picture, use the directions found in Lab00. Below are steps and an example to create a simple star relational schema in ERDPlus.

Steps to create a star relational schema using ERDPlus:
From the drop-down menu, select ‘New Star Schema’.
Example of how to create a star schema using ERDPlus.
Completed star schema example.

Finally, export the diagram as an image.
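
To see the same structure expressed in code, below is a minimal sketch (optional, not required by the lab) that splits the CSV into one fact table and two dimension tables as R data frames. The table and key names (dim_customer, dim_risk, fact_loan, CustomerID, RiskID) are illustrative choices of our own, not part of the assignment.

#Sketch: the star schema as R data frames
mydata = read.csv(file="data/creditrisk.csv")

#Dimension table: customer profile, with a surrogate key per row
dim_customer = data.frame(CustomerID = seq_len(nrow(mydata)),
                          mydata[, c("Gender", "Marital.Status", "Age",
                                     "Housing", "Years", "Job")])

#Dimension table: credit risk, one row per distinct risk level
dim_risk = data.frame(RiskID = seq_along(unique(mydata$Credit.Risk)),
                      Credit.Risk = unique(mydata$Credit.Risk))

#Fact table: loan measures plus foreign keys into both dimensions
fact_loan = data.frame(CustomerID = dim_customer$CustomerID,
                       RiskID = match(mydata$Credit.Risk, dim_risk$Credit.Risk),
                       mydata[, c("Loan.Purpose", "Checking", "Savings",
                                  "Months.Customer", "Months.Employed")])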


Task 2: Signal-to-Noise Ratio

Next, read the CSV file into RStudio. It can be useful to name your data to create a shortcut to it; here we will label the data ‘mydata’. To see the first few rows of the data in the console, one can ‘call’ it using the function ‘head’ and referring to it by its given shortcut name.

#Reading the csv file into R and naming it 'mydata'
mydata = read.csv(file="data/creditrisk.csv")

#Displaying the first few rows
head(mydata)
##      Loan.Purpose Checking Savings Months.Customer Months.Employed Gender
## 1 Small Appliance        0     739              13              12      M
## 2       Furniture        0    1230              25               0      M
## 3         New Car        0     389              19             119      M
## 4       Furniture      638     347              13              14      M
## 5       Education      963    4754              40              45      M
## 6       Furniture     2827       0              11              13      M
##   Marital.Status Age Housing Years        Job Credit.Risk
## 1         Single  23     Own     3  Unskilled         Low
## 2       Divorced  32     Own     1    Skilled        High
## 3         Single  38     Own     4 Management        High
## 4         Single  36     Own     2  Unskilled        High
## 5         Single  31    Rent     3    Skilled         Low
## 6        Married  25     Own     1    Skilled         Low

To perform some analytics on the checking and savings columns, we must first be able to extract each column from the data separately. Using the ‘$’ sign after the data's label extracts a specific column. For convenience, we relabel the extracted data. Below, we have extracted the checking column.

#Extracting the Checking Column
checking = mydata$Checking 

#Calling the Checking Column to display top head values
head(checking)
## [1]    0    0    0  638  963 2827

##### 2A) Repeat the above code chunk here to extract the savings column instead. Be careful to use a different variable name.

#Extracting the Savings Column
savings = mydata$Savings

#Calling the Savings Column to display top head values
head(savings)
## [1]  739 1230  389  347 4754    0

To calculate the mean, or average, of the checking column by hand, one would add each individual row entry and divide by the total number of rows. Thankfully, R has a built-in function for this. Below is an example using the checking column.

#Using the 'mean' function on checking to calculate the checking average and naming the average 'meanChecking'
meanChecking = mean(checking)

#Calling the average
meanChecking
## [1] 1048.014
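
As a quick sanity check, the ‘by hand’ definition above, the sum of the entries divided by their count, gives the same number.

#Mean by hand: sum of entries divided by the number of entries
sum(checking)/length(checking)
## [1] 1048.014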

We similarly compute the standard deviation, or spread, of the checking column.

#Computing the standard deviation of checking
spreadChecking = sd(checking)
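
For reference, sd() returns the sample standard deviation: the square root of the sum of squared deviations from the mean, divided by n - 1. A by-hand equivalent, shown only to illustrate the formula:

#Sample standard deviation by hand, equivalent to sd(checking)
sqrt(sum((checking - meanChecking)^2)/(length(checking) - 1))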

Now, to compute the SNR (signal-to-noise ratio), we create the formula ourselves because R has no built-in function for it. The SNR is the mean, or average, divided by the spread.

#Compute the snr of Checking and name it snr_Checking
snr_Checking = meanChecking/spreadChecking

#Call snr_Checking
snr_Checking
## [1] 0.3330006
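
Since there is no built-in function, one optional approach is to wrap the formula in a small helper so it can be reused for any column; the name snr below is our own choice, not an R built-in.

#Optional helper: SNR = mean divided by standard deviation
snr = function(x) mean(x)/sd(x)

#Same result as the step-by-step calculation above
snr(checking)
## [1] 0.3330006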

##### 2B) Repeat the above code chunks here to derive the SNR for the savings column instead. Watch your variable names to differentiate the savings calculations from the checking ones.

#Computing the savings mean
meanSavings = mean(savings)
meanSavings
## [1] 1812.562

#Computing the savings standard deviation
spreadSavings = sd(savings)
spreadSavings
## [1] 3597.285

#Computing the savings SNR
snr_Savings = meanSavings/spreadSavings
snr_Savings
## [1] 0.5038695

##### 2C) Of the checking and savings data, which one has a higher SNR? What does it mean in terms of possible data quality?

The savings SNR is higher, meaning the savings data carries relatively less noise and suggests better data quality.
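
A one-line check in code confirms the comparison, using the variables defined above.

#Comparing the two ratios directly
snr_Savings > snr_Checking
## [1] TRUE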