This worksheet includes three main tasks: data modeling (a key step to understanding the data), extracting columns from a csv file in R, and basic steps to compute a simple signal-to-noise ratio.
Remember to always set your working directory to the source file location: go to ‘Session’, scroll down to ‘Set Working Directory’, and click ‘To Source File Location’. Read the material below carefully and follow the instructions to complete the tasks and answer any questions. Submit your work as detailed in previous labs.
For your assignment you may be using different data sets than the one included here. Always read the instructions on Sakai carefully. For clarity, tasks/questions to be completed/answered are highlighted in red (visible in the preview) and numbered according to their placement in the task section. Quite often you will need to add your own code chunk.
Execute all code chunks, preview, publish, and submit the link on Sakai.
To begin the lab, examine the content of the csv file ‘creditrisk.csv’ by opening the file in RStudio. You can also view the file separately in Excel or use File -> Import Dataset in RStudio for that purpose.
An important early phase when working with any data is modeling. Whether we are dealing with structured or unstructured data, data modeling is an exercise that demonstrates our understanding of the data, not just from the relational aspect but also from the business perspective. There are many database modeling tools available for creating relational schemas. For this lab we will work with the tool ERDPlus, available at https://erdplus.com. An animated view of how to create star schemas is on the web page.
##### 1) Create a relational schema and star schema of the data using the ERDPlus standalone feature at https://erdplus.com/standalone, take a screenshot of the two images, and add them below.
Relational Schema
Star Schema
To add a picture, use the directions found in Lab01. Below are steps and an example to create a simple star schema or relational schema in ERDPlus.
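As a reference, one common way to embed a saved screenshot from an R Markdown code chunk is sketched below. This is only an illustrative sketch with placeholder file names, it assumes the knitr package is installed, and the exact directions in Lab01 may differ.

#Minimal sketch for embedding a screenshot (file name is a placeholder)
#Assumes knitr is installed and the image sits in the same folder as the .Rmd file
knitr::include_graphics("relational_schema.png")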
Next, read the csv file into RStudio. It can be useful to name your data to create a shortcut to it. Here we will label the data ‘mydata’. To see the first rows of the data in the console, one can ‘call’ it using the function ‘head’ and refer to it by its given shortcut name.
#Reading the csv file and labeling the data 'mydata'
mydata = read.csv(file="creditrisk.csv")
#Calling the top head of mydata
head(mydata)
## Loan.Purpose Checking Savings Months.Customer Months.Employed Gender
## 1 Small Appliance 0 739 13 12 M
## 2 Furniture 0 1230 25 0 M
## 3 New Car 0 389 19 119 M
## 4 Furniture 638 347 13 14 M
## 5 Education 963 4754 40 45 M
## 6 Furniture 2827 0 11 13 M
## Marital.Status Age Housing Years Job Credit.Risk
## 1 Single 23 Own 3 Unskilled Low
## 2 Divorced 32 Own 1 Skilled High
## 3 Single 38 Own 4 Management High
## 4 Single 36 Own 2 Unskilled High
## 5 Single 31 Rent 3 Skilled Low
## 6 Married 25 Own 1 Skilled Low
To perform some analytics on the checking and savings columns, we must first extract each column from the data separately. Using the ‘$’ sign after the data’s label extracts a specific column. For convenience, we relabel the extracted data. Below, we have extracted the checking column.
#Extracting the Checking Column
checking = mydata$Checking
#Calling the Checking Column to display top head values
head(checking)
## [1] 0 0 0 638 963 2827
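As an optional aside, an equivalent way to extract a column is with bracket indexing; the sketch below is not required for the lab and simply produces the same vector under a hypothetical name.

#Equivalent extraction of the Checking column using bracket indexing (optional sketch)
checking_alt = mydata[["Checking"]]
head(checking_alt)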
##### 2A) Repeat the above code chunk here to extract the savings column instead. Be careful to use a different variable name.
#Extracting the Savings Column
saving = mydata$Savings
#Calling the Savings Column to display top head values
head(saving)
## [1] 739 1230 389 347 4754 0
To calculate the mean, or average, of the checking column by hand, one would add up each individual row entry and divide by the total number of rows. Thankfully, R has a built-in function for this. Below is an example using the checking column.
#Using the 'mean' function on checking to calculate the checking average and naming the average 'meanChecking'
meanChecking = mean(checking)
#Calling the average
meanChecking
## [1] 1048.014
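As a quick sanity check, the by-hand calculation described above (summing the entries and dividing by the number of rows) can also be written directly in R; this is just an illustrative sketch.

#By-hand mean: sum of all checking values divided by the number of rows
#This should match meanChecking above (about 1048.01)
sum(checking)/length(checking)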
We similarly compute the standard deviation, or spread, of the checking column.
#Computing the standard deviation of checking
spreadChecking = sd(checking)
spreadChecking
## [1] 3147.183
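For reference, the sample standard deviation that ‘sd’ computes can also be written out by hand; the sketch below mirrors the usual formula (squared deviations from the mean, divided by n - 1, then the square root) and should agree with spreadChecking.

#By-hand sample standard deviation of checking (should match spreadChecking above)
sqrt(sum((checking - meanChecking)^2)/(length(checking) - 1))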
Now, to compute the SNR, the signal-to-noise ratio, we create the formula ourselves because there is no built-in function. The SNR is the mean, or average, divided by the spread (the standard deviation).
#Compute the snr of Checking and name it snr_Checking
snr_Checking = meanChecking/spreadChecking
#Call snr_Checking
snr_Checking
## [1] 0.3330006
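Since the same mean-over-spread calculation is repeated for the savings column in 2B, one optional way to avoid retyping it is a small helper function; this is just a sketch and not required for the lab.

#Optional helper: SNR of any numeric vector (mean divided by standard deviation)
snr = function(x) mean(x)/sd(x)
#Applying it to checking reproduces snr_Checking above
snr(checking)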
##### 2B) Repeat the above code chunk calculations here to introduce new variables for the savings column and to derive the corresponding SNR.
#Using the 'mean' function on saving to calculate the savings average and naming it 'meanSaving'
meanSaving = mean(saving)
#Calling the average
meanSaving
## [1] 1812.562
#Computing the standard deviation of saving
spreadSaving = sd(saving)
spreadSaving
## [1] 3597.285
#Compute the snr of saving
signal_to_noise_saving = meanSaving/spreadSaving
#Call the savings snr
signal_to_noise_saving
## [1] 0.5038695
##### 2C) Of the checking and savings data, which one has a higher SNR? What does it mean in terms of possible data quality?
Savings has the higher SNR, which means there is less noise relative to the signal in the savings data. This suggests higher possible data quality that can better inform business decisions: quite simply, the savings values carry a stronger signal relative to their spread.
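If helpful, the two ratios can also be compared directly in R; the short sketch below just restates the comparison made above.

#Direct comparison of the two SNR values (should return TRUE given the values above)
snr_Checking < signal_to_noise_saving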