The focus of this lab is on data outliers, data preparation, and data modeling. This lab requires the use of Microsoft Excel, R, and ERDplus.
Remember to always set your working directory to the source file location: go to ‘Session’, scroll down to ‘Set Working Directory’, and click ‘To Source File Location’. Read the instructions below carefully and follow them to complete the tasks and answer any questions. Submit your work to RPubs as detailed in previous notes.
For your assignment you may be using different data sets than the ones in this worksheet demo. Make sure to read the instructions on Sakai carefully.
First, we must calculate the mean, standard deviation, maximum, and minimum for the Age column using R.
In R, we read in the file, extract the Age column, and compute the requested values.
#Read the demo file
mydata = read.csv(file="data/creditrisk.csv")
#Name the extracted variable
age = mydata$Age
#For the assignment, the demo file is replaced with the Scoring dataset
mydata = read.csv(file="data/Scoring.csv")
age = mydata$Age
#Calculate the average age below. Refer to Worksheet 2 for the correct command.
meanAge = mean(age)
meanAge
## [1] 37.08412
#Calculate standard deviation of age below. Refer to Worksheet 2 for the correct command.
spreadAge = sd(age)
spreadAge
## [1] 10.98637
#Calculate the maximum of age below. The command to find the maximum is max(variable) where variable is the extracted variable.
max(age)
## [1] 68
#Calculate the minimum of age below. The command to find the minimum is min(variable) where variable is the extracted variable.
min(age)
## [1] 18
Next, use the formula from class to detect any outliers. An outlier is a value that “lies outside” most of the other values in a set of data. A common way to estimate the upper and lower thresholds is to take the mean plus or minus 3 times the standard deviation. Try using this formula to find the upper and lower limits for age.
#Use the formula above to calculate the upper and lower threshold
meanAge + 3*sd(age)
## [1] 70.04322
meanAge - 3*sd(age)
## [1] 4.125023
Another method for finding the upper and lower thresholds, discussed in introductory statistics courses, uses the interquartile range (IQR). Follow along below to see how we first calculate it.
quantile(age)
## 0% 25% 50% 75% 100%
## 18 28 36 45 68
lowerq = quantile(age)[2]
upperq = quantile(age)[4]
iqr = upperq - lowerq
The formulas below calculate the thresholds: the boundaries that determine whether a value is an outlier. If a value falls above the upper threshold or below the lower threshold, it is an outlier.
Below is the upper threshold:
upperthreshold = (iqr * 1.5) + upperq
upperthreshold
## 75%
## 70.5
Below is the lower threshold:
lowerthreshold = lowerq - (iqr * 1.5)
lowerthreshold
## 25%
## 2.5
Are there any outliers? How many? It can also be useful to visualize the data using a box and whisker plot. The boxplot below is consistent with the IQR of 17 (45 - 28) and the upper and lower thresholds we found.
# There do not appear to be any outliers: the maximum age of 68 is below the upper threshold of 70.5, and the minimum of 18 is above the lower threshold of 2.5.
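# As an additional check, count how many ages fall outside the IQR-based
# thresholds (reusing the upperthreshold and lowerthreshold objects from
# above); a result of 0 confirms there are no outliers.
sum(age > upperthreshold | age < lowerthreshold)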
boxplot(age)
Next, we must read the ‘scoring_original.csv’ file into R (the worksheet demo used ‘creditriskorg.csv’). This is the original dataset, and it contains missing values.
newdata = read.csv(file="data/scoring_original.csv")
head(newdata)
## Status Seniority Home Time Age Marital Records Job Expenses
## 1 good 9 rent 60 30 married no_rec freelance $73K
## 2 good 17 rent 60 58 widow no_rec fixed $48K
## 3 bad 10 owner 36 46 married yes_rec freelance $90K
## 4 good 0 rent 60 24 single no_rec fixed $63K
## 5 good 0 rent 36 26 single no_rec fixed $46K
## 6 good 1 owner 60 36 married no_rec fixed $75K
## Income Assets Debt Amount Price Finrat Savings
## 1 $129K 0 0 $800.00 $846.00 94.56265 4.200000
## 2 $131K 0 0 $1,000.00 $1,658.00 60.31363 4.980000
## 3 $200K 3000 0 $2,000.00 $2,985.00 67.00168 1.980000
## 4 $182K 2500 0 $900.00 $1,325.00 67.92453 7.933333
## 5 $107K 0 0 $310.00 $910.00 34.06593 7.083871
## 6 $214K 3500 0 $650.00 $1,645.00 39.51368 12.830769
If the column names were shifted down because of an empty first line (as in the demo file), we would need to use the skip argument and set the header to TRUE. Here we re-read the file with the header set explicitly to check:
newdata = read.csv(file="data/scoring_original.csv",header=TRUE,sep=",")
head(newdata)
## Status Seniority Home Time Age Marital Records Job Expenses
## 1 good 9 rent 60 30 married no_rec freelance $73K
## 2 good 17 rent 60 58 widow no_rec fixed $48K
## 3 bad 10 owner 36 46 married yes_rec freelance $90K
## 4 good 0 rent 60 24 single no_rec fixed $63K
## 5 good 0 rent 36 26 single no_rec fixed $46K
## 6 good 1 owner 60 36 married no_rec fixed $75K
## Income Assets Debt Amount Price Finrat Savings
## 1 $129K 0 0 $800.00 $846.00 94.56265 4.200000
## 2 $131K 0 0 $1,000.00 $1,658.00 60.31363 4.980000
## 3 $200K 3000 0 $2,000.00 $2,985.00 67.00168 1.980000
## 4 $182K 2500 0 $900.00 $1,325.00 67.92453 7.933333
## 5 $107K 0 0 $310.00 $910.00 34.06593 7.083871
## 6 $214K 3500 0 $650.00 $1,645.00 39.51368 12.830769
In this case, scoring_original.csv did not have its column names shifted down, so no lines needed to be skipped.
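For reference, if the header had been shifted down by an empty first line, the skip argument would drop that line before the column names are read. A hypothetical example using the demo file name:
#Skip the empty first line so the header row is read correctly
#demo = read.csv(file="data/creditriskorg.csv", skip=1, header=TRUE)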
To calculate the mean for Price in R, follow Worksheet 2 (the demo used the Checking column). Extract the Price column first and then find the average using the function built into R. What happens when we try to use the function?
price = newdata$Price
meanPrice = mean(price)
## Warning in mean.default(price): argument is not numeric or logical:
## returning NA
A warning appears saying “argument is not numeric or logical: returning NA”. There are other characters (such as the commas and dollar signs in the Price column) that prevent the values from being treated as numbers.
To resolve the warning, we must understand where it is coming from and correct for it. There are missing values in the csv file, which is quite common, as most datasets are not perfect. Additionally, there are commas within the file, and R does not recognize that ‘1,234’ is equivalent to ‘1234’. Lastly, there are ‘$’ symbols throughout the file, which are not numerical symbols either.
The sub function replaces a matched pattern with something else. So, in order to remove the comma in the number “1,234”, we must substitute it with an empty string.
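Note that sub only replaces the first match in each value, while gsub replaces every match, which matters for numbers containing more than one comma. A quick illustration:
sub(",", "", "1,234") #returns "1234"
gsub(",", "", "1,234,567") #returns "1234567"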
As shown on the worksheet, type and copy the exact commands to find the mean with the NA values removed.
#Substitute the comma with a blank throughout the extracted column. Below are examples using a hypothetical variable name 'new'.
# Example new = sub(",","",new)
#Substitute the dollar sign with a blank
# Example new = sub("\\$","",new)
#Convert the values to numeric; anything non-numeric becomes NA
# Example new = as.numeric(new)
#Calculate the mean with the NA values removed
# Example mean(new,na.rm=TRUE)
price = sub(",","",price)
price = sub("\\$","",price)
price = as.numeric(price)
## Warning: NAs introduced by coercion
mean(price,na.rm=TRUE)
## [1] 1462.48
What are some other ways to clean this data? How about Excel? How does Excel treat the missing values and the “$” symbols?
There are a few rows with no data (just a dollar sign in the Price column). We can treat such a value either as 0, which counts towards the mean, or as an absent value, which does not count towards the mean. To find out how Excel treats the missing values, we have to manually compute both versions and see which one Excel returns. Excel understands the ‘$’ as a descriptive part of the cell, so it has no effect on the calculations.
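We can reproduce both treatments in R (a short sketch, assuming price is the cleaned numeric vector from above) and compare the results with the value Excel reports:
#Treat missing values as absent: they do not count towards the mean
mean(price, na.rm=TRUE)
#Treat missing values as zero: the NAs are replaced and pull the mean down
mean(ifelse(is.na(price), 0, price))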
Now, we will look at Chicago taxi data. Go and explore the interactive dashboard and read the description of the data.
Chicago Taxi Dashboard: https://data.cityofchicago.org/Transportation/Taxi-Trips-Dashboard/spcw-brbq
Chicago Taxi Data Description: http://digital.cityofchicago.org/index.php/chicago-taxi-data-released/
– Open the taxi trips sample csv file located in the data folder in RStudio or Excel. Note the size of the file and the number of columns and rows here. Identify the unique entities and fields in the data.
– Define a relational business logic integrity check for the column field ‘Trip Seconds’.
The “Trip Seconds” column should match the difference between the “Trip Start Timestamp” and “Trip End Timestamp”. The Chicago Taxi Data Description site mentions that the times are rounded to 15-minute intervals in order to keep the data more private, so the seconds will likely not match exactly, but they should be close to what the timestamps imply. A sketch of this check in R is shown below.
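The file name, the column names (Trip.Seconds, Trip.Start.Timestamp, Trip.End.Timestamp), and the timestamp format in the sketch below are assumptions based on how read.csv typically renames the portal’s fields; adjust them to match the actual sample file.
#Sketch: flag rows where Trip Seconds disagrees with the timestamps
taxi = read.csv(file="data/taxi_trips_sample.csv")
start = as.POSIXct(taxi$Trip.Start.Timestamp, format="%m/%d/%Y %I:%M:%S %p")
end = as.POSIXct(taxi$Trip.End.Timestamp, format="%m/%d/%Y %I:%M:%S %p")
elapsed = as.numeric(difftime(end, start, units="secs"))
#Allow roughly 15 minutes (900 seconds) of slack for the privacy rounding
violations = abs(taxi$Trip.Seconds - elapsed) > 900
sum(violations, na.rm=TRUE) #number of rows failing the check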
– Using https://erdplus.com/#/standalone draw a star-like schema using at least the following tables:
Star schema for Chicago Taxi Data.