For this quiz, you are going to use orange juice data. This data set was originally used in a machine learning (ML) class, with the goal of predicting which of two brands of orange juice customers bought. Of course, you are not building an ML algorithm in this quiz; I just wanted to give you the context of the data.

The response variable (the one the ML algorithm is built to predict) is Purchase, which takes either CH (Citrus Hill) or MM (Minute Maid). The predictor variables (the ones the ML algorithm uses to make predictions) are characteristics of the customer and of the product itself. In total, the data set has 18 variables. WeekofPurchase is the week of purchase. LoyalCH is customer brand loyalty for CH (how loyal the customer is to CH, on a scale of 0-1) and is the only variable that characterizes customers. All other variables are characteristics of the product or of the store where the sale occurred. For more information on the data set, click the link below and scroll down to page 11. https://cran.r-project.org/web/packages/ISLR/ISLR.pdf

# Load the package
library(tidyverse)

# Import data
Orange <- read.csv('https://raw.githubusercontent.com/selva86/datasets/master/orange_juice_withmissing.csv', stringsAsFactors = TRUE) %>%
  mutate(STORE = as.factor(STORE),
         StoreID = as.factor(StoreID))

# Print the first 6 rows
head(Orange)
##   Purchase WeekofPurchase StoreID PriceCH PriceMM DiscCH DiscMM SpecialCH
## 1       CH            237       1    1.75    1.99   0.00    0.0         0
## 2       CH            239       1    1.75    1.99   0.00    0.3         0
## 3       CH            245       1    1.86    2.09   0.17    0.0         0
## 4       MM            227       1    1.69    1.69   0.00    0.0         0
## 5       CH            228       7    1.69    1.69   0.00    0.0         0
## 6       CH            230       7    1.69    1.99   0.00    0.0         0
##   SpecialMM  LoyalCH SalePriceMM SalePriceCH PriceDiff Store7 PctDiscMM
## 1         0 0.500000        1.99        1.75      0.24     No  0.000000
## 2         1 0.600000        1.69        1.75     -0.06     No  0.150754
## 3         0 0.680000        2.09        1.69      0.40     No  0.000000
## 4         0 0.400000        1.69        1.69      0.00     No  0.000000
## 5         0 0.956535        1.69        1.69      0.00    Yes  0.000000
## 6         1 0.965228        1.99        1.69      0.30    Yes  0.000000
##   PctDiscCH ListPriceDiff STORE
## 1  0.000000          0.24     1
## 2  0.000000          0.24     1
## 3  0.091398          0.23     1
## 4  0.000000          0.00     1
## 5  0.000000          0.00     0
## 6  0.000000          0.30     0
# Get a sense of the dataset
glimpse(Orange)
## Rows: 1,070
## Columns: 18
## $ Purchase       <fct> CH, CH, CH, MM, CH, CH, CH, CH, CH, CH, CH, CH, CH, ...
## $ WeekofPurchase <int> 237, 239, 245, 227, 228, 230, 232, 234, 235, 238, 24...
## $ StoreID        <fct> 1, 1, 1, 1, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 1, 2...
## $ PriceCH        <dbl> 1.75, 1.75, 1.86, 1.69, 1.69, 1.69, 1.69, 1.75, 1.75...
## $ PriceMM        <dbl> 1.99, 1.99, 2.09, 1.69, 1.69, 1.99, 1.99, 1.99, 1.99...
## $ DiscCH         <dbl> 0.00, 0.00, 0.17, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00...
## $ DiscMM         <dbl> 0.00, 0.30, 0.00, 0.00, 0.00, 0.00, 0.40, 0.40, 0.40...
## $ SpecialCH      <int> 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
## $ SpecialMM      <int> 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1...
## $ LoyalCH        <dbl> 0.500000, 0.600000, 0.680000, 0.400000, 0.956535, 0....
## $ SalePriceMM    <dbl> 1.99, 1.69, 2.09, 1.69, 1.69, 1.99, 1.59, 1.59, 1.59...
## $ SalePriceCH    <dbl> 1.75, 1.75, 1.69, 1.69, 1.69, 1.69, 1.69, 1.75, 1.75...
## $ PriceDiff      <dbl> 0.24, -0.06, 0.40, 0.00, 0.00, 0.30, -0.10, -0.16, -...
## $ Store7         <fct> No, No, No, No, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Y...
## $ PctDiscMM      <dbl> 0.000000, 0.150754, 0.000000, 0.000000, 0.000000, 0....
## $ PctDiscCH      <dbl> 0.000000, 0.000000, 0.091398, 0.000000, 0.000000, 0....
## $ ListPriceDiff  <dbl> 0.24, 0.24, 0.23, 0.00, 0.00, 0.30, 0.30, 0.24, 0.24...
## $ STORE          <fct> 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2...
summary(Orange)
##  Purchase WeekofPurchase  StoreID       PriceCH         PriceMM     
##  CH:653   Min.   :227.0   1   :157   Min.   :1.690   Min.   :1.690  
##  MM:417   1st Qu.:240.0   2   :222   1st Qu.:1.790   1st Qu.:1.990  
##           Median :257.0   3   :196   Median :1.860   Median :2.090  
##           Mean   :254.4   4   :139   Mean   :1.867   Mean   :2.085  
##           3rd Qu.:268.0   7   :355   3rd Qu.:1.990   3rd Qu.:2.180  
##           Max.   :278.0   NA's:  1   Max.   :2.090   Max.   :2.290  
##                                      NA's   :1       NA's   :4      
##      DiscCH            DiscMM         SpecialCH       SpecialMM     
##  Min.   :0.00000   Min.   :0.0000   Min.   :0.000   Min.   :0.0000  
##  1st Qu.:0.00000   1st Qu.:0.0000   1st Qu.:0.000   1st Qu.:0.0000  
##  Median :0.00000   Median :0.0000   Median :0.000   Median :0.0000  
##  Mean   :0.05196   Mean   :0.1234   Mean   :0.147   Mean   :0.1624  
##  3rd Qu.:0.00000   3rd Qu.:0.2300   3rd Qu.:0.000   3rd Qu.:0.0000  
##  Max.   :0.50000   Max.   :0.8000   Max.   :1.000   Max.   :1.0000  
##  NA's   :2         NA's   :4        NA's   :2       NA's   :5       
##     LoyalCH          SalePriceMM     SalePriceCH      PriceDiff       Store7   
##  Min.   :0.000011   Min.   :1.190   Min.   :1.390   Min.   :-0.6700   No :714  
##  1st Qu.:0.320000   1st Qu.:1.690   1st Qu.:1.750   1st Qu.: 0.0000   Yes:356  
##  Median :0.600000   Median :2.090   Median :1.860   Median : 0.2300            
##  Mean   :0.565203   Mean   :1.962   Mean   :1.816   Mean   : 0.1463            
##  3rd Qu.:0.850578   3rd Qu.:2.130   3rd Qu.:1.890   3rd Qu.: 0.3200            
##  Max.   :0.999947   Max.   :2.290   Max.   :2.090   Max.   : 0.6400            
##  NA's   :5          NA's   :5       NA's   :1       NA's   :1                  
##    PctDiscMM         PctDiscCH       ListPriceDiff    STORE    
##  Min.   :0.00000   Min.   :0.00000   Min.   :0.000   0   :356  
##  1st Qu.:0.00000   1st Qu.:0.00000   1st Qu.:0.140   1   :157  
##  Median :0.00000   Median :0.00000   Median :0.240   2   :222  
##  Mean   :0.05939   Mean   :0.02732   Mean   :0.218   3   :194  
##  3rd Qu.:0.11268   3rd Qu.:0.00000   3rd Qu.:0.300   4   :139  
##  Max.   :0.40201   Max.   :0.25269   Max.   :0.440   NA's:  2  
##  NA's   :5         NA's   :2

Q1 empirical rule Your favorite orange juice brand is Minute Maid. You went to a local market and found that it is sold at $2.50. You are surprised by the price tag, which seems too high. You wonder how rare it would be to encounter a Minute Maid orange juice at this price. Fortunately, you have a dataset of the Minute Maid juice prices. How would you use the dataset to answer your question? Describe.

Hint: Discuss all of the following topics in your answer: the normal distribution, the mean, and the standard deviation.

I would use the data set to calculate the mean (average) price of Minute Maid (PriceMM) and compare it to the $2.50 price I saw at the store. Next, I would plot the PriceMM data to see what the distribution looks like and whether it is approximately normal. If it is, I would also calculate the standard deviation, so that the mean and standard deviation together tell me how many standard deviations $2.50 is above the mean and therefore how rare a price that high would be.
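A minimal sketch of that calculation (the $2.50 value is the price tag observed at the store; mm_mean and mm_sd are just helper names used for this sketch):

# Mean and standard deviation of Minute Maid prices
mm_mean <- mean(Orange$PriceMM, na.rm = TRUE)
mm_sd <- sd(Orange$PriceMM, na.rm = TRUE)

# How many standard deviations is $2.50 above the mean?
(2.50 - mm_mean) / mm_sd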

Q2 PriceMM Is the data normally distributed? Plot Minute Maid orange juice price in a histogram.

# Histogram of Minute Maid prices to check the shape of the distribution
Orange %>%
  ggplot(aes(PriceMM)) +
  geom_histogram()

Q3 PriceMM Calculate the mean of price.

Hint: You may add the na.rm = TRUE argument to the mean() function if it returns NA. That means the variable has at least one row with NA.

mean(Orange$PriceMM, na.rm =  TRUE)
## [1] 2.085038

Q4 PriceMM Calculate the standard deviation of price.

Hint: You may add the na.rm = TRUE argument to the sd() function if it returns NA. That means the variable has at least one row with NA.

sd(Orange$PriceMM, na.rm = TRUE)
## [1] 0.1344285

Q5 empirical rule Based on your analysis in Q2, would it be appropriate to use the empirical rule for Q1? Why or why not?

Hint: Discuss characteristics of the normal distribution.

It would be appropriate to use the empirical rule as long as the histogram in Q2 looks approximately bell-shaped and symmetric around the mean price of about $2.08, since an approximately normal distribution is exactly what the empirical rule assumes. In that case, the rule can be used to estimate the chances of encountering a price a given number of standard deviations away from the mean.
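As a supplementary check beyond the histogram, a normal Q-Q plot could be used to judge normality more directly: points falling close to the reference line suggest that a normal model, and hence the empirical rule, is reasonable.

# Normal Q-Q plot of Minute Maid prices (rows with missing PriceMM are dropped)
Orange %>%
  ggplot(aes(sample = PriceMM)) +
  stat_qq() +
  stat_qq_line()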

Q6 empirical rule Let’s just assume that the prices are normally distributed for the sake of discussion. Would you pay $2.50 for the Minute Maid orange juice? Or walk away?

Hint: Discuss in terms of the mean, the standard deviation, and the probability.

I would probably walk away and find another juice. The mean price is about $2.08 and the price at the store was $2.50, which is roughly three standard deviations (about 3 × $0.13) above the mean. Under the empirical rule, about 99.7% of prices fall within three standard deviations of the mean, so the probability of encountering a price this high is well under 1%, and I would very likely find a cheaper Minute Maid elsewhere.
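For reference, a small sketch of the probability behind this answer, under the question's assumption that PriceMM is normally distributed with the mean from Q3 and the standard deviation from Q4:

# Probability of a Minute Maid price of $2.50 or more under a normal model
mm_mean <- mean(Orange$PriceMM, na.rm = TRUE)
mm_sd <- sd(Orange$PriceMM, na.rm = TRUE)
1 - pnorm(2.50, mean = mm_mean, sd = mm_sd)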

Q7 Why do you think the empirical rule uses the standard deviation as a measure of spread instead of the variance?

Hint: Discuss in terms of the unit of the variance and the standard deviation.

The empirical rule uses the standard deviation as a measure of spread instead of the variance because the standard deviation is expressed in the same unit as the data (here, dollars), whereas the variance is in squared units (dollars squared). That makes the standard deviation much easier to interpret: saying a price is two standard deviations above the mean is a statement in dollars, while a distance measured in squared dollars has no direct meaning on the price scale.
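To illustrate the units point with the quiz data: the variance of PriceMM is in squared dollars, while its square root, the standard deviation, is back in dollars, the same unit as the prices themselves.

# Variance is in dollars^2; its square root equals sd() and is in dollars
var(Orange$PriceMM, na.rm = TRUE)
sqrt(var(Orange$PriceMM, na.rm = TRUE))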

Q8 Hide the messages, but display the code and its results on the webpage.

Hint: Use message, echo and results in the chunk options. Refer to the RMarkdown Reference Guide.
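A sketch of a chunk header using those options (which chunks it applies to depends on your .Rmd file): message = FALSE hides messages, echo = TRUE displays the code, and results = 'markup' displays the output.

```{r, message = FALSE, echo = TRUE, results = 'markup'}
# chunk contents here
```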

Q9 Display the title and your name correctly at the top of the webpage.

Q10 Use the correct slug.
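For Q9 and Q10, the title, author name, and slug are typically set in the YAML header at the top of the .Rmd file. A sketch with placeholder values (the actual title, name, and slug should come from the assignment instructions):

---
title: "Your quiz title here"
author: "Your Name"
slug: "your-assigned-slug"
---

The slug field is how blogdown-style course sites set the page's URL; if your course uses a different setup, follow the assignment's instructions for where the slug goes.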