We have three years of daily sales of Post-its, those lovable note snippets that hang around your desk. We will do the following:
1. Determine which factors influence sales.
2. Build a model to forecast daily sales.
# Install pacman if needed
if (!require("pacman")) install.packages("pacman")
## Loading required package: pacman
# load packages
pacman::p_load(pacman,
tidyverse, openxlsx, modeltime, parsnip, rsample, timetk, broom, ggthemes)
#Import data
postit <- read.xlsx("datasets/POSTITDATA.xlsx", skipEmptyRows = TRUE)
#Check results
str(postit)
## 'data.frame': 1096 obs. of 6 variables:
## $ Month : num 1 1 1 1 1 1 1 1 1 1 ...
## $ Day# : num 1 2 3 4 5 6 7 8 9 10 ...
## $ Day : num 40544 40545 40546 40547 40548 ...
## $ Price : num 7.52 7.52 5.95 6.2 6.1 6.2 6.98 5.95 7.12 6.98 ...
## $ Display? : num 1 0 0 0 0 0 1 0 1 0 ...
## $ actualsales: num 390 344 636 483 486 490 524 620 416 464 ...
We have 6 variables: Month, Day# (a running day number), Day (an Excel serial date), Price, Display? (a 0/1 flag for whether the product was on display), and actualsales (daily unit sales).
#Create new formatted date column; Excel (Windows) serial dates count days from the origin 1899-12-30
postit$new_date <- as.Date(postit$Day, origin = "1899-12-30")
#Check results
str(postit$new_date)
## Date[1:1096], format: "2011-01-01" "2011-01-02" "2011-01-03" "2011-01-04" "2011-01-05" ...
#Column renaming
postit <- postit %>%
rename(
day_num = 'Day#',
display = 'Display?'
)
#Check results
names(postit)
## [1] "Month" "day_num" "Day" "Price" "display"
## [6] "actualsales" "new_date"
#Create list of variables that need transformation
#fac_vars <- c("Month", "display", "Price")
fac_vars <- c("Month", "display")
#Factor Month, Display and Price Variables
postit[,fac_vars] <- lapply(postit[,fac_vars], factor)
#Check results
str(postit)
## 'data.frame': 1096 obs. of 7 variables:
## $ Month : Factor w/ 12 levels "1","2","3","4",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ day_num : num 1 2 3 4 5 6 7 8 9 10 ...
## $ Day : num 40544 40545 40546 40547 40548 ...
## $ Price : num 7.52 7.52 5.95 6.2 6.1 6.2 6.98 5.95 7.12 6.98 ...
## $ display : Factor w/ 2 levels "0","1": 2 1 1 1 1 1 2 1 2 1 ...
## $ actualsales: num 390 344 636 483 486 490 524 620 416 464 ...
## $ new_date : Date, format: "2011-01-01" "2011-01-02" ...
#Drop Excel formatted Day variable and day_num
postit <- postit %>%
select(-Day, -day_num)
#From timetk package - visualize sales
postit %>%
plot_time_series(new_date, actualsales, .interactive = TRUE)
Our plot of sales indicates a positive trend in sales for our Post-its.
A trend in time series data is a pattern in which observations increase or decrease over time; a trend can itself change over time.
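If you prefer a static view, overlaying a LOESS smoother makes the trend easy to eyeball. A minimal ggplot2 sketch:
#Static alternative: daily sales with a LOESS trend line
ggplot(postit, aes(x = new_date, y = actualsales)) +
geom_line(alpha = 0.4) +
geom_smooth(method = "loess", se = FALSE) +
theme_minimal()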
#Let's do some exploratory data analysis (EDA)
ggplot(data = postit, aes(x=actualsales)) + geom_histogram() + theme_minimal()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
#Sales are a bit skewed to the right
ggplot(data=postit, aes(x=log(actualsales))) + geom_histogram() + theme_minimal()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
#Sales now look more normal after log transformation
ggplot(data = postit, aes(x=Month, y = actualsales)) + geom_boxplot() + theme_minimal()
#Sales are lowest in March and generally higher in Q4, peaking in December, which also shows the largest spread. There are some outliers in August and September.
ggplot(data = postit, aes(x=factor(Price), y = actualsales)) + geom_boxplot() + theme_minimal()
#This is quite interesting: sales are highest when the price is 5.95 and fall as the price rises, with the lowest sales at the highest price of 7.52. There appears to be a negative relationship between price and sales.
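Since Price is still numeric at this point, a quick correlation quantifies that relationship:
#Quick numeric check: correlation between price and sales
cor(postit$Price, postit$actualsales)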
ggplot(data = postit, aes(x=display, y = actualsales)) + geom_boxplot() + theme_minimal() + theme(plot.title = element_text(hjust = 0.5), plot.subtitle = element_text(hjust = 0.5)) + ggtitle("Product sales based on whether or not product was on store display", subtitle = "(not on display = 0, on display = 1)") + ylab("Actual Sales")
#More sales when product is on display.
postit %>%
group_by(display) %>%
summarize(mean_sales = mean(actualsales), median_sales = median(actualsales), sd_sales = sd(actualsales))
#Average sales are 639 with a display vs. 587 without.
Now that we have some idea of the factors that might influence sales, let's check their significance using a linear regression model.
# Model Spec
model_spec_lm <- linear_reg() %>%
set_engine('lm') %>%
set_mode('regression')
# Fit Linear Regression Model
model_fit_lm <- model_spec_lm %>%
fit(actualsales ~ Month + Price + display, data = postit)
#Uncomment to run the model with the date as an xreg
# model_fit_lm <- model_spec_lm %>%
# fit(actualsales ~ Month + Price + display + as.numeric(new_date), data = postit)
#Print summary of model in a tidy object
lm_summary <- tidy(model_fit_lm) %>%
mutate(significant = p.value <= 0.05)
lm_summary
#Use the glance function from the broom package to get additional information from the model (e.g., model statistics like r.squared)
(mod_glance <- glance(model_fit_lm))
#If you prefer, you can use summary function to print output to console
summary(model_fit_lm$fit)
##
## Call:
## stats::lm(formula = actualsales ~ Month + Price + display, data = data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -421.68 -50.85 5.26 52.26 397.36
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 1801.392 31.151 57.829 < 2e-16 ***
## Month2 7.174 13.006 0.552 0.581
## Month3 -95.543 12.663 -7.545 9.56e-14 ***
## Month4 -5.871 12.772 -0.460 0.646
## Month5 77.631 12.671 6.127 1.26e-09 ***
## Month6 78.488 12.770 6.146 1.11e-09 ***
## Month7 74.032 12.684 5.837 7.03e-09 ***
## Month8 77.357 12.679 6.101 1.47e-09 ***
## Month9 86.232 12.785 6.745 2.49e-11 ***
## Month10 134.656 12.668 10.630 < 2e-16 ***
## Month11 252.429 12.771 19.766 < 2e-16 ***
## Month12 359.843 12.685 28.369 < 2e-16 ***
## Price -195.501 4.456 -43.877 < 2e-16 ***
## display1 63.640 6.530 9.746 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 86.33 on 1082 degrees of freedom
## Multiple R-squared: 0.793, Adjusted R-squared: 0.7905
## F-statistic: 318.8 on 13 and 1082 DF, p-value: < 2.2e-16
A generic interpretation of a linear regression model: for every one-unit change in a term (the term column), unit sales move by the value of our estimate while all other variables remain at the same level.
Most months are significant. December is positive and the most statistically significant, with the highest average sales; March has the lowest. Time of year does impact sales.
The Price coefficient is negative and significant: each one-unit increase in price is associated with roughly 196 fewer unit sales. Customers seem to be price sensitive when it comes to Post-its. As suggested by our EDA, there is a negative relationship between price and sales.
For display advertising, we can expect an increase of about 64 unit sales when there is a display.
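To make that concrete, here is a small sketch (the month and price values are arbitrary) that predicts sales for two otherwise identical days; the two predictions should differ by roughly the display1 estimate of about 64 units:
#Sketch: two identical days that differ only in display status
newdata <- data.frame(
Month = factor(c(12, 12), levels = 1:12),
Price = c(5.95, 5.95),
display = factor(c(0, 1), levels = c(0, 1))
)
predict(model_fit_lm, new_data = newdata)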
Evaluate model fit:
As for the model statistics, we have a very low p-value and a pretty decent R-squared of 0.793, which tells us that about 79% of the variation in sales is explained by our independent/explanatory variables.
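As a sanity check, R-squared can also be recomputed by hand from the fitted lm object (a short sketch):
#R-squared by hand: 1 - SS_residual / SS_total
res <- residuals(model_fit_lm$fit)
1 - sum(res^2) / sum((postit$actualsales - mean(postit$actualsales))^2)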
We built this linear model mostly to gain insight into which factors influence sales.
We will split our dataset into training and test sets, holding out the last 12 months as the test (assessment) set.
set.seed(123)
#Step 1: Split data into test and training set
splits <- postit %>%
time_series_split(date_var = new_date, assess = "12 months", cumulative = TRUE)
#Visualize test train split
splits %>%
tk_time_series_cv_plan() %>%
plot_time_series_cv_plan(new_date, actualsales, .interactive = TRUE)
# Step 2: Model Spec and Model Fit
model_fit_prophet <- prophet_reg() %>%
set_engine('prophet') %>%
fit(actualsales ~ ., data = training(splits))
## Disabling daily seasonality. Run prophet with daily.seasonality=TRUE to override this.
model_fit_prophet
## parsnip model object
##
## Fit time: 2.2s
## PROPHET w/ Regressors Model
## - growth: 'linear'
## - n.changepoints: 25
## - changepoint.range: 0.8
## - yearly.seasonality: 'auto'
## - weekly.seasonality: 'auto'
## - daily.seasonality: 'auto'
## - seasonality.mode: 'additive'
## - changepoint.prior.scale: 0.05
## - seasonality.prior.scale: 10
## - holidays.prior.scale: 10
## - logistic_cap: NULL
## - logistic_floor: NULL
## - extra_regressors: 13
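Note the message printed during fitting: prophet disabled daily seasonality automatically. If you ever need to force it on, prophet_reg() exposes it directly as an argument (a sketch, leaving all other defaults unchanged):
#Uncomment to force daily seasonality on (normally unnecessary for daily data)
# model_fit_prophet <- prophet_reg(seasonality_daily = TRUE) %>%
#   set_engine('prophet') %>%
#   fit(actualsales ~ ., data = training(splits))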
#Step 3: Put model into a modeltime table
models_tbl <- modeltime_table(model_fit_prophet)
models_tbl
#Step 4: Calibrate model
calibration_tbl <- models_tbl %>%
modeltime_calibrate(new_data = testing(splits))
#Can also print calibration model to console
calibration_tbl
#Step 5: Get Accuracy Metrics
(accuracy_table <- calibration_tbl %>%
modeltime_accuracy())
#Plot the residuals
# Out-of-Sample data (test set)
#Change the new_data argument if you want to plot in-sample residuals (training set). Timeplot is the default, but .type can be changed to acf or seasonality.
calibration_tbl %>%
modeltime_calibrate(new_data = testing(splits)) %>%
modeltime_residuals() %>%
plot_modeltime_residuals(.interactive = TRUE, .type = "timeplot", .legend_show = FALSE)
#Statistical tests
calibration_tbl %>%
modeltime_residuals_test(new_data = testing(splits))
The modeltime_accuracy() function gives all the standard accuracy metrics.
I like to pay attention to the mean absolute percentage error (MAPE), which is 12.76, so our forecasts are off by around 13% on average.
We want our time series residuals to hover around zero. Everything seems okay until November and December, when we start to see more variability.
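If you want to verify that MAPE yourself, the calibration table keeps the test-set actuals and predictions in its .calibration_data list column; a sketch (column names as produced by modeltime_calibrate(), to the best of my knowledge):
#Sketch: recompute MAPE by hand from the calibration data
calibration_tbl %>%
unnest(.calibration_data) %>%
summarize(mape = mean(abs((.actual - .prediction) / .actual)) * 100)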
#Step 6: Create future forecast on test data
(forecast_tbl <- calibration_tbl %>%
modeltime_forecast(
new_data = testing(splits),
actual_data = postit,
keep_data = TRUE #Includes the new data (and actual data) as extra columns with the results of the model forecasts
))
#Step 7: Plot modeltime forecast - this is the test data
plot_modeltime_forecast(forecast_tbl)
#Create a tibble of future observations, with .length_out being how far we want to extend our date variable (new_date). Since new_date ends on Dec 31st, our future_frame starts on Jan 1st and runs as far into the future as the .length_out argument specifies.
#Create tibble of dates using future_frame() from timetk package
dates <- postit %>%
future_frame(new_date, .length_out = "1 year")
#Simulate display data
display <- rep(0:1, each = 2, length.out = 365)
#Simulate Price data
Price <- rep(c(5.95, 6.1, 6.2, 6.98, 7.12, 7.32, 7.52), length.out = 365)
#Put data into dataframe
explanatory_data <- cbind(dates, display, Price)
#Add additional Month and day_num variables
explanatory_data <- explanatory_data %>%
mutate(Month = format(new_date, "%m"),
day_num = format(new_date, "%d"))
#Need to factor variables
fac_vars <- c("Month", "display") #Price is numeric
#Factor Month, Display and Price Variables
explanatory_data[,fac_vars] <- lapply(explanatory_data[,fac_vars], factor)
#Change day_num from character to numeric vector and strip leading zeros
library(stringr)
explanatory_data$day_num <- as.numeric(str_remove(explanatory_data$day_num, "^0+"))
#Do the same for the Month variable
explanatory_data$Month <- as.factor(str_remove(explanatory_data$Month, "^0+"))
#Check results
str(explanatory_data)
## 'data.frame': 365 obs. of 5 variables:
## $ new_date: Date, format: "2014-01-01" "2014-01-02" ...
## $ display : Factor w/ 2 levels "0","1": 1 1 2 2 1 1 2 2 1 1 ...
## $ Price : num 5.95 6.1 6.2 6.98 7.12 7.32 7.52 5.95 6.1 6.2 ...
## $ Month : Factor w/ 12 levels "1","10","11",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ day_num : num 1 2 3 4 5 6 7 8 9 10 ...
CAUTION: It is super important that the data in your new data frame matches the exact formatting of the data used to build the forecast. If errors occur during either the model fit or the forecasting phase, differently formatted data may be the culprit.
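A quick defensive check before forecasting: compare column classes and factor levels between the training data and the new data. Notice in the str() output above that Month's levels in explanatory_data sort alphabetically ("1","10","11",...) while the training data's sort numerically; re-leveling with the training data's levels aligns them (a sketch):
#Sketch: verify the new data matches the training data's structure
sapply(postit[, c("Month", "Price", "display")], class)
sapply(explanatory_data[, c("Month", "Price", "display")], class)
identical(levels(postit$Month), levels(explanatory_data$Month)) #FALSE here
#Align Month's level ordering with the training data
explanatory_data$Month <- factor(explanatory_data$Month, levels = levels(postit$Month))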
#First, refit to the full dataset
refit_tbl <- calibration_tbl %>%
modeltime_refit(data = postit)
## Disabling daily seasonality. Run prophet with daily.seasonality=TRUE to override this.
#Specify the h (horizon) argument to get a forecast into the future after refitting the model to the entire dataset; but if using xregs (independent regressors), you must create a new dataframe with the xregs and pass it via new_data.
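#If the model had no external regressors, you could forecast by horizon instead, e.g.:
# refit_tbl %>% modeltime_forecast(h = "1 year", actual_data = postit)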
#Forecast on the new tibble dataframe
forecast_tbl_future_data <- refit_tbl %>%
modeltime_forecast(
new_data = explanatory_data
)
#Check results of forecast
head(forecast_tbl_future_data)
#Plot and visualize the forecasts
plot_modeltime_forecast(forecast_tbl_future_data, .interactive = TRUE)
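The comparison below relies on two monthly aggregates, postit_2013_gr (2013 actuals) and final_fc_gr (the 2014 forecast), whose construction is not shown above. A minimal sketch of how they could be built, assuming a month_year column in "MM-YYYY" format and a summed sales column (both inferred from the code that follows):
#Sketch (assumed prep): monthly totals of 2013 actual sales
postit_2013_gr <- postit %>%
filter(format(new_date, "%Y") == "2013") %>%
group_by(month_year = format(new_date, "%m-%Y")) %>%
summarize(sales = sum(actualsales))
#Sketch (assumed prep): monthly totals of the 2014 forecast
final_fc_gr <- forecast_tbl_future_data %>%
group_by(month_year = format(.index, "%m-%Y")) %>%
summarize(sales = sum(.value))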
#Prep 2013 sales data by renaming sales column
postit_2013_gr_01 <- rename(postit_2013_gr, sales_2013 = sales, month_year_2013 = month_year)
#Prep forecast data by renaming sales column
final_fc_gr_01 <- rename(final_fc_gr, sales_2014_forecast = sales, month_year_2014 = month_year)
#Combine data into one dataframe - bind by column
combined_df_col <- cbind(postit_2013_gr_01, final_fc_gr_01)
#Extract numeric month and month-name columns for plotting
combined_df_col <- combined_df_col %>%
mutate(month_of_yr = month.name[as.numeric(substr(month_year_2013, 1, 2))],
month = as.numeric(substr(month_year_2013, 1, 2)))
#Check results
combined_df_col
#Plot the results
ggplot(combined_df_col, aes(x = month)) +
geom_line(aes(y = sales_2013, color = "sales_2013"), size = 1) +
geom_line(aes(y = sales_2014_forecast, color = "sales_2014_forecast"), size = 1, linetype = "twodash") +
theme_calc() +
ylab("sales") +
ggtitle("Comparison of current annual sales \n with next year's forecasted sales", subtitle = "Assuming the same pricing and display promotions of the current year") +
scale_x_continuous(breaks = seq(from = 0, to = 12, by = 1)) +
scale_color_manual(name = " ", values = c("sales_2013" = "darkred", "sales_2014_forecast" = "steelblue")) +
theme(legend.position=c(0.2,.85), plot.title = element_text(family = "Arial", face = "bold", colour = "darkblue", size = 24, hjust = 0.5), plot.subtitle = element_text(hjust = 0.5))
FINAL THOUGHTS:
I really enjoyed performing time series analysis with the modeltime package. As a tidyverse user, I find this way of working with machine learning models streamlined, functional, and flexible, with the added benefit of being able to compare many models at once.