Our goal is to extract all of the information pertaining to calculating wacc and performing dcf analysis from each workbook and to enter the pulled tables and calculations into a named list. When finished, it should match the target output shown at the end of this section.
Our first step will be to set up the dcf list and name each element we are looking to pull from the sheet or calculate. Once it is set up, we will fill each item individually, either by pulling data from the source sheets or by using data previously entered into our dcf list to calculate values for other list entries.
dcf <- vector("list", 10)
dcf=purrr::set_names(
dcf,
c(
"estimates", "exit_ebitda", "ev_ebitda", "terminal",
"wacc", "fcf", "npv", "total_net_debt", "equity_contribution",
"dcf_tbl"
)
)
First, we will pull the estimates table from the source sheets. This table is located on the DCF sheet of each company's workbook, which we filter into dcf_cells, and we can narrow it down to just the first 12 rows, since everything we need is located there.
xlsx_toggle=1
if (xlsx_toggle==1) {
xlsx_obj=file.path(path_data, "RNFC_COMP1") %>% readRDS
} else { # if (xlsx_toggle==2)
xlsx_obj <- file.path(path_data, "RNFC_COMP2") %>% readRDS
}
dcf_cells <- xlsx_obj %>% filter(sheet == "DCF")
bag=dcf_cells %>%
filter(!is.na(character), row<=12) %>%
select(row, col, character)
bag %>%
format.dt.f()
Within this bag of cells, we can isolate the estimates table by performing a regex inner join (fuzzyjoin::regex_inner_join) to index the locations of the assumptions we wish to pull. We do this by first creating a headings vector that includes “EBITDA”, “TAX”, “CAPEX”, “OTHER”, and “WORKING CAPITAL” and then joining it with the bag.
headings=c(
"EBITDA", "TAX", "CAPEX", "OTHER", "WORKING CAPITAL"
) #, "TERMINAL VALUE", "FCF", "FCF - DCF")
idx=bag %>%
regex_inner_join(
headings %>% as_tibble,
by = c("character"="value")
) %>%
select(-character)
idx
# A tibble: 5 × 3
row col value
<int> <int> <chr>
1 5 2 EBITDA
2 6 2 TAX
3 7 2 CAPEX
4 8 2 OTHER
5 9 2 WORKING CAPITAL
Our index is just missing the period heading; the period values do not have a heading to their west (left) in the source sheets, so we must add one ourselves. We can do this by creating a tibble whose row and col point to the cell directly above the first cell in idx and whose value is “PERIOD”; we can then bind this to our previous idx for a complete index of the headings for our table.
idx = tibble(
row = idx$row[1] - 1,
col = idx$col[1],
value = "PERIOD"
) %>%
bind_rows(idx)
idx
# A tibble: 6 × 3
row col value
<dbl> <int> <chr>
1 4 2 PERIOD
2 5 2 EBITDA
3 6 2 TAX
4 7 2 CAPEX
5 8 2 OTHER
6 9 2 WORKING CAPITAL
Our index is almost complete, but for COMP2 we need to change the second instance of CAPEX to OTHER. We will also take this time to determine the rows and columns in which the data for the headings are located. To indicate where to grab our data from in the next steps, we will set rows to the idx rows and cols to 10:15 for COMP1 and 5:10 for COMP2. While making comp-specific changes in the if statement, we can also rename the 5th observation of value (and the corresponding headings entry) from CAPEX to OTHER and set stub_toggle to 1 for COMP2, which we will use later as a flag telling us whether or not to annualize the stub-period value during the FCF calculation.
rows=c(idx$row)
headings = idx %>%
mutate(
value = map_chr(
value,
~str_replace(., " ", "_")
) %>%
unlist
) %>%
pull(3)
if (xlsx_toggle==1){
cols=10:15 # COMP1
stub_toggle=0
}else{ # if (xlsx_toggle==2)
cols=5:10 # COMP2
idx$value[5] = "OTHER"
headings[5] = "OTHER"
stub_toggle=1
}
Now we are ready to pull the data and assign the indexed headings using unpivotr::enhead. We can filter dcf_cells for rows included in rows and columns included in cols, and then assign headings to the pulled data by enheading with idx and a direction of "W", since the heading cells are to the west (left) of the data. To clean up, we rename the enheaded value column to heading.
data = dcf_cells %>%
filter(row %in% rows & col %in% cols) %>%
select(row, col, numeric, date, character, data_type) %>%
enhead(idx, direction = "W") %>%
rename("heading" = "value")
data %>%
format.dt.f()
Next, we need to clean up the data so that it matches the target output; we will make these changes with a combination of mutate and map functions. First on the list is to replace spaces in heading observations with an underscore using map_chr and str_replace. Next we will change date observations to character type for now, so that the formatting is not lost after unpivotr::pack, using map_chr and the as.character function. Lastly, we need to make sure NA values show up as 0s so that the data conforms to our target; we can do this with pmap_dbl and an anonymous function that sets numeric (variable 3) to 0 if data_type (variable 6) is equal to “blank” and leaves it as is otherwise. We will also set any data_type observations that are “blank” to “numeric” using map_chr, since these are the same observations corresponding to the NAs we just set to 0. Our cleaned-up data will thus look like:
dcf$estimates = data %>%
mutate(
heading = map_chr(
heading,
~str_replace(., " ", "_")
) %>%
unlist
) %>%
mutate(
date = map_chr(date, as.character)
) %>%
mutate(
numeric = pmap_dbl(
.,
~ifelse(..6=="blank", 0, ..3)
)
) %>%
mutate(
data_type = map_chr(
data_type,
~ifelse(.=="blank", "numeric", .)
)
)
dcf$estimates %>%
format.dt.f()
Now, we can transform the table into the tibble in our target by using unpivotr::pack and then spreading the values by heading with pivot_wider. Since pack leaves the values in list type, we will also have to map unlist over the data frame (map_df(unlist)) and then change the PERIOD observations back to date format. We will set the estimates element of our dcf list, dcf$estimates, to this tibble output.
dcf$estimates = dcf$estimates %>%
pack %>%
select(-row) %>%
pivot_wider(names_from = heading, values_from = value) %>%
select(-col, -character) %>%
map_df(unlist) %>%
mutate(
PERIOD = PERIOD %>% as.Date,
EBITDA = EBITDA %>% as.numeric,
TAX = TAX %>% as.numeric,
CAPEX = CAPEX %>% as.numeric,
OTHER = OTHER %>% as.numeric,
WORKING_CAPITAL = WORKING_CAPITAL %>% as.numeric
)
dcf$estimates
# A tibble: 6 × 6
PERIOD EBITDA TAX CAPEX OTHER WORKING_CAPITAL
<date> <dbl> <dbl> <dbl> <dbl> <dbl>
1 2017-09-30 0 0 0 0 0
2 2017-12-31 57.7 0 -2.13 -13.5 -0.473
3 2018-12-31 66.2 0 -45.2 -23.7 0.465
4 2019-12-31 74.2 0 -42.2 -24.1 0.567
5 2020-12-31 82.4 -8.37 -41.7 -24.6 0.566
6 2021-12-31 90.7 -10.9 -41.8 -25.2 0.577
Next we will fill out the next three elements of our dcf master list, which hold the values pertaining to the terminal value calculation. Since we will be working with the last observations of estimates, we will save n = length(rows) so that we can easily reference the last observation of the indicated variable. We will also pull the period observations from the estimates table to be used later on.
n=length(rows)
periods=pull(dcf$estimates, PERIOD)
We will save dcf$exit_ebitda as the value of the last observation of EBITDA in estimates.
# exit_ebitda
dcf$exit_ebitda=pull(dcf$estimates, EBITDA)[n] #%>% as.numeric
dcf$exit_ebitda
[1] 90.65458
We will calculate and save the value for dcf$ev_ebitda using the comp-specific calculations below: for COMP1 it is the mean EV/EBITDA of the public comparables less one turn, and for COMP2 it is total capitalization divided by EBITDA from the capitalization summary, again less one turn.
# ev_ebitda
if (xlsx_toggle==1){ # exit mult
dcf$ev_ebitda=-1+mean(valuation$public_comparables[[1]]$ev_ebitda)
}else{ # if (xlsx_toggle==2)
dcf$ev_ebitda=-1+capitalization$summary$cap_tables$capitalization$total_capitalization[1]/
capitalization$summary$cap_tables$capitalization$ebitda[1]
}
dcf$ev_ebitda
[1] 11.78333
Lastly, we will save dcf$terminal as a zoo object so that we can use it with the zoo package for calculations in later dcf list entries. We will assign it the value of exit_ebitda multiplied by ev_ebitda, indexed to the last observation of periods.
# terminal
dcf$terminal = zoo(dcf$exit_ebitda*dcf$ev_ebitda, periods[n]) #%>% as.numeric
dcf$terminal
2021-12-31
1068.213
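As a quick check on the value above: 90.65458 × 11.78333 ≈ 1068.21, which matches the printed zoo value indexed to 2021-12-31.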
With these values entered into our dcf list, we can safely remove unneeded variables as we move on from the DCF sheet's cells.
rm(n, bag, idx, rows, data, headings)
Our next step is to fill in the wacc entry, which will be a list that includes a table for our input data, a table for our assumptions, and a final value. We will also take this time to set our bag of cells to the wacc_cells corresponding to the comp we are dealing with.
wacc=vector("list", 3)
wacc=wacc %>%
purrr::set_names(c("table", "assumptions", "value"))
wacc_cells <- xlsx_obj %>% filter(sheet == "WACC")
bag=wacc_cells %>%
select(row, col, character, numeric, date, data_type)
With our wacc list set up, we will fill the three values before saving it to our master dcf list. The first step is pulling the input data table from the bag.
We will index the top left of the wacc table by using an inner join to find the cell containing “Ticker” as its character value.
#table
idx_tl = bag %>%
inner_join("Ticker" %>% as_tibble, by = c("character" = "value"))
idx_tl
# A tibble: 1 × 6
row col character numeric date data_type
<int> <int> <chr> <dbl> <dttm> <chr>
1 13 2 Ticker NA NA character
Next we will set the columns that the wacc table spans in each sheet.
if (xlsx_toggle==1){
cols=2:10
}else{ # if (xlsx_toggle==2)
cols=2:10
}
cols
[1] 2 3 4 5 6 7 8 9 10
Lastly, we will index the bottom right cell of the wacc table by finding the first cell in the same column as idx_tl that has an NA character value or a character value of “AVERAGE”, and then setting the column to the max value of our cols variable. We will then count back one row to land on the bottom right cell of our desired table and save this as idx_br.
idx_br = bag %>%
filter(row>idx_tl$row & col==idx_tl$col) %>%
select(row, col, character) %>%
filter(is.na(character) | character=="AVERAGE") %>%
slice(1) %>%
mutate(col = max(cols)) %>%
mutate(row = row - 1)
idx_br
# A tibble: 1 × 3
row col character
<dbl> <int> <chr>
1 19 10 <NA>
With the top left and bottom right located, we can filter the bag for only the cells within the table and begin to clean it up. After filtering, we will use unpivotr::behead to strip the northern headings of our table and add them as a variable to each data observation.
table = bag %>%
filter(
row>=idx_tl$row & row<=idx_br$row &
col>=idx_tl$col & col<=idx_br$col
) %>%
behead(direction = "N", name = "heading")
table %>%
format.dt.f()
Next, we will clean up the headings to conform to the target output. First we will map clean_names to format the values, and then we will use another mutate and map to rename variations of the target output's names to what we want them to be. To do this, we will use mutate and map_chr with recode on the heading column.
table = table %>%
mutate(
heading = map_chr(heading, clean_names)
) %>%
mutate(
heading = map_chr(
heading,
~recode(
.,
"x_1_year_levered_beta" = "years_levered_beta",
"x_5_year_levered_beta" = "years_levered_beta",
"debt_debt_equity" = "d_de",
"relevered_beta_target_cap_structure" = "relevered_beta_target_cap"
)
)
)
table %>%
format.dt.f()
Now that we have a tibble with all the values we need, we can transform it into a tibble of character and double columns. First we will use unpivotr::pack to gather the character, numeric, and date values into one variable column. Then we will use pivot_wider to spread the data out, with the data values listed under variables named after our heading column. Next, we map unlist over the data frame to change the list values left by pack back into their original double or character types.
wacc$table = table %>%
pack %>%
select(-date, -col) %>%
pivot_wider(names_from = heading, values_from = value) %>%
select(-row) %>%
map_df(unlist) %>%
mutate(
ticker=as.character(ticker),
company=as.character(company),
total_debt=as.numeric(total_debt),
market_cap=as.numeric(market_cap),
d_de=as.numeric(d_de),
years_levered_beta=as.numeric(years_levered_beta),
unlevered_beta=as.numeric(unlevered_beta),
relevered_beta_target_cap=as.numeric(relevered_beta_target_cap),
relevered_wacc=as.numeric(relevered_wacc)
) # %>% as.list()
wacc$table %>%
format.dt.f()
Next, we will extract and save the assumptions table as a list of values. We can find the top left of the table with an inner_join of the bag and a tibble with value = "Assumptions". Then we can set the column where the data will be located for each sheet; in this case, both sheets have the data in column 4. Lastly, we can isolate the bottom right cell by filtering for NA numeric values in column cols for rows below idx_tl, slicing the filtered selection for the second value (since the first will be the NA row between the “Assumptions” title and where the data starts), and then counting back two rows to arrive at the bottom right of the table.
idx_tl = bag %>%
inner_join(
"Assumptions" %>% as_tibble,
by = c("character" = "value")
)
if (xlsx_toggle==1){
cols=4
}else{ # if (xlsx_toggle==2)
cols=4
}
idx_br = bag %>%
filter(row>idx_tl$row & col==cols) %>%
select(row, col, numeric) %>%
filter(is.na(numeric)) %>%
slice(2) %>%
mutate(row = row - 2)
bind_rows(idx_tl, idx_br) %>%
select(row, col)
# A tibble: 2 × 2
row col
<dbl> <int>
1 3 2
2 11 4
With the corners identified, we can filter the bag for the data column and then assign new heading names to the values; in this case, we won’t be grabbing the headings on the sheet since they are variations of the headings we want to display.
wacc$assumptions = bag %>%
filter(col==cols & row>idx_tl$row & row<=idx_br$row) %>%
bind_cols(
tibble(
heading=c(
"valuation_date","marginal_tax_rate","treasury_yield",
"market_risk_premium","lt_target_cap_de","lt_target_cap_dc",
"relevered_cost_debt","other"
)
)
)
wacc$assumptions %>%
format.dt.f()
Now that we have the data we need in a tibble, we can transform it into a named list of values. First we will use a combination of mutate and map to change date observations to character type so that we can preserve the formatting when we use unpivotr::pack. Next we will use pack to combine the character, numeric, and date columns, and then pivot_wider to spread these values across the headings. Finally, we will map unlist across the data frame with map_df to unlist the values packed by pack, and then change the dates back to date type.
wacc$assumptions = wacc$assumptions %>%
mutate(date = map(date, as.character)) %>%
pack %>%
select(-character) %>%
pivot_wider(names_from=heading, values_from=value) %>%
select(-row, -col) %>%
map_df(unlist) %>%
mutate(
valuation_date = as.Date(valuation_date),
marginal_tax_rate=as.numeric(marginal_tax_rate),
treasury_yield=as.numeric(treasury_yield),
market_risk_premium=as.numeric(market_risk_premium),
lt_target_cap_de=as.numeric(lt_target_cap_de),
lt_target_cap_dc=as.numeric(lt_target_cap_dc),
relevered_cost_debt=as.numeric(relevered_cost_debt),
other=as.numeric(other)
)
wacc$assumptions %>%
format.dt.f()
Previously we extracted data directly from the sheets in COMP1 and COMP2; however, we can now convert some of those values into calculations that are computed locally in R instead of on the source workbook.
#---------------------------------------------------------------------------------------------------------
#change pulled table values to calculations
wacc$table$d_de=wacc$table$total_debt/(wacc$table$total_debt+wacc$table$market_cap)
#change pulled assumptions values to calculations
wacc$assumptions$lt_target_cap_dc=mean(wacc$table$d_de) # LT Target Cap Structure - Debt/Cap
wacc$assumptions$lt_target_cap_de=wacc$assumptions$lt_target_cap_dc/
(1-wacc$assumptions$lt_target_cap_dc) # LT Target Cap Structure - Debt/Equity
wacc$table$relevered_beta_target_cap=wacc$table$unlevered_beta*
(1+wacc$assumptions$lt_target_cap_de*(1-wacc$assumptions$marginal_tax_rate))
wacc$table$relevered_wacc=(
wacc$table$relevered_beta_target_cap*wacc$assumptions$market_risk_premium+
wacc$assumptions$treasury_yield
)*(
1-wacc$assumptions$lt_target_cap_dc
)+(
wacc$assumptions$relevered_cost_debt+
wacc$assumptions$other
)*(
1-wacc$assumptions$marginal_tax_rate
)*wacc$assumptions$lt_target_cap_dc
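Written out, the two calculations above relever the unlevered beta to the long-term target capital structure and then take the weighted average of the CAPM cost of equity and the after-tax cost of debt:
\[
\beta_{relevered} = \beta_{unlevered}\left(1 + \frac{D}{E}(1 - t)\right)
\]
\[
WACC = \left(\beta_{relevered} \cdot MRP + r_f\right)\left(1 - \frac{D}{C}\right) + \left(r_d + other\right)\left(1 - t\right)\frac{D}{C}
\]
where \(r_f\) is the treasury yield, \(MRP\) the market risk premium, \(t\) the marginal tax rate, \(r_d\) the relevered cost of debt, \(D/E\) the long-term target debt-to-equity ratio, and \(D/C\) the long-term target debt-to-capital ratio; these map directly onto the assumptions pulled above.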
Finally, we can use the assumptions and input values to calculate our wacc$value and then save wacc to the dcf list.
wacc$value=mean(wacc$table$relevered_wacc) # [1] 0.0723333333333333 [2] 0.0738571428571429
wacc$value
[1] 0.07240424
dcf$wacc=wacc
To calculate the FCF, we will use the estimates table from dcf; for each observation, this tibble has a variable column for each type of cash flow along with an associated period.
dcf$estimates
# A tibble: 6 × 6
PERIOD EBITDA TAX CAPEX OTHER WORKING_CAPITAL
<date> <dbl> <dbl> <dbl> <dbl> <dbl>
1 2017-09-30 0 0 0 0 0
2 2017-12-31 57.7 0 -2.13 -13.5 -0.473
3 2018-12-31 66.2 0 -45.2 -23.7 0.465
4 2019-12-31 74.2 0 -42.2 -24.1 0.567
5 2020-12-31 82.4 -8.37 -41.7 -24.6 0.566
6 2021-12-31 90.7 -10.9 -41.8 -25.2 0.577
To get the FCF value, we can deselect the PERIOD variable and then sum the remaining columns for each row. We can create a new variable for each observation named fcf by using mutate and setting fcf to the value of rowSums, which adds up the other columns in the row. To preserve the associated dates, we will pull the fcf variable and use set_names with periods, which we saved to the environment earlier. Lastly, we will transform it into a zoo series so that we can easily use it with the zoo package.
#fcf
fcf = dcf$estimates %>%
select(-PERIOD) %>%
mutate(fcf=rowSums(.)) %>%
select(fcf) %>%
pull() %>%
set_names(periods) %>%
zoo(periods)
fcf
2017-09-30 2017-12-31 2018-12-31 2019-12-31 2020-12-31 2021-12-31
0.000000 41.619019 -2.236361 8.572823 8.289942 13.344313
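As a sanity check on the second observation, summing that row of estimates gives 57.7 + 0 − 2.13 − 13.5 − 0.473 ≈ 41.6, in line with the printed 41.619019 (the small difference comes from the rounding of the displayed estimates).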
For COMP2, we need to make one more adjustment: we need to annualize the stub year, which will be the value in the second period. We will use difftime to find the number of days between the first and second dates, which feeds into our formula.
# stub adj.
if (stub_toggle==1){
k=2
fcf[k]=fcf[k]*(
difftime(
periods[k],
periods[k-1],
units="days"
) / 360) %>% as.numeric
fcf
rm(k)
}
fcf
2017-09-30 2017-12-31 2018-12-31 2019-12-31 2020-12-31 2021-12-31
0.000000 41.619019 -2.236361 8.572823 8.289942 13.344313
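For COMP1 the toggle is 0, so the series above is unchanged. For COMP2 the stub-period value is scaled by difftime(periods[2], periods[1], units = "days") / 360; for illustration using the COMP1 dates shown here, a stub running from 2017-09-30 to 2017-12-31 spans 92 days, so the factor would be 92 / 360 ≈ 0.256.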
Lastly, we will transform this into a vector of values of class zoo (type double), which preserves the associated time periods.
# fcf
dcf$fcf=as.vector(fcf) %>% zoo(periods)
dcf$fcf
2017-09-30 2017-12-31 2018-12-31 2019-12-31 2020-12-31 2021-12-31
0.000000 41.619019 -2.236361 8.572823 8.289942 13.344313
We can use our npv.z function to calculate the npv, plugging in values from our dcf list. We will pass it the wacc value from our dcf list, the zoosum of our fcf and terminal values for cf, and set freq to 365. We can then confirm this calculated value is correct by creating a new function, dcf.z, which will perform a dcf given the original ev_ebitda and wacc values, rather than just the summary values in our dcf list. If the two agree, we will have two calculations confirming the NPV value.
dcf$npv=npv.z(dcf$wacc$value, zoosum(dcf$fcf, dcf$terminal), freq=365)
dcf.z <- function(ev_ebitda, wacc){
periods = zoo::index(dcf$fcf)
n = length(periods)
terminal = zoo(dcf$exit_ebitda*ev_ebitda, periods[n])
npv_val = npv.z(wacc, zoosum(dcf$fcf, terminal), freq=365)
npv_val
}
all.equal(dcf$npv, dcf.z(dcf$ev_ebitda, dcf$wacc$value))
[1] TRUE
dcf$npv
[1] 856.075
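npv.z and zoosum are helper functions defined earlier in this project and are not reproduced here; zoosum is assumed to merge zoo series and sum values that fall on the same date, while npv.z is assumed to discount each observation by its actual/365 year fraction from the first index date. A minimal sketch of that assumed discounting logic (an illustration, not the project's definition) would look like:
# Illustrative only: the discounting we assume npv.z(rate, cf, freq) performs.
# Each zoo observation is discounted by its year fraction (days / freq)
# measured from the first index date, then the discounted values are summed.
npv_sketch <- function(rate, cf, freq = 365) {
  yrs <- as.numeric(zoo::index(cf) - zoo::index(cf)[1]) / freq # year fractions
  sum(as.numeric(cf) / (1 + rate)^yrs)                         # discounted sum
}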
The final components of our dcf list are to calculate total_net_debt and equity_contribution and then create a dcf tibble, which we will save as dcf_tbl.
First we will save some variables. For COMP1, the delta for ev_ebitda will be saved as 1 and the delta for wacc will be 0. For COMP2, we will save the delta for ev_ebitda as 0.5 and the delta for wacc as 0.
if (xlsx_toggle==1){
x1 <- 1 # delta ev_ebitda (exit mult)
x2 <- 0 # delta wacc
}else{ # if (xlsx_toggle==2)
x1 <- 0.5 # delta ev_ebitda (exit mult)
x2 <- 0 # delta wacc
}
Next we will calculate and save total_net_debt using input from our capitalization list created in a previous section. We will also pull equity_contribution from the capitalization list.
dcf$total_net_debt=invoke(sum, capitalization$current[2:9])
dcf$equity_contribution=capitalization$current$equity
Now we will create our dcf_tbl using inputs from our dcf list. In creating our tibble, we will use map2_dbl to take inputs from both the ev_ebitda and wacc vectors and make the calculation across parallel values. For LTV, IEV, and EC we will use the EV column along with inputs from our dcf list to calculate the values.
y1 <- c(
dcf$ev_ebitda-(x1*2),
dcf$ev_ebitda-(x1*1),
dcf$ev_ebitda,
dcf$ev_ebitda+(x1*1),
dcf$ev_ebitda+(x1*2)
)
y2 <- c(
dcf$wacc$value-(x2*2),
dcf$wacc$value-(x2*1),
dcf$wacc$value,
dcf$wacc$value+(x2*1),
dcf$wacc$value+(x2*2)
)
dcf$dcf_tbl <- tidyr::crossing(
ev_ebitda=y1,
wacc=y2
) %>%
mutate(
EV=map2_dbl(
.x=ev_ebitda,
.y=wacc,
~ dcf.z(.x, .y)
), # Enterprise Value
LTV=dcf$total_net_debt/EV, # Loan to Value
IEV=EV-dcf$total_net_debt, # Implied Equity Value,
EC=IEV/dcf$equity_contribution # Equity % of Cost
)
dcf$dcf_tbl
# A tibble: 5 × 6
ev_ebitda wacc EV LTV IEV EC
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 9.78 0.0724 721. 0.589 296. 0.570
2 10.8 0.0724 789. 0.539 364. 0.699
3 11.8 0.0724 856. 0.496 431. 0.829
4 12.8 0.0724 923. 0.460 498. 0.958
5 13.8 0.0724 991. 0.429 566. 1.09
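Note that because the delta for wacc (x2) is 0 for both comps, all five entries of y2 are identical; tidyr::crossing() de-duplicates its inputs, which is why the resulting table has 5 rows (one per ev_ebitda value) rather than 25. The EV in the middle row reproduces dcf$npv, since it uses our calculated ev_ebitda and wacc.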
Now that our dcf tibble is complete, we can save it to the valuation master list.
valuation=list_modify(valuation, dcf=dcf)
The final output will look like the following and match the target output:
# str(dcf$fcf)
# str(dcf$terminal)
attr(dcf$wacc$assumptions$valuation_date, "names") <- NULL
attr(dcf$estimates$PERIOD, "names") <- NULL
dcf %>%
listviewer::jsonedit()