Introduction

The aim of this app is to help communicate the process of conducting management strategy evaluations (MSE) in a succinct and visual form.

The idea is to focus on several key aspects: accounting for uncertainty, model reliability, and the overall message of the simulation results.

Accounting for uncertainty

The first image represents the context of uncertainty out of which the various scenarios for testing management procedures are usually constructed. To illustrate the many kinds of unknown or uncertain features of the system that the models aim to represent, we used an infographic approach. The main image uses a physical space to represent a conceptual space with recognisable elements: fishing vessels, adult fish, juvenile fish, the environment, non-target species and incidental catch such as birds and turtles, and discards. Into this space the uncertainties identified through literature review, working groups, and expert elicitation are placed as signposted arrows, each generally pointing at the object representing the main source of that uncertainty. For example, ‘discards’ points at a fishing vessel, while ‘growth’ points at the fish.

The key below the figure classifies these sources of uncertainty, which are also colour-coded.

This image and the key are meant not just to illustrate graphically the many different kinds of uncertainty that are relevant when performing an MSE, but to highlight the difficulty of accounting for all of these sources.

In the current version of the MSE trials for swordfish, only five sources were modelled. These are highlighted by the solid colours in the image and by the bolder text in the key.

Model reliability

When decision makers are given scientific advice based on modelling, it is important to also communicate how reliable these results are, so that the decision makers can factor in scientific uncertainty. Not all models are equally reliable, and it would be advantageous for decision makers and scientists to be able to compare models at a glance. This app follows developments in the JAKFISH and MYFISH EU research projects, which aimed to create a visual key that qualitatively addresses the main concerns, such as model inputs in terms of data and knowledge. Additionally, we have now included a qualitative key for representing the results of model validation.

Model results

Modelling produces a large volume of statistical information, and MSEs are generally accompanied by technical reports that contain a lot of detail. In the app we wanted to summarise the simulation results, so that in a single image we could represent how a selection of management procedures perform on several criteria under different scenarios.

We normalised the results so that the best outcome on each criterion is represented by a full green bar. This makes it relatively intuitive to look down a column representing a management procedure and see which scenarios cause it to underperform, and on which criteria. Similarly, it is possible to tell at a glance which scenarios cause a whole range of management procedures to underperform: these are the rows with short bars, which are also highlighted in shades veering towards orange as performance deteriorates.

Processing data

In the remainder of this section, we describe the calculations needed to turn the raw MSE results into the graphs and tables shown in the app.

Several R packages are used:
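
Judging from the functions called later in this section, these likely include dplyr, reshape2, ggplot2, plotly, and lazyeval; the exact list used by the app may differ, but a minimal sketch of the setup would be:

## load the packages relied on by the processing and plotting code below
library(dplyr)     # filter/mutate/select pipelines and bind_rows()
library(reshape2)  # melt(), to reshape results into long format
library(ggplot2)   # the summary table of performance measures
library(plotly)    # the animated trajectory plots
library(lazyeval)  # f_eval(), used by accumulate_by()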

The data is downloaded from a server and stored in three dataframes, one per management procedure: pmb, pmd, and pmp.
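
As a hedged sketch only: assuming the simulation outputs are published as a single .RData file containing pmb, pmd, and pmp, the download step might look like this (the URL is a placeholder, not the actual server):

## placeholder URL - the real server location is not given here
results_url <- "https://example.org/swordfish-mse/simulation_results.RData"
## load the raw simulation results, creating the dataframes pmb, pmd and pmp
load(url(results_url))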

Calculating performance measures

The results of the simulations are processed to calculate how the various management procedures perform on four performance measures under different operating model (OM) scenarios.

The limit reference point (LRP) is chosen to be 20% of unfished SSB (spawning stock biomass).

  • The first measure ‘Kobe Green’ is the probability that in the future the stock will be in the green Kobe quadrant; that is, SSB is above SSB_MSY and F is below F_MSY.

  • The second measure ‘Catch’ is the probability that catch is above 80% of Catch_MSY.

  • The third measure ‘Safety’ is the probability that the stock is above the LRP (>20% SSB_Virgin).

  • The fourth measure is ‘Stability’, represented as 100% minus the coefficient of variation (CV) of catch; this makes all measures comparable, and ideally all four should be close to 100% (see the sketch below).
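
The processing code below implements these definitions in full. As a compact sketch, for a single management procedure under a single operating model (using a hypothetical data frame sim restricted to the projection years, with the same column names as in the code below), the four measures reduce to:

## sketch only: sim is assumed to hold the projection years for one MP x OM combination
kobe_green   <- 100 * mean(sim$ssb > sim$msy_ssb & sim$fbar < sim$msy_harvest)  # Kobe Green
target_catch <- 100 * mean(sim$catch > 0.8 * sim$msy_yield)                     # Catch
safety       <- 100 * mean(sim$ssb > 0.2 * sim$virgin_ssb)                      # Safety
stability    <- 100 - 100 * sd(sim$catch) / mean(sim$catch)                     # Stability = 100 - CV of catch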

###################################################################################
## Analyse performance for all mps under all OMs and store results

#!One management procedure at a time

#__MP1_____________________________________
mp=pmb
name_mp<-"PMB"


##adjust the dim of the results dataframe according to the management procedure selected
results<-data.frame(row.names = c(1:(length(unique(mp$scen))*length(unique(mp$btrig))*length(unique(mp$ftar)))))
results<-mutate(results, OM =NA, MP = NA, Kobe = NA, Catch = NA, LRP_Virgin = NA)

i<-0
for (om in unique(mp$scen))
{
        for (tar in unique(mp$ftar))  
        {
                for (trigger in unique(mp$btrig))
                {
                        i<-i+1
                        MP<-mp %>% filter (ftar == tar)  %>% filter (btrig == trigger) %>% filter (scen == om) %>% select (year,iter,ssb,msy_ssb,catch, msy_yield, fbar, msy_harvest, virgin_ssb) %>% mutate(rel_ssb= ssb/msy_ssb) %>% mutate (rel_f= fbar/msy_harvest)%>% mutate(Below_LRP_MSY=(ssb<0.5*msy_ssb)) %>% mutate(Kobe_Green=(rel_ssb > 1 & rel_f<1) ) %>% mutate(Target_Catch= catch >0.8*msy_yield) %>% mutate(Above_LRP_Virgin=(ssb>0.2*virgin_ssb)) %>% filter (year %in% c(2019:2038)) 
                        
                        
                        results$OM[i]<-paste("OM",om)
                        results$MP[i]<-paste("M: F_tar=",tar," B_trig=", trigger, sep="")
                        
                        #Probability of being in the green Kobe quadrant
                        results$Kobe[i]<-sum(MP$Kobe_Green)*100/length(MP$iter)
                        #Probability of being above 80% of MSY for catch
                        results$Catch[i]<-sum(MP$Target_Catch)*100/length(MP$iter)
                        #Probability of being above LRP 
                        results$LRP_Virgin[i]<-sum(MP$Above_LRP_Virgin)*100/length(MP$iter)
                        #CV of catch
                        results$Catch_Var[i]<-sd(MP$catch)*100/mean(MP$catch)        
                        
                        
                }      
        }   
        
}


results<-mutate(results, Catch_Var = 100-Catch_Var)
dat=melt(results,id=c("MP","OM"))

## rename the performance measures 
dat$variable=factor(dat$variable, levels=c("Kobe","Catch","LRP_Virgin","Catch_Var"),
                    labels=c("Kobe Green","Catch","Safety","Stability"))
## save for later
dat1=dat

#__MP2_____________________________________
mp=pmd
name_mp<-"PMD"

##adjust the dim of results accordingly

results<-data.frame(row.names = c(1:(length(unique(mp$scen))*length(unique(mp$gamma))*length(unique(mp$k1))*length(unique(mp$k2)))))

results<-mutate(results, OM =NA, MP = NA, Kobe = NA, Catch = NA, LRP_Virgin = NA)

i<-0
for (om in unique(mp$scen))
{
        for (x in unique(mp$gamma))  
        {
                for (y in unique(mp$k1))
                {
                        for (z in unique(mp$k2))
                        {
                        i<-i+1
                        MP<-mp %>% filter (gamma == x) %>% filter (k1 == y) %>% filter (k2 == z) %>% filter (scen == om) %>% select (year,iter,ssb,msy_ssb,catch, msy_yield, fbar, msy_harvest, virgin_ssb) %>% mutate(rel_ssb= ssb/msy_ssb) %>% mutate (rel_f= fbar/msy_harvest)%>% mutate(Below_LRP_MSY=(ssb<0.5*msy_ssb)) %>% mutate(Kobe_Green=(rel_ssb > 1 & rel_f<1)) %>% mutate(Target_Catch= catch >0.8*msy_yield) %>% mutate(Below_LRP_Virgin=(ssb<0.2*virgin_ssb)) %>% filter (year %in% c(2019:2038)) 
                        
                        
                        results$OM[i]<-paste("OM",om)
                        
                        results$MP[i]<-paste("D: G=",x," K1=", y, " K2=", z, sep="")
                        
                        #Probability of being in the green Kobe quadrant
                        results$Kobe[i]<-sum(MP$Kobe_Green)*100/length(MP$iter)
                        #Probability of being above 80% of MSY for catch
                        results$Catch[i]<-sum(MP$Target_Catch)*100/length(MP$iter)
                        #Probability of being above LRP 
                        results$LRP_Virgin[i]<-sum(1-MP$Below_LRP_Virgin)*100/length(MP$iter)
                        #CV of catch
                        results$Catch_Var[i]<-sd(MP$catch)*100/mean(MP$catch)        
                        
                        
                        }
                }
        }
}


results<-mutate(results, Catch_Var = 100-Catch_Var)
dat=melt(results,id=c("MP","OM"))

## rename the performance measures 
dat$variable=factor(dat$variable, levels=c("Kobe","Catch","LRP_Virgin","Catch_Var"),
                    labels=c("Kobe Green","Catch","Safety","Stability"))
## save for later
dat2=dat

#__MP3_____________________________________
mp=pmp
name_mp<-"PMP"

##adjust the dim of results accordingly

results<-data.frame(row.names = c(1:(length(unique(mp$scen))*length(unique(mp$k1))*length(unique(mp$k2)))))
results<-mutate(results, OM =NA, MP = NA, Kobe = NA, Catch = NA, LRP_Virgin = NA)

i<-0
for (om in unique(mp$scen))
{
    for (y in unique(mp$k1))
                {
                        for (z in unique(mp$k2))
                        {
                                i<-i+1
                                MP<-mp  %>% filter (k1 == y) %>% filter (k2 == z) %>% filter (scen == om) %>% select (year,iter,ssb,msy_ssb,catch, msy_yield, fbar, msy_harvest, virgin_ssb) %>% mutate(rel_ssb= ssb/msy_ssb) %>% mutate (rel_f= fbar/msy_harvest)%>% mutate(Below_LRP_MSY=(ssb<0.5*msy_ssb)) %>% mutate(Kobe_Green=(rel_ssb > 1 & rel_f<1)) %>% mutate(Target_Catch= catch >0.8*msy_yield) %>% mutate(Below_LRP_Virgin=(ssb<0.2*virgin_ssb)) %>% filter (year %in% c(2019:2038)) 
                                
                                
                                results$OM[i]<-paste("OM",om)
                                results$MP[i]<-paste("P: K1=", y, " K2=", z, sep="")
                                
                                #Probability of being in the green Kobe quadrant
                                results$Kobe[i]<-sum(MP$Kobe_Green)*100/length(MP$iter)
                                #Probability of being above 80% of MSY for catch
                                results$Catch[i]<-sum(MP$Target_Catch)*100/length(MP$iter)
                                #Probability of being above LRP 
                                results$LRP_Virgin[i]<-sum(1-MP$Below_LRP_Virgin)*100/length(MP$iter)
                                #CV of catch
                                results$Catch_Var[i]<-sd(MP$catch)*100/mean(MP$catch)        
                                
                                
                        }      
                }   
                
        }


results<-mutate(results, Catch_Var = 100-Catch_Var)
dat=melt(results,id=c("MP","OM"))

## rename the performance measures 
dat$variable=factor(dat$variable, levels=c("Kobe","Catch","LRP_Virgin","Catch_Var"),
                    labels=c("Kobe Green","Catch","Safety","Stability"))
##save for later
dat3=dat

#!Save processed results

save(dat1,dat2,dat3, file = "Swordfish_MSE_Vis/data/results.RData")

Visualise

Now that we have processed all of the results to record how each management procedure performs (out of 100) on each of the four performance measures under each of the operating models, we can visualise the results.

Below is the code to create the table found in the app, where the rows are operating models, the columns are management procedures, and the height of each bar represents performance:

##Make a table and save the image to be used by the app

##get rid of empty rows (combinations with no results)
dat2NA<-dat2 %>% filter(!is.na(value))

##combine all of the results
datX=rbind(dat1,dat2NA,dat3)

##relabel the scenarios to be more descriptive
datX$OM=factor(datX$OM, levels=c("OM 1","OM 2","OM 3","OM 4", "OM 5","OM 6","OM 7","OM 8","OM 9","OM 10"),
                    labels=c("Base Case","High h","Low h","Low M", "High M","Lorenzen","Sel dome","Sel flat","High rec var","Down weighted L comp"))

##save the image in the www folder used by the app
file_name<-"Swordfish_MSE_Vis/www/Table.png"
png(file=file_name,  width = 1200, height = 1000, units = "px")


p<-ggplot(aes(variable, value),data=datX)+geom_col(aes(fill = value))+
        facet_grid(OM~MP,scales="free_y", labeller=label_wrap_gen(10))+
        scale_fill_gradient2(low ="#ea5c0c", high = "#57b88f", mid = "#f4eecd", midpoint = 50, limits = c(0,100),name = "Performance")+
        ylab("Acceptability")+xlab("Performance Measure")+
        ggtitle("Management Procedures:") +
        labs(subtitle = "Four variations of three MPs: D = trend based, M = model based, and P = historical period based")+
        theme_light()+theme(axis.text.x=element_text(angle=45, hjust=1),plot.title = element_text(lineheight=.8, face="bold", colour="#204a60", size=14))
p
dev.off()
p

Visualise changes over time in a dynamic plot

The last tab of the app contains animated plots that show random trajectories of biomass and catches. Below is the relevant code.

Looking at individual trajectories is a good way to communicate dynamics over time. The historical part is the same for all of the simulations, but it is included in the plot to give a sense of how the projected volatility compares to the annual changes in catches observed in the past.

## helper function needed for the animated plot: for each year it duplicates
## all rows up to that year and tags them with a `frame` column, so that
## plotly can animate the trajectories accumulating over time
accumulate_by <- function(dat, var) {
        var <- lazyeval::f_eval(var, dat)
        lvls <- plotly:::getLevels(var)
        dats <- lapply(seq_along(lvls), function(x) {
                cbind(dat[var %in% lvls[seq(1, x)], ], frame = lvls[[x]])
        })
        dplyr::bind_rows(dats)
}

##  process results for a chosen management procedure and an operating model
pmb_1<-pmb%>% filter (ftar == 0.5)  %>% filter (btrig == 0.6) %>% filter (scen == 1) %>% select (year,iter,ssb,catch) %>% filter (year %in% c(1980:2038)) %>% mutate (Year = factor(year))%>% accumulate_by(~year)

## pick 5 random iterations (trajectories) to display
traj<-sample(unique(pmb_1$iter), 5)

## create a plot of these random trajectories 

mp<-filter(pmb_1, pmb_1$iter %in% traj)

p <- mp %>%
        plot_ly(
                x = ~year, 
                y = ~catch,
                split = ~iter,
                frame = ~frame, 
                type = 'scatter',
                mode = 'lines', 
                line = list(simplify = FALSE)
        ) %>% 
        layout(
                xaxis = list(
                        title = "Date",
                        zeroline = F
                ),
                yaxis = list(
                        title = "Catch",
                        zeroline = F
                ),
                showlegend = FALSE
        ) %>% 
        animation_opts(
                frame = 100, 
                transition = 0, 
                redraw = FALSE
        ) %>%
        animation_slider(
                hide = T
        ) %>%
        animation_button(
                x = 1, xanchor = "right", y = 0, yanchor = "bottom"
        )

p
#Save results to be used by the app in a similar manner
save(pmb_1, file = "Swordfish_MSE_Vis/data/mp.RData")