Drought hazard assessment for Eastern Cape, South Africa

provided by: ZFL

Outline

Part A: Software installation and data access

Part B: Drought hazard assessment

Part A is a guide to install the required software and download the required data for this Recommended Practice.


Step 1: Install TIMESAT

  • Visit http://web.nateko.lu.se/timesat/timesat.asp and click on Download in the menu on the left. You will be asked to register before downloading the software.
  • Click on the link leading to the registration form, enter the required data and click Submit. You should soon receive an e-mail with your password.
  • Once you have a password, go back to the TIMESAT download page and click on the login link. Enter your e-mail address and password to log in.
  • Download TIMESAT 3.3. Most likely you will not have MATLAB installed on your machine, so download the standalone Windows 64-bit version (see image below).


  • Extract the downloaded zip file to an empty folder on your PC and remember the location. Inside the uncompressed TIMESAT folder, go to the subdirectory /compiled/Win64/ and start MCRInstaller.exe (see image below). Follow the instructions in the installer.


Step 2: Install R Studio

  • This practice uses an R script. If you have not yet installed R on your machine, visit https://cran.r-project.org/ and install the Windows 64-bit version. The RStudio interface is also used in this practice, so we recommend installing it as well (visit https://www.rstudio.com/).

Step 3: Log in to appEEARS

  • Go to https://lpdaacsvc.cr.usgs.gov/appeears/ and click Sign In in the upper right-hand corner. Log in with your Earthdata credentials. If you do not have an Earthdata account, click on Register and follow the instructions on the page to create one.


Step 4: Start a new request

  • On the top menu, select Extract and then Area Sample.

  • Click on Start a new request.
  • Enter a request name and the start and end date for your data. Keep in mind that the Vegetation Condition Index (VCI) works more reliably the more years of data you have. Because South Africa lies in the southern hemisphere, make sure that the start date is 1 July and the end date is 30 June. Also note that the first full month of MODIS data is March 2000. If you want to select your study area manually, you can do so using the map and its tools to the right (right map box). If you want to use specific country borders, please see step 4a (optional).

Step 4a: Download and prepare boundary shapefile with QGIS

  • Unzip the downloaded folder containing the boundary shapefile.
  • Load the shapefile into QGIS by dragging and dropping the .shp file.
  • Open the attribute table (right-click on the layer), sort the Province column by clicking on its header, and select all rows with EC (Eastern Cape) in that column.
  • Type dissolve in the Processing Toolbox in the right top corner.
  • Open Dissolve tool (double-click), set Input layer, check Selected features only, select Output file (.shp) below Dissolved and click Run.
  • Navigate to the folder where the shapefile is stored and select all related files, right-click, and use Send to and Compressed (zipped) folder.
  • Finally, drag-and-drop the prepared zip-file into the designated area on the appEEARS website.
  • Your area will now appear in the map to the right.
  • If you are notified that “The shape you selected was too complex to display and was automatically reduced […]”, just proceed with the following steps; it has no negative effect.

Step 5: Select MODIS data

  • Enter MOD13Q1 in the Search for a product box and click on MOD13Q1.006 (250 m vegetation indices). Then select 250m 16 days EVI and, optionally, 250m 16 days pixel reliability (select it if quality masking is wanted, which is recommended).
  • Select GeoTIFF as the File Format, click the box below (Search for a projection) and select Geographic under Projection. If you have done everything correctly, it should look like this:
  • You can now submit your request using the button Submit on the bottom of the page. Depending on the size of your study area and the selected time period, the data preparation may take some hours. You can check the progress using the Explore tab on the top of the page. You will also get an email notification as soon as your request is finished.

Step 6: Conclude request & download data file

  • Once your request is finished, click on the Explore tab, then on the download button of your request.
  • Scroll down, select All and save the download list. Remember the location of your download list.

Part B: Drought hazard assessment

Part B focuses on the agricultural drought hazard assessment, which uses a weighted linear combination of Enhanced Vegetation Index (EVI) phenology and the Vegetation Condition Index (VCI) (see the In Detail section of this practice for more information).
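As a preview of the weighting applied later in the script (Part B, Step 4b), the seasonally weighted VCI can be summarized by the following formula, where the three terms are the mean VCI from the start of season to the peak of season, around the peak of season, and from the peak to the end of season, respectively:

\[ VCI_{weighted} = \frac{5\,\overline{VCI}_A + 7\,\overline{VCI}_B + 1\,\overline{VCI}_C}{5 + 7 + 1} \]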


Step 1: Prepare R code

Download the R code for this practice at the beginning of this page (under Part B) and open it in RStudio. Before running it, you will need to add information to a few lines. These lines are displayed and explained below.

NOTE
  • To mark a line in R as a comment, put “#” at the start of the line.
  • Do not use backslashes (\) in data paths; use forward slashes (/) instead.
  • Folder names should not contain spaces, as these can cause problems when R reads the paths (see the short example below).
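A minimal illustration of these conventions (the folder and variable name are only examples):

#Example of a valid data path in R: forward slashes, no spaces in folder names
examplePath <- "D:/UN-SPIDER/Data/EC"
#The same path written with single backslashes (D:\UN-SPIDER\Data\EC) would not be read correctly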



  • Data path: Enter the path to a folder with enough free space to store your MODIS data and the resulting products. You will need at least 50 GB.
dataPath <- "D:/UN-SPIDER/WP1/Data/EC"



  • TIMESAT path: Enter the path to your TIMESAT 3.3 installation including the subfolder /compiled/Win64.
TSpath <-"D:/UN-SPIDER/WP1/TimeSat/timesat33/compiled/Win64"



  • Download list path: Enter the path to your appEEARS download-list (including “.txt”!). If you downloaded the data from appEEARS beforehand, enter the path to the data folder here and comment out the lines for the data download (all downloaded data needs to be in one folder).
downloadList <- "D:/UN-SPIDER/WP1/Data/EC/evi_pixelrel_2001-2017-download-list.txt"




  • Enter the hemisphere of your study area: 1 = north; 0 = south. For South Africa set hemisphere = 0.
hemisphere <- 0




  • Quality masking: set 1 to use MODIS pixel reliability data, set 0 to not use MODIS pixel reliability data. The use of quality masking is recommended.
cloudmask <- 1




  • Analysis period: Define the period of analysis with a vector of two components: the first is the starting year, the second the final year. If you want to process all available years, just comment out the line with a #. In this example all data are used, so the line is commented out.
#AnalysisPeriod <- c(2001, 2017)



  • Install all required R packages for this workflow (an optional variant that skips already-installed packages is shown after the code below). For more information about packages visit http://r-pkgs.had.co.nz/.
####Installing all needed packages
install.packages("curl")
install.packages("raster")
install.packages("rgdal")
install.packages("zoo")
install.packages("stringr")
install.packages("rts")



  • Load the packages with the library() function when required.
library(raster)
library(curl)
library(rgdal)
library(rts)
library(stringr)

Step 2: Run the first part of the R code

Once you have entered all necessary information, select the first part of the code (up to PART II - STARTING TIMESAT POST-PROCESSING) and click Run (or press CTRL + Enter). Depending on your study area, the calculations will take some time. The script will keep you updated in the console on the steps currently being performed, and a message will appear in the console once the first part is finished. Execution may sometimes seem unresponsive; as long as you can see the little stop sign above the console, the script is still running.

Do not close your R session once the first part has finished; it is needed later. The first part of the code is finished once DONE - Please proceed in TIMESAT. Do NOT close this R-Session is displayed in the console.

Step 2a: Download data
  • Download the EVI and pixel reliability data previously ordered from appEEARS. The download list (text file) contains all ordered granules, including ‘VI quality control’ data, which is ordered automatically but not needed. If you downloaded your data from appEEARS beforehand, comment out the eight lines below using #.
filesl <- scan(downloadList, what='list', sep='\n')
for (d in 1:length(filesl)){
   #"VI-Quality"-rasters are not downloaded, as "Pixel Reliability"-rasters are used for quality masking instead. This saves resources and provides more accurate results.
   if (grepl("_VI_Quality_",filesl[d]) == F){
     curl_download(url=filesl[d], destfile=paste0(dataPath,"/",basename(filesl[d])),quiet = T, handle = new_handle())
   }
   print(paste0('Downloading source data (Step 1 of 6): ',round(d / length(filesl) * 100, digits=2),'%'))
}



  • Create a temporary directory for intermediate calculations and raise the memory limit to improve R's computing capacity. Note that memory.limit() works on Windows only and no longer has an effect on recent R versions (4.2 and later).
#Creating a temp-directory, setting memory limit
dir.create(paste0(dataPath,'/temp/'))
rasterOptions(tmpdir=paste0(dataPath,'/temp/'))
memory.limit(50000)



  • List all downloaded tifs (EVI and pixel reliability) in the data folder and extract their basenames (e.g. MOD13Q1.006_250m_16_days_EVI_doy2001193_aid0001.tif).
rasterData <- list.files(path=dataPath, pattern='.tif$', recursive=F, ignore.case=T, full.names=T)
rasterFiles <- basename(rasterData)



  • Set the chunk size for processing (in pixels). Splitting the data into chunks allows large data sets to be processed without memory problems. Only reduce these numbers (e.g. to 500 or 250) if you run into memory errors (Error: cannot allocate vector of size […]).
chwidth <- 1000
chheight <- 1000

Step 2b: Mask EVI and prepare VCI calculation
  • List all EVI rasters (EVIrasterData) and their corresponding pixel reliability data (EVIqc).
EVIrasterData <- rasterData[grepl('EVI',basename(rasterData))]
EVIqc <- rasterData[grepl('pixel_reliability',basename(rasterData))]



  • Load an example image from the downloaded rasters to build the chunk shapefile, and automatically adjust the chunk size for small scenes (if needed).
exRST <- raster(EVIrasterData[1])
if (as.numeric(ncol(exRST)) <= chwidth || as.numeric(nrow(exRST)) <= chheight){
  chwidth <- ceiling(ncol(exRST)/2)
}



  • Extract all Days of Year (DOY) and YEARs from the filenames. MODIS vegetation data consists of 16-day composites. The filename contains the year and Julian day of acquisition (e.g. MOD13Q1.006_250m_16_days_EVI_doy2001193_aid0001.tif, YEARs=2001, DOY=193).
DOYs <- unique(substr(basename(EVIrasterData),38,40))
YEARs <- unique(substr(basename(EVIrasterData),34,37))



  • Create output folders for masked EVI and VCI.
if (!file.exists(paste0(dataPath,'/VCI')))
  dir.create(paste0(dataPath,'/VCI'))
if (!file.exists(paste0(dataPath,'/EVI')))
  dir.create(paste0(dataPath,'/EVI'))



  • Determine data chunks and create a shapefile for each chunk (see image above).
h        <- ceiling(ncol(exRST)/ceiling(ncol(exRST) / chwidth))
v        <- ceiling(nrow(exRST)/ceiling(nrow(exRST) / chheight))
split      <- aggregate(exRST,fact=c(h,v))
split[]    <- 1:ncell(split)
splitshp <- rasterToPolygons(split)
names(splitshp) <- 'shapes'
notiles <- ncell(split)



  • Define good values of pixel reliability data. Filter values 0 and 1 represent good and marginal data. Both are considered in this analysis. Cloudy pixels and pixels with snow or ice are removed.
goodValues <- c(0,1)
Rank key | Summary QA
-1 | Fill/no data
0 | Good data
1 | Marginal data
2 | Snow/ice
3 | Cloudy
  • Mask data with pixel reliability (good values = 0, 1) and split masked data into chunks.
for (d in 1:length(DOYs)){
  #Filtering all relevant data for this DOY
  vrasterData <- EVIrasterData[grepl(paste0(DOYs[d],'_aid'),EVIrasterData)]
  #..and their corresponding pixel reliability data
  vQC <- EVIqc[grepl(paste0(DOYs[d],'_aid'),EVIqc)]
  #Reading years of available data
  vYear <- substr(basename(vrasterData),34,37)
  for (y in 1:length(vYear)){
  viPRE <- raster(vrasterData[y])
    #Applying quality mask to each image (if masking was activated)
    if (cloudmask == 1){
      qc <- raster(vQC[y])
      viPRE <- overlay(viPRE, qc, fun = function(x, y) {
        x[!(y %in% goodValues)] <- NA
        return(x)
      })
    }
    #####Splitting (masked) data into Chunks 
    for(i in 1:ncell(split)){
      ex          <- extent(splitshp[splitshp$shapes==i,])
      exx         <- crop(viPRE,ex)
      writeRaster(exx,filename=paste0(dataPath,'/temp/',DOYs[d],'_',vYear[y],'_EVICHUNK',formatC(i, width=3, flag='0')),format='GTiff', overwrite=TRUE)
    }
  }
  #(Progress report)
  print(paste0('Data preparation (VCI) & masking (Step 2 of 6): ',round(d / length(DOYs)  * 100, digits=2),'%'))
}

Step 2c: Calculate VCI chunkwise and merge chunks
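For reference, the code below applies the standard VCI formula per pixel and per DOY, where the minimum and maximum EVI are taken over all years of the time series for that pixel and DOY:

\[ VCI = \frac{EVI - EVI_{min}}{EVI_{max} - EVI_{min}} \times 100 \]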
EVIchunks <- list.files(path=paste0(dataPath,'/temp/'), pattern='_EVICHUNK', recursive=F, ignore.case=T, full.names=T)
for (d in 1:length(DOYs)){
  for (t in 1:notiles){
    #Filtering relevant chunks for this specific date
    sEVIchunks <-EVIchunks[grepl(DOYs[d],substr(basename(EVIchunks),1,3))]
    sEVIchunks <- sEVIchunks[grepl(paste0('_EVICHUNK',formatC(t, width=3, flag='0')),sEVIchunks)]
    #Listing years of data available for each DOY
    #Checking which years are available for a chunk as not every year might be available for a day (in the southern hemisphere)
    vvYear <- substr(basename(sEVIchunks),5,8)
    
    if (length(sEVIchunks) > 0){
      sT <- stack(sEVIchunks)
      
      #Removing filler-values from EVI data
      sT[sT<(-2999)] <- NA
      
      #VCI formula application
      vimax <- stackApply(sT , rep(1, nlayers (sT)), max, na.rm=T)
      vimin <- stackApply(sT , rep(1, nlayers (sT)), min, na.rm=T)
      z <- vimax - vimin
      VCI <- ((sT -vimin)/z)*100
      
      #Writing VCI-chunks for each available year
      for (y in 1:length(vvYear)){
        writeRaster(VCI[[y]],filename=paste0(dataPath,'/temp/',DOYs[d],'_',vvYear[y],'_VCICHUNK',formatC(t, width=3, flag='0')),format='GTiff', overwrite=TRUE)
      }
    }
  }
  #(Progress report)
  print(paste0('VCI processing (Step 3 of 6): ',round(d / length(DOYs)  * 100, digits=2),'%'))
}



  • Merge the masked EVI chunks and VCI chunks and write them as rasters (*.tif).
VCIchunks <- list.files(path=paste0(dataPath,'/temp/'), pattern='_VCICHUNK', recursive=F, ignore.case=T, full.names=T)
#Looping through each available DOY_YEAR combination
for (y in 1:length(YEARs)){
  for (d in 1:length(DOYs)){
    #Listing relevant chunks (EVI, VCI) for this specific date (quality-masked EVI data is also exported for optional further use)
    sVCIchunks <- VCIchunks[grepl(paste0(DOYs[d],'_',YEARs[y],'_'),VCIchunks)]
    sEVIchunks <- EVIchunks[grepl(paste0(DOYs[d],'_',YEARs[y],'_'),EVIchunks)]
    #Creating raster-list of the EVI chunks
    if (length(sEVIchunks) > 0){
      if (length(sEVIchunks) > 1){
        sMos <- list()
        for (o in 1:length(sEVIchunks)){
          sMos <- append(sMos, raster(sEVIchunks[o]))
        }
        #Mosaicking the EVI chunks
        
        #"mean"-function is only set as a placeholder for the mosaicing-call to work, the raster-chunks are not overlapping
        sMos$fun = mean
        Mos <- do.call(mosaic, sMos)
        Mos <- Mos*0.0001
        #Output of mosaicked, scaled EVI image
        writeRaster(Mos,filename=paste0(dataPath,'/EVI/',DOYs[d],'_',YEARs[y],'_EVI'),format='GTiff', overwrite=TRUE)
      }else{
        #Fail-safe in case only one chunk is available
        exp <- raster(sEVIchunks[1])
        #Scaling EVI values from source-data
        exp <- exp*0.0001
        writeRaster(exp,filename=paste0(dataPath,'/EVI/',DOYs[d],'_',YEARs[y],'_EVI'),format='GTiff', overwrite=TRUE)
      }
    }
    #Creating raster-list of the VCI chunks
    if (length(sVCIchunks) > 0){
      if (length(sVCIchunks) > 1){
        sMos <- list()
        for (o in 1:length(sVCIchunks)){
          sMos <- append(sMos, raster(sVCIchunks[o]))
        }
        #Mosaicking the VCI chunks
        sMos$fun = mean
        Mos <- do.call(mosaic, sMos)
        
        #Output: final VCI mosaic
        writeRaster(Mos,filename=paste0(dataPath,'/VCI/',DOYs[d],'_',YEARs[y],'_VCI'),format='GTiff', overwrite=TRUE)
      }else{
        #Fail-safe in case only one chunk is available
        writeRaster(raster(sVCIchunks[1]),filename=paste0(dataPath,'/VCI/',DOYs[d],'_',YEARs[y],'_VCI'),format='GTiff', overwrite=TRUE)
      }
    }
    print(paste0('EVI output & VCI output (Step 4 of 6): ',round((d+(length(DOYs)*(y-1)))/(length(DOYs)*length(YEARs))*100, digits=2),'%')) 
  }
}



  • Remove temporary files (e.g. EVI and VCI chunks) to free space.
tmp <- list.files(path=paste0(dataPath,'/temp/'), recursive=F, ignore.case=T, full.names=T)
file.remove(tmp)

Step 2d: Prepare TIMESAT

TIMESAT helps extract seasonal parameters such as Start of Season (SoS), Peak of Season (PoS) and End of Season (EoS). The processing in TIMESAT is controlled by a number of settings, and depending on these (for example, the size of the moving window for Savitzky-Golay filtering) the results may differ considerably. For this Recommended Practice a rather conservative settings file (.set) is created in the script (see the settings example below). However, the settings can also be generated with the TIMESAT graphical user interface (GUI). For further information please refer to the TIMESAT33_SoftwareManual (http://web.nateko.lu.se/timesat/docs/TIMESAT33_SoftwareManual.pdf).

Row | Setting example | Short description | Explanation
1 | Version: 3.3 |  | Keeps track of the settings file version
2 | RCMDPract | Job name | Job name (no blanks), max. 100 characters. This determines the names of the TIMESAT output files
3 | 1 | Image/series mode (1/0) | 1 = image mode, 0 = ASCII time series
4 | 0 | Trend (1/0) | 1 = STL trend fitting activated; overrules the choice of fitting method (row 32)
5 | 0 | Use quality data (1/0) | 1 = use quality data, 0 = do not use quality data
6 | C:/UN-SPIDER/WP1/Data/EC/forTIMESAT/TIMESATlist.txt | Data file list/name | Name, followed by %, of the file list (for images) or data file name (for ASCII data)
7 | dummy | Quality file list/name | Name, followed by %, of the quality list (for images) or quality file name (for ASCII data)
8 | 3 | Image file type | 1 = 8-bit unsigned integer, 2 = 16-bit signed integer, 3 = 32-bit real
9 | 0 | Byte order (1/0) | 0 = little endian byte order, 1 = big endian byte order (for 16-bit integers)
10 | 2022 3581 | Image dimension | No. of rows in the image and no. of columns per row
11 | 1 2022 1 3581 | Processing window | Window to process (start row, end row, start column, end column)
12 | 16 23 | Years and points per year | No. of years and no. of points per year
13 | 0 1 | Valid data range | Lower and upper data values of the valid range; data outside this range are assigned weight 0
14 | -1e+06 1e+06 1 | Quality range 1 and weight | Lower and upper values for quality class 1 and assigned weight
15 | -1e+06 1e+06 1 | Quality range 2 and weight | Lower and upper values for quality class 2 and assigned weight
16 | -1e+06 1e+06 1 | Quality range 3 and weight | Lower and upper values for quality class 3 and assigned weight
17 | 0 | Amplitude cutoff value | Cutoff for low amplitude; series with an amplitude smaller than this value are not processed. 0 processes all data
18 | 0 | Debug (3/2/1/0) | Debug flag. 1-3 = print debug data, 0 = do not print debug data
19 | 1 1 0 | Output files (1/0 1/0 1/0) | Flags for output data (seasonality, fitted data and original data)
20 | 0 | Use land cover (1/0) | 1 = use land cover map, 0 = do not use land cover map
21 |  | Name of land cover file | Name, followed by %, of the land cover file
22 | 0 | Spike method (3/2/1/0) | 3 = weights from STL multiplied with original weights, 2 = weights from STL, 1 = method based on median filtering, 0 = no spike filtering
23 | 2 | Spike value | If spike method = 1, the spike value determines the degree of spike removal; a low value removes more spikes
24 | 0 | STL stiffness value | Parameter for STL trend stiffness. Varies between 1.0 and 10.0; default = 3.0
25 | 1 | No. of land cover classes | No. of land cover classes (if land cover data are used)
26 | ************ | Separator | Class-specific parameters follow after the separator
27 | 1 | Land cover code for class 1 | Land cover code for class 1
28 | 1 | Seasonality parameter (0-1) | A value near 1 attempts to fit one season per year, a value close to 0 attempts to fit two seasons
29 | 1 | No. of envelope iterations (3/2/1) | No. of iterations for upper-envelope adaptation; choosing 1 means no envelope adaptation
30 | 2 | Adaptation strength (1-10) | Strength of the envelope adaptation; 10 is the maximum strength
31 | 0 -99999 | Force to minimum (1/0) and value | 1 = points below the given minimum value are forced to that value, 0 = no forcing
32 | 2 | Fitting method (3/2/1) | 3 = double logistic, 2 = asymmetric Gauss, 1 = Savitzky-Golay. If STL trend fitting is activated (row 4), it overrides the fitting method
33 | 1 | Weight update method | Weight update method (not in use)
34 | 4 | Window for Savitzky-Golay | Half window for Savitzky-Golay filtering; a large window gives a high degree of smoothing
35 | 1 | Spatio-temporal smoothing factor |
36 | 2 | Spatio-temporal adaptation factor |
37 | 1 | Season start/end method (4/3/2/1) | Method for determining the start/end of a season from the fitted curve. 4 = STL trend: at the intersection with the trend line from STL. 3 = relative amplitude: where the curve intersects a proportion of the relative seasonal amplitude. 2 = absolute value: where the curve intersects an absolute value in data units. 1 = where the curve intersects a proportion of the seasonal amplitude
38 | 0.5 0.5 | Season start/end values | Values for determining the season start/end. If the start method is 1 or 3, the values must be between 0 and 1



  • Create output folders for TIMESAT: timesatPREP for the EVI data converted to a TIMESAT-compatible binary format, forTIMESAT for the TIMESAT image list and settings file, and timesat for the converted TIMESAT outputs.
#Creating output folders
if (!file.exists(paste0(dataPath,'/timesatPREP')))
  dir.create(paste0(dataPath,'/timesatPREP'))
if (!file.exists(paste0(dataPath,'/forTIMESAT')))
  dir.create(paste0(dataPath,'/forTIMESAT'))
if (!file.exists(paste0(dataPath,'/timesat')))
  dir.create(paste0(dataPath,'/timesat'))



  • List all EVI TIFFs and convert them to binary format, since TIMESAT can only read binary data.
#Listing all EVI TIFFs
allEVITIF <- list.files(paste0(dataPath,"/EVI/"), full.names=T, pattern=".tif$",recursive=F)
  #Change file-format to binary for TIMESAT
for(i in 1:length(allEVITIF)){
  tc <- raster(allEVITIF[i])
  #NAs can cause trouble with TIMESAT. Replacing with -999
  tc[is.na(tc)] <- -999
  #switch filename-convention for correct sorting
  flname <- paste0(substr(basename(allEVITIF[i]),5,8),"_",substr(basename(allEVITIF[i]),1,3),"_EVI-TS")
  writeRaster(tc, filename=paste0(dataPath,'/timesatPREP/',flname), format="IDRISI", overwrite=T)
  #(Progress report)
  print(paste0("Preparing data for TIMESAT... ", i , "/" , length(allEVITIF)))
}



  • List TIMESAT compatible files (binary files) and create a TIMESAT image list (.txt) with these files.
#List new, TIMESAT-compatible files
allEVITS <- list.files(paste0(dataPath,"/timesatPREP/"), full.names=T, pattern=".rst$",recursive=F)

#Create TIMESAT image-list
imglist <- c(length(allEVITS),allEVITS)
fileCr<-file( paste0(dataPath,'/forTIMESAT/TIMESATlist.txt'))
writeLines(imglist,fileCr)
close(fileCr)



  • Create TIMESAT setting file (.set) as described above and finish Part I.
#Recreate TIMESAT settings-file convention - line by line
setlines <- c()
#1: Headerline
setlines <- c(setlines, "Version: 3.3 ")
#2: Jobname
setlines <- c(setlines, "RCMDPract         %Job_name (no blanks) ")
setlines <- c(setlines, "1               %Image /series mode (1/0) ")
setlines <- c(setlines, "0               %Trend (1/0) ")
setlines <- c(setlines, "0               %Use mask data (1/0) ")
#6: Link to Filelist
setlines <- c(setlines, paste0(dataPath,"/forTIMESAT/TIMESATlist.txt         %Data file list/name "))
setlines <- c(setlines, paste0("dummy         %Mask file list/name "))
setlines <- c(setlines, paste0("3               %Image file type "))
setlines <- c(setlines, paste0("0               %Byte order (1/0) "))
#10: Image dimension (nrow, ncol)
setlines <- c(setlines, paste0(nrow(raster(allEVITS[1])), " ", ncol(raster(allEVITS[1])) ,"         %File dimension (nrow ncol) "))
#Processing extent (=whole image)
setlines <- c(setlines, paste0("1 ", nrow(raster(allEVITS[1])), " 1 ", ncol(raster(allEVITS[1])) ,"   %Processing window (start row stop row start col stop col) "))
#Number of years and images per year

#Correcting the number of seasons, as seasons start in July of one and end in June of the next year in the southern hemisphere
if (hemisphere == 0){
  ycorr <- 1
}else{
  ycorr <- 0
}

setlines <- c(setlines, paste0(length(YEARs)-ycorr," 23            %No. years and no. points per year "))
#Valid value-range (0 to 1 for EVI)
setlines <- c(setlines, paste0("0 1         %Valid data range (lower upper) "))
#More settings... (mostly not relevant for this practice but needed for the TIMESAT settings-file)
setlines <- c(setlines, paste0("-1e+06 1e+06 1         %Mask range 1 and weight "))
setlines <- c(setlines, paste0("-1e+06 1e+06 1         %Mask range 2 and weight "))
setlines <- c(setlines, paste0("-1e+06 1e+06 1         %Mask range 3 and weight "))
setlines <- c(setlines, paste0("0               %Amplitude cutoff value "))
setlines <- c(setlines, paste0("0               %Print functions and weights (1/0) "))
setlines <- c(setlines, paste0("1 1 0           %Output files (1/0 1/0 1/0) "))
setlines <- c(setlines, paste0("0               %Use land cover (1/0) "))
setlines <- c(setlines, paste0("   %Name of landcover file "))
setlines <- c(setlines, paste0("0               %Spike method "))
setlines <- c(setlines, paste0("2               %Spike value "))
setlines <- c(setlines, paste0("0               %Spatial half dimension "))
setlines <- c(setlines, paste0("1               %No. of landcover classes "))
setlines <- c(setlines, paste0("************ "))
setlines <- c(setlines, paste0("1               %Land cover code for class  1 "))
setlines <- c(setlines, paste0("1               %Season parameter "))
setlines <- c(setlines, paste0("1               %No. of envelope iterations (1-3) "))
setlines <- c(setlines, paste0("2               %Adaptation strength (1-10) "))
setlines <- c(setlines, paste0("0 -99999            %Force minimum (1/0) and value "))
setlines <- c(setlines, paste0("2               %Fitting method (1-4) "))
setlines <- c(setlines, paste0("1               %Weight update method "))
setlines <- c(setlines, paste0("4               %Window size for Sav-Gol. "))
setlines <- c(setlines, paste0("1               %Spatio-temporal smoothing factor. "))
setlines <- c(setlines, paste0("2               %Spatio-temporal adaptation factor. "))
setlines <- c(setlines, paste0("1               %Season start method "))
#Thresholds for season start/stop. Using standard-values.
TSthr <- 0.5
setlines <- c(setlines, paste0(TSthr, " ", TSthr,"         %Season start / stop values "))

#Writing TIMESAT settingsfile
fileCr<-file( paste0(dataPath,'/forTIMESAT/TIMESAT_input.set'))
writeLines(setlines,fileCr)
close(fileCr)

#Part I finished
if (length(allEVITS) > 0){
print("DONE - Please proceed in TIMESAT. Do NOT close this R-Session")
}

Step 3: Run TIMESAT

In this part the phenometrics are calculated with TIMESAT.

  • Navigate to your TIMESAT folder.
  • Go to the subfolder compiled/Win64.
  • Run TIMESAT.exe.
  • TIMESAT will ask you to select the path of your TIMESAT installation. Navigate to your timesat33 folder and confirm. The TIMESAT GUI should now be visible (see image below).
  • To make the process quicker, we want to use parallel processing. Click on TSF_process parallel (see image below).
  • TIMESAT will now ask you to navigate to a settings file. The settings file with standard parameters was created by the code. Navigate to the data path you entered in the R code. In the subfolder forTIMESAT a settings file called TIMESAT_input.set should be located; select it. (Note: if you want to create your own settings file, you can do so using the TIMESAT GUI; please refer to the TIMESAT manual. The corresponding image list can be found in the forTIMESAT subfolder of your data path.)
  • Once the settings file is selected, TIMESAT will ask you to enter the number of processors to use for the calculations. Do so according to your machine's specifications. We advise leaving one core free if you want to keep working on your PC during the process.
  • A command window should now pop up. Wait until it looks like the image below and type RCMDPract_script.bat to start the process and confirm with Enter.
  • TIMESAT should now be processing your data. Depending on your study area it might take some hours. The command window will notify you when the process is finished (see image below). Once TIMESAT is finished, close it and go back to your R session.

Step 4: Run the second part of the R code

Back in your R session, select the script from PART II to the end and run it. Once again, the script will inform you about its progress. When the script has finished, the output locations will be displayed - the outputs are written to subfolders of the path you entered in line 7 (dataPath).


Step 4a: Start TIMESAT post-processing
  • Define the function bintiff, which reads TIMESAT binary outputs. To use the TIMESAT outputs in this analysis, they need to be converted to TIFF format.
##Function to read TIMESAT outputs using TIMESAT-tools
bintiff <- function (rast,base_image){
  timesatbinobj <- file(rast, "rb")
  raw_vector <- readBin(timesatbinobj, n=file.info(rast)$size, what="int", signed=T, endian="little",size=2)
  close(timesatbinobj)
  data_matrix <- matrix(raw_vector,nrow(base_image), ncol(base_image),byrow=T)
  tiff_raster <- raster(data_matrix, template=base_image)
  return(tiff_raster)
}



  • Set the path to the TIMESAT output file (.tpa) and to the TIMESAT conversion tool (TSF_seas2img).
#Reading raw TIMESAT output (Variant B; Timesat-fortran tools)
path_tpaFile <-paste0(TSpath,"/RCMDPract_TS.tpa")
#Path to TIMESAT-seas2img-tool for converting
pTSpath <- gsub("compiled/Win64/","compiled/Win64", TSpath)
seasImg <- gsub("compiled/Win64","timesat_fortran/tools/TSF_seas2img", pTSpath)



  • TIMESAT does not calculate the phenology for the very first season, so a correction variable is needed (for more information, see the TIMESAT33_SoftwareManual, p. 22). This correction is subtracted from the number of years in the time series to determine the number of seasons.
#TIMESAT will not calculate phenology for the very first season, hence the following correction-variable
if (hemisphere == 1){
  corr <- 1
}else{
  corr <- 2
}

noSeasons  <- length(YEARs) - corr
setwd(paste0(dataPath,"/temp/"))
base_image = exRST



  • Loop through the seasons (assuming one season per year) and write rasters (.tif) for Start of Season (SoS), Peak of Season (PoS) and End of Season (EoS) per year. Hence, for each pixel, SoS, PoS and EoS are derived for every year.
#SeasonIndicators in .tpa file: 1 = SoS, 2 = EoS, Peaktime = 5

#Looping through season-indicators and writing output for each
for (d in 1:3){

SeasonIndicator <- d

#Peaktime is indicator #5
if (d == 3){
  SeasonIndicator <- 5
}

if(SeasonIndicator == 1){
  descrinfo <- "SoS"
}else if(SeasonIndicator == 2){
  descrinfo <- "EoS"
}else if(SeasonIndicator == 5){
  descrinfo <- "PoS"
}

#Loop through seasons, create output for each
for (s in 1:noSeasons){
startIX <- 1 + (23*(s))
endIX <- 23 + (23*(s))
print(paste0("Reading/converting ", descrinfo, " ", s,"/",noSeasons,"..."))

#Calling seas2img via command-prompt
#(Parameters: infile, seaspar, datemin, datemax, misseason, misspix, nameout, filetype)
system(paste0("\"",seasImg,"\" \"",path_tpaFile,"\" ",SeasonIndicator, " ", startIX," ",endIX, " 0 0 NSE",SeasonIndicator,"_",s," 2" ), wait=TRUE, show.output.on.console = F)

#Convert binary to tiff
r <- bintiff(paste0(dataPath,"/temp/NSE",SeasonIndicator,"_",s,"_season1"),base_image)%%23
writeRaster(r, filename = paste0(dataPath,"/timesat/",as.numeric(YEARs[s])+1,"_",descrinfo), format="GTiff", overwrite=T)
}

}
  • Stack SoS over the time series, do the same for PoS and EoS, and keep a stack of the original data for later. TIMESAT uses 0 instead of NA, so all pixels with the value 0 need to be set to NA.
#Parsing Tiffs of Start, Peak, End of Season
timesatRasterALL <- list.files(paste0(dataPath,"/timesat/"), pattern=".tif$", recursive=F, full.names=T)
timesatRasterSOS <- stack(timesatRasterALL[grepl("_SoS",timesatRasterALL)])
timesatRasterPOS <- stack(timesatRasterALL[grepl("_PoS",timesatRasterALL)])
timesatRasterEOS <- stack(timesatRasterALL[grepl("_EoS",timesatRasterALL)])

###Checking for integrity of seasonal metrics

#(Prepare a stack of original data for later)
allDetSeas = timesatRasterSOS

#Filter out TIMESAT-NAs
timesatRasterSOS[timesatRasterSOS == 0] <- NA
timesatRasterPOS[timesatRasterPOS == 0] <- NA
timesatRasterEOS[timesatRasterEOS == 0] <- NA



  • TIMESAT detects the phenology parameters based on the EVI signal but does not necessarily reflect typical crop phenology stages (e.g. grain-filling periods). Minimum and maximum time gaps are therefore defined to ensure plausible agricultural growth stages: the minimum time between SoS and PoS is 20 days, the minimum time between PoS and EoS is 10 days, and the overall season may last at most 240 days. These thresholds can be adjusted here in the script.
##Check for difference between Start, Peak, End to remove unreliable data (in days)
# thresh1:  minimum allowed difference between peak of the season (POS) and start of the season (SOS) in days
thresh1 <- 20
# thresh2:  maximum allowed difference between start of the season (SOS) and end of the season (EOS) or the equivalent length of the season (LOS) in days
thresh2 <- 240
# thresh3:  minimum allowed difference between end of the season (EOS) and peak of the season (POS) in days
thresh3 <- 10
  • Convert the thresholds (in days) to the image indexes (MODIS provides an image every 16 days).
#Convert to image-index
thresh1 <- thresh1/16
thresh2 <- thresh2/16
thresh3 <- thresh3/16



  • Detect all pixels for which less than 75% of all possible seasons were detected and prepare mask. This step filters out heavily mixed pixels.
#Detect all pixels for which less than 75% of all possible seasons were detected (filters out heavily mixed pixels with less vegetation)
allDetSeas[allDetSeas == 0] <- NA
allDetSeas <- allDetSeas/allDetSeas
nSeasons <- calc(allDetSeas, fun = sum, na.rm=T)
#Keep only pixels with seasons detected in at least 75% of all possible seasons (length(YEARs)-corr)
nSeasons[nSeasons < (0.75*(length(YEARs)-corr))] <- NA
#Binarize the mask
unrelpxmask <- nSeasons/nSeasons



  • Apply the checks (thresholds) and masks and write the new SoS, PoS and EoS rasters.
for(i in 1:nlayers(timesatRasterSOS)){

  diffT1 <- timesatRasterPOS[[i]] - timesatRasterSOS[[i]]
  diffT2 <- timesatRasterEOS[[i]] - timesatRasterSOS[[i]]
  diffT3 <- timesatRasterEOS[[i]] - timesatRasterPOS[[i]]
  
  #Preparing Masks
  diffT1[diffT1<thresh1] <- NA
  diffT2[diffT2>thresh2] <- NA
  diffT3[diffT3<thresh3] <- NA

  #Combining the masks to one binary mask
  diffALL <- (diffT1/diffT1) * (diffT2/diffT2) * (diffT3/diffT3) * unrelpxmask

  #Apply mask and write files
  timesatRasterSOS[[i]] <- timesatRasterSOS[[i]]*diffALL
  timesatRasterPOS[[i]] <- timesatRasterPOS[[i]]*diffALL
  timesatRasterEOS[[i]] <- timesatRasterEOS[[i]]*diffALL
  
  namestr <- formatC(i,digits=2,flag=0)

  writeRaster(timesatRasterSOS[[i]], filename=paste0(dataPath,'/timesat/SOS-',namestr), format="GTiff", overwrite = T )
  writeRaster(timesatRasterPOS[[i]], filename=paste0(dataPath,'/timesat/POS-',namestr), format="GTiff", overwrite = T )
  writeRaster(timesatRasterEOS[[i]], filename=paste0(dataPath,'/timesat/EOS-',namestr), format="GTiff", overwrite = T )
  
}



  • Calculate mean metrics from the masked SoS, PoS and EoS, i.e. for each pixel the average SoS, PoS and EoS over the time series.
#Calculation of mean Seasonal Metrics + output
meanSOS <- calc(timesatRasterSOS,fun=mean,na.rm=T)
meanPOS <- calc(timesatRasterPOS,fun=mean,na.rm=T)
meanEOS <- calc(timesatRasterEOS,fun=mean,na.rm=T)

writeRaster(meanSOS, filename=paste0(dataPath,'/timesat/SOS-MEAN'), format="GTiff", overwrite = T )
writeRaster(meanPOS, filename=paste0(dataPath,'/timesat/POS-MEAN'), format="GTiff", overwrite = T )
writeRaster(meanEOS, filename=paste0(dataPath,'/timesat/EOS-MEAN'), format="GTiff", overwrite = T )


Step 4b: Start VCI weighting block

The weighted VCI uses a weighted linear combination: the raw VCI values are split into three groups depending on their temporal position within the season and are given different weights when averaged over the season. Block A covers the time from SoS to PoS, Block B the PoS itself and Block C the time from PoS to EoS. Based on the mean seasonal metrics, the corresponding VCI images are selected per pixel and then averaged over the season.

  • List all VCI images.
allVCI <- list.files(paste0(dataPath,'/VCI/'), pattern=".tif$", recursive=F, full.names = T)



  • Load mean seasonality rasters.
SoS <- as.integer(raster(paste0(dataPath,"/timesat/SOS-MEAN.tif")))
PoS <- as.integer(raster(paste0(dataPath,"/timesat/POS-MEAN.tif")))
EoS <- as.integer(raster(paste0(dataPath,"/timesat/EOS-MEAN.tif")))



  • Create an output folder for the final output named weighted.
dir.create(paste0(dataPath,'/weighted'))



  • Loop through each year (or season) and check whether an analysis period has been defined.
for (sy in min(as.numeric(YEARs)):max(as.numeric(YEARs))){

#Using inclusive bounds so that the starting and final year of AnalysisPeriod are processed
if ((exists("AnalysisPeriod") && (sy >= AnalysisPeriod[1] && sy <= AnalysisPeriod[2])) || exists("AnalysisPeriod") == F){
  
   if (!(sy == max(as.numeric(YEARs)) && hemisphere == 0)){
    
    #Check for hemisphere, load VCI data for the season
    if (hemisphere == 1){
      lyVCI <- allVCI[grepl(sy,allVCI)]
      dscr <- sy
    }else{
      #Load correct data if southern hemisphere was selected (seasonality shift)
      lyVCI <- c(allVCI[grepl(sy,allVCI)],allVCI[grepl(sy+1,allVCI)])
      #Selecting the composites that belong to this July-June season
      if (sy == min(as.numeric(YEARs))){
        lyVCI <- lyVCI[1:23]
      }else{
        lyVCI <- lyVCI[13:35]
      }  
      #Adjusting output filenames if southern hemisphere was selected
      dscr <- paste0(sy,"_",sy+1)
    }



  • Stack all VCI for each year.
  yVCI <- stack(lyVCI)


Block A: Start of Season (SoS)

  • Extract all pixels that fall in Block A (SoS to PoS) and set all other pixels to NA.

BlockA <- stack()
  #Iterating through all 23 images of the season
  for (i in 1:23){
    #Read start-of-season dates
    s <- SoS
    #Read dates up to 16 days before peak of season
    e <- (PoS - 1)
    
    #Check if current iteration lies within the seasonal dates of each pixel
    s[s > i] <- NA
    e[e < i] <- NA
    
    #Creating a binary mask and apply to the VCI image
    m <- s*e
    v <- yVCI[[i]]*(m/m)

    #Add layer to stack
    BlockA <- stack(BlockA, v)
    
  }



  • Create mean of the time block BlockA (SoS to PoS).
BlockA <- calc(BlockA, fun=mean, na.rm=T)


Block B: Peak of Season (PoS)

  • Extract all pixels that fall in Block B (PoS) and set all other pixels to NA.

BlockB <- stack()
  for (i in 1:23){
    s <- PoS
    e <- PoS
    
    s[s > i] <- NA
    e[e < i] <- NA
    
    m <- s*e
    v <- yVCI[[i]]*(m/m)
            
    BlockB <- stack(BlockB, v)
  }



  • Create mean of the time block BlockB (PoS).
BlockB <- calc(BlockB, fun=mean, na.rm=T)


Block C: End of Season (EoS)

  • Extract all pixels that fall in Block C (PoS to EoS) and set all other pixels to NA.

BlockC <- stack()
  for (i in 1:23){
    s <- (PoS + 1)
    e <- EoS
    
    s[s > i] <- NA
    e[e < i] <- NA

    m <- s*e
    v <- yVCI[[i]]*(m/m)
       
    BlockC <- stack(BlockC, v)
  }



  • Create the mean of the time block BlockC (PoS to EoS).
 BlockC <- calc(BlockC, fun=mean, na.rm=T)



  • Stack the VCI blocks according to their weights and calculate the mean (an equivalent arithmetic form is shown after the code).
    #Weighting: 5-7-1 start-peak-end of season mean
    #Result: weighted VCI by season
    WeightedStack <- stack(BlockA,BlockA,BlockA,BlockA,BlockA,BlockB,BlockB,BlockB,BlockB,BlockB,BlockB,BlockB,BlockC)
    WeightedVCI <- calc(WeightedStack, fun=mean, na.rm=T)
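Because BlockA appears five times in the stack, BlockB seven times and BlockC once, the stack mean corresponds (wherever all three block means are valid) to an explicit 5-7-1 weighted average. The following line is only an illustration of that arithmetic; WeightedVCI_check is a hypothetical name and is not used elsewhere in the script:

    #Illustration only: explicit arithmetic form of the 5-7-1 weighting
    #(equivalent to the stack mean above wherever all three blocks are non-NA)
    WeightedVCI_check <- (5*BlockA + 7*BlockB + 1*BlockC) / 13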



  • The weighted indices are then classified into five classes resulting in the following values for the final product:
Hazard severity class | Value in final output | VCI (weighted over season)
D0 | 0 | > 40
D1 | 1 | 30-40
D2 | 2 | 20-30
D3 | 3 | 10-20
D4 | 4 | < 10
 #Output: five classes
    cWeightedVCI <- reclassify(WeightedVCI, c(-1, 10, 4,  10, 20, 3,  20, 30, 2, 30,40, 1,40,101,0))  
    writeRaster(cWeightedVCI,filename=paste0(dataPath,"/weighted/",dscr,"_FiveCl.tif"),format="GTiff",overwrite=T)



  • However, the computation of the SFDRR indicators requires three cropland classes (not affected, damaged and destroyed). Thus, D1-D3 are merged into one class (damaged), while D0 (not affected) and D4 (destroyed) remain unchanged.
Hazard severity class | Value in final output | VCI (weighted over season)
H0 | 0 | > 40
H1 | 1 | 10-40
H2 | 2 | < 10
 #Output: three classes
    dWeightedVCI <- reclassify(WeightedVCI, c(-1, 10, 2,  10, 40, 1,  40, 101, 0))  
    writeRaster(dWeightedVCI,filename=paste0(dataPath,"/weighted/",dscr,"_ThreeCl.tif"),format="GTiff",overwrite=T)
  • Delete all temporary files and finish.
 #Deleting all temp files
tmp <- list.files(path=paste0(dataPath,'/temp/'), recursive=F, ignore.case=T, full.names=T)
file.remove(tmp)

#Finished
print(paste0(Sys.time(),' +++++ Finished. You will find the data in the subfolders \'VCI\' and \'weighted\' at ', dataPath,"."))
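
Optionally, you can take a quick look at one of the classified outputs in the same R session. This is only a convenience sketch; it plots the first five-class raster found in the weighted folder:

#Optional quick check: plot the first five-class output in the 'weighted' folder
outFiles <- list.files(paste0(dataPath, "/weighted/"), pattern="_FiveCl.tif$", full.names=TRUE)
if (length(outFiles) > 0){
  plot(raster(outFiles[1]), main=basename(outFiles[1]))
}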