In today’s practical we will look at different conceptualizations of connectivity in ecology and explore two kinds of connectivity analysis: least-cost paths and network connectivity. The latter draws on graph theory to investigate how species movement records can be used to model networks of habitats and the relative importance of individual patches. This practical draws on ideas and techniques covered in Chapter 9 of Fletcher and Fortin (2019): https://link.springer.com/chapter/10.1007/978-3-030-01989-1_9

The data for the least-cost path analysis will be based on the same cohort of wolves that we looked at in a previous week. We will use the information we calculated on resource selection to suggest a theoretical least-cost path for these animals between two protected areas. We will then use the associated cost layer to suggest a least-cost corridor which, from a conservation point of view, is a more practical outcome and a common implementation. In the second half of the practical we will use movement data on the Snail Kite to model the importance of individual habitat patches (wetlands) in a network. All of the data that you will need to carry out this practical can be found in the “Connect” folder on the course dropbox here (right-click and open in a new tab/window): https://www.dropbox.com/sh/ldg85odo2l5ttbj/AABcuxS54O66LGE6mTKY0vW_a?dl=0

#Code to tell R to only use the terra installation of PROJ (library for performing projections)
plib<-Sys.getenv("PROJ_LIB")
prj<-system.file("proj",package="terra")[1]
Sys.setenv("PROJ_LIB"=prj)

setwd("C:/SE/Connect")

#Install libraries for today
install.packages(c("raster","terra","gdistance","igraph","fitdistrplus"))

library(raster)           #for raster covariate data
## Loading required package: sp
library(terra)            #for spatial data 
## terra 1.7.73
library(gdistance)        #for least-cost paths/circuit theory 
## Loading required package: igraph
## 
## Attaching package: 'igraph'
## The following objects are masked from 'package:terra':
## 
##     blocks, compare, similarity, union
## The following object is masked from 'package:raster':
## 
##     union
## The following objects are masked from 'package:stats':
## 
##     decompose, spectrum
## The following object is masked from 'package:base':
## 
##     union
## Loading required package: Matrix
## 
## Attaching package: 'gdistance'
## The following object is masked from 'package:igraph':
## 
##     normalize
library(igraph)           #for patch-based graphs
library(fitdistrplus)     #for fitting kernels
## Loading required package: MASS
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:terra':
## 
##     area
## The following objects are masked from 'package:raster':
## 
##     area, select
## Loading required package: survival

Exercise One - least-cost paths/corridors: read in the data

#Read in the data (adapt to read from your own working directory)

CA<-rast("C:/SE/Connect/AlbertaCanis.tif")

Read in the national park data using the terra function vect(). The only argument needed for this function is the path to the shapefile. You will need to change the file path to where you have saved the data.

Park<-vect("C:/SE/Connect/AlbertaProtectedArea.shp")


plot(Park)

#transform the Park layer to the same crs as the raster layer.
Park<-project(Park,crs(CA))

ParkCoords<-crds(Park)


#Crop the study area but leave a buffer of 10km

x.min <- min(ParkCoords[,1]) - 10000
x.max <- max(ParkCoords[,1]) + 10000
y.min <- min(ParkCoords[,2]) - 10000
y.max <- max(ParkCoords[,2]) + 10000

extent.new <- ext(x.min, x.max, y.min, y.max) #use terra's ext() so the extent works with terra's crop()


AlbertaCrop<-crop(CA,extent.new)

In order to carry out our least cost path analysis for these two areas of parkland, we need to specify two points between which to identify the path. For this we will identify the centroids of both areas:

#Get centroids of the protected areas

park_centroids <- centroids(Park)

plot(AlbertaCrop)

plot(Park,add=TRUE)


plot(park_centroids,add=TRUE)

Now we need to create a cost layer to theorise resistance in the landscape. We will do this based on selection ratios taken from a previous analysis. These selection ratios measure the degree to which one of our wolves would be attracted to or averse to each land-cover type (so think of this as a preference for different land-covers). We can therefore use these ratios as the basis of our path: the least-cost path is the one that follows the land-cover types with the highest selection ratios. However, these ratios are a measure of preference, which is the opposite of cost. We therefore calculate the inverse of each value by dividing 1 by the selection ratio. We do this in the code below - first we create a vector of 1s the same length as the number of land-cover types, then we divide this by another vector containing our selection ratios.
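For intuition, here is a quick sketch of that inversion using two hypothetical selection ratios (made-up numbers, not values from our wolf analysis): a strongly preferred land cover ends up with a low cost, and an avoided one with a high cost.

#hypothetical example: a preferred cover (ratio 3) and an avoided cover (ratio 0.1)
exampleRatios <- c(preferred=3, avoided=0.1)
1/exampleRatios #the preferred cover gets a low cost (~0.33), the avoided cover a high cost (10)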

#raster categories as factors
AlbertaCrop<-as.factor(AlbertaCrop)


#inspect levels (list of categories)
levels(AlbertaCrop)
## [[1]]
##    ID AlbertaCanis
## 1  21           21
## 2  25           25
## 3  31           31
## 4  41           41
## 5  42           42
## 6  45           45
## 7  46           46
## 8  51           51
## 9  62           62
## 10 71           71
## 11 73           73
## 12 74           74
## 13 91           91
#selection ratios for each land cover type calculated in advance

selRatio<-c(0.07432084, 0.28252848, 1.47999203, 0.86921276, 1.20631538, 3.13583267, 1.11143436, 0.01596055, 0.12475284, 1.05520108, 1.06501535,
1.79176454,NA) #add NA as this occurs as a value in our original raster but not in our selection ratios


vectorOfOnes <- rep(1,13)


Cost<-vectorOfOnes/selRatio # this gets the inverse of the selection ratio which I am asserting as a rough estimate of the "cost" associated with each land cover

Next we can create our cost raster by binding together the table of raster levels and the vector containing the cost values, and then using this as a reclassification table.

#cbind (bind by column) the raster values and the Cost values
RCmat<- cbind(levels(AlbertaCrop)[[1]],Cost)
RCmat$AlbertaCanis<-as.numeric(RCmat$AlbertaCanis)
#inspect
head(RCmat)


#reclassify
AlbertaCost<-classify(AlbertaCrop, RCmat[,2:3])#only need columns 2 and 3 here



plot(AlbertaCost)

Now we have a cost layer, but this is not yet enough to carry out our least-cost path analysis. For that we need a means of working out the average cost of moving to each cell given the cells in its neighbourhood. We create this with a transition matrix using the transition() function of the gdistance package. With this function, a transition object is created according to a user-defined function. A common approach is to set the transition value between each pair of cells to the mean of the two cell values being connected. To do this for the Moore neighbourhood (i.e. the focal cell and the 8 surrounding cells) we pass transitionFunction=mean and directions=8, which connects all cells to their 8 neighbours (here "mean" calculates the conductance between each pair of cells from the mean of their two values). Note that the transition matrix is effectively modelling conductance - which is the inverse of resistance (cost) - hence the "1/AlbertaCost" in the code below. You can also think of conductance as the probability of transition from one cell to another. Unfortunately, the gdistance package still insists on working with objects from the raster package, so we have to convert to this format either before or inside the function. We’ll do it inside.
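To make the idea concrete, here is a minimal by-hand sketch for two hypothetical neighbouring cells (this just mirrors the logic; it is not the internal gdistance code):

#two hypothetical neighbouring cells with costs 2 and 4
cost_i <- 2
cost_j <- 4
#conductance of each cell is 1/cost; the transition value between them is the mean of the two
mean(c(1/cost_i, 1/cost_j)) #0.375 - the conductance for moving between these two cells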

#create a conductance-based transition layer - conductance being 1/cost (i.e. the inverse of resistance)


land_cond <- transition(1/raster(AlbertaCost), transitionFunction=mean, 8)

class(land_cond)#check object information
## [1] "TransitionLayer"
## attr(,"package")
## [1] "gdistance"

Next we use the shortestPath() function to get our least-cost path between the park centroids, setting the output to a SpatialLines object (a class from the sp package).

#get least cost path between points 

#convert vect object to spatial points (gdistance package needs this)
centroidsSP<-SpatialPoints(crds(park_centroids))

Canis_lcp <- shortestPath(land_cond, centroidsSP[1,], centroidsSP[2,], output="SpatialLines")

#plot

plot(AlbertaCrop, axes=F, box=F)
plot(Park, add=T)
points(park_centroids, col="grey20")
lines(Canis_lcp, col="red", lw=3)

Now we have our least-cost path, but a more practical/flexible solution in conservation modelling and planning is to work with a corridor approach. One way is to create a cumulative cost layer originating from each point, sum the two, and then keep only cells below some cost threshold, typically defined as a quantile of the summed cost layer.

#get cumulative costs from each centroid
Centroid1.cost <- accCost(land_cond, centroidsSP[1,])
Centroid2.cost <- accCost(land_cond, centroidsSP[2,])




#plot
par(mfrow=c(1,2))
plot(Centroid1.cost)
plot(Centroid2.cost)

#get least-cost corridor - use the overlay() function of the raster package to add both rasters together. 
leastcost_corridor <- overlay(Centroid1.cost, Centroid2.cost, fun=function(x, y){return(x + y)})

#plot
plot(leastcost_corridor, legend=F, axes=F)
plot(Park, add=T)
points(park_centroids, col="grey30")

#get lower quantile (I have chosen the 5% quantile here, which seemed to work well, but this number isn't special!)
quantile5 <- quantile(leastcost_corridor, probs=0.05, na.rm=TRUE)

#make new truncated layer
leastcost_corridor5 <- leastcost_corridor
values(leastcost_corridor5) <- NA
leastcost_corridor5[leastcost_corridor < quantile5] <- 1 #truncate to identify corridor

#plot
plot(leastcost_corridor5, legend=F,axes=F)
points(park_centroids, col="grey30")
lines(Canis_lcp, col="red", lw=3)

If we want to set the corridor to a certain regular width, we could do so using the distance() function to set a fixed-width corridor. For example, we could create a mask of the existing corridor based on the original least-cost path and then use the distance() function to create a distance raster in which each NA cell gets the distance to the nearest cell on the least-cost path. We can then set any threshold we like to trim the width of the corridor (below I have arbitrarily used a 500 m threshold, giving a corridor roughly 1000 m wide, just to illustrate). Note that this approach may result in some less suitable (higher resistance) cells being included in the corridor.

#mask the corridor so that only cells along the least-cost path keep a value
line_corridor<-mask(leastcost_corridor5,Canis_lcp)

#calculate the distance from every NA cell to the nearest cell on the least-cost path
lineDist<-distance(line_corridor)

#keep cells within 500 m of the path (roughly a 1 km-wide corridor)
distanceCorridor<-lineDist<500

plot(distanceCorridor)

plot(Park,add=TRUE)

Exercise Two - network connectivity

In the next stage of the practical we will consider how graph theory can be used to model connectivity through a network of points, using the igraph package. Specifically, we will look at a data set quantifying the dispersal of Snail Kites (Rostrhamus sociabilis) during their breeding season. The data consist of two csv files - one of point locations of habitat patches and one quantifying the number of recorded movements between each pair of patches.

#site attributes
nodes <- read.csv("C:/SE/Connect/kite_nodes.csv", header=T)

#inspect
head(nodes)
##         Name XCoord  YCoord       area
## 1       BICY 495505 2849390 848.240387
## 2     DEVILS 497670 2925290   0.678051
## 3        ENP 531302 2840110 638.301684
## 4      ETOHO 472324 3130400  47.598391
## 5         GW 582883 2962720  20.226264
## 6 HarnsMarsh 431024 2948020   2.280975
#movement data of snail kites
A.obs <- read.csv("C:/SE/Connect/kite_movement.csv", header=T)
head(A.obs)
##   X V1 V2 V3  V4 V5 V6 V7 V8 V9 V10 V11 V12 V13 V14 V15 V16 V17 V18 V19 V20 V21
## 1 1 11  0  0   0  0  0  0  0  0   0   0   0   0   0   0   0   0   0   0   0   0
## 2 2  0  0  0   0  0  0  0  0  0   0   0   0   0   0   0   0   0   0   0   0   0
## 3 3  0  0 13   0  1  0  0  0  0   0   0   0   0   0   0   0   0   0   0   0   0
## 4 4  0  0  0 101  0  0  0  0  2  11   0   0   0   2   0   0  17   5   0   0   0
## 5 5  0  0  0   0 86  0  0  0  0   0   0   0   0   0   0   0   0   0   0   1   0
## 6 6  0  0  0   0  0  6  0  0  0   0   0   0   0   0   0   0   0   1   0   0   0
##   V22 V23 V24 V25 V26 V27 V28 V29
## 1   0   0   0   0   0   0   0   0
## 2   0   0   0   0   0   0   0   0
## 3   0   0   0   0   1   1   0   0
## 4   0   0  10   0   0   0   0   0
## 5   0   0   0   0   1   1   0   0
## 6   0   0   1   0   0   0   0   0
A.obs <- as.matrix(A.obs[,2:30])
#inspect
head(A.obs)
##      V1 V2 V3  V4 V5 V6 V7 V8 V9 V10 V11 V12 V13 V14 V15 V16 V17 V18 V19 V20
## [1,] 11  0  0   0  0  0  0  0  0   0   0   0   0   0   0   0   0   0   0   0
## [2,]  0  0  0   0  0  0  0  0  0   0   0   0   0   0   0   0   0   0   0   0
## [3,]  0  0 13   0  1  0  0  0  0   0   0   0   0   0   0   0   0   0   0   0
## [4,]  0  0  0 101  0  0  0  0  2  11   0   0   0   2   0   0  17   5   0   0
## [5,]  0  0  0   0 86  0  0  0  0   0   0   0   0   0   0   0   0   0   0   1
## [6,]  0  0  0   0  0  6  0  0  0   0   0   0   0   0   0   0   0   1   0   0
##      V21 V22 V23 V24 V25 V26 V27 V28 V29
## [1,]   0   0   0   0   0   0   0   0   0
## [2,]   0   0   0   0   0   0   0   0   0
## [3,]   0   0   0   0   0   1   1   0   0
## [4,]   0   0   0  10   0   0   0   0   0
## [5,]   0   0   0   0   0   1   1   0   0
## [6,]   0   0   0   1   0   0   0   0   0
#re-label
rownames(A.obs) <- 1:29
colnames(A.obs) <- 1:29
diag(A.obs) <- 0 # setting the diagonal of the matrix to zero ensures that circular connectivity is not considered (i.e. connections are not possible between a patch and itself) 

Next we create a distance matrix which will give the distance between each patch and all other patches.

#make distance matrix
coords <- cbind(nodes$XCoord, nodes$YCoord)
distmat <- pointDistance(coords, lonlat=F) #set lonlat to FALSE because our data is projected (i.e in metres)
distmat <- distmat/1000 #in km

View(distmat)

Now that we have a matrix of distances between points, we next want to calculate the number of movements made at each distance. Once we have this we can attempt to fit a kernel (a curve) that best describes the distribution of these movements (i.e. that best describes the number of movements as a function of distance).

First, we extract all the positive values in the matrix where a positive value means at least one journey between patches was made. We then subset the distmat object to include only these “positive cases”.

####################################
# Dispersal kernels
####################################

matLocations <- which(A.obs>0, arr.ind=T)#this returns the index (arr.ind) of all non-zero values in the matrix

#inspect
head(matLocations)
##    row col
## 18  18   1
## 27  27   3
## 10  10   4
## 12  12   4
## 13  13   4
## 14  14   4
#link movements with distances between sites
within_disp <- cbind(distmat[matLocations], A.obs[matLocations])

#inspect
head(within_disp)
##           [,1] [,2]
## [1,] 129.90269    1
## [2,]  22.32200    2
## [3,]  43.15229   13
## [4,]  24.75392    1
## [5,]  32.73088    1
## [6,]  44.85707    2
#repeat distances based on freq of movements for each distance (second column)
within_disp <- rep(within_disp[,1], within_disp[,2])

#plot
hist(within_disp)

#get summary stats for dispersal distances
mean.dist <- mean(within_disp)         

mean.dist
## [1] 72.32945
#fit kernels to the data
disp.lnorm <- fitdist(data=within_disp, distr="lnorm", method="mle")#log normal curve
disp.exp <- fitdist(data=within_disp, distr="exp", method="mle") #exponential curve


#AIC (Akaike's Information Criterion) check of model performance, where a lower value means the model performs better. The gofstat() function is used to calculate goodness of fit ("gof")
disp.AIC <- gofstat(list(disp.exp, disp.lnorm),
                    fitnames=c("exponential", "lognormal"))

#inspect
disp.AIC$aic
## exponential   lognormal 
##    4110.798    4111.573
#plot 

denscomp(disp.exp, addlegend=FALSE)

denscomp(disp.lnorm,addlegend = FALSE)

We can see that the exponential model (just) out-performs the log-normal model (has a slightly lower AIC statistic) so we’ll use that in some of our later analysis.

Next we will create graph objects (networks) based on three different definitions of adjacency. The first is based simply on whether or not the distance between two patches is below the mean distance computed above. The second is based on the exponential curve modelled in the previous step; we name this A.prob as the function essentially describes the probability of movement between patches. The third is a simple graph based only on the number of movements recorded between each pair of patches in the original A.obs matrix of Snail Kite movements.

Two adjacency matrices are created (A.mean and A.prob). With these matrices we generate the graphs (networks) using the graph_from_adjacency_matrix() function in the igraph package. The mean-distance matrix is binary (a pair of patches is either connected or not) so no weights are applied, but we make the probability graph weighted (weighted=T) because the strength of its connections is based on our derived probability estimates.

##############################
# Creating a patch-based graph
###############################

#Create a binary adjacency matrix with mean distance
A.mean <- matrix(0, nrow=nrow(A.obs), ncol=ncol(A.obs))
A.mean[distmat < mean.dist] <- 1
diag(A.mean) <- 0

View(A.mean)

#Create adjacency matrix with negative exponential function (probability)
A.prob <- matrix(0, nrow=nrow(A.obs), ncol=ncol(A.obs))

Next we apply a negative exponential model to build the probabilities of movement between all patches. A common way to do this is to take the inverse of the estimated mean dispersal distance as our alpha value (from the equation exp(-alpha * dij) shown in the lecture).

We then raise the natural exponent e (a useful constant, like pi, equal to roughly 2.71828) to the power of negative alpha multiplied by each inter-patch distance (dij), i.e. exp(-alpha * dij) for each pair of patches. You don’t need to know much more about this constant, just that, similar to the way pi is useful for geometric calculations, e is useful as the base for exponential functions and is widely used in ecological applications (e.g. to model rates of extinction and population growth).

We use negative alpha because raising e to a negative number gives a result between 0 and 1, which we can interpret as a probability. Raising e to a positive number would give values greater than 1 (potentially very large numbers), which would be difficult to interpret as a probability and would also increase with distance, which doesn’t make sense (increasing distance between patches should reduce the probability of dispersal success). Below we’ll look at the effect of changing the value of alpha on the distribution of dispersal probabilities.
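As a quick sanity check (a sketch reusing the mean.dist value we calculated above), with alpha set to 1/mean.dist the predicted probability of dispersing exactly the mean distance is exp(-1), roughly 0.37, and it keeps falling as distance grows:

#probability of dispersing the mean distance when alpha = 1/mean.dist
exp(-(1/mean.dist)*mean.dist) #exp(-1), roughly 0.37
#probability of dispersing twice the mean distance
exp(-(1/mean.dist)*(2*mean.dist)) #exp(-2), roughly 0.14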

# set alpha as inverse of mean dispersal distance 
alpha <- 1/mean.dist

A.prob<-2.71828^(-alpha*distmat) #raise e to the power of -alpha x dij

# we can do this more simply with the exp() function though:
A.prob <- exp(-alpha*distmat) #negative exponential
diag(A.prob) <- 0

#Now plot distances on the x axis and dispersal probability on the y. We remove all zeroes from the plot (by only selecting cases with values >0, using square braces [] to subset) as these represent the probability of movement from a patch to itself.

plot(distmat[A.prob>0],A.prob[A.prob>0])

We can play around with alpha to make the shape of the exponential kernel either steeper or closer to a straight line. If we assumed a larger dispersal capacity e.g. 175km we would have:

alpha175 <- 1/175 #175km dispersal capacity

A.prob175 <- exp(-alpha175*distmat) #negative exponential

diag(A.prob175) <- 0

#Note the more linear rate of decreasing movement probability:

plot(distmat[A.prob175>0],A.prob175[A.prob175>0]) #175km dispersal

Another popular option is to set alpha such that the probability of dispersal at the mean dispersal distance is roughly 5%. Since we want exp(alpha * mean.dist) to equal 0.05, solving for alpha gives alpha = log(0.05)/mean.dist. We could code that like so:

alpha05 <- log(0.05)/mean.dist

#Note that here log(0.05) returns a negative value so no need for the "-" before alpha this time
A.prob05 <- exp(alpha05*distmat) #negative exponential
diag(A.prob05) <- 0


#If we believe dispersal success should be modeled more optimistically we could increase the probability to, say, 25% by putting 0.25 inside the log function

alpha25 <- log(0.25)/mean.dist


A.prob25 <- exp(alpha25*distmat) #negative exponential
diag(A.prob25) <- 0


#Now note that probability falls much quicker with increasing distance for the 0.05 model:
plot(distmat[A.prob05>0],A.prob05[A.prob05>0]) # 0.05 prob

plot(distmat[A.prob25>0],A.prob25[A.prob25>0]) #0.25 prob

Okay - now let’s create the igraph objects from the adjacency matrices we just built

# first build a binary graph from the A.mean matrix
graph.Amean <- graph_from_adjacency_matrix(A.mean, mode="undirected", weighted=NULL)

#Then a weighted graph from the probability matrix 
graph.Aprob <- graph_from_adjacency_matrix(A.prob, mode="undirected", weighted=T)

We can plot these graphs in RStudio with the base R plot() command. Within the graph objects we have just created we have information on the properties of vertices (V) and edges (E). Of particular use for plotting is the weight of the edges (links in the network), which describes the strength of the relationship (connectedness) between pairs of vertices (nodes). We can plot the network in geographical space by passing the original “coords” object created above to the layout argument. We can also manipulate the edge weights with simple arithmetic to make differences between them more pronounced (as below):
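For example, raising the edge weights (probabilities, all below 1) to a power shrinks weak links much faster than strong ones, which is why edge.width is set to weight^10 below. A quick sketch with made-up weights:

#made-up edge weights to show the effect of raising probabilities to a power
c(0.9, 0.5, 0.2)^10 #strong links shrink a little, weak links effectively vanish from the plot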

#plot
plot(graph.Amean, layout=coords)

plot(graph.Aprob, layout=coords, edge.width=E(graph.Aprob)$weight^10, vertex.label=NA)

Next we will look at some different ways to conceptualize the significance of each patch in our network of habitats according to its relationship with other nodes in the graph. This idea of significance is often referred to as centrality and is, broadly speaking, a measure of how well connected a node is within the network.

Here we will consider degree and betweenness centrality. The first is a measure that considers connections that start or finish at a given node whereas the latter is a measure of connections in the network that pass through a given node. From an ecological point of view this idea of betweenness is important, particularly in relation to habitat fragmentation and the use of stepping stones by different species to access resources.
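To see the distinction, here is a minimal toy example (a hypothetical five-patch chain of stepping stones, nothing to do with the kite data): the end patches have the fewest direct links, while the middle patch carries the most through-traffic.

#hypothetical chain of five patches: 1-2-3-4-5
toyChain <- make_graph(c(1,2, 2,3, 3,4, 4,5), directed=FALSE)
degree(toyChain) #1 2 2 2 1 - end patches have the fewest direct connections
betweenness(toyChain) #0 3 4 3 0 - the central patch lies on the most shortest paths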

We can compare betweenness graphs based on the mean distance of the network (graph.Amean) and the probability network (graph.Aprob). Note that betweenness is calculated from shortest paths between nodes (i.e. lower edge values are treated as "closer"). Given that our graph.Aprob graph gives higher weights to stronger connections, we have to use the inverse of these weights when calculating betweenness below.

###############################################
#Patch connectivity
###############################################

Amean.degree <- degree(graph.Amean)

Amean.between <- betweenness(graph.Amean)

#betweenness centrality for weighted graph
Aprob.between <- betweenness(graph.Aprob, weights=1/E(graph.Aprob)$weight) #betweenness is calculated based on shortest paths (i.e. lower values are better) so we take the inverse of the weights from our graph.Aprob object.

We can plot the graphs as before but this time vary the size of the vertices (nodes) as a function of the centrality measure calculated in the previous step. Can you think why these different graphs produce different results (node sizes)?

#plot

plot(graph.Amean, layout=coords,vertex.size=Amean.degree, main= "Degree",
     vertex.label=NA, frame=T)

plot(graph.Amean, layout=coords,vertex.size=Amean.between^0.5, main= "Betweenness",
     vertex.label=NA, frame=T)

plot(graph.Aprob, layout=coords,vertex.size=Aprob.between^0.5, main= "BetweennessProb",
     vertex.label=NA, frame=T)

We can also consider how nodes cluster into groups (modules). Modularity is based on the idea that modules (clusters) are groups of nodes with more links among themselves than would be expected by chance. Here we use the Louvain method in the igraph package to compute modularity.
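For intuition, here is a minimal made-up example first (two triangles of patches joined by a single link, not the kite data); the Louvain algorithm should recover each triangle as its own module. We then apply the same approach to the kite network below.

#hypothetical graph: two triangles (1-2-3 and 4-5-6) joined by a single edge (3-4)
toyModules <- make_graph(c(1,2, 2,3, 1,3, 4,5, 5,6, 4,6, 3,4), directed=FALSE)
toyLouvain <- cluster_louvain(toyModules)
membership(toyLouvain) #patches 1-3 fall in one module, patches 4-6 in the other
modularity(toyLouvain)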

############################################
# meso-scale connectivity
############################################

#components/clusters
components(graph.Amean)$no #number of connected components in the mean-distance graph



#modularity
Amean.modularity <- cluster_louvain(graph.Amean)

#inspect
modularity(Amean.modularity)
## [1] 0.4559035
membership(Amean.modularity)
##  [1] 1 2 1 3 1 2 1 1 3 3 2 3 3 3 3 1 3 2 3 3 1 1 1 3 1 1 1 1 1

Plot the resulting clusters in all their glory!

#plot

plot(graph.Amean,layout=coords,vertex.size=8, main= "Modularity",
     vertex.label=NA, vertex.color=Amean.modularity$membership, frame=T)

Finally we can calculate the probability of connectivity metric (PC), which summarizes the overall connectivity of the entire network. The PC metric is calculated by first finding the shortest path between all pairs of nodes in the network (again transforming the graph.Aprob weights so that they represent cost). We then take the product of (i.e. multiply) the areas of each pair of patches and multiply these area products by the corresponding path probabilities from the first step. Summing these values and dividing by the square of the total study area gives us the PC metric. This is conceptually similar to the patch-importance measures above, only now patch area is incorporated into the metric.

With this we can then calculate the contribution of each patch to PC by removing each one sequentially and re-calculating the PC metric. The resulting difference in PC is therefore the contribution of each patch which we will call “dPC”.
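Before running this on the kite network, here is a tiny made-up check of the arithmetic with just two patches (hypothetical areas, dispersal probability and landscape area, not our data):

#hypothetical two-patch landscape: areas 10 and 20 km^2, dispersal probability 0.5 between them
toy.area <- c(10, 20)
toy.pstar <- matrix(c(1, 0.5,
                      0.5, 1), nrow=2) #probability of connectedness between patches (1 on the diagonal)
toy.AL <- 100 #total landscape area in km^2
sum(outer(toy.area, toy.area)*toy.pstar)/toy.AL^2 #PC for this toy landscape (0.07)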

#Probability of Connectivity, PC

AL <- 63990  #total area of study in km^2
area<-nodes$area
#calculate landscape-scale connectivity
pstar.mat <- distances(graph.Aprob, weights= -log(E(graph.Aprob)$weight)) #calculate all shortest paths between nodes (distances() replaces the now-deprecated shortest.paths())
pstar.mat <- exp(-pstar.mat)                                                   #back-transform to probabilities of connectedness
PCnum <- outer(area, area)*pstar.mat                                           #get product of all patch areas ij and multiply by probabilities above
PC <- sum(PCnum)/AL^2 #divide by total area of the study squared to get the PC metric





#function for calculating dPC - the contribution of each patch to PC. Here we simply remove each patch in turn using a for loop, repeat the calculation above, and store the difference between PC and each new value (minus patch "i") in an empty vector "dPC"

prob.connectivity <- function(prob.matrix, area,landarea){

  #dPC
  N <- nrow(prob.matrix) #each row is a patch
  dPC <- rep(NA, N) #empty vector to store results for each patch
  
  

  for (i in 1:N) {
    prob.matrix.i <- prob.matrix[-i,-i]
    area.i <-area[-i]
    pc.graph.i <- graph_from_adjacency_matrix(prob.matrix.i, mode="undirected", weighted=TRUE)
    pstar.mat.i <- distances(pc.graph.i, weights= -log(E(pc.graph.i)$weight))
    pstar.mat.i <- exp(-pstar.mat.i)
    PCmat.i <- outer(area.i, area.i)*pstar.mat.i
    PC.i <- sum(PCmat.i)/landarea^2
    dPC[i] <- (PC-PC.i)/PC*100
  }

    return(dPC)
}

Aprob.PC <- prob.connectivity(prob.matrix=A.prob, area=area, landarea=AL)


plot(graph.Aprob,layout=coords,vertex.size=Aprob.PC^0.8, main= "dPC",
    vertex.label.color="transparent", frame=T)

As you can see, area plays a pretty big role here in determining node importance and this can be helpful when working with habitat patches of varying sizes.

That is the end of this week’s practical. You might start to consider how these techniques could be incorporated into your A2 work if you are considering the connectivity scenario.