Floral damage as a function of visit frequency

Raspberry is a very important crop in the Comarca Andina and depends heavily on pollinators. For a raspberry fruit to be well formed it must be pollinated, so in general the presence of pollinators in the crop is assumed to be beneficial. However, too many pollinators may turn out to be harmful: an excess of visits can damage the pistils of the flowers and produce lower-quality fruit. But how many visits count as an excess? With this question in mind, Agustín Saez estimated the visitation rate in \(16\) crops in the Comarca Andina, and also collected flowers from different plants in each crop to count how many healthy and broken pistils they had. Let's look at the data and plot how the proportion of broken pistils changes as a function of visit frequency.

datos <- read.table("https://github.com/jmmorales/cursoMyD/raw/master/Data/frambuesas.txt",
                    header = TRUE)
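
Before plotting, it helps to take a quick look at the table; a minimal sketch (the column names Sitio, Planta, Frecuencia, Rotos and Total are the ones used throughout this section):

# quick look at the structure and first rows of the data
str(datos)
head(datos)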

Let's start with a plot:

sitio <- as.numeric(as.factor(datos$Sitio))

op <- par(cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, las = 1, bty = "n")

plot(1, type = "n" , 
     ylim = c(0, 1.2), 
     xlim = c(0, 3.5),
     xlab = "Frecuencia de Visitas", 
     ylab = "Prop. de pistilos rotos")

colores <- rainbow(16, alpha = 0.5)
for(i in 1:max(sitio)){
  idx <- which(sitio == i)
  points((datos$Frecuencia[idx]), 
         jitter(datos$Rotos[idx]/datos$Total[idx]), 
         pch = 21, 
         bg = colores[i], 
         cex = 1.5)
}

par(op)

The colors of the points correspond to the different sites. As you can see, there is a single estimate of visit frequency per site. Besides varying with visit frequency, each site may have some proportion of broken pistils due to other causes, such as frost. We are going to fit a model for pistil damage as a function of per-flower visit frequency in which each site is allowed to have a different intercept, that is, a different expected damage when there are zero visits. We will assume that the response to visit frequency is the same for all sites.

Exercise

  • Write down (ā€œpencil and paperā€) the model we just described.

Now let's fit the hierarchical model we just described using JAGS. To do that, we write the model in the BUGS language:

cat(file = "mod1.bug",
"
model {
  for(i in 1:n) {
    rotos[i] ~ dbin(p[i], N[i])
    logit(p[i]) <- b0[sitio[i]] + b1 * freq[i]
    }
    
  for( j in 1:S){
    b0[j] ~ dnorm(mub0, taub0)
  }

  b1 ~ dnorm(0, 1)
  mub0  ~ dnorm(0, 0.1)
  taub0 <- 1/(sdb0 * sdb0)
  sdb0 ~ dnorm(0, 1)T(0, )
  }
")

Next we define the data, the function for the initial values, and the parameters we want to save.

sitio <- as.numeric(as.factor(datos$Sitio))

data <- list(rotos = datos$Rotos, 
             freq = datos$Frecuencia, 
             N = datos$Total, 
             n = length(datos$Rotos), 
             S =  max(sitio), 
             sitio = sitio
             )

params <- c("b0", "b1", "mub0", "sdb0")
inits <- function() { 
        list (b0   = rnorm(max(sitio), 0, 1), 
              b1   = rnorm(1, 0, 1),  
              mub0 = rnorm(1, 0, 1), 
              sdb0 = runif(1, 0, 10))
  }

Now we define how many iterations we want to run, the thinning, the burn-in, and the number of chains:

ni <- 5000  # number of iterations
nt <- 2     # thinning: whether we discard ("thin") some values
nb <- 2500  # how many iterations we use as burn-in
nc <- 3     # and how many chains we run

Now we call JAGS:

library(jagsUI)
m1.sim <- jags(data, 
               inits, 
               params, 
               model.file = "mod1.bug", 
               n.chains = nc, 
               n.iter = ni, 
               n.burnin = nb, 
               n.thin = nt)
## 
## Processing function input....... 
## 
## Done. 
##  
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 477
##    Unobserved stochastic nodes: 19
##    Total graph size: 1987
## 
## Initializing model
## 
## Adaptive phase..... 
## Adaptive phase complete 
##  
## 
##  Burn-in phase, 2500 iterations x 3 chains 
##  
## 
## Sampling from joint posterior, 2500 iterations x 3 chains 
##  
## 
## Calculating statistics....... 
## 
## Done.

We check whether the chains have converged and whether the effective sample sizes of the posterior draws (n.eff) are reasonable.

print(m1.sim)
## JAGS output for model 'mod1.bug', generated by jagsUI.
## Estimates based on 3 chains of 5000 iterations,
## adaptation = 100 iterations (sufficient),
## burn-in = 2500 iterations and thin rate = 2,
## yielding 3750 total samples from the joint posterior. 
## MCMC ran for 0.126 minutes at time 2021-12-08 14:35:09.
## 
##              mean    sd     2.5%      50%    97.5% overlap0     f  Rhat n.eff
## b0[1]      -3.579 0.566   -4.653   -3.586   -2.421    FALSE 1.000 1.010   210
## b0[2]      -2.818 0.328   -3.463   -2.819   -2.192    FALSE 1.000 1.009   590
## b0[3]      -4.173 0.609   -5.554   -4.132   -3.107    FALSE 1.000 1.047    73
## b0[4]      -3.741 0.456   -4.617   -3.753   -2.833    FALSE 1.000 1.005   534
## b0[5]      -3.937 0.679   -5.226   -3.956   -2.587    FALSE 1.000 1.008   338
## b0[6]      -4.395 0.395   -5.167   -4.398   -3.616    FALSE 1.000 1.022   104
## b0[7]      -4.652 0.444   -5.574   -4.648   -3.792    FALSE 1.000 1.002  2330
## b0[8]      -2.968 0.295   -3.547   -2.971   -2.396    FALSE 1.000 1.023    97
## b0[9]      -2.990 0.368   -3.812   -2.964   -2.365    FALSE 1.000 1.051    87
## b0[10]     -4.459 0.422   -5.298   -4.454   -3.657    FALSE 1.000 1.087    31
## b0[11]     -1.529 0.790   -2.985   -1.549    0.062     TRUE 0.970 1.019   122
## b0[12]     -5.147 0.722   -6.516   -5.168   -3.714    FALSE 1.000 1.009   294
## b0[13]     -2.719 0.338   -3.434   -2.718   -2.096    FALSE 1.000 1.095    31
## b0[14]     -4.179 0.530   -5.513   -4.124   -3.276    FALSE 1.000 1.047   102
## b0[15]     -2.482 0.737   -3.848   -2.522   -0.965    FALSE 0.998 1.008   279
## b0[16]     -3.017 0.795   -4.499   -3.043   -1.378    FALSE 1.000 1.004   541
## b1          1.477 0.248    0.981    1.482    1.946    FALSE 1.000 1.007   394
## mub0       -3.520 0.478   -4.424   -3.533   -2.539    FALSE 1.000 1.013   223
## sdb0        1.068 0.216    0.729    1.036    1.571    FALSE 1.000 1.021   111
## deviance 1276.470 6.301 1266.418 1275.597 1291.282    FALSE 1.000 1.001  1295
## 
## Successful convergence based on Rhat values (all < 1.1). 
## Rhat is the potential scale reduction factor (at convergence, Rhat=1). 
## For each parameter, n.eff is a crude measure of effective sample size. 
## 
## overlap0 checks if 0 falls in the parameter's 95% credible interval.
## f is the proportion of the posterior with the same sign as the mean;
## i.e., our confidence that the parameter is positive or negative.
## 
## DIC info: (pD = var(deviance)/2) 
## pD = 19.8 and DIC = 1296.299 
## DIC is an estimate of expected predictive error (lower is better).

For some sites the n.eff values are a bit low, and we should run more iterations to estimate those posteriors better.
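
We can also check convergence visually; a minimal sketch, assuming the traceplot() function that jagsUI provides for fitted objects:

# traceplots for the slope and the hyperparameters; the chains should mix well
traceplot(m1.sim, parameters = c("b1", "mub0", "sdb0"))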

We can look at the posterior of the slope, that is, the rate of change in pistil damage as visit frequency changes:

op <- par(cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, las = 1, bty = "n")
plot(density(m1.sim$sims.list$b1), lwd = 2, main = "", xlab = "b1")

par(op)

There seems to be a clear effect of visit frequency on the number of broken pistils.
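
We can also summarize that posterior numerically; a minimal sketch using the saved draws:

# posterior mean, 95% credible interval, and probability that the slope is positive
mean(m1.sim$sims.list$b1)
quantile(m1.sim$sims.list$b1, probs = c(0.025, 0.975))
mean(m1.sim$sims.list$b1 > 0)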

TambiĆ©n podemos ver grĆ”ficamente las diferencias entre las posteriores de las ordenadas al origen para los distintos sitios b0 y la ordenada al origen para la ā€œpoblaciónā€ de sitios mub0.

op <- par(cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, las = 1, bty = "n")

plot(density(m1.sim$sims.list$mub0), 
     ylim = c(0,2), 
     xlim = c(-10,3), 
     lwd = 4, 
     main = "", 
     xlab = "Ordenada al origen")

for(i in 1:max(sitio)){
  lines(density(m1.sim$sims.list$b0[,i]), 
        col = "gray", 
        lwd = 2)
  }

par(op)

Now let's look at the estimated curves for each site in a single plot:

op <- par(cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, las = 1, bty = "n")

plot(1, type = "n", 
     ylim = c(0, 1.2), 
     xlim = c(0, 3.5), 
     xlab = "Frecuencia de Visitas", 
     ylab = "Proporción de pistilos rotos")

colores <- rainbow(16, alpha = 0.5)

xf <- seq(min(datos$Frecuencia), max(datos$Frecuencia), length.out = 100)

for(i in 1:max(sitio)){
        idx <- which(sitio == i)
        points((datos$Frecuencia[idx]), jitter(datos$Rotos[idx]/datos$Total[idx]), 
               pch = 21, 
               bg = colores[i])
       
        lines(xf, plogis(m1.sim$mean$b0[i] + m1.sim$mean$b1 * xf), 
              col = colores[i], 
              lwd = 2)
        
}
lines(xf, plogis(m1.sim$mean$mub0 + m1.sim$mean$b1 * xf), 
      lwd = 4)

par(op)
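
Instead of using only posterior means, we can propagate the full posterior to draw a credible band around the population-level curve. A minimal sketch that adds the band to the plot above (the objects curvas and banda are just illustrative names):

# one predicted curve per posterior draw, then 2.5% and 97.5% quantiles at each x
curvas <- sapply(xf, function(x) plogis(m1.sim$sims.list$mub0 + m1.sim$sims.list$b1 * x))
banda  <- apply(curvas, 2, quantile, probs = c(0.025, 0.975))
lines(xf, banda[1, ], lwd = 2, lty = 2)
lines(xf, banda[2, ], lwd = 2, lty = 2)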

But we also had plants within sites. We can account for the variability among plants by assuming that not all plants within a site are under the same conditions.

Exercise

  • Write down (ā€œpencil and paperā€) the model that allows for variability among plants.

We write the model in BUGS:

cat(file = "mod2.bug",
"    
model {
  for(i in 1: n) {
    rotos[i] ~ dbin(p[i], N[i])
    logit(p[i]) <- b0[planta[i]] + b1 * freq[i]
    }
    
  for( j in 1:P){
    b0[j] ~ dnorm(mub0[Ns[j]], taub0)
  }
    
  for(s in 1:S){
    mub0[s] ~ dnorm(mub0s, taub0s)
  }
    
  b1 ~ dnorm(0, 0.1)
    
  taub0 <- 1/(sdb0 * sdb0)
  sdb0 ~ dnorm(0, 1)T(0, )
  mub0s  ~ dnorm(0, 0.1)

  taub0s <- 1/(sdb0s * sdb0s)
  sdb0s ~ dnorm(0, 1)T(0, )
  }
")

Next we define the data, the function for the initial values, and the parameters we want to save.

sitio <- as.numeric(as.factor(datos$Sitio))

ns <- unique(sitio)
Ns <- aggregate(sitio, by = list(datos$Planta), FUN = "max")[, 2]

data <- list(rotos = datos$Rotos,
             freq = datos$Frecuencia,
             n = length(datos$Rotos),
             N = datos$Total,
             S = max(sitio),
             P = max(datos$Planta),
             planta = datos$Planta,
             Ns = Ns
             )

params <- c("b0", "b1", "mub0", "sdb0", "mub0s", "sdb0s")

inits <- function() { 
        list (b1   = rnorm(1, 0, 1),  
              mub0 = rnorm(max(sitio), 0, 1), 
              sdb0 = runif(1, 0, 10),
              mub0s = rnorm(1, 0, 1),
              sdb0s = runif(1, 0, 5))}

Now we define how many iterations we want to run, the thinning, the burn-in, and the number of chains:

ni <- 5000  # number of iterations
nt <- 1     # thinning: whether we discard ("thin") some values
nb <- 2500  # how many iterations we use as burn-in
nc <- 3     # and how many chains we run

Finally we call JAGS:

m2.sim <- jags(data, 
               inits, 
               params, 
               model.file = "mod2.bug", 
               n.chains = nc, 
               n.iter = ni, 
               n.burnin = nb, 
               n.thin = nt)
## 
## Processing function input....... 
## 
## Done. 
##  
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 477
##    Unobserved stochastic nodes: 206
##    Total graph size: 2707
## 
## Initializing model
## 
## Adaptive phase..... 
## Adaptive phase complete 
##  
## 
##  Burn-in phase, 2500 iterations x 3 chains 
##  
## 
## Sampling from joint posterior, 2500 iterations x 3 chains 
##  
## 
## Calculating statistics....... 
## 
## Done.

We check whether the chains have converged and whether the effective sample sizes of the posterior draws (n.eff) are reasonable.

print(m2.sim)
## JAGS output for model 'mod2.bug', generated by jagsUI.
## Estimates based on 3 chains of 5000 iterations,
## adaptation = 100 iterations (sufficient),
## burn-in = 2500 iterations and thin rate = 1,
## yielding 7500 total samples from the joint posterior. 
## MCMC ran for 0.193 minutes at time 2021-12-08 14:35:18.
## 
##             mean     sd    2.5%     50%   97.5% overlap0     f  Rhat n.eff
## b0[1]     -7.422  1.493 -10.303  -7.343  -4.530    FALSE 1.000 1.123    22
## b0[2]     -3.720  0.703  -5.141  -3.709  -2.379    FALSE 1.000 1.006   345
## b0[3]     -7.725  1.458 -10.306  -7.847  -4.759    FALSE 1.000 1.023   171
## b0[4]     -8.342  1.591 -11.211  -8.404  -5.362    FALSE 1.000 1.096    26
## b0[5]     -7.830  1.624 -11.556  -7.628  -5.187    FALSE 1.000 1.131    31
## b0[6]     -7.544  1.826 -11.424  -7.454  -4.145    FALSE 1.000 1.015  1720
## b0[7]     -7.776  1.729 -11.415  -7.626  -4.754    FALSE 1.000 1.024   130
## b0[8]     -7.494  1.663 -11.367  -7.351  -4.701    FALSE 1.000 1.112    26
## b0[9]     -7.581  1.865 -11.510  -7.569  -4.297    FALSE 1.000 1.037    63
## b0[10]    -8.094  1.776 -11.491  -7.935  -4.900    FALSE 1.000 1.187    15
## b0[11]    -7.655  1.657 -10.750  -7.618  -4.584    FALSE 1.000 1.005   891
## b0[12]    -7.636  1.422 -10.544  -7.570  -5.108    FALSE 1.000 1.036    76
## b0[13]    -6.844  1.331  -9.400  -6.907  -4.242    FALSE 1.000 1.096    26
## b0[14]    -7.300  1.826 -11.338  -7.079  -4.385    FALSE 1.000 1.038    63
## b0[15]    -7.177  1.647 -10.747  -6.899  -4.397    FALSE 1.000 1.027   106
## b0[16]    -7.469  1.562 -10.608  -7.472  -4.691    FALSE 1.000 1.065    37
## b0[17]    -7.267  1.613 -10.469  -7.249  -4.525    FALSE 1.000 1.178    16
## b0[18]    -7.198  1.706 -10.670  -7.149  -4.237    FALSE 1.000 1.101    25
## b0[19]    -3.783  0.709  -5.213  -3.756  -2.432    FALSE 1.000 1.013   198
## b0[20]    -4.348  0.941  -6.339  -4.305  -2.591    FALSE 1.000 1.019   159
## b0[21]    -4.646  0.918  -6.702  -4.568  -2.988    FALSE 1.000 1.014   250
## b0[22]    -7.234  1.666 -10.791  -7.177  -4.309    FALSE 1.000 1.032    73
## b0[23]    -6.834  1.598 -10.266  -6.718  -4.070    FALSE 1.000 1.078    31
## b0[24]    -7.086  1.964 -11.636  -6.859  -3.965    FALSE 1.000 1.065    63
## b0[25]    -7.040  1.788 -11.278  -6.733  -4.197    FALSE 1.000 1.053    82
## b0[26]    -7.392  1.763 -11.114  -7.283  -4.455    FALSE 1.000 1.548     7
## b0[27]    -5.564  0.912  -7.268  -5.589  -3.782    FALSE 1.000 1.003  2188
## b0[28]    -6.186  1.054  -8.271  -6.197  -4.202    FALSE 1.000 1.012   270
## b0[29]    -5.110  0.911  -6.916  -5.100  -3.356    FALSE 1.000 1.014   247
## b0[30]    -7.152  1.106  -9.284  -7.202  -4.928    FALSE 1.000 1.002   960
## b0[31]    -2.375  1.248  -4.706  -2.427   0.160     TRUE 0.968 1.025    93
## b0[32]    -3.557  0.897  -5.295  -3.552  -1.808    FALSE 1.000 1.029    86
## b0[33]    -2.189  1.182  -4.383  -2.266   0.252     TRUE 0.961 1.015   166
## b0[34]    -7.908  1.568 -11.653  -7.709  -5.281    FALSE 1.000 1.081   138
## b0[35]    -4.185  0.913  -5.920  -4.186  -2.268    FALSE 1.000 1.021   130
## b0[36]    -7.748  1.347 -10.494  -7.741  -5.110    FALSE 1.000 1.026   130
## b0[37]    -8.347  1.359 -11.197  -8.247  -5.926    FALSE 1.000 1.108    23
## b0[38]    -7.541  1.618 -10.853  -7.382  -4.625    FALSE 1.000 1.118    24
## b0[39]    -7.323  1.445 -10.263  -7.286  -4.654    FALSE 1.000 1.219    14
## b0[40]    -6.715  2.171 -11.833  -6.398  -3.281    FALSE 1.000 1.766     6
## b0[41]    -3.840  1.371  -6.824  -3.687  -1.534    FALSE 1.000 1.004   737
## b0[42]    -7.059  1.853 -10.394  -7.155  -3.719    FALSE 1.000 1.030    85
## b0[43]    -6.029  1.772  -9.746  -5.985  -2.698    FALSE 1.000 1.015   302
## b0[44]    -4.088  1.105  -6.389  -4.004  -2.092    FALSE 1.000 1.001  7500
## b0[45]    -6.170  1.664  -9.255  -6.190  -2.885    FALSE 1.000 1.017   195
## b0[46]    -6.396  1.840 -10.293  -6.245  -2.920    FALSE 1.000 1.091    30
## b0[47]    -6.215  1.053  -8.251  -6.227  -4.069    FALSE 1.000 1.007   609
## b0[48]   -10.053  1.553 -13.228  -9.966  -7.185    FALSE 1.000 1.103    24
## b0[49]    -7.642  1.083  -9.824  -7.607  -5.540    FALSE 1.000 1.008   277
## b0[50]    -5.946  1.063  -8.105  -5.945  -3.861    FALSE 1.000 1.014   359
## b0[51]    -8.060  1.202 -10.304  -8.101  -5.712    FALSE 1.000 1.031    70
## b0[52]    -8.747  1.242 -11.085  -8.797  -6.227    FALSE 1.000 1.019   216
## b0[53]    -7.705  1.120  -9.939  -7.707  -5.570    FALSE 1.000 1.026   101
## b0[54]    -7.320  1.105  -9.463  -7.315  -5.125    FALSE 1.000 1.030    73
## b0[55]    -6.042  1.233  -8.446  -6.008  -3.616    FALSE 1.000 1.003  6090
## b0[56]    -5.803  1.701  -9.251  -5.632  -2.870    FALSE 1.000 1.060    62
## b0[57]    -5.182  1.735  -8.823  -5.167  -2.026    FALSE 1.000 1.047    57
## b0[58]    -6.119  1.832 -10.362  -5.798  -3.346    FALSE 1.000 1.067    89
## b0[59]    -5.663  1.479  -8.538  -5.628  -2.850    FALSE 1.000 1.086    31
## b0[60]    -5.512  1.839  -9.194  -5.529  -2.257    FALSE 1.000 1.008   550
## b0[61]    -3.848  1.192  -6.644  -3.832  -1.561    FALSE 1.000 1.111    30
## b0[62]    -5.994  1.478  -9.078  -5.974  -3.391    FALSE 1.000 1.027    84
## b0[63]    -5.494  1.863 -10.527  -5.290  -2.505    FALSE 1.000 1.096   103
## b0[64]    -4.262  1.319  -7.196  -4.187  -2.003    FALSE 1.000 1.143    19
## b0[65]    -5.674  1.475  -8.648  -5.583  -3.069    FALSE 1.000 1.047    81
## b0[66]    -6.105  1.850  -9.744  -6.142  -2.571    FALSE 1.000 1.145    19
## b0[67]    -5.694  1.897  -9.725  -5.427  -2.832    FALSE 1.000 1.020   143
## b0[68]    -6.053  1.561  -9.312  -6.070  -2.881    FALSE 1.000 1.043   105
## b0[69]    -5.517  1.461  -8.659  -5.402  -2.937    FALSE 1.000 1.014   270
## b0[70]    -2.438  0.667  -3.780  -2.441  -1.176    FALSE 1.000 1.007   279
## b0[71]    -4.493  1.032  -6.650  -4.408  -2.621    FALSE 1.000 1.007  4088
## b0[72]    -2.200  0.929  -3.988  -2.189  -0.362    FALSE 0.989 1.001  7500
## b0[73]    -2.665  0.647  -3.971  -2.643  -1.444    FALSE 1.000 1.008   459
## b0[74]    -4.509  0.977  -6.587  -4.473  -2.806    FALSE 1.000 1.045    58
## b0[75]    -6.034  1.365  -9.041  -5.939  -3.396    FALSE 1.000 1.065    51
## b0[76]    -2.272  0.934  -4.078  -2.297  -0.385    FALSE 0.992 1.006   341
## b0[77]    -5.595  1.506  -8.862  -5.472  -3.002    FALSE 1.000 1.050    48
## b0[78]    -6.283  1.506  -9.572  -6.188  -3.562    FALSE 1.000 1.028    82
## b0[79]    -5.966  1.436  -9.271  -5.748  -3.633    FALSE 1.000 1.014   417
## b0[80]    -3.273  0.634  -4.502  -3.274  -2.024    FALSE 1.000 1.007   973
## b0[81]    -5.853  1.728  -9.828  -5.670  -3.036    FALSE 1.000 1.043   573
## b0[82]    -8.613  1.221 -11.044  -8.590  -6.354    FALSE 1.000 1.017   131
## b0[83]    -5.143  1.028  -7.183  -5.136  -3.201    FALSE 1.000 1.007   365
## b0[84]    -4.628  1.170  -6.792  -4.637  -2.329    FALSE 1.000 1.008   268
## b0[85]    -2.876  1.521  -5.786  -2.918   0.396     TRUE 0.963 1.056    45
## b0[86]    -6.222  0.997  -8.235  -6.224  -4.320    FALSE 1.000 1.007   397
## b0[87]    -5.493  1.014  -7.552  -5.492  -3.511    FALSE 1.000 1.016   152
## b0[88]    -8.635  1.304 -11.463  -8.539  -6.320    FALSE 1.000 1.030   139
## b0[89]    -5.525  1.066  -7.657  -5.515  -3.387    FALSE 1.000 1.012   220
## b0[90]    -3.438  1.281  -5.811  -3.490  -0.669    FALSE 0.988 1.007   314
## b0[91]    -5.694  1.034  -7.873  -5.677  -3.763    FALSE 1.000 1.008   539
## b0[92]    -5.416  1.081  -7.593  -5.416  -3.319    FALSE 1.000 1.013   211
## b0[93]    -8.815  1.337 -11.764  -8.743  -6.313    FALSE 1.000 1.008   348
## b0[94]    -2.973  1.382  -5.498  -3.042  -0.151    FALSE 0.981 1.008  1368
## b0[95]    -5.147  1.209  -7.445  -5.221  -2.770    FALSE 1.000 1.051    66
## b0[96]    -4.949  1.374  -7.613  -4.917  -2.361    FALSE 1.000 1.116    29
## b0[97]    -4.926  1.646  -8.466  -4.760  -2.218    FALSE 1.000 1.046   164
## b0[98]    -5.058  1.485  -8.462  -4.874  -2.620    FALSE 1.000 1.025   122
## b0[99]    -4.890  1.663  -8.875  -4.680  -2.268    FALSE 1.000 1.035   356
## b0[100]   -5.053  1.676  -8.181  -5.041  -1.932    FALSE 1.000 1.029    85
## b0[101]   -2.987  1.068  -5.366  -2.918  -1.119    FALSE 0.999 1.008   765
## b0[102]   -3.492  1.148  -6.258  -3.398  -1.584    FALSE 1.000 1.028   202
## b0[103]   -4.988  1.405  -7.877  -4.937  -2.602    FALSE 1.000 1.031    72
## b0[104]   -4.848  1.436  -8.105  -4.727  -2.476    FALSE 1.000 1.026    83
## b0[105]   -4.935  1.475  -8.152  -4.782  -2.318    FALSE 1.000 1.108    31
## b0[106]   -1.931  0.650  -3.282  -1.907  -0.700    FALSE 1.000 1.012   179
## b0[107]   -0.522  0.550  -1.603  -0.511   0.522     TRUE 0.830 1.019   115
## b0[108]   -5.456  1.721  -9.047  -5.322  -2.530    FALSE 1.000 1.066    38
## b0[109]   -4.950  1.467  -7.920  -4.963  -2.485    FALSE 1.000 1.027    80
## b0[110]   -5.286  1.610  -8.609  -5.232  -2.482    FALSE 1.000 1.109    23
## b0[111]   -5.213  1.609  -8.554  -5.012  -2.471    FALSE 1.000 1.039    68
## b0[112]   -1.535  0.581  -2.747  -1.503  -0.451    FALSE 0.998 1.027    81
## b0[113]   -1.696  0.665  -3.126  -1.655  -0.480    FALSE 0.997 1.006   466
## b0[114]   -5.052  1.513  -8.184  -4.950  -2.223    FALSE 1.000 1.092    30
## b0[115]   -5.281  1.832  -9.604  -4.909  -2.511    FALSE 1.000 1.330    10
## b0[116]   -3.676  1.227  -6.588  -3.471  -1.734    FALSE 1.000 1.111    26
## b0[117]   -5.524  1.527  -8.167  -5.583  -2.505    FALSE 1.000 1.221    14
## b0[118]   -5.142  1.776  -9.457  -4.929  -2.299    FALSE 1.000 1.090    42
## b0[119]   -3.227  0.731  -4.799  -3.175  -1.893    FALSE 1.000 1.019   120
## b0[120]   -4.315  0.861  -6.087  -4.271  -2.667    FALSE 1.000 1.020   110
## b0[121]   -3.140  0.692  -4.601  -3.116  -1.840    FALSE 1.000 1.008   338
## b0[122]   -5.835  1.717  -9.892  -5.572  -3.188    FALSE 1.000 1.058   122
## b0[123]   -5.526  1.214  -7.863  -5.467  -3.000    FALSE 1.000 1.040    62
## b0[124]   -5.047  1.579  -8.761  -4.881  -2.444    FALSE 1.000 1.026    99
## b0[125]   -4.322  0.875  -6.185  -4.237  -2.883    FALSE 1.000 1.060    65
## b0[126]   -3.709  0.801  -5.466  -3.622  -2.371    FALSE 1.000 1.023   189
## b0[127]   -5.926  1.333  -8.470  -5.855  -3.516    FALSE 1.000 1.039    58
## b0[128]   -2.412  0.599  -3.594  -2.407  -1.266    FALSE 1.000 1.004  4940
## b0[129]   -5.582  1.270  -8.453  -5.420  -3.456    FALSE 1.000 1.037    68
## b0[130]   -1.500  0.652  -2.817  -1.478  -0.273    FALSE 0.991 1.013   180
## b0[131]   -3.561  0.817  -5.239  -3.504  -2.109    FALSE 1.000 1.001  7500
## b0[132]   -3.927  0.940  -6.024  -3.826  -2.373    FALSE 1.000 1.036    66
## b0[133]   -7.060  1.599 -10.624  -6.863  -4.334    FALSE 1.000 1.082    31
## b0[134]   -3.810  0.748  -5.257  -3.821  -2.323    FALSE 1.000 1.033    72
## b0[135]   -7.045  1.353  -9.996  -6.986  -4.643    FALSE 1.000 1.009   263
## b0[136]   -5.477  1.185  -7.982  -5.404  -3.373    FALSE 1.000 1.029    93
## b0[137]   -7.278  1.587 -10.700  -7.034  -4.814    FALSE 1.000 1.098    34
## b0[138]   -7.048  1.559 -10.330  -6.932  -4.361    FALSE 1.000 1.022   130
## b0[139]   -3.828  0.716  -5.275  -3.819  -2.499    FALSE 1.000 1.015   148
## b0[140]   -7.373  1.621 -10.894  -7.270  -4.528    FALSE 1.000 1.125    21
## b0[141]   -7.494  1.709 -11.795  -7.335  -4.752    FALSE 1.000 1.171    20
## b0[142]   -7.215  1.513 -10.174  -7.134  -4.507    FALSE 1.000 1.127    23
## b0[143]   -6.723  1.227  -9.014  -6.709  -4.306    FALSE 1.000 1.012  1679
## b0[144]   -7.188  1.672 -11.128  -7.012  -4.467    FALSE 1.000 1.083    46
## b0[145]   -4.557  1.212  -6.826  -4.564  -1.987    FALSE 0.999 1.012   315
## b0[146]   -6.432  1.039  -8.384  -6.459  -4.264    FALSE 1.000 1.016   169
## b0[147]   -4.089  1.211  -6.302  -4.113  -1.555    FALSE 0.997 1.027   124
## b0[148]   -2.697  1.429  -5.500  -2.647   0.182     TRUE 0.965 1.036    77
## b0[149]   -4.215  1.215  -6.482  -4.217  -1.781    FALSE 0.999 1.017   286
## b0[150]   -4.203  1.112  -6.293  -4.253  -1.898    FALSE 1.000 1.003  1771
## b0[151]   -3.467  1.215  -5.760  -3.493  -1.039    FALSE 0.994 1.005   585
## b0[152]   -2.570  1.726  -5.546  -2.766   1.470     TRUE 0.926 1.059    60
## b0[153]   -4.219  1.255  -6.658  -4.279  -1.671    FALSE 1.000 1.018   194
## b0[154]   -3.655  1.261  -5.975  -3.702  -0.771    FALSE 0.992 1.020   144
## b0[155]   -6.045  1.188  -8.968  -5.926  -4.025    FALSE 1.000 1.102    29
## b0[156]   -3.107  0.758  -4.552  -3.111  -1.547    FALSE 1.000 1.007   462
## b0[157]   -3.880  0.816  -5.466  -3.889  -2.277    FALSE 1.000 1.019   108
## b0[158]   -7.418  1.392 -10.351  -7.264  -4.919    FALSE 1.000 1.065    43
## b0[159]   -6.442  1.159  -9.193  -6.366  -4.352    FALSE 1.000 1.024   106
## b0[160]   -5.634  0.891  -7.449  -5.626  -3.982    FALSE 1.000 1.021   113
## b0[161]   -4.748  0.843  -6.420  -4.737  -3.097    FALSE 1.000 1.008   276
## b0[162]   -5.968  0.884  -7.789  -5.925  -4.262    FALSE 1.000 1.019   225
## b0[163]   -7.547  1.711 -11.080  -7.393  -4.433    FALSE 1.000 1.086    28
## b0[164]   -3.943  0.759  -5.415  -3.954  -2.466    FALSE 1.000 1.015   162
## b0[165]   -6.697  1.416  -9.679  -6.609  -4.230    FALSE 1.000 1.041    55
## b0[166]   -5.941  1.080  -8.264  -5.875  -4.051    FALSE 1.000 1.034    84
## b0[167]   -1.874  1.759  -4.940  -1.970   1.802     TRUE 0.852 1.088    30
## b0[168]   -3.257  1.358  -5.694  -3.333  -0.354    FALSE 0.986 1.041    59
## b0[169]   -2.012  1.603  -4.954  -1.956   1.012     TRUE 0.907 1.009   259
## b0[170]   -5.213  1.197  -7.545  -5.241  -2.693    FALSE 1.000 1.031   102
## b0[171]   -2.090  1.610  -4.954  -2.179   1.155     TRUE 0.900 1.204    15
## b0[172]   -3.489  1.387  -6.097  -3.527  -0.620    FALSE 0.993 1.009   231
## b0[173]   -1.930  1.736  -4.818  -2.096   2.294     TRUE 0.870 1.084    32
## b0[174]   -1.884  1.703  -4.876  -2.005   1.863     TRUE 0.872 1.023   124
## b0[175]   -1.728  1.859  -4.817  -1.835   3.171     TRUE 0.854 1.161    20
## b0[176]   -3.023  1.511  -5.995  -2.985  -0.162    FALSE 0.981 1.033    82
## b0[177]   -1.699  1.560  -4.672  -1.714   1.283     TRUE 0.857 1.020   180
## b0[178]   -2.557  1.756  -5.777  -2.670   1.235     TRUE 0.925 1.044    50
## b0[179]   -5.751  1.158  -8.013  -5.761  -3.406    FALSE 1.000 1.005   571
## b0[180]   -4.865  1.239  -7.297  -4.886  -2.287    FALSE 1.000 1.004  1471
## b0[181]   -4.817  1.254  -7.251  -4.812  -2.248    FALSE 1.000 1.003  1019
## b0[182]   -5.324  1.198  -7.687  -5.321  -3.034    FALSE 1.000 1.007   296
## b0[183]   -3.239  1.652  -6.245  -3.324   0.020     TRUE 0.974 1.055    42
## b0[184]   -3.247  1.539  -6.030  -3.300  -0.095    FALSE 0.978 1.035    87
## b0[185]   -4.164  1.270  -6.559  -4.177  -1.550    FALSE 1.000 1.015   155
## b0[186]   -6.884  1.142  -9.026  -6.924  -4.549    FALSE 1.000 1.008   375
## b1         2.202  0.329   1.521   2.210   2.854    FALSE 1.000 1.017   200
## mub0[1]   -5.441  0.919  -7.245  -5.438  -3.614    FALSE 1.000 1.019   129
## mub0[2]   -4.517  0.708  -5.948  -4.514  -3.152    FALSE 1.000 1.008   546
## mub0[3]   -5.970  0.888  -7.762  -5.949  -4.267    FALSE 1.000 1.081    32
## mub0[4]   -5.555  0.780  -7.104  -5.552  -3.968    FALSE 1.000 1.029    81
## mub0[5]   -5.596  0.986  -7.517  -5.596  -3.664    FALSE 1.000 1.008   355
## mub0[6]   -6.269  0.823  -8.020  -6.218  -4.780    FALSE 1.000 1.022   135
## mub0[7]   -7.084  0.901  -8.835  -7.083  -5.321    FALSE 1.000 1.020   123
## mub0[8]   -4.297  0.609  -5.515  -4.297  -3.130    FALSE 1.000 1.011   230
## mub0[9]   -4.564  0.783  -6.090  -4.564  -3.045    FALSE 1.000 1.182    15
## mub0[10]  -6.371  0.789  -8.023  -6.327  -4.962    FALSE 1.000 1.074    34
## mub0[11]  -2.978  1.121  -5.005  -3.038  -0.567    FALSE 0.991 1.054    47
## mub0[12]  -7.074  1.041  -9.164  -7.054  -5.071    FALSE 1.000 1.009   275
## mub0[13]  -4.296  0.665  -5.598  -4.293  -2.983    FALSE 1.000 1.012   175
## mub0[14]  -5.468  0.778  -7.130  -5.397  -4.108    FALSE 1.000 1.041    63
## mub0[15]  -4.229  1.029  -6.178  -4.248  -2.140    FALSE 1.000 1.025   106
## mub0[16]  -4.882  1.116  -7.071  -4.893  -2.627    FALSE 1.000 1.005   620
## sdb0       1.852  0.194   1.508   1.841   2.260    FALSE 1.000 1.126    21
## mub0s     -5.224  0.651  -6.469  -5.242  -3.884    FALSE 1.000 1.043    59
## sdb0s      1.332  0.311   0.809   1.305   2.032    FALSE 1.000 1.010   256
## deviance 829.667 18.320 797.927 828.172 867.453    FALSE 1.000 1.063    37
## 
## **WARNING** Rhat values indicate convergence failure. 
## Rhat is the potential scale reduction factor (at convergence, Rhat=1). 
## For each parameter, n.eff is a crude measure of effective sample size. 
## 
## overlap0 checks if 0 falls in the parameter's 95% credible interval.
## f is the proportion of the posterior with the same sign as the mean;
## i.e., our confidence that the parameter is positive or negative.
## 
## DIC info: (pD = var(deviance)/2) 
## pD = 158.6 and DIC = 988.275 
## DIC is an estimate of expected predictive error (lower is better).
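
This time the summary reports a convergence warning. A minimal sketch to see which parameters have Rhat above 1.1, assuming (as in jagsUI output objects) that the Rhat values are stored in the $Rhat element:

# parameters whose Rhat exceeds 1.1; these need more iterations to be trusted
rhats <- unlist(m2.sim$Rhat)
rhats[rhats > 1.1]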

We can make a somewhat crowded plot with the intercepts per site (in black) and per plant (in gray):

op <- par(cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, las = 1, bty = "n")
plot(density(m2.sim$sims.list$mub0s), ylim = c(0, 1), xlim = c(-12, 3), lwd = 3, main = "", xlab = "b0")
for(i in 1:max(sitio)) {
  # site-level mean intercept for site i (black)
  lines(density(m2.sim$sims.list$mub0[, i]), lwd = 2)
  # plant-level intercepts for the plants in site i (gray)
  for(j in which(Ns == i)) {
    lines(density(m2.sim$sims.list$b0[, j]), col = "gray")
  }
}

par(op)

Now let's make a more interesting comparison, between the posteriors of the slopes in the two models:

op <- par(cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, las = 1, bty = "n")
plot(density(m1.sim$sims.list$b1), lwd = 3, main = "", xlab = "b1", xlim = c(0, 4))
lines(density(m2.sim$sims.list$b1), lwd = 3, col = 2)

par(op)
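
If we want the plot to be self-explanatory we can add a simple legend (a small sketch; the labels are just descriptive):

# black: model with site-level intercepts; red: model with plant-level intercepts
legend("topright", legend = c("model 1 (site)", "model 2 (plant)"),
       col = c(1, 2), lwd = 3, bty = "n")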

Exercise:

Fit models equivalent to the two above, but assuming some relationship between visits and pistil breakage that does not use the logit transformation.


Optional: Fitting with Stan and brms

Stan code for the first model:

cat(file = "m1.stan",
"
data{
  int<lower=1> n;
  int<lower=1> S;
  vector[n] freq;
  int<lower=0> rotos[n];
  int<lower=0> N[n];
  int<lower=1> sitio[n];
}
parameters{
  real b1;
  vector[S] b0;
  real mub0;
  real<lower=0> sigma;
}

model {
  vector[n] logit_p;
  sigma ~ normal(0,1);
  mub0 ~ normal(0,10);
  b0 ~ normal(mub0, sigma);
  b1 ~ normal(0,1);
  
  for(i in 1:n) {
    logit_p[i] = b0[sitio[i]] + b1 * freq[i];
  }
  target += binomial_logit_lpmf(rotos | N, logit_p);
}
")

Now we fit the model:

library(rstan)
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())

sitio <- as.numeric(as.factor(datos$Sitio))

m1_dat <- list(n = length(datos$Rotos),
               S = max(sitio),
               rotos = datos$Rotos,
               freq = datos$Frecuencia,
               N = datos$Total,
               sitio = sitio)

# (optional) initial values matching the parameters of the Stan model;
# to use them, pass init = inits in the stan() call below
inits  <- function() list(b1 = rnorm(1, 0, 1),
                          b0 = rnorm(max(sitio), 0, 1),
                          mub0 = rnorm(1, 0, 1),
                          sigma = runif(1, 0, 2))

fit <- stan(file = 'm1.stan', 
            data = m1_dat,
            iter = 5000, 
            thin = 2, 
            chains = 3)

Let's look at the results:

print(fit)
## Inference for Stan model: m1.
## 3 chains, each with iter=5000; warmup=2500; thin=2; 
## post-warmup draws per chain=1250, total post-warmup draws=3750.
## 
##           mean se_mean   sd    2.5%     25%     50%     75%   97.5% n_eff Rhat
## b1        1.51    0.01 0.28    0.95    1.34    1.52    1.71    2.02   859    1
## b0[1]    -3.65    0.02 0.62   -4.80   -4.07   -3.67   -3.26   -2.39   853    1
## b0[2]    -2.86    0.01 0.35   -3.56   -3.10   -2.87   -2.62   -2.15  1264    1
## b0[3]    -4.24    0.01 0.62   -5.58   -4.63   -4.19   -3.80   -3.15  2801    1
## b0[4]    -3.82    0.02 0.49   -4.73   -4.16   -3.83   -3.50   -2.80   927    1
## b0[5]    -4.02    0.03 0.75   -5.44   -4.53   -4.06   -3.53   -2.52   877    1
## b0[6]    -4.51    0.01 0.44   -5.40   -4.81   -4.51   -4.20   -3.69  1352    1
## b0[7]    -5.02    0.01 0.54   -6.15   -5.36   -5.00   -4.65   -4.02  1418    1
## b0[8]    -3.01    0.01 0.29   -3.58   -3.21   -3.01   -2.81   -2.43  1288    1
## b0[9]    -2.93    0.01 0.34   -3.63   -3.14   -2.92   -2.70   -2.30  3441    1
## b0[10]   -4.53    0.01 0.45   -5.40   -4.83   -4.53   -4.23   -3.67  1318    1
## b0[11]   -1.58    0.03 0.86   -3.15   -2.17   -1.61   -1.06    0.23   956    1
## b0[12]   -5.28    0.03 0.80   -6.80   -5.82   -5.30   -4.76   -3.67   915    1
## b0[13]   -2.70    0.01 0.31   -3.34   -2.90   -2.69   -2.49   -2.15  3081    1
## b0[14]   -4.33    0.01 0.59   -5.61   -4.70   -4.28   -3.91   -3.27  2779    1
## b0[15]   -2.59    0.03 0.80   -4.08   -3.15   -2.62   -2.09   -0.91   912    1
## b0[16]   -3.15    0.03 0.88   -4.78   -3.76   -3.18   -2.58   -1.36   878    1
## mub0     -3.64    0.02 0.52   -4.65   -4.00   -3.66   -3.32   -2.58  1113    1
## sigma     1.13    0.00 0.24    0.76    0.96    1.10    1.27    1.67  2657    1
## lp__   -649.07    0.07 3.21 -656.41 -650.95 -648.75 -646.76 -643.83  2300    1
## 
## Samples were drawn using NUTS(diag_e) at Wed Dec  8 14:36:23 2021.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
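
rstan also lets us inspect the chains graphically; a minimal sketch using stan_trace():

# traceplots for the slope and the hyperparameters of the Stan fit
stan_trace(fit, pars = c("b1", "mub0", "sigma"))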

Compare the n.eff values obtained with JAGS with those from Stan.

Now let's fit the model with brms:

library(brms)

priors = c(prior(normal(0, 1), class = Intercept),
           prior(normal(0,1), class = b),
           prior(cauchy(0, 1), class = sd))

fit_brms = brm(y | trials(m) ~ freq + (1|site), 
          data = data.frame(y = datos$Rotos, 
                            freq = datos$Frecuencia, 
                            m = datos$Total, 
                            site = sitio), 
          family = binomial(), 
          prior = priors,
          chains = 3, 
          iter = 5000, 
          warmup = 2500, 
          thin = 2)
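
For comparison with the JAGS and Stan results we can print the brms summary, which reports the population-level effects and the among-site standard deviation:

# posterior summary of the brms fit
summary(fit_brms)
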
pos <- as.data.frame(fit)
pos_brms <- posterior_samples(fit_brms)

op <- par(cex.lab = 1.5 , font.lab = 2, cex.axis = 1.3, las = 1, bty = "n")
plot(density(m1.sim$sims.list$b1), main = "", xlab = "b1", lwd = 3)
lines(density(pos$b1), lty = 2, lwd = 3)
lines(density((pos_brms$b_freq)), lty = 3, lwd = 3)

par(op)

Juan Manuel Morales. September 30, 2015. Last updated: 2021-12-08.