Heuristic optimization

Heuristic optimization methods are an alternative to gradient-based methods. Some of them, such as evolutionary algorithms and ant colonies, are inspired by paradigms from biology. The goal of this work is to apply heuristic methods.

Optimization of test functions

Test functions

Consider the following test functions:

Optimization:

  1. Choose two test functions.

  2. Optimize the functions in two and three dimensions using a gradient-descent method with a random initial condition.

  3. Optimize the functions in two and three dimensions using evolutionary algorithms, particle swarm optimization, and differential evolution.

  4. Represent with an animated gif or a video the gradient-descent optimization process and the process using the heuristic method.

The traveling salesman problem

In this part, heuristic methods are applied to solve the traveling salesman problem. A salesman must drive his car through the following cities in Colombia (not necessarily in this order):

Use ant colony optimization and genetic algorithms to find the optimal order. The cost of travelling between two cities is the sum of the value of the salesman's time (a parameter that must be studied), the toll costs, and the fuel cost. Each team must define which car the salesman drives and derive the fuel cost from it. Additionally, represent with an animated gif or a video how the best solution behaves, using a plot of the route on a map of Colombia.
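For illustration only, a minimal sketch of the cost of a single leg of the route, where the hourly rate, fuel consumption and fuel price are placeholder values that each team would have to replace with its own figures:

# Hypothetical cost of one leg of the route (all numeric values are placeholders)
costo_tramo <- function(dist_km, horas, peajes,
                        valor_hora = 20000,    # assumed value of the salesman's hour (COP)
                        km_por_galon = 45,     # assumed consumption of the chosen car
                        precio_galon = 9500){  # assumed fuel price per gallon (COP)
  costo_tiempo      <- horas * valor_hora
  costo_combustible <- (dist_km / km_por_galon) * precio_galon
  costo_tiempo + peajes + costo_combustible
}

costo_tramo(dist_km = 415, horas = 9, peajes = 60000)  # example leg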

Solution

  1. Choose two test functions.

  • a.) Schwefel function: \(f(x_1,x_2) = -\sum_{i=1}^{2}{x_i \sin(\sqrt{|x_i|})}\), \(x_i \in [-500,500]\), \(i=1,2\). It attains its minimum at \(x_1=420.9687\) and \(x_2=420.9687\).

We define the Schwefel function to be used for plotting:

Contour plot of the Schwefel function

We define the vectorized Schwefel function:
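The chunk with this definition is not shown in the rendered output; a minimal sketch, consistent with the inverted version f_schwefel2d_vec_inv used later, would be:

f_schwefel2d_vec <- function(x){
  x1 <- x[1]
  x2 <- x[2]
  # 2D Schwefel function; minimum of about -837.9658 at (420.9687, 420.9687)
  y <- -((x1*sin(sqrt(abs(x1)))) + (x2*sin(sqrt(abs(x2)))))
  return(y)
}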

Optimization 1

Multivariate gradient optimization of the Schwefel function

We compute, by central differences, each of the partial derivatives of f at x:

partial_dev <- function(x,i,fun,h=0.01){
  e <- x*0 # vector of zeros with the same length as x
  e[i] <- h
  y <- (fun(x+e)-fun(x-e))/(2*h)
  return(y)
}

num_grad <- function(x,fun,h=0.01){
  d <- length(x)
  y <- mapply(FUN=partial_dev,i=1:d,MoreArgs=list(x=x,h=h,fun=fun))
  return(y)
}
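As a quick sanity check of the numerical gradient (assuming f_schwefel2d_vec is defined as sketched above), its value at the known minimizer should be close to zero:

# both components should be approximately 0 at the minimizer
num_grad(c(420.9687, 420.9687), fun=f_schwefel2d_vec)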

Without preconditioning:

We define a function to evaluate the derivatives of the numerical gradient (each call returns one column of the Hessian matrix):

deriv_grad <- function(x,fun,i=1,h=0.01){
  e <- x*0 # vector of zeros with the same length as x
  e[i] <- h
  y <- (num_grad(x+e,fun=fun,h=h)-num_grad(x-e,fun=fun,h=h))/(2*h)
  return(y)
}

# Build the Hessian matrix column by column:
matriz_hessiana <- function(x,fun,h=0.01){
  d <- length(x)
  y <- mapply(FUN=deriv_grad,i=1:d,MoreArgs=list(x=x,h=h,fun=fun),SIMPLIFY = TRUE)
  return(y)
}

# Multivariate optimizer using the numerical gradient and Hessian (Newton-type step)
optimizador_mult_numdev <- function(x0,fun,max_eval=100,h=0.01,eta=0.01){
  x <- matrix(NA,ncol =length(x0), nrow = max_eval)
  x[1,] <- x0
  for (i in 2:max_eval){
    num_grad_fun <- num_grad(x[i-1,],fun,h)
    H <- matriz_hessiana(x[i-1,],fun,h)
    cambio <- - eta*solve(H)%*%num_grad_fun
    x[i,] <- x[i-1,] + cambio
    cambio_opt <- sqrt(sum((x[i-1,]-x[i,])^2))
    if (cambio_opt<0.00001){
      break
    }
  }
  return(x[1:i,])
}

# pass Schwefel to the optimizer WITHOUT preconditioning, starting from (380,390)
opti_MDG_schwefel <- optimizador_mult_numdev(f_schwefel2d_vec,x0=c(380,390),eta=1)

Contour plot

With preconditioning:

The preconditioner is built from one second derivative per dimension: only the diagonal of the Hessian is used, so the update becomes \(x \leftarrow x - \eta \, D^{-1}\nabla f\) with \(D = \mathrm{diag}(\partial^2 f/\partial x_i^2)\):

deriv_segunda <- function(x,fun,i=1,h=0.01){
  e <- x*0
  e[i] <- h
  y <- (fun(x+e)-2*fun(x)+fun(x-e))/(h^2)
  return(y)
}

# Multivariate optimizer with diagonal (Jacobi) preconditioning
optimizador_mult_precond <- function(x0,fun,max_eval=100,h=0.01,eta=0.01){
  x <- matrix(NA,ncol =length(x0), nrow = max_eval)
  d <- length(x0)
  x[1,] <- x0
  for (i in 2:max_eval){
    num_grad_fun <- num_grad(x[i-1,],fun,h)
    diag_H <- mapply(FUN=deriv_segunda,i=1:d,MoreArgs=list(x=x[i-1,],h=h,fun=fun))
    H_precond <- diag(1/diag_H)
    cambio <- - eta*H_precond%*%num_grad_fun
    x[i,] <- x[i-1,] + cambio
    cambio_opt <- sqrt(sum((x[i-1,]-x[i,])^2))
    if (cambio_opt<0.0000001){
      break
    }
  }
  return(x[1:i,])
}

# pass Schwefel to the optimizer WITH preconditioning, starting from (380,390)
opti_MDG_schwefel_precon <- optimizador_mult_precond(f_schwefel2d_vec,x0=c(380,390),eta=1)

Contour plot

Optimization 2

Optimization with evolutionary methods using the ga function from the GA package

# Invert the sign of the Schwefel function so that it can be minimized by a maximization method:
f_schwefel2d_vec_inv <- function(x){
  x1 <- x[1]
  x2 <- x[2]
  y <- (-((x1*sin(sqrt(abs(x1)))) + (x2*sin(sqrt(abs(x2)))) ))
  return(-y)
}

opti_GA_schwefel <- ga(type="real-valued", fitness = f_schwefel2d_vec_inv, lower=c(-500,-500), upper = c(500,500), seed=5)

Optimization results

## -- Genetic Algorithm ------------------- 
## 
## GA settings: 
## Type                  =  real-valued 
## Population size       =  50 
## Number of generations =  100 
## Elitism               =  2 
## Crossover probability =  0.8 
## Mutation probability  =  0.1 
## Search domain = 
##         x1   x2
## lower -500 -500
## upper  500  500
## 
## GA results: 
## Iterations             = 100 
## Fitness function value = 837.9658 
## Solution = 
##            x1       x2
## [1,] 420.9662 420.9706
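Since ga returns an S4 object, the best solution and its fitness can also be read directly from its slots (a sketch, assuming the GA package's usual slot names):

opti_GA_schwefel@solution       # best point found
opti_GA_schwefel@fitnessValue   # fitness of that point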

Contour plot

Optimization 3

Particle swarm optimization using the psoptim function from the pso package

opti_pso_schwefel <- psoptim(par=c(NA,NA), fn=f_schwefel2d_vec, lower=c(-500,-500),
                            upper = c(500,500), control = list(trace.stats=TRUE,trace=1))
## S=12, K=3, p=0.2297, w0=0.7213, w1=0.7213, c.p=1.193, c.g=1.193
## v.max=NA, d=1414, vectorize=FALSE, hybrid=off
## It 10: fitness=-717.4
## It 20: fitness=-831.2
## It 30: fitness=-832.9
## It 40: fitness=-837.9
## It 50: fitness=-837.9
## It 60: fitness=-838
## It 70: fitness=-838
## It 80: fitness=-838
## It 90: fitness=-838
## It 100: fitness=-838
## It 110: fitness=-838
## It 120: fitness=-838
## It 130: fitness=-838
## It 140: fitness=-838
## It 150: fitness=-838
## It 160: fitness=-838
## It 170: fitness=-838
## It 180: fitness=-838
## It 190: fitness=-838
## It 200: fitness=-838
## It 210: fitness=-838
## It 220: fitness=-838
## It 230: fitness=-838
## It 240: fitness=-838
## It 250: fitness=-838
## It 260: fitness=-838
## It 270: fitness=-838
## It 280: fitness=-838
## It 290: fitness=-838
## It 300: fitness=-838
## It 310: fitness=-838
## It 320: fitness=-838
## It 330: fitness=-838
## It 340: fitness=-838
## It 350: fitness=-838
## It 360: fitness=-838
## It 370: fitness=-838
## It 380: fitness=-838
## It 390: fitness=-838
## It 400: fitness=-838
## It 410: fitness=-838
## It 420: fitness=-838
## It 430: fitness=-838
## It 440: fitness=-838
## It 450: fitness=-838
## It 460: fitness=-838
## It 470: fitness=-838
## It 480: fitness=-838
## It 490: fitness=-838
## It 500: fitness=-838
## It 510: fitness=-838
## It 520: fitness=-838
## It 530: fitness=-838
## It 540: fitness=-838
## It 550: fitness=-838
## It 560: fitness=-838
## It 570: fitness=-838
## It 580: fitness=-838
## It 590: fitness=-838
## It 600: fitness=-838
## It 610: fitness=-838
## It 620: fitness=-838
## It 630: fitness=-838
## It 640: fitness=-838
## It 650: fitness=-838
## It 660: fitness=-838
## It 670: fitness=-838
## It 680: fitness=-838
## It 690: fitness=-838
## It 700: fitness=-838
## It 710: fitness=-838
## It 720: fitness=-838
## It 730: fitness=-838
## It 740: fitness=-838
## It 750: fitness=-838
## It 760: fitness=-838
## It 770: fitness=-838
## It 780: fitness=-838
## It 790: fitness=-838
## It 800: fitness=-838
## It 810: fitness=-838
## It 820: fitness=-838
## It 830: fitness=-838
## It 840: fitness=-838
## It 850: fitness=-838
## It 860: fitness=-838
## It 870: fitness=-838
## It 880: fitness=-838
## It 890: fitness=-838
## It 900: fitness=-838
## It 910: fitness=-838
## It 920: fitness=-838
## It 930: fitness=-838
## It 940: fitness=-838
## It 950: fitness=-838
## It 960: fitness=-838
## It 970: fitness=-838
## It 980: fitness=-838
## It 990: fitness=-838
## It 1000: fitness=-838
## Maximal number of iterations reached
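psoptim minimizes by default and returns an optim-style list, so the best position and its objective value can be extracted with (a sketch, assuming the pso package's usual return structure):

opti_pso_schwefel$par     # best position found
opti_pso_schwefel$value   # objective value at that position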

Contour plot

Optimization 4

Optimization of the function with differential evolution, using the DEoptim function from the DEoptim package:

opti_evo_dif_schwefel <- DEoptim(fn=f_schwefel2d_vec, lower=c(-500,-500), upper = c(500,500))
## Iteration: 1 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 2 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 3 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 4 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 5 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 6 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 7 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 8 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 9 bestvalit: -677.338141 bestmemit: -285.606024  428.414279
## Iteration: 10 bestvalit: -712.980757 bestmemit: -300.926654  413.927489
## Iteration: 11 bestvalit: -712.980757 bestmemit: -300.926654  413.927489
## Iteration: 12 bestvalit: -712.980757 bestmemit: -300.926654  413.927489
## Iteration: 13 bestvalit: -712.980757 bestmemit: -300.926654  413.927489
## Iteration: 14 bestvalit: -712.980757 bestmemit: -300.926654  413.927489
## Iteration: 15 bestvalit: -712.980757 bestmemit: -300.926654  413.927489
## Iteration: 16 bestvalit: -823.542999 bestmemit:  428.285429  413.150098
## Iteration: 17 bestvalit: -823.542999 bestmemit:  428.285429  413.150098
## Iteration: 18 bestvalit: -826.228088 bestmemit:  428.443018  414.860603
## Iteration: 19 bestvalit: -826.228088 bestmemit:  428.443018  414.860603
## Iteration: 20 bestvalit: -826.228088 bestmemit:  428.443018  414.860603
## Iteration: 21 bestvalit: -826.228088 bestmemit:  428.443018  414.860603
## Iteration: 22 bestvalit: -835.233334 bestmemit:  424.898768  423.457528
## Iteration: 23 bestvalit: -835.980098 bestmemit:  424.772697  422.089124
## Iteration: 24 bestvalit: -835.980098 bestmemit:  424.772697  422.089124
## Iteration: 25 bestvalit: -835.980098 bestmemit:  424.772697  422.089124
## Iteration: 26 bestvalit: -835.980098 bestmemit:  424.772697  422.089124
## Iteration: 27 bestvalit: -836.006970 bestmemit:  424.772697  421.989722
## Iteration: 28 bestvalit: -836.006970 bestmemit:  424.772697  421.989722
## Iteration: 29 bestvalit: -836.006970 bestmemit:  424.772697  421.989722
## Iteration: 30 bestvalit: -837.429481 bestmemit:  422.127771  419.263286
## Iteration: 31 bestvalit: -837.429481 bestmemit:  422.127771  419.263286
## Iteration: 32 bestvalit: -837.429481 bestmemit:  422.127771  419.263286
## Iteration: 33 bestvalit: -837.429481 bestmemit:  422.127771  419.263286
## Iteration: 34 bestvalit: -837.446148 bestmemit:  422.556785  419.705481
## Iteration: 35 bestvalit: -837.446148 bestmemit:  422.556785  419.705481
## Iteration: 36 bestvalit: -837.446148 bestmemit:  422.556785  419.705481
## Iteration: 37 bestvalit: -837.448702 bestmemit:  422.556785  419.713525
## Iteration: 38 bestvalit: -837.918974 bestmemit:  420.742229  420.403357
## Iteration: 39 bestvalit: -837.918974 bestmemit:  420.742229  420.403357
## Iteration: 40 bestvalit: -837.918974 bestmemit:  420.742229  420.403357
## Iteration: 41 bestvalit: -837.918974 bestmemit:  420.742229  420.403357
## Iteration: 42 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 43 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 44 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 45 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 46 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 47 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 48 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 49 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 50 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 51 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 52 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 53 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 54 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 55 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 56 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 57 bestvalit: -837.963279 bestmemit:  420.968306  420.828103
## Iteration: 58 bestvalit: -837.964092 bestmemit:  420.854207  420.953980
## Iteration: 59 bestvalit: -837.964092 bestmemit:  420.854207  420.953980
## Iteration: 60 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 61 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 62 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 63 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 64 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 65 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 66 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 67 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 68 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 69 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 70 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 71 bestvalit: -837.965652 bestmemit:  420.952819  420.941985
## Iteration: 72 bestvalit: -837.965731 bestmemit:  420.952819  420.959080
## Iteration: 73 bestvalit: -837.965731 bestmemit:  420.952819  420.959080
## Iteration: 74 bestvalit: -837.965731 bestmemit:  420.952819  420.959080
## Iteration: 75 bestvalit: -837.965731 bestmemit:  420.952819  420.959080
## Iteration: 76 bestvalit: -837.965750 bestmemit:  420.976945  420.957453
## Iteration: 77 bestvalit: -837.965750 bestmemit:  420.976945  420.957453
## Iteration: 78 bestvalit: -837.965753 bestmemit:  420.979778  420.961552
## Iteration: 79 bestvalit: -837.965753 bestmemit:  420.979778  420.961552
## Iteration: 80 bestvalit: -837.965771 bestmemit:  420.966181  420.964549
## Iteration: 81 bestvalit: -837.965771 bestmemit:  420.966181  420.964549
## Iteration: 82 bestvalit: -837.965771 bestmemit:  420.966181  420.964549
## Iteration: 83 bestvalit: -837.965771 bestmemit:  420.966181  420.964549
## Iteration: 84 bestvalit: -837.965771 bestmemit:  420.966181  420.964549
## Iteration: 85 bestvalit: -837.965771 bestmemit:  420.966181  420.964549
## Iteration: 86 bestvalit: -837.965774 bestmemit:  420.971020  420.969944
## Iteration: 87 bestvalit: -837.965774 bestmemit:  420.971020  420.969944
## Iteration: 88 bestvalit: -837.965774 bestmemit:  420.971020  420.969944
## Iteration: 89 bestvalit: -837.965774 bestmemit:  420.967768  420.970897
## Iteration: 90 bestvalit: -837.965774 bestmemit:  420.967768  420.970897
## Iteration: 91 bestvalit: -837.965774 bestmemit:  420.967768  420.970897
## Iteration: 92 bestvalit: -837.965774 bestmemit:  420.967359  420.969522
## Iteration: 93 bestvalit: -837.965774 bestmemit:  420.967359  420.969522
## Iteration: 94 bestvalit: -837.965774 bestmemit:  420.967359  420.969522
## Iteration: 95 bestvalit: -837.965774 bestmemit:  420.969632  420.968164
## Iteration: 96 bestvalit: -837.965774 bestmemit:  420.969632  420.968164
## Iteration: 97 bestvalit: -837.965774 bestmemit:  420.969632  420.968164
## Iteration: 98 bestvalit: -837.965774 bestmemit:  420.968963  420.967939
## Iteration: 99 bestvalit: -837.965774 bestmemit:  420.968963  420.967939
## Iteration: 100 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 101 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 102 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 103 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 104 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 105 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 106 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 107 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 108 bestvalit: -837.965775 bestmemit:  420.968963  420.968553
## Iteration: 109 bestvalit: -837.965775 bestmemit:  420.968701  420.968893
## Iteration: 110 bestvalit: -837.965775 bestmemit:  420.968701  420.968893
## Iteration: 111 bestvalit: -837.965775 bestmemit:  420.968701  420.968893
## Iteration: 112 bestvalit: -837.965775 bestmemit:  420.968645  420.968736
## Iteration: 113 bestvalit: -837.965775 bestmemit:  420.968645  420.968736
## Iteration: 114 bestvalit: -837.965775 bestmemit:  420.968645  420.968736
## Iteration: 115 bestvalit: -837.965775 bestmemit:  420.968645  420.968736
## Iteration: 116 bestvalit: -837.965775 bestmemit:  420.968645  420.968736
## Iteration: 117 bestvalit: -837.965775 bestmemit:  420.968711  420.968678
## Iteration: 118 bestvalit: -837.965775 bestmemit:  420.968711  420.968678
## Iteration: 119 bestvalit: -837.965775 bestmemit:  420.968738  420.968686
## Iteration: 120 bestvalit: -837.965775 bestmemit:  420.968738  420.968686
## Iteration: 121 bestvalit: -837.965775 bestmemit:  420.968738  420.968686
## Iteration: 122 bestvalit: -837.965775 bestmemit:  420.968719  420.968758
## Iteration: 123 bestvalit: -837.965775 bestmemit:  420.968719  420.968758
## Iteration: 124 bestvalit: -837.965775 bestmemit:  420.968762  420.968758
## Iteration: 125 bestvalit: -837.965775 bestmemit:  420.968742  420.968735
## Iteration: 126 bestvalit: -837.965775 bestmemit:  420.968742  420.968735
## Iteration: 127 bestvalit: -837.965775 bestmemit:  420.968742  420.968735
## Iteration: 128 bestvalit: -837.965775 bestmemit:  420.968742  420.968738
## Iteration: 129 bestvalit: -837.965775 bestmemit:  420.968742  420.968738
## Iteration: 130 bestvalit: -837.965775 bestmemit:  420.968742  420.968738
## Iteration: 131 bestvalit: -837.965775 bestmemit:  420.968742  420.968738
## Iteration: 132 bestvalit: -837.965775 bestmemit:  420.968743  420.968738
## Iteration: 133 bestvalit: -837.965775 bestmemit:  420.968748  420.968753
## Iteration: 134 bestvalit: -837.965775 bestmemit:  420.968744  420.968747
## Iteration: 135 bestvalit: -837.965775 bestmemit:  420.968744  420.968747
## Iteration: 136 bestvalit: -837.965775 bestmemit:  420.968744  420.968747
## Iteration: 137 bestvalit: -837.965775 bestmemit:  420.968744  420.968747
## Iteration: 138 bestvalit: -837.965775 bestmemit:  420.968744  420.968746
## Iteration: 139 bestvalit: -837.965775 bestmemit:  420.968746  420.968745
## Iteration: 140 bestvalit: -837.965775 bestmemit:  420.968746  420.968745
## Iteration: 141 bestvalit: -837.965775 bestmemit:  420.968746  420.968745
## Iteration: 142 bestvalit: -837.965775 bestmemit:  420.968746  420.968745
## Iteration: 143 bestvalit: -837.965775 bestmemit:  420.968746  420.968745
## Iteration: 144 bestvalit: -837.965775 bestmemit:  420.968746  420.968745
## Iteration: 145 bestvalit: -837.965775 bestmemit:  420.968746  420.968745
## Iteration: 146 bestvalit: -837.965775 bestmemit:  420.968746  420.968745
## Iteration: 147 bestvalit: -837.965775 bestmemit:  420.968747  420.968746
## Iteration: 148 bestvalit: -837.965775 bestmemit:  420.968747  420.968747
## Iteration: 149 bestvalit: -837.965775 bestmemit:  420.968745  420.968746
## Iteration: 150 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 151 bestvalit: -837.965775 bestmemit:  420.968747  420.968745
## Iteration: 152 bestvalit: -837.965775 bestmemit:  420.968747  420.968747
## Iteration: 153 bestvalit: -837.965775 bestmemit:  420.968746  420.968747
## Iteration: 154 bestvalit: -837.965775 bestmemit:  420.968746  420.968747
## Iteration: 155 bestvalit: -837.965775 bestmemit:  420.968746  420.968747
## Iteration: 156 bestvalit: -837.965775 bestmemit:  420.968746  420.968747
## Iteration: 157 bestvalit: -837.965775 bestmemit:  420.968746  420.968747
## Iteration: 158 bestvalit: -837.965775 bestmemit:  420.968746  420.968747
## Iteration: 159 bestvalit: -837.965775 bestmemit:  420.968747  420.968746
## Iteration: 160 bestvalit: -837.965775 bestmemit:  420.968747  420.968746
## Iteration: 161 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 162 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 163 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 164 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 165 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 166 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 167 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 168 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 169 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 170 bestvalit: -837.965775 bestmemit:  420.968747  420.968747
## Iteration: 171 bestvalit: -837.965775 bestmemit:  420.968747  420.968747
## Iteration: 172 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 173 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 174 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 175 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 176 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 177 bestvalit: -837.965775 bestmemit:  420.968747  420.968747
## Iteration: 178 bestvalit: -837.965775 bestmemit:  420.968747  420.968747
## Iteration: 179 bestvalit: -837.965775 bestmemit:  420.968747  420.968747
## Iteration: 180 bestvalit: -837.965775 bestmemit:  420.968747  420.968747
## Iteration: 181 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 182 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 183 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 184 bestvalit: -837.965775 bestmemit:  420.968746  420.968747
## Iteration: 185 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 186 bestvalit: -837.965775 bestmemit:  420.968747  420.968746
## Iteration: 187 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 188 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 189 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 190 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 191 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 192 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 193 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 194 bestvalit: -837.965775 bestmemit:  420.968747  420.968746
## Iteration: 195 bestvalit: -837.965775 bestmemit:  420.968746  420.968747
## Iteration: 196 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 197 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 198 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 199 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
## Iteration: 200 bestvalit: -837.965775 bestmemit:  420.968746  420.968746
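The best member and its objective value are stored in the optim component of the DEoptim result (a sketch, assuming the package's usual return structure):

opti_evo_dif_schwefel$optim$bestmem   # best member (x1, x2)
opti_evo_dif_schwefel$optim$bestval   # best objective value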

Contour plot of the optimized Schwefel function

  • b.) Rastrigin function: \(f(x_1,x_2)=20+\sum_{i=1}^{2}{x_i^2-10 \cos(2 \pi x_i)}\), \(x_i \in [-5.12,5.12]\), \(i=1,2\). It attains its minimum at \(x_1=0\) and \(x_2=0\).

Contour plot of the Rastrigin function:

Rastrigin function in 3D

Let us define the vectorized function:

f_rastrigin2d_vec <- function(x){
  x1 <- x[1]
  x2 <- x[2]
  y <- 20+x1^2-10*cos(2*pi*x1)+x2^2-10*cos(2*pi*x2)
  return(y)
}

Optimization 1

Multivariate gradient optimization

We compute, by central differences, each of the partial derivatives of f at x:

partial_dev <- function(x,i,fun,h=0.01){
  e <- x*0 # vector of zeros with the same length as x
  e[i] <- h
  y <- (fun(x+e)-fun(x-e))/(2*h)
  return(y)
}

num_grad <- function(x,fun,h=0.01){
  d <- length(x)
  y <- mapply(FUN=partial_dev,i=1:d,MoreArgs=list(x=x,h=h,fun=fun))
  return(y)
}

Without preconditioning:

We reuse the numerical gradient and the optimizers defined above and apply them to the Rastrigin function:

# pass Rastrigin to the optimizer starting from (0.8,4.8)
opti_MDG_rastri <- optimizador_mult_numdev(f_rastrigin2d_vec,x0=c(0.8,4.8),eta=1)

Contour plot of the Rastrigin function
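The data frame datos used in the plot below is built from a grid over the search domain; its construction is not shown in this output, but a minimal sketch consistent with the column names x1, x2 and f_x expected by the plot would be:

x1_grid <- seq(-5.12, 5.12, length.out = 200)
x2_grid <- seq(-5.12, 5.12, length.out = 200)
datos   <- expand.grid(x1 = x1_grid, x2 = x2_grid)
datos$f_x <- apply(datos, 1, f_rastrigin2d_vec)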

ggplot(data = datos, aes(x = x1, y = x2, z = f_x)) +
  geom_contour(aes(colour = stat(level)), bins = 20) +
  annotate(geom = "point",
           x = opti_MDG_rastri[,1],
           y = opti_MDG_rastri[,2],
           color = "red",
           size = 1.8) +
  labs(title = "Contour plot of the Rastrigin function:\n GDM optimization without preconditioning") +
  theme_bw() +
  theme(legend.position = "none",
        title = element_text(size = 10))

With preconditioning:

The same diagonal preconditioning defined above is applied:

# pass Rastrigin to the preconditioned optimizer starting from (0.8,4.8)
opti_MDG_rastri_precon <- optimizador_mult_precond(f_rastrigin2d_vec,x0=c(0.8,4.8),eta=1)

Contour plot

Optimization 2

Optimization with evolutionary methods using the ga function from the GA package

# Invert the sign of the Rastrigin function so that it can be minimized by a maximization method:
f_rastrigin2d_vec_inv <- function(x){
  x1 <- x[1]
  x2 <- x[2]
  y <- 20+x1^2-10*cos(2*pi*x1)+x2^2-10*cos(2*pi*x2)
  return(-y)
}

opti_GA_rastri <- ga(type="real-valued", fitness = f_rastrigin2d_vec_inv, lower=c(-5.12,-5.12), upper = c(5.12,5.12), 
         popSize = 50, maxiter = 100,
         optim = TRUE, seed=5)

Optimization results

## -- Genetic Algorithm ------------------- 
## 
## GA settings: 
## Type                  =  real-valued 
## Population size       =  50 
## Number of generations =  100 
## Elitism               =  2 
## Crossover probability =  0.8 
## Mutation probability  =  0.1 
## Search domain = 
##          x1    x2
## lower -5.12 -5.12
## upper  5.12  5.12
## 
## GA results: 
## Iterations             = 100 
## Fitness function value = 0 
## Solution = 
##      x1 x2
## [1,]  0  0

Contour plot of the evolutionary optimization

Optimization 3

Particle swarm optimization using the psoptim function from the pso package

opti_pso_rastri <- psoptim(par=c(NA,NA), fn=f_rastrigin2d_vec, lower=c(-5.12,-5.12),
                             upper = c(5.12,5.12), control = list(trace.stats=TRUE,trace=1))
## S=12, K=3, p=0.2297, w0=0.7213, w1=0.7213, c.p=1.193, c.g=1.193
## v.max=NA, d=14.48, vectorize=FALSE, hybrid=off
## It 10: fitness=2.992
## It 20: fitness=0.9145
## It 30: fitness=0.08421
## It 40: fitness=0.03056
## It 50: fitness=0.0001636
## It 60: fitness=5.058e-05
## It 70: fitness=4.402e-05
## It 80: fitness=6.749e-06
## It 90: fitness=2.768e-06
## It 100: fitness=7.259e-07
## It 110: fitness=6.698e-08
## It 120: fitness=4.906e-11
## It 130: fitness=3.001e-11
## It 140: fitness=2.015e-11
## It 150: fitness=1.151e-12
## It 160: fitness=6.573e-14
## It 170: fitness=0
## It 180: fitness=0
## It 190: fitness=0
## It 200: fitness=0
## It 210: fitness=0
## It 220: fitness=0
## It 230: fitness=0
## It 240: fitness=0
## It 250: fitness=0
## It 260: fitness=0
## It 270: fitness=0
## It 280: fitness=0
## It 290: fitness=0
## It 300: fitness=0
## It 310: fitness=0
## It 320: fitness=0
## It 330: fitness=0
## It 340: fitness=0
## It 350: fitness=0
## It 360: fitness=0
## It 370: fitness=0
## It 380: fitness=0
## It 390: fitness=0
## It 400: fitness=0
## It 410: fitness=0
## It 420: fitness=0
## It 430: fitness=0
## It 440: fitness=0
## It 450: fitness=0
## It 460: fitness=0
## It 470: fitness=0
## It 480: fitness=0
## It 490: fitness=0
## It 500: fitness=0
## It 510: fitness=0
## It 520: fitness=0
## It 530: fitness=0
## It 540: fitness=0
## It 550: fitness=0
## It 560: fitness=0
## It 570: fitness=0
## It 580: fitness=0
## It 590: fitness=0
## It 600: fitness=0
## It 610: fitness=0
## It 620: fitness=0
## It 630: fitness=0
## It 640: fitness=0
## It 650: fitness=0
## It 660: fitness=0
## It 670: fitness=0
## It 680: fitness=0
## It 690: fitness=0
## It 700: fitness=0
## It 710: fitness=0
## It 720: fitness=0
## It 730: fitness=0
## It 740: fitness=0
## It 750: fitness=0
## It 760: fitness=0
## It 770: fitness=0
## It 780: fitness=0
## It 790: fitness=0
## It 800: fitness=0
## It 810: fitness=0
## It 820: fitness=0
## It 830: fitness=0
## It 840: fitness=0
## It 850: fitness=0
## It 860: fitness=0
## It 870: fitness=0
## It 880: fitness=0
## It 890: fitness=0
## It 900: fitness=0
## It 910: fitness=0
## It 920: fitness=0
## It 930: fitness=0
## It 940: fitness=0
## It 950: fitness=0
## It 960: fitness=0
## It 970: fitness=0
## It 980: fitness=0
## It 990: fitness=0
## It 1000: fitness=0
## Maximal number of iterations reached

Contour plot

Optimization 4

Let us optimize the function with differential evolution, using the DEoptim function:

opti_evo_dif_rastri <- DEoptim(fn=f_rastrigin2d_vec, lower=c(-5.12,-5.12), upper = c(5.12,5.12))
## Iteration: 1 bestvalit: 4.216295 bestmemit:   -0.090304    1.087068
## Iteration: 2 bestvalit: 4.216295 bestmemit:   -0.090304    1.087068
## Iteration: 3 bestvalit: 4.216295 bestmemit:   -0.090304    1.087068
## Iteration: 4 bestvalit: 4.216295 bestmemit:   -0.090304    1.087068
## Iteration: 5 bestvalit: 3.383692 bestmemit:   -1.075525   -0.969600
## Iteration: 6 bestvalit: 3.383692 bestmemit:   -1.075525   -0.969600
## Iteration: 7 bestvalit: 3.383692 bestmemit:   -1.075525   -0.969600
## Iteration: 8 bestvalit: 3.383692 bestmemit:   -1.075525   -0.969600
## Iteration: 9 bestvalit: 2.253671 bestmemit:    1.031023    0.989306
## Iteration: 10 bestvalit: 1.489356 bestmemit:   -0.944793    0.002371
## Iteration: 11 bestvalit: 1.489356 bestmemit:   -0.944793    0.002371
## Iteration: 12 bestvalit: 1.489356 bestmemit:   -0.944793    0.002371
## Iteration: 13 bestvalit: 1.489356 bestmemit:   -0.944793    0.002371
## Iteration: 14 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 15 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 16 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 17 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 18 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 19 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 20 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 21 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 22 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 23 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 24 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 25 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 26 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 27 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 28 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 29 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 30 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 31 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 32 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 33 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 34 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 35 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 36 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 37 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 38 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 39 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 40 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 41 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 42 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 43 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 44 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 45 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 46 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 47 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 48 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 49 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 50 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 51 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 52 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 53 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 54 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 55 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 56 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 57 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 58 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 59 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 60 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 61 bestvalit: 1.004771 bestmemit:   -0.988334    0.002371
## Iteration: 62 bestvalit: 1.000294 bestmemit:   -0.997404   -0.004573
## Iteration: 63 bestvalit: 1.000294 bestmemit:   -0.997404   -0.004573
## Iteration: 64 bestvalit: 1.000294 bestmemit:   -0.997404   -0.004573
## Iteration: 65 bestvalit: 1.000294 bestmemit:   -0.997404   -0.004573
## Iteration: 66 bestvalit: 1.000294 bestmemit:   -0.997404   -0.004573
## Iteration: 67 bestvalit: 1.000294 bestmemit:   -0.997404   -0.004573
## Iteration: 68 bestvalit: 0.008064 bestmemit:    0.005752   -0.002750
## Iteration: 69 bestvalit: 0.008064 bestmemit:    0.005752   -0.002750
## Iteration: 70 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 71 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 72 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 73 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 74 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 75 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 76 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 77 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 78 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 79 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 80 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 81 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 82 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 83 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 84 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 85 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 86 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 87 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 88 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 89 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 90 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 91 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 92 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 93 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 94 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 95 bestvalit: 0.003007 bestmemit:    0.001029   -0.003755
## Iteration: 96 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 97 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 98 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 99 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 100 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 101 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 102 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 103 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 104 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 105 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 106 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 107 bestvalit: 0.002099 bestmemit:    0.002847   -0.001574
## Iteration: 108 bestvalit: 0.001641 bestmemit:    0.002847    0.000406
## Iteration: 109 bestvalit: 0.001641 bestmemit:    0.002847    0.000406
## Iteration: 110 bestvalit: 0.001641 bestmemit:    0.002847    0.000406
## Iteration: 111 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 112 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 113 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 114 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 115 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 116 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 117 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 118 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 119 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 120 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 121 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 122 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 123 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 124 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 125 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 126 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 127 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 128 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 129 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 130 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 131 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 132 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 133 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 134 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 135 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 136 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 137 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 138 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 139 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 140 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 141 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 142 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 143 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 144 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 145 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 146 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 147 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 148 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 149 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 150 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 151 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 152 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 153 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 154 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 155 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 156 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 157 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 158 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 159 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 160 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 161 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 162 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 163 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 164 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 165 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 166 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 167 bestvalit: 0.000033 bestmemit:    0.000019    0.000406
## Iteration: 168 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 169 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 170 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 171 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 172 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 173 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 174 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 175 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 176 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 177 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 178 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 179 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 180 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 181 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 182 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 183 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 184 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 185 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 186 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 187 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 188 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 189 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 190 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 191 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 192 bestvalit: 0.000006 bestmemit:   -0.000115    0.000136
## Iteration: 193 bestvalit: 0.000004 bestmemit:    0.000102    0.000094
## Iteration: 194 bestvalit: 0.000004 bestmemit:    0.000102    0.000094
## Iteration: 195 bestvalit: 0.000004 bestmemit:    0.000102    0.000094
## Iteration: 196 bestvalit: 0.000004 bestmemit:    0.000102    0.000094
## Iteration: 197 bestvalit: 0.000004 bestmemit:    0.000102    0.000094
## Iteration: 198 bestvalit: 0.000004 bestmemit:    0.000102    0.000094
## Iteration: 199 bestvalit: 0.000004 bestmemit:    0.000102    0.000094
## Iteration: 200 bestvalit: 0.000004 bestmemit:    0.000102    0.000094

Contour plot of the optimized function

  4. Representation with an animated gif of the gradient-descent optimization process and of the process using the heuristic method.
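The animation chunks themselves are not included in this output. One possible sketch (assuming the gifski package and the gradient-descent trajectory stored in opti_MDG_rastri) renders one PNG frame per iteration over the Rastrigin contours and assembles them into a gif:

library(gifski)

# contour grid of the Rastrigin function used as background in every frame
x_grid <- seq(-5.12, 5.12, length.out = 200)
z_grid <- outer(x_grid, x_grid, Vectorize(function(a, b) f_rastrigin2d_vec(c(a, b))))

frames <- sprintf("frame_%03d.png", seq_len(nrow(opti_MDG_rastri)))
for (k in seq_len(nrow(opti_MDG_rastri))) {
  png(frames[k], width = 600, height = 600)
  contour(x_grid, x_grid, z_grid, nlevels = 20, xlab = "x1", ylab = "x2",
          main = paste("Gradient descent, iteration", k))
  # trajectory of the optimizer up to iteration k
  points(opti_MDG_rastri[1:k, 1], opti_MDG_rastri[1:k, 2], col = "red", pch = 19)
  dev.off()
}
gifski(frames, gif_file = "descenso_rastrigin.gif", delay = 0.2)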

Genetic algorithm (heuristic method)

Contour plot of the Rastrigin function

Rastrigin function in 3D

## Loading required package: viridisLite

Genetic algorithm optimization

Optimization function

optimizar_ga <- function(
                         funcion_objetivo,
                         n_variables,
                         optimizacion,
                         limite_inf         = NULL,
                         limite_sup         = NULL,
                         n_poblacion        = 20,
                         n_generaciones     = 50,
                         elitismo           = 0.1,
                         prob_mut           = 0.01,
                         distribucion       = "uniforme",
                         media_distribucion = 1,
                         sd_distribucion    = 1,
                         min_distribucion   = -1,
                         max_distribucion   = 1,
                         metodo_seleccion   = "tournament",
                         metodo_cruce       = "uniforme",
                         parada_temprana    = FALSE,
                         rondas_parada      = NULL,
                         tolerancia_parada  = NULL,
                         verbose            = 1,
                         ...) {

  # ARGUMENTS
  # =============================================================================
  # funcion_objetivo: name of the function to be optimized. It must have been
  #                   defined beforehand.
  # n_variables:      length of the individuals.
  # optimizacion:     "maximizar" or "minimizar". Depending on this, the fitness
  #                   is directly or inversely proportional to the value of the
  #                   function.
  # limite_inf:       vector with the lower bound of each variable. To bound only
  #                   some of the variables, use NA for those that should remain
  #                   unbounded.
  # limite_sup:       vector with the upper bound of each variable. To bound only
  #                   some of the variables, use NA for those that should remain
  #                   unbounded.
  # n_poblacion:      total number of individuals in the population.
  # n_generaciones:   total number of generations created.
  # elitismo:         proportion of the best individuals of the current population
  #                   that pass directly to the next population.
  # prob_mut:         probability that each position of an individual mutates.
  # distribucion:     distribution from which the mutation factor is drawn. One of
  #                   "normal", "uniforme" or "aleatoria".
  # media_distribucion: mean of the distribution when distribucion = "normal".
  # sd_distribucion:  standard deviation of the distribution when
  #                   distribucion = "normal".
  # min_distribucion: minimum of the distribution when distribucion = "uniforme".
  # max_distribucion: maximum of the distribution when distribucion = "uniforme".
  # metodo_seleccion: method used to assign the selection probability. One of
  #                   "ruleta", "rank" or "tournament".
  # metodo_cruce:     method used to cross the individuals. One of "uniforme" or
  #                   "punto_simple".
  # parada_temprana:  if during the last "rondas_parada" generations the absolute
  #                   difference between the best individuals is not greater than
  #                   "tolerancia_parada", the algorithm stops and no further
  #                   generations are created.
  # rondas_parada:    number of consecutive generations without minimum improvement
  #                   required to trigger early stopping.
  # tolerancia_parada: minimum difference between consecutive generations needed
  #                    to consider that there has been a change.
  # verbose:          level of detail printed on screen at each step of the
  #                   algorithm (0, 1, 2).

  # RETURN
  # =============================================================================
  # The function returns a list containing:
  # fitness:            a list with the fitness of the best individual of each
  #                     generation.
  # mejores_individuos: a list with the variable values of the best individual of
  #                     each generation.
  # mejor_individuo:    variable values of the best individual found in the whole
  #                     process.
  # diferencia_abs:     a list with the absolute difference between the fitness of
  #                     the best individuals of consecutive generations.
  # df_resultados:      a data frame with all of the previous results.
  
  start_time <- Sys.time()
  
  # INITIAL CHECKS
  # ----------------------------------------------------------------------------
  # If early stopping is enabled, the arguments rondas_parada and
  # tolerancia_parada must be specified.
  if (isTRUE(parada_temprana) &
      (is.null(rondas_parada) | is.null(tolerancia_parada)) ) {
    stop(paste(
      "To enable early stopping, values for rondas_parada and",
      "tolerancia_parada must be provided."
    ))
  }

  # SET THE SEARCH BOUNDS IF THE USER HAS NOT DONE SO
  # ----------------------------------------------------------------------------
  if (is.null(limite_sup) | is.null(limite_inf)) {
    warning(paste(
      "It is highly recommended to specify the bounds within which the",
      "solution of each variable should be searched.",
      "By default [-10^3, 10^3] is used."
    ))
  }

  if (any(
        is.null(limite_sup), is.null(limite_inf), any(is.na(limite_sup)),
        any(is.na(limite_inf))
  )) {
    warning(paste(
      "The bounds used by default when none are defined are:",
      " [-10^3, 10^3]."
    ))
    cat("\n")
  }

  # If limite_inf is not specified, the minimum value the variables can take
  # is -10^3.
  if (is.null(limite_inf)) {
    limite_inf <- rep(x = -10^3, times = n_variables)
  }

  # If limite_sup is not specified, the maximum value the variables can take
  # is 10^3.
  if (is.null(limite_sup)) {
    limite_sup <- rep(x = 10^3, times = n_variables)
  }

  # If the bounds are not NULL, any NA positions are replaced by the default
  # values -10^3 and 10^3.
  if (!is.null(limite_inf)) {
    limite_inf[is.na(limite_inf)] <- -10^3
  }

  if (!is.null(limite_sup)) {
    limite_sup[is.na(limite_sup)] <- 10^3
  }


  # STORAGE OF RESULTS
  # ----------------------------------------------------------------------------
  # For each generation the population, its best individual, that individual's
  # fitness, and the absolute difference with respect to the previous generation
  # are stored.
  poblaciones          <- vector(mode = "list", length = n_generaciones)
  resultados_fitness   <- vector(mode = "list", length = n_generaciones)
  resultados_individuo <- vector(mode = "list", length = n_generaciones)
  diferencia_abs       <- vector(mode = "list", length = n_generaciones)

  # ITERATION OVER POPULATIONS
  # ----------------------------------------------------------------------------
  for (i in 1:n_generaciones) {
    if (verbose %in% c(1,2)) {
      cat("-------------------", "\n")
      cat("Generación:", paste0(i, "\\", n_generaciones), "\n")
      cat("-------------------", "\n")
    }
    
    if (i == 1) {
      # CREATE THE INITIAL POPULATION
      # ------------------------------------------------------------------------
      poblacion <- crear_poblacion(
                    n_poblacion = n_poblacion,
                    n_variables = n_variables,
                    limite_inf  = limite_inf,
                    limite_sup  = limite_sup,
                    verbose     = verbose %in% c(2)
                  )
    }

    # COMPUTE THE FITNESS OF THE INDIVIDUALS IN THE POPULATION
    # --------------------------------------------------------------------------
    fitness_ind_poblacion <- calcular_fitness_poblacion(
                                poblacion        = poblacion,
                                funcion_objetivo = funcion_objetivo,
                                optimizacion     = optimizacion,
                                verbose          = verbose %in% c(2)
                              )

    # STORE THE POPULATION AND ITS BEST INDIVIDUAL
    # --------------------------------------------------------------------------
    poblaciones[[i]]          <- poblacion
    fitness_mejor_individuo   <- max(fitness_ind_poblacion)
    mejor_individuo           <- poblacion[which.max(fitness_ind_poblacion), ]
    resultados_fitness[[i]]   <- fitness_mejor_individuo
    resultados_individuo[[i]] <- mejor_individuo

    # COMPUTE THE ABSOLUTE DIFFERENCE WITH RESPECT TO THE PREVIOUS GENERATION
    # --------------------------------------------------------------------------
    # The difference can only be computed from the second generation onwards.
    if (i > 1) {
      diferencia_abs[[i]] <- abs(resultados_fitness[[i - 1]] - resultados_fitness[[i]])
    }

    # NEW POPULATION
    # --------------------------------------------------------------------------
    nueva_poblacion <- matrix(
                        data = NA,
                        nrow = nrow(poblacion),
                        ncol = ncol(poblacion)
                      )

    # ELITISM
    # --------------------------------------------------------------------------
    # Elitism is the proportion of the best individuals of the current population
    # that pass directly to the next population. This guarantees that the next
    # generation is never worse than the current one.

    if (elitismo > 0) {
      n_elitismo         <- ceiling(nrow(poblacion) * elitismo)
      posicion_n_mejores <- order(fitness_ind_poblacion, decreasing = TRUE)
      posicion_n_mejores <- posicion_n_mejores[1:n_elitismo]
      nueva_poblacion[1:n_elitismo, ] <- poblacion[posicion_n_mejores, ]
    } else {
      n_elitismo <- 0
    }

    # CREATE NEW INDIVIDUALS BY CROSSOVER
    # --------------------------------------------------------------------------
    for (j in (n_elitismo + 1):nrow(nueva_poblacion)) {
      # Select the parents
      indice_parental_1 <- seleccionar_individuo(
                              vector_fitness   = fitness_ind_poblacion,
                              metodo_seleccion = metodo_seleccion,
                              verbose          = verbose %in% c(2)
                            )
      indice_parental_2 <- seleccionar_individuo(
                              vector_fitness   = fitness_ind_poblacion,
                              metodo_seleccion = metodo_seleccion,
                              verbose          = verbose %in% c(2)
                            )
      parental_1 <- poblacion[indice_parental_1, ]
      parental_2 <- poblacion[indice_parental_2, ]

      # Cross the parents to obtain the offspring
      descendencia <- cruzar_individuos(
                        parental_1   = parental_1,
                        parental_2   = parental_2,
                        metodo_cruce = metodo_cruce,
                        verbose      = verbose %in% c(2)
                      )
      # Mutate the offspring
      descendencia <- mutar_individuo(
                        individuo    = descendencia,
                        prob_mut     = prob_mut,
                        limite_inf   = limite_inf,
                        limite_sup   = limite_sup,
                        distribucion = distribucion,
                        media_distribucion = media_distribucion,
                        sd_distribucion    = sd_distribucion,
                        min_distribucion   = min_distribucion,
                        max_distribucion   = max_distribucion,
                        verbose            = verbose %in% c(2)
                      )

      nueva_poblacion[j, ] <- descendencia
    }
    poblacion <- nueva_poblacion

    # STOPPING CRITERION
    # --------------------------------------------------------------------------
    # If during the last n generations the absolute difference between the best
    # individuals is not greater than tolerancia_parada, the algorithm stops and
    # no further generations are created.

    if (parada_temprana && (i > rondas_parada)) {
      ultimos_n <- tail(unlist(diferencia_abs), n = rondas_parada)
      if (all(ultimos_n < tolerancia_parada)) {
        cat(
          "Algoritmo detenido en la generacion", i,
          "por falta cambio mínimo de", tolerancia_parada,
          "durante", rondas_parada,
          "generaciones consecutivas.",
          "\n"
        )
        break()
      }
    }
  }

  # IDENTIFICACIÓN DEL MEJOR INDIVIDUO DE TODO EL PROCESO
  # ----------------------------------------------------------------------------
  indice_mejor_individuo_global <- which.max(unlist(resultados_fitness))
  mejor_fitness_global   <- resultados_fitness[[indice_mejor_individuo_global]]
  mejor_individuo_global <- resultados_individuo[[indice_mejor_individuo_global]]
  
  # Se identifica el valor de la función objetivo para el mejor individuo.
  if (optimizacion == "maximizar") {
    mejor_valor_global <- mejor_fitness_global
  } else {
    mejor_valor_global <- -1*mejor_fitness_global
  }

  # RESULTADOS
  # ----------------------------------------------------------------------------
  # Para crear el dataframe se convierten las listas a vectores del mismo tamaño.
  resultados_fitness <- unlist(resultados_fitness)
  diferencia_abs     <- c(NA, unlist(diferencia_abs))
  
  # Si hay parada temprana, algunas generaciones no se alcanzan: Se eliminan sus
  # posiciones de las listas de resultados
  resultados_individuo <- resultados_individuo[!sapply(resultados_individuo, is.null)]
  poblaciones          <- poblaciones[!sapply(poblaciones, is.null)]


  # Para poder añadir al dataframe la secuencia variables, se concatenan.
  variables <- sapply(
                  X = resultados_individuo,
                  FUN = function(x) {
                          paste(x, collapse = ", ")
                        }
                )
  
  df_resultados <- data.frame(
                      generacion        = seq_along(resultados_fitness),
                      fitness           = resultados_fitness,
                      predictores       = variables,
                      diferencia_abs    = diferencia_abs
                    )

  resultados <- list(
                  mejor_individuo_global = mejor_individuo_global,
                  mejor_valor_global     = mejor_valor_global,
                  mejor_fitness_por_generacion   = resultados_fitness,
                  mejor_individuo_por_generacion = resultados_individuo,
                  diferencia_abs         = diferencia_abs,
                  df_resultados          = df_resultados,
                  poblaciones            = poblaciones,
                  funcion_objetivo       = funcion_objetivo
                )
  
  end_time <- Sys.time()
  
  # INFORMACIÓN ALMACENADA EN LOS ATRIBUTOS
  # ----------------------------------------------------------------------------
  attr(resultados, "class") <- "optimizacion_ga"
  attr(resultados, 'fecha_creacion')        <- end_time
  attr(resultados, 'duracion_optimizacion') <- paste(
                                                difftime(end_time, start_time, units = "secs"),
                                                "secs"
                                               )
  attr(resultados, 'optimizacion')          <- optimizacion
  attr(resultados, 'lim_inf')               <- limite_inf
  attr(resultados, 'lim_sup')               <- limite_sup
  attr(resultados, 'n_poblacion')           <- n_poblacion
  attr(resultados, 'generaciones')          <- i 
  attr(resultados, 'valor_variables')       <- mejor_individuo_global
  attr(resultados, 'mejor_fitness')         <- mejor_fitness_global 
  attr(resultados, 'optimo_encontrado')     <- mejor_valor_global 
  attr(resultados, 'elitismo')              <- elitismo
  attr(resultados, 'prob_mut')              <- prob_mut
  attr(resultados, 'metodo_seleccion')      <- metodo_seleccion
  attr(resultados, 'metodo_cruce')          <- metodo_cruce
  attr(resultados, 'parada_temprana')       <- parada_temprana
  attr(resultados, 'rondas_parada')         <- rondas_parada
  attr(resultados, 'tolerancia_parada')     <- tolerancia_parada

  
  # INFORMACIÓN DEL PROCESO (VERBOSE)
  # ----------------------------------------------------------------------------
  if (verbose %in% c(1,2)) {
    cat("-----------------------", "\n")
    cat("Optimización finalizada", "\n")
    cat("-----------------------", "\n")
    cat("Fecha finalización  =", as.character(Sys.time()), "\n")
    cat("Duración selección  = ")
    print(difftime(end_time, start_time))
    cat("Número generaciones =", i, "\n")
    cat("Límite inferior     =", paste(limite_inf, collapse = ", "), "\n")
    cat("Límite superior     =", paste(limite_sup, collapse = ", "), "\n")
    cat("Optimización        =", optimizacion,"\n")
    cat("Óptimo encontrado   =", mejor_valor_global,"\n")
    cat("Valor variables     =", mejor_individuo_global, "\n")
    cat("\n")
  }
  return(resultados)
}

print.optimizacion_ga <- function(obj){
  # Función print para objetos optimizacion_ga
    cat("----------------------------------------------", "\n")
    cat("Resultados optimización por algoritmo genético", "\n")
    cat("----------------------------------------------", "\n")
    cat("Fecha creación      =", attr(obj, 'fecha_creacion'), "\n")
    cat("Duración selección  = ", attr(obj, 'duracion_optimizacion'), "\n")
    cat("Número generaciones =", attr(obj, 'generaciones'), "\n")
    cat("Límite inferior     =", attr(obj, 'lim_inf'), "\n")
    cat("Límite superior     =", attr(obj, 'lim_sup'), "\n")
    cat("Optimización        =", attr(obj, 'optimizacion'), "\n")
    cat("Óptimo encontrado   =", attr(obj, 'optimo_encontrado'), "\n")
    cat("Valor variables     =", attr(obj, 'valor_variables'), "\n")
    cat("Función objetivo    =", "\n")
    cat("\n")
    print(obj$funcion_objetivo)
  }

Optimization of the Rastrigin test function with the genetic algorithm

# Ejemplo ilustrativo, en un caso real, emplear como mínimo 100 individuos por
# generación.
resultados_ga <- optimizar_ga(
                   funcion_objetivo = f_rastrigin2d,
                   n_variables      = 2,
                   optimizacion     = "maximizar",
                   limite_inf       = c(-5.12, -5.12),
                   limite_sup       = c(5.12, 5.12),
                   n_poblacion      = 30,
                   n_generaciones   = 500,
                   elitismo         = 0.01,
                   prob_mut         = 0.1,
                   distribucion     = "uniforme",
                   min_distribucion = -1,
                   max_distribucion = 1,
                   metodo_seleccion = "tournament",
                   parada_temprana  = TRUE,
                   rondas_parada    = 10,
                   tolerancia_parada = 10^-8,
                   verbose          = 0
                 )
## Algoritmo detenido en la generación 37 por falta de cambio mínimo de 1e-08 durante 10 generaciones consecutivas.

Results:
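
The summary below is presumably produced by calling the S3 print method defined above on the returned object:

print(resultados_ga)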

## ---------------------------------------------- 
## Resultados optimización por algoritmo genético 
## ---------------------------------------------- 
## Fecha creación      = 1627709508 
## Duración selección  =  0.855688095092773 secs 
## Número generaciones = 37 
## Límite inferior     = -5.12 -5.12 
## Límite superior     = 5.12 5.12 
## Optimización        = maximizar 
## Óptimo encontrado   = 80.60354 
## Valor variables     = -4.521458 4.54607 
## Función objetivo    = 
## 
## function(x1,x2){
##   y <- 20+x1^2-10*cos(2*pi*x1)+x2^2-10*cos(2*pi*x2)
##   return(y)
## }
## <bytecode: 0x0000000021a8b488>

Best individual
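
A minimal way to inspect the best solution found is to read it directly from the list returned by optimizar_ga (a sketch, using the resultados_ga object from the call above):

resultados_ga$mejor_individuo_global   # coordinates of the best individual
resultados_ga$mejor_valor_global       # objective value at that point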

Fitness evolution
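
The fitness-evolution plot can be reproduced from the df_resultados data frame returned by optimizar_ga; this is a minimal sketch, assuming ggplot2 is already loaded:

ggplot(resultados_ga$df_resultados, aes(x = generacion, y = fitness)) +
  geom_line(colour = "steelblue") +
  geom_point(size = 1) +
  labs(title = "Best fitness per generation", x = "Generation", y = "Fitness") +
  theme_bw()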

Animation of the heuristic optimization process, showing how the search for the optimum advances generation by generation:

# Se extrae la posición de cada individuo de la población en cada generación.
extraer_individuos <- function(optimizacion_ga){
                        individuos <- do.call(rbind, optimizacion_ga$poblaciones)
                        individuos <- as.data.frame(individuos)
                        individuos$generacion <- rep(
                                        x = 1:attr(optimizacion_ga, 'generaciones'),
                                        each = attr(optimizacion_ga, 'n_poblacion')
                                      )
                        return(individuos)
                     } 

historico_individuos <- extraer_individuos(resultados_ga)

animation::saveGIF(
  for (i in 1:max(historico_individuos$generacion)) {
    p <- ggplot() +
         geom_contour(data = datos,
            aes(x = x1, y = x2, z = f_x, colour = stat(level)),
            bins = 20) +
         geom_point(data = historico_individuos %>% filter(generacion == i),
            aes(x = V1, y = V2),
            color = "red",
            size = 1.8) +
         labs(title = paste("Generación:", i)) +
         theme_bw() +
         theme(legend.position = "none")
    
    # Al ser un gráfico ggplot2 es necesario hacer el print()
    print(p)
  },
  # Nombre del gif
  movie.name = "gift_ggplot_2.gif",
  # Dimensiones
  ani.width  = 500,
  ani.height = 300,
  # Tiempo de duración de cada frame (segundos)
  interval = 0.5
)
## Output at: gift_ggplot_2.gif
## [1] TRUE

Solution to the traveling salesman problem:

Utilice colonias de hormigas y algoritmos genéticos para encontrar el orden óptimo. El costo de desplazamiento entre ciudades es la suma del valor de la hora del vendedor (es un parámetro que debe estudiarse), el costo de los peajes y el costo del combustible. Cada equipo debe definir en qué carro hace el recorrido el vendedor y de allí extraer el costo del combustible.

Coordinates of the municipalities (DANE database)

To build the cost matrix, we collected, for each leg between cities, the toll costs, the estimated travel time in minutes, and the distance in kilometres. To express everything in pesos, these quantities were scaled by the fuel price per gallon, the fuel efficiency of the chosen vehicle (a Volkswagen Gol 1600 cc), and the salesman's hourly wage (a sketch of how a single matrix entry is assembled follows the list below):

  • Average fuel price in Colombia: COP 8,525 per gallon

  • Vehicle fuel efficiency: 56 km per gallon

  • Salesman's base hourly wage: COP 11,903 per hour
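
As an illustration of how a single entry of the cost matrix is assembled, the sketch below combines tolls, fuel and driver time for one leg. The per-leg inputs and the costo_tramo helper are hypothetical placeholders; only the constants come from the list above:

precio_galon <- 8525    # average fuel price (COP per gallon)
rendimiento  <- 56      # vehicle fuel efficiency (km per gallon)
sueldo_hora  <- 11903   # salesman's hourly wage (COP per hour)

costo_tramo <- function(peajes_cop, km, minutos) {
  combustible <- (km / rendimiento) * precio_galon
  tiempo      <- (minutos / 60) * sueldo_hora
  (peajes_cop + combustible + tiempo) / 1000   # expressed in thousands of COP
}

# Hypothetical leg: 250 km, COP 45,000 in tolls, 240 minutes of driving
costo_tramo(peajes_cop = 45000, km = 250, minutos = 240)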

This yields the following cost matrix, expressed in thousands of pesos:

Optimization of the problem using genetic algorithms (GA) and generation of a GIF of the process

coordenadas <- data.frame( long = c(-76.30361, -77.28111, -76.19536, -74.08175, -75.69611, -75.68111, -75.51738, -73.25322, -75.88143, -74.76459,
                                    -75.51444, -74.78132, -75.56359, -73.1198, -72.50782),
                           lat= c(3.53944, 1.21361, 4.08466, 4.60971, 4.81333, 4.53389, 5.06889, 10.46314, 8.74798, 10.91843,
                                  10.39972, 10.96854, 6.25184, 7.12539, 7.89391),
                           stringsAsFactors = F)
#Funcion para calcular la longitud del tour
tourLength <- function(tour, distMatrix) {
  tour <- c(tour, tour[1])
  route <- embed(tour, 2)[,2:1]
  sum(distMatrix[route])
}

#Fitness function to be maximized
tspFitness <- function(tour, ...) 1/tourLength(tour, ...)
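
# Quick sanity check on a hypothetical 3-city cost matrix (not the real cost_ciud):
# tourLength() sums the legs of the closed tour, including the return to the start,
# and tspFitness() inverts it so that maximizing fitness minimizes total cost.
m_demo <- matrix(c( 0, 10, 20,
                   10,  0, 15,
                   20, 15,  0), nrow = 3, byrow = TRUE)
tourLength(c(1, 2, 3), m_demo)               # 10 + 15 + 20 = 45
tspFitness(c(1, 2, 3), distMatrix = m_demo)  # 1/45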

# Iteraciones máximas
Maxter <- c(25, 50, 75, 100, 200, 400, 600, 800, 1000,  1200)

# Ciudades
x <- coordenadas[,1]
y <- coordenadas[,2]

# Utilizamos la función saveGIF para crear el GIF durante las iteraciones de la función
# ga: en cada entrenamiento guardamos el resultado en forma de gráfica y, al final,
# se genera el GIF.
animation::saveGIF(
  
  # Bucle for para cambiar el número máximo de iteraciones de la función ga
  for (i in 1:length(Maxter)) {
    set.seed(124)

    GAiter <- ga(type = "permutation", fitness = tspFitness, distMatrix = cost_ciud,
                 lower = 1, upper = 15, popSize = 50, maxiter = Maxter[i],
                 run = 500, pmutation = 0.2)

    tour <- GAiter@solution[1, ]
    tour <- c(tour, tour[1])
    n <- length(tour)

    ## Grafica
    map <- ggplot() +
      borders("world", "colombia") +
      geom_point(data = coordenadas,
                 aes(x = long,
                     y = lat), colour = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15))

    map_vectors <- map +
      ggtitle(paste( paste("Max. numero de iteraciones:", Maxter[i]), paste(" - Iteracciones realizadas: ", GAiter@iter))) +
      theme(plot.title = element_text(hjust = 0.5)) +
      geom_segment(aes(x = x[tour[-n]], y = y[tour[-n]], xend = x[tour[-1]], yend = y[tour[-1]]),
                   arrow = arrow(length = unit(0.25, "cm")), col = "steelblue", lwd= 0.6) +
      geom_text_repel(aes(x, y, label = rownames(cost_ciud)),
                      size = 3.5)+
         labs(title = paste("Generación:", i))+
      theme_bw() +
      theme(legend.position = "none")
    
     print(map_vectors)

  },
  # Nombre del gif
  movie.name = "gift_opti_viajero.gif",
  # Dimensiones
  ani.width  = 500,
  ani.height = 300,
  # Tiempo de duración de cada frame (segundos)
  interval = 0.5)

GIF of the optimization process with the genetic algorithm (GA)
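
A quick way to report the best tour found by the last GA run in the loop above is sketched here (assuming the GAiter object and the cost matrix cost_ciud are still in scope):

best_tour <- GAiter@solution[1, ]
best_cost <- tourLength(best_tour, cost_ciud)  # equivalently 1 / GAiter@fitnessValue
rownames(cost_ciud)[best_tour]                 # cities in visiting order
best_cost                                      # total cost (thousands of COP)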

Ant colony optimization

We used Python to implement the ant colony optimization algorithm.
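
The implementation below combines the pheromone level \(\tau_{ij}\) on each edge with the distance heuristic \(\eta_{ij} = 1/d_{ij}\): an ant located at node \(i\) moves to an available node \(j\) with probability

\[ p_{ij} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}{\sum_{k \in \text{available}} \tau_{ik}^{\alpha}\,\eta_{ik}^{\beta}}, \]

except that, with probability choose_best, it moves greedily to the node with the largest numerator. After every iteration the pheromone evaporates, \(\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij}\), and the edges of the best tour receive an extra deposit equal to the intensification constant; these rules correspond to the methods _update_probabilities, _choose_next_node, _evaporation and _intensify of the class.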


import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
import warnings

warnings.filterwarnings("ignore")


class AntColonyOptimizer:
    def __init__(self, ants, evaporation_rate, intensification, alpha=1.0, beta=0.0, beta_evaporation_rate=0,
                 choose_best=.1):
        """
        Ant colony optimizer.  Traverses a graph and finds either the max or min distance between nodes.
        :param ants: number of ants to traverse the graph
        :param evaporation_rate: rate at which pheromone evaporates
        :param intensification: constant added to the best path
        :param alpha: weighting of pheromone
        :param beta: weighting of heuristic (1/distance)
        :param beta_evaporation_rate: rate at which beta decays (optional)
        :param choose_best: probability to choose the best route
        """
        # Parameters
        self.ants = ants
        self.evaporation_rate = evaporation_rate
        self.pheromone_intensification = intensification
        self.heuristic_alpha = alpha
        self.heuristic_beta = beta
        self.beta_evaporation_rate = beta_evaporation_rate
        self.choose_best = choose_best

        # Internal representations
        self.pheromone_matrix = None
        self.heuristic_matrix = None
        self.probability_matrix = None

        self.map = None
        self.set_of_available_nodes = None

        # Internal stats
        self.best_series = []
        self.best = None
        self.fitted = False
        self.best_path = None
        self.fit_time = None

        # Plotting values
        self.stopped_early = False

    def __str__(self):
        string = "Ant Colony Optimizer"
        string += "\n--------------------"
        string += "\nDesigned to optimize either the minimum or maximum distance between nodes in a square matrix that behaves like a distance matrix."
        string += "\n--------------------"
        string += "\nNumber of ants:\t\t\t\t{}".format(self.ants)
        string += "\nEvaporation rate:\t\t\t{}".format(self.evaporation_rate)
        string += "\nIntensification factor:\t\t{}".format(self.pheromone_intensification)
        string += "\nAlpha Heuristic:\t\t\t{}".format(self.heuristic_alpha)
        string += "\nBeta Heuristic:\t\t\t\t{}".format(self.heuristic_beta)
        string += "\nBeta Evaporation Rate:\t\t{}".format(self.beta_evaporation_rate)
        string += "\nChoose Best Percentage:\t\t{}".format(self.choose_best)
        string += "\n--------------------"
        string += "\nUSAGE:"
        string += "\nNumber of ants influences how many paths are explored each iteration."
        string += "\nThe alpha and beta heuristics affect how much influence the pheromones or the distance heuristic weigh an ants' decisions."
        string += "\nBeta evaporation reduces the influence of the heuristic over time."
        string += "\nChoose best is a percentage of how often an ant will choose the best route over probabilistically choosing a route based on pheromones."
        string += "\n--------------------"
        if self.fitted:
            string += "\n\nThis optimizer has been fitted."
        else:
            string += "\n\nThis optimizer has NOT been fitted."
        return string

    def _initialize(self):
        """
        Initializes the model by creating the various matrices and generating the list of available nodes
        """
        assert self.map.shape[0] == self.map.shape[1], "Map is not a distance matrix!"
        num_nodes = self.map.shape[0]
        self.pheromone_matrix = np.ones((num_nodes, num_nodes))
        # Remove the diagonal since there is no pheromone from node i to itself
        self.pheromone_matrix[np.eye(num_nodes) == 1] = 0
        self.heuristic_matrix = 1 / self.map
        self.probability_matrix = (self.pheromone_matrix ** self.heuristic_alpha) * (
                self.heuristic_matrix ** self.heuristic_beta)  # element-by-element multiplication
        self.set_of_available_nodes = list(range(num_nodes))

    def _reinstate_nodes(self):
        """
        Resets available nodes to all nodes for the next iteration
        """
        self.set_of_available_nodes = list(range(self.map.shape[0]))

    def _update_probabilities(self):
        """
        After evaporation and intensification, the probability matrix needs to be updated.  This function
        does that.
        """
        self.probability_matrix = (self.pheromone_matrix ** self.heuristic_alpha) * (
                self.heuristic_matrix ** self.heuristic_beta)

    def _choose_next_node(self, from_node):
        """
        Chooses the next node based on probabilities.  If p < p_choose_best, then the best path is chosen, otherwise
        it is selected from a probability distribution weighted by the pheromone.
        :param from_node: the node the ant is coming from
        :return: index of the node the ant is going to
        """
        numerator = self.probability_matrix[from_node, self.set_of_available_nodes]
        if np.random.random() < self.choose_best:
            next_node = np.argmax(numerator)
        else:
            denominator = np.sum(numerator)
            probabilities = numerator / denominator
            next_node = np.random.choice(range(len(probabilities)), p=probabilities)
        return next_node

    def _remove_node(self, node):
        self.set_of_available_nodes.remove(node)

    def _evaluate(self, paths, mode):
        """
        Evaluates the solutions of the ants by adding up the distances between nodes.
        :param paths: solutions from the ants
        :param mode: max or min
        :return: x and y coordinates of the best path as a tuple, the best path, and the best score
        """
        scores = np.zeros(len(paths))
        coordinates_i = []
        coordinates_j = []
        for index, path in enumerate(paths):
            score = 0
            coords_i = []
            coords_j = []
            for i in range(len(path) - 1):
                coords_i.append(path[i])
                coords_j.append(path[i + 1])
                score += self.map[path[i], path[i + 1]]
            scores[index] = score
            coordinates_i.append(coords_i)
            coordinates_j.append(coords_j)
        if mode == 'min':
            best = np.argmin(scores)
        elif mode == 'max':
            best = np.argmax(scores)
        return (coordinates_i[best], coordinates_j[best]), paths[best], scores[best]

    def _evaporation(self):
        """
        Evaporate some pheromone as the inverse of the evaporation rate.  Also evaporates beta if desired.
        """
        self.pheromone_matrix *= (1 - self.evaporation_rate)
        self.heuristic_beta *= (1 - self.beta_evaporation_rate)

    def _intensify(self, best_coords):
        """
        Increases the pheromone by some scalar for the best route.
        :param best_coords: x and y (i and j) coordinates of the best route
        """
        i = best_coords[0]
        j = best_coords[1]
        self.pheromone_matrix[i, j] += self.pheromone_intensification

    def fit(self, map_matrix, iterations=100, mode='min', early_stopping_count=20, verbose=True):
        """
        Fits the ACO to a specific map.  This was designed with the Traveling Salesman problem in mind.
        :param map_matrix: Distance matrix or some other matrix with similar properties
        :param iterations: number of iterations
        :param mode: whether to get the minimum path or maximum path
        :param early_stopping_count: how many iterations of the same score to make the algorithm stop early
        :return: the best score
        """
        if verbose: print("Beginning ACO Optimization with {} iterations...".format(iterations))
        self.map = map_matrix
        start = time.time()
        self._initialize()
        num_equal = 0

        for i in range(iterations):
            start_iter = time.time()
            paths = []
            path = []

            for ant in range(self.ants):
                current_node = self.set_of_available_nodes[np.random.randint(0, len(self.set_of_available_nodes))]
                start_node = current_node
                while True:
                    path.append(current_node)
                    self._remove_node(current_node)
                    if len(self.set_of_available_nodes) != 0:
                        current_node_index = self._choose_next_node(current_node)
                        current_node = self.set_of_available_nodes[current_node_index]
                    else:
                        break

                path.append(start_node)  # go back to start
                self._reinstate_nodes()
                paths.append(path)
                path = []

            best_path_coords, best_path, best_score = self._evaluate(paths, mode)

            if i == 0:
                best_score_so_far = best_score
            else:
                if mode == 'min':
                    if best_score < best_score_so_far:
                        best_score_so_far = best_score
                        self.best_path = best_path
                elif mode == 'max':
                    if best_score > best_score_so_far:
                        best_score_so_far = best_score
                        self.best_path = best_path

            if best_score == best_score_so_far:
                num_equal += 1
            else:
                num_equal = 0

            self.best_series.append(best_score)
            self._evaporation()
            self._intensify(best_path_coords)
            self._update_probabilities()

            if verbose: print("Best score at iteration {}: {}; overall: {} ({}s)"
                              "".format(i, round(best_score, 2), round(best_score_so_far, 2),
                                        round(time.time() - start_iter)))

            if best_score == best_score_so_far and num_equal == early_stopping_count:
                self.stopped_early = True
                print("Stopping early due to {} iterations of the same score.".format(early_stopping_count))
                break

        self.fit_time = round(time.time() - start)
        self.fitted = True

        if mode == 'min':
            self.best = self.best_series[np.argmin(self.best_series)]
            if verbose: print(
                "ACO fitted.  Runtime: {} minutes.  Best score: {}".format(self.fit_time // 60, self.best))
            return self.best
        elif mode == 'max':
            self.best = self.best_series[np.argmax(self.best_series)]
            if verbose: print(
                "ACO fitted.  Runtime: {} minutes.  Best score: {}".format(self.fit_time // 60, self.best))
            return self.best
        else:
            raise ValueError("Invalid mode!  Choose 'min' or 'max'.")

    def plot(self):
        """
        Plots the score over time after the model has been fitted.
        :return: None if the model isn't fitted yet
        """
        if not self.fitted:
            print("Ant Colony Optimizer not fitted!  There exists nothing to plot.")
            return None
        else:
            fig, ax = plt.subplots(figsize=(20, 15))
            ax.plot(self.best_series, label="Best Run")
            ax.set_xlabel("Iteration")
            ax.set_ylabel("Performance")
            ax.text(.8, .6,
                    'Ants: {}\nEvap Rate: {}\nIntensify: {}\nAlpha: {}\nBeta: {}\nBeta Evap: {}\nChoose Best: {}\n\nFit Time: {}m{}'.format(
                        self.ants, self.evaporation_rate, self.pheromone_intensification, self.heuristic_alpha,
                        self.heuristic_beta, self.beta_evaporation_rate, self.choose_best, self.fit_time // 60,
                        ["\nStopped Early!" if self.stopped_early else ""][0]),
                    bbox={'facecolor': 'gray', 'alpha': 0.8, 'pad': 10}, transform=ax.transAxes)
            ax.legend()
            plt.title("Ant Colony Optimization Results (best: {})".format(np.round(self.best, 2)))
            plt.show()
            
            
            
data = pd.read_csv('Datas/costos_ciudad.csv', sep=",", index_col=0) 

data_matrix = data.to_numpy()

## CHANGE ITERATIONS

# Create a dataframe to save iterations results
#best_pa = DataFrame(columns = ['iter5','iter10', 'iter15', 'iter20', 'iter25', 'iter30', 'iter35', 'iter40', 'iter45', 'iter50'])
best_pa = pd.DataFrame(columns = ['iter4','iter6', 'iter8', 'iter10', 'iter12', 'iter14', 'iter16', 'iter18', 'iter20', 'iter22'])

# Maximum-iteration budgets to test (one ACO run per value)
#Iter = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
Iter = [4, 6, 8, 10, 12, 14, 16, 18, 20, 22]

for n in range(10):

    # Create the ant colony optimizer with the chosen parameters
    optimizer = AntColonyOptimizer(ants=100, evaporation_rate=.1, intensification=2, alpha=1, beta=1,
                                   beta_evaporation_rate=0, choose_best=.1)

    # Train the model and keep the best result
    optimizer.fit(data_matrix, Iter[n])
    
    # Save the values in a dataframe
    best_pa[best_pa.columns[n]] = optimizer.best_path

## SAVE RESULTS

# Show the best tour in terminal
## Beginning ACO Optimization with 4 iterations...
## Best score at iteration 0: 2375.11; overall: 2375.11 (0s)
## Best score at iteration 1: 2288.49; overall: 2288.49 (0s)
## Best score at iteration 2: 2303.62; overall: 2288.49 (0s)
## Best score at iteration 3: 2289.25; overall: 2288.49 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2288.488
## 2288.488
## Beginning ACO Optimization with 6 iterations...
## Best score at iteration 0: 2572.27; overall: 2572.27 (0s)
## Best score at iteration 1: 2449.41; overall: 2449.41 (0s)
## Best score at iteration 2: 2371.89; overall: 2371.89 (0s)
## Best score at iteration 3: 2398.68; overall: 2371.89 (0s)
## Best score at iteration 4: 2311.95; overall: 2311.95 (0s)
## Best score at iteration 5: 2206.22; overall: 2206.22 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2206.2219999999998
## 2206.2219999999998
## Beginning ACO Optimization with 8 iterations...
## Best score at iteration 0: 2608.27; overall: 2608.27 (0s)
## Best score at iteration 1: 2575.47; overall: 2575.47 (0s)
## Best score at iteration 2: 2410.5; overall: 2410.5 (0s)
## Best score at iteration 3: 2410.5; overall: 2410.5 (0s)
## Best score at iteration 4: 2410.5; overall: 2410.5 (0s)
## Best score at iteration 5: 2410.5; overall: 2410.5 (0s)
## Best score at iteration 6: 2255.64; overall: 2255.64 (0s)
## Best score at iteration 7: 2255.64; overall: 2255.64 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2255.6369999999997
## 2255.6369999999997
## Beginning ACO Optimization with 10 iterations...
## Best score at iteration 0: 2530.95; overall: 2530.95 (0s)
## Best score at iteration 1: 2489.45; overall: 2489.45 (0s)
## Best score at iteration 2: 2309.78; overall: 2309.78 (0s)
## Best score at iteration 3: 2158.02; overall: 2158.02 (0s)
## Best score at iteration 4: 2109.34; overall: 2109.34 (0s)
## Best score at iteration 5: 2110.1; overall: 2109.34 (0s)
## Best score at iteration 6: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 7: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 8: 2048.39; overall: 2047.63 (0s)
## Best score at iteration 9: 2047.63; overall: 2047.63 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2047.6329999999998
## 2047.6329999999998
## Beginning ACO Optimization with 12 iterations...
## Best score at iteration 0: 2589.08; overall: 2589.08 (0s)
## Best score at iteration 1: 2405.88; overall: 2405.88 (0s)
## Best score at iteration 2: 2428.21; overall: 2405.88 (0s)
## Best score at iteration 3: 2203.19; overall: 2203.19 (0s)
## Best score at iteration 4: 2128.38; overall: 2128.38 (0s)
## Best score at iteration 5: 2128.38; overall: 2128.38 (0s)
## Best score at iteration 6: 2121.82; overall: 2121.82 (0s)
## Best score at iteration 7: 2124.08; overall: 2121.82 (0s)
## Best score at iteration 8: 2101.62; overall: 2101.62 (0s)
## Best score at iteration 9: 2081.47; overall: 2081.47 (0s)
## Best score at iteration 10: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 11: 2048.39; overall: 2048.39 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2048.394
## 2048.394
## Beginning ACO Optimization with 14 iterations...
## Best score at iteration 0: 2455.18; overall: 2455.18 (0s)
## Best score at iteration 1: 2549.5; overall: 2455.18 (0s)
## Best score at iteration 2: 2501.84; overall: 2455.18 (0s)
## Best score at iteration 3: 2330.8; overall: 2330.8 (0s)
## Best score at iteration 4: 2315.61; overall: 2315.61 (0s)
## Best score at iteration 5: 2261.38; overall: 2261.38 (0s)
## Best score at iteration 6: 2315.58; overall: 2261.38 (0s)
## Best score at iteration 7: 2194.74; overall: 2194.74 (0s)
## Best score at iteration 8: 2192.76; overall: 2192.76 (0s)
## Best score at iteration 9: 2131.06; overall: 2131.06 (0s)
## Best score at iteration 10: 2131.06; overall: 2131.06 (0s)
## Best score at iteration 11: 2131.06; overall: 2131.06 (0s)
## Best score at iteration 12: 2131.06; overall: 2131.06 (0s)
## Best score at iteration 13: 2131.06; overall: 2131.06 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2131.0559999999996
## 2131.0559999999996
## Beginning ACO Optimization with 16 iterations...
## Best score at iteration 0: 2696.24; overall: 2696.24 (0s)
## Best score at iteration 1: 2540.81; overall: 2540.81 (0s)
## Best score at iteration 2: 2359.35; overall: 2359.35 (0s)
## Best score at iteration 3: 2310.92; overall: 2310.92 (0s)
## Best score at iteration 4: 2109.34; overall: 2109.34 (0s)
## Best score at iteration 5: 2404.29; overall: 2109.34 (0s)
## Best score at iteration 6: 2155.05; overall: 2109.34 (0s)
## Best score at iteration 7: 2155.05; overall: 2109.34 (0s)
## Best score at iteration 8: 2128.38; overall: 2109.34 (0s)
## Best score at iteration 9: 2107.0; overall: 2107.0 (0s)
## Best score at iteration 10: 2061.67; overall: 2061.67 (0s)
## Best score at iteration 11: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 12: 2061.67; overall: 2048.39 (0s)
## Best score at iteration 13: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 14: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 15: 2048.39; overall: 2048.39 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2048.394
## 2048.394
## Beginning ACO Optimization with 18 iterations...
## Best score at iteration 0: 2519.04; overall: 2519.04 (0s)
## Best score at iteration 1: 2502.98; overall: 2502.98 (0s)
## Best score at iteration 2: 2256.52; overall: 2256.52 (0s)
## Best score at iteration 3: 2335.86; overall: 2256.52 (0s)
## Best score at iteration 4: 2209.81; overall: 2209.81 (0s)
## Best score at iteration 5: 2256.52; overall: 2209.81 (0s)
## Best score at iteration 6: 2179.07; overall: 2179.07 (0s)
## Best score at iteration 7: 2179.07; overall: 2179.07 (0s)
## Best score at iteration 8: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 9: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 10: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 11: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 12: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 13: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 14: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 15: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 16: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 17: 2131.82; overall: 2131.82 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2131.8169999999996
## 2131.8169999999996
## Beginning ACO Optimization with 20 iterations...
## Best score at iteration 0: 2590.64; overall: 2590.64 (0s)
## Best score at iteration 1: 2648.54; overall: 2590.64 (0s)
## Best score at iteration 2: 2518.39; overall: 2518.39 (0s)
## Best score at iteration 3: 2395.19; overall: 2395.19 (0s)
## Best score at iteration 4: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 5: 2286.79; overall: 2131.82 (0s)
## Best score at iteration 6: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 7: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 8: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 9: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 10: 2131.82; overall: 2131.82 (0s)
## Best score at iteration 11: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 12: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 13: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 14: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 15: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 16: 2048.39; overall: 2048.39 (0s)
## Best score at iteration 17: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 18: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 19: 2047.63; overall: 2047.63 (0s)
## ACO fitted.  Runtime: 0 minutes.  Best score: 2047.6329999999998
## 2047.6329999999998
## Beginning ACO Optimization with 22 iterations...
## Best score at iteration 0: 2300.95; overall: 2300.95 (0s)
## Best score at iteration 1: 2535.5; overall: 2300.95 (0s)
## Best score at iteration 2: 2157.94; overall: 2157.94 (0s)
## Best score at iteration 3: 2154.28; overall: 2154.28 (0s)
## Best score at iteration 4: 2104.68; overall: 2104.68 (0s)
## Best score at iteration 5: 2060.91; overall: 2060.91 (0s)
## Best score at iteration 6: 2060.91; overall: 2060.91 (0s)
## Best score at iteration 7: 2060.91; overall: 2060.91 (0s)
## Best score at iteration 8: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 9: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 10: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 11: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 12: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 13: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 14: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 15: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 16: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 17: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 18: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 19: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 20: 2047.63; overall: 2047.63 (0s)
## Best score at iteration 21: 2047.63; overall: 2047.63 (0s)
## Stopping early due to 20 iterations of the same score.
## ACO fitted.  Runtime: 0 minutes.  Best score: 2047.6329999999998
## 2047.6329999999998
print("\nTours table results: \n")
## 
## Tours table results:
print(best_pa)

# saving the dataframe in a csv
##     iter4  iter6  iter8  iter10  iter12  iter14  iter16  iter18  iter20  iter22
## 0       8      6      9       2       0       5       2       5       9       2
## 1      12      4     11       5       2       4       5       4      11       5
## 2       6      5     10       4       5       6       4       6      10       4
## 3       4      2      8       6       4      12       6      12       8       6
## 4       5      0     12      12       6       8      12       8      12      12
## 5       0      1      6       8      12      10       8      10       6       8
## 6       2     13      2      10       8      11      10       9       4      10
## 7       1     14      5      11      10       9       9      11       5      11
## 8       3      7      4       9       9       7      11       7       2       9
## 9      13      9      0       7      11      13       7      13       0       7
## 10     14     11      1      14       7      14      14      14       1      14
## 11     10     10      3      13      14       3      13       3       3      13
## 12     11      8     14       3      13       1       3       1      13       3
## 13      9     12     13       1       3       0       1       0      14       1
## 14      7      3      7       0       1       2       0       2       7       0
## 15      8      6      9       2       0       5       2       5       9       2
print("\n Saving results in AntColonyOptimizer_results.csv")
## 
##  Saving results in AntColonyOptimizer_results.csv
best_pa.to_csv('Datas/AntColonyOptimizer_results.csv')
            
            
            

Visualization of the ant colony optimization results

Resultado_AntColonyOptimizer <- read.csv("Datas/AntColonyOptimizer_results.csv", sep = ",", row.name = 1)
iter <- c(4,6,8,10,12,14,16,18,20,22)

coordenadas <- data.frame( long = c(-76.30361, -77.28111, -76.19536, -74.08175, -75.69611, -75.68111, -75.51738, -73.25322, -75.88143, -74.76459,
                                    -75.51444, -74.78132, -75.56359, -73.1198, -72.50782),
                           lat= c(3.53944, 1.21361, 4.08466, 4.60971, 4.81333, 4.53389, 5.06889, 10.46314, 8.74798, 10.91843,
                                  10.39972, 10.96854, 6.25184, 7.12539, 7.89391),
                           stringsAsFactors = F)

# Ciudades
x <- coordenadas[,1]
y <- coordenadas[,2]

animation::saveGIF(
  # For para cambiar los datos que vamos a graficar
  for (i in 1:10) {

    tour <- as.matrix(Resultado_AntColonyOptimizer[i])
    tour <- tour + 1
    n <- length(tour)
    
    ## Grafica
    map <- ggplot() +
      borders("world", "colombia") +
      geom_point(data = coordenadas,
                 aes(x = long,
                     y = lat), colour = c(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15))
    
    map_vectors <- map +
      ggtitle(paste( paste("Numero de iteraciones:", iter[i]), "Hormigas: 100")) +
      theme(plot.title = element_text(hjust = 0.5)) +
      geom_segment(aes(x = x[tour[-n]], y = y[tour[-n]], xend = x[tour[-1]], yend = y[tour[-1]]),
                   arrow = arrow(length = unit(0.25, "cm")), col = "steelblue", lwd= 0.6) +
      geom_text_repel(aes(x, y, label = rownames(cost_ciud)),
                      size = 3.5)+
         labs(title = paste("Generación:", i))+
      theme_bw() +
      theme(legend.position = "none")
    
     print(map_vectors)

  },
  # Nombre del gif
  movie.name = "gift_opti_Hormigas.gif",
  # Dimensiones
  ani.width  = 500,
  ani.height = 300,
  # Tiempo de duración de cada frame (segundos)
  interval = 0.5)
## Output at: gift_opti_Hormigas.gif
## [1] TRUE

GIF of the optimization process with the ant colony optimization algorithm
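
As a cross-check, the tour obtained with the largest iteration budget can be evaluated with the same tourLength() and cost matrix used for the GA (a sketch, assuming Resultado_AntColonyOptimizer and cost_ciud are still in scope):

# The Python tours are 0-indexed and closed (the first city is repeated at the end),
# so we shift to 1-based indexing and drop the duplicated final city.
tour_aco <- Resultado_AntColonyOptimizer[, ncol(Resultado_AntColonyOptimizer)] + 1
tourLength(tour_aco[-length(tour_aco)], cost_ciud)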

Bibliography

[1] Joaquín Amat Rodrigo, "Optimización con algoritmo genético y Nelder-Mead", available under a Creative Commons Attribution 4.0 International (CC BY 4.0) license: https://www.cienciadedatos.net/documentos/48_optimizacion_con_algoritmo_genetico

[2] johnberroa, AntColonyOptimizer.py, Ant-Colony-Optimization GitHub repository: https://github.com/johnberroa/Ant-Colony-Optimization/blob/master/AntColonyOptimizer.py

[3] Siamak Talatahari, "Optimum Performance-Based Seismic Design of Frames Using Metaheuristic Optimization Algorithms", in Metaheuristic Applications in Structures and Infrastructures, 2013. Online: https://www.sciencedirect.com/topics/engineering/ant-colony-optimization

[4] Manish Mandloi and Vimal Bhatia, "Symbol Detection in Multiple Antenna Wireless Systems via Ant Colony Optimization", in Handbook of Neural Computation, 2017. Online: https://www.sciencedirect.com/topics/engineering/ant-colony-optimization