Particle Swarm Optimization in Practice
flyfish
Particle Swarm Optimization (PSO), also known as the particle swarm algorithm.
In the figure, the red cross marks the ideal spot that every particle wants to reach. The whole idea can be summed up in one phrase: information is shared, decisions are made individually.
After the figure comes the code, and then the explanation.
A pure PSO implementation (no optimization library):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
np.set_printoptions(precision = 4)
# Objective function
def f(x, y):
    return (x-3.14)**2 + (y-3.14)**2 + np.sin(3*x+3.14) + np.cos(4*y-3.14)
# Evaluate the function on a grid for plotting
x, y = np.array(np.meshgrid(np.linspace(0,5,100), np.linspace(0,5,100)))
z = f(x, y)
# Find the global minimum on the grid
x_min = np.around(x.ravel()[z.argmin()],4)
y_min = np.around(y.ravel()[z.argmin()],4)
print("x_min:",x_min)
print("y_min:",y_min)
# Hyperparameters of the algorithm
c1 = c2 = 0.1  # cognitive (personal) and social (global) coefficients
w = 0.8  # inertia weight
# Create the particles
n_particles = 20  # number of particles
np.random.seed(100)
X = np.random.rand(2, n_particles) * 5
V = np.random.randn(2, n_particles) * 0.1
print("X:",X)
print("V:",V)
# Initialize personal and global bests
pbest = X.copy()  # p = personal = cognitive (copy so pbest can be updated independently of X)
pbest_obj = f(X[0], X[1])
gbest = pbest[:, pbest_obj.argmin()] #g= global = social
gbest_obj = pbest_obj.min()
# One iteration of particle swarm optimization
def update():
    global V, X, pbest, pbest_obj, gbest, gbest_obj
    # Update velocities and positions
    r1, r2 = np.random.rand(2)
    V = w * V + c1*r1*(pbest - X) + c2*r2*(gbest.reshape(-1,1) - X)
    X = X + V
    obj = f(X[0], X[1])
    # Update personal bests where the new position is better
    pbest[:, (pbest_obj >= obj)] = X[:, (pbest_obj >= obj)]
    pbest_obj = np.array([pbest_obj, obj]).min(axis=0)
    # Update the global best
    gbest = pbest[:, pbest_obj.argmin()]
    gbest_obj = pbest_obj.min()
# Contour plot
fig, ax = plt.subplots(figsize=(8,6))
fig.set_tight_layout(True)
img = ax.imshow(z, extent=[0, 5, 0, 5], origin='lower', cmap='viridis', alpha=0.5)
fig.colorbar(img, ax=ax)
ax.plot([x_min], [y_min], marker='x', markersize=5, color="red")
contours = ax.contour(x, y, z, 10, colors='black', alpha=0.4)
ax.clabel(contours, inline=True, fontsize=8, fmt="%.0f")
pbest_plot = ax.scatter(pbest[0], pbest[1], marker='o', color='black', alpha=0.5)
p_plot = ax.scatter(X[0], X[1], marker='o', color='blue', alpha=0.5)
p_arrow = ax.quiver(X[0], X[1], V[0], V[1], color='blue', width=0.005, angles='xy', scale_units='xy', scale=1)
gbest_plot = plt.scatter([gbest[0]], [gbest[1]], marker='*', s=100, color='black', alpha=0.4)
ax.set_xlim([0,5])
ax.set_ylim([0,5])
# One animation step: update the swarm, then redraw
def animate(i):
    title = 'Iteration {:02d}'.format(i)
    # Update the swarm
    update()
    # Redraw
    ax.set_title(title)
    pbest_plot.set_offsets(pbest.T)
    p_plot.set_offsets(X.T)
    p_arrow.set_offsets(X.T)
    p_arrow.set_UVC(V[0], V[1])
    gbest_plot.set_offsets(gbest.reshape(1,-1))
    return ax, pbest_plot, p_plot, p_arrow, gbest_plot
anim = FuncAnimation(fig, animate, frames=list(range(1,20)), interval=200, blit=False, repeat=True)
anim.save("PSO_1.gif", dpi=120, writer="imagemagick")  # requires ImageMagick to be installed
print("PSO found best solution at f({})={}".format(gbest, ((gbest_obj))))
print("Global optimal at f({})={}".format([x_min,y_min], (f(x_min,y_min))))
Output
x_min: 2.7273
y_min: 3.1313
X: [[2.717 1.3918 2.1226 4.2239 0.0236 0.6078 3.3537 4.1293 0.6835 2.8755
4.4566 1.046 0.9266 0.5419 1.0985 4.8931 4.0584 0.8597 4.0811 1.3704]
[2.1585 4.7001 4.0882 1.6806 0.8771 1.8642 0.0284 1.2621 3.9783 0.0763
2.9942 3.019 0.5257 1.9097 0.1824 4.4521 4.9046 0.2997 4.4527 2.8845]]
V: [[ 0.0731 0.1362 -0.0326 0.0056 0.0222 -0.1443 -0.0756 0.0816 0.075
-0.0456 0.119 -0.1691 -0.1356 -0.1232 -0.0544 -0.0668 0.0007 -0.0613
0.13 -0.1733]
[-0.0983 0.0358 -0.1614 0.1471 -0.1188 -0.055 -0.094 -0.0828 0.0109
0.0508 -0.0862 0.1249 -0.008 -0.089 -0.0882 0.0019 0.0238 0.0014
-0.1636 -0.1044]]
PSO found best solution at f([2.7157 3.1329])=-1.7771745572809252
Global optimal at f([2.7273, 3.1313])=-1.7760464968972247
The objective function:

f(x,y) = (x-3.14)^2 + (y-3.14)^2 + \sin(3x+3.14) + \cos(4y-3.14)
So how do we find the minimum of this function? One option is exhaustive search: evaluate the function at every point of a grid over the surface and take the lowest value. Another is to sample random points on the surface and keep the one with the lowest value. Note, though, that the shape of the surface suggests something better: once we find a point with a small value, even smaller values are likely to be found nearby.
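For reference, here is a small sketch of both baselines on the same objective (the grid resolution and the number of random samples are arbitrary choices, not from the original code):

import numpy as np

def f(x, y):
    return (x-3.14)**2 + (y-3.14)**2 + np.sin(3*x+3.14) + np.cos(4*y-3.14)

# Exhaustive (grid) search over [0, 5] x [0, 5]
gx, gy = np.meshgrid(np.linspace(0, 5, 100), np.linspace(0, 5, 100))
gz = f(gx, gy)
print("grid search minimum:", gx.ravel()[gz.argmin()], gy.ravel()[gz.argmin()], gz.min())

# Random sampling: draw some points and keep the best one
rx, ry = np.random.rand(2, 1000) * 5
rz = f(rx, ry)
print("random sampling minimum:", rx[rz.argmin()], ry[rz.argmin()], rz.min())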
Suppose we have a swarm of particles. We denote the position of particle i at iteration t as X^i(t), which in the example above is the coordinate pair X^i(t) = (x^i(t), y^i(t)).
Besides its position, each particle also has a velocity, denoted V^i(t) = (v_x^i(t), v_y^i(t)).
At the next iteration, each particle's position is updated as

X^i(t+1) = X^i(t) + V^i(t+1)
Splitting the uppercase X into its lowercase coordinates x and y, this reads

\begin{aligned} x^i(t+1) &= x^i(t) + v_x^i(t+1) \\ y^i(t+1) &= y^i(t) + v_y^i(t+1) \end{aligned}
Meanwhile, the velocity is updated according to the rule

V^i(t+1) = w V^i(t) + c_1 r_1 (pbest^i - X^i(t)) + c_2 r_2 (gbest - X^i(t))
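To make the role of each term concrete, here is a minimal single-particle sketch; the numbers are made up, only the formula itself comes from above:

import numpy as np

w, c1, c2 = 0.8, 0.1, 0.1            # same hyperparameters as in the code above
X_i     = np.array([2.0, 1.0])       # current position of particle i (made up)
V_i     = np.array([0.1, -0.05])     # current velocity of particle i (made up)
pbest_i = np.array([2.5, 1.5])       # best position particle i has seen (made up)
gbest   = np.array([2.7, 3.1])       # best position the whole swarm has seen (made up)
r1, r2  = np.random.rand(2)          # fresh random numbers each iteration

inertia   = w * V_i                      # keep moving in the current direction
cognitive = c1 * r1 * (pbest_i - X_i)    # pull toward the particle's own best
social    = c2 * r2 * (gbest - X_i)      # pull toward the swarm's best
V_next = inertia + cognitive + social    # V^i(t+1)
X_next = X_i + V_next                    # X^i(t+1)
print(V_next, X_next)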
The same update is also written in other notations.

Form 1:

v_{ij}(t+1) = w\,v_{ij}(t) + c_p r_{1j}(t)\,[y_{ij}(t) - x_{ij}(t)] + c_g r_{2j}(t)\,[\hat{y}_j(t) - x_{ij}(t)]

where y_{ij}(t) is the personal best of particle i in dimension j, \hat{y}_j(t) is the global best in dimension j, and c_p, c_g play the roles of c_1, c_2 above.
r_1 and r_2 are random numbers between 0 and 1, while w, c_1, and c_2 are parameters of the PSO algorithm.
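Note that Form 1 draws a separate random number r_{1j}, r_{2j} for every dimension j, whereas the code above draws a single scalar pair r1, r2 per iteration. A sketch of the per-dimension variant, reusing the X, V, pbest and gbest arrays defined earlier:

# A sketch of the per-dimension variant (the code above uses scalar r1, r2 instead);
# the shapes broadcast against X, V and pbest, which are (2, n_particles).
r1 = np.random.rand(2, 1)   # one random number per dimension, shared by all particles
r2 = np.random.rand(2, 1)   # many implementations also draw one per particle: np.random.rand(*X.shape)
V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest.reshape(-1, 1) - X)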
The X in the formulas above was split into the lowercase coordinates x, y; in the code these are X[0] and X[1].

pbest^i is the position at which particle i has found its best (lowest) value of f(X) so far: the cognitive (personal) component, pbest in the code.

gbest is the best position explored by all particles in the swarm: the social (global) component, gbest in the code.
Note that pbest^i and X^i(t) = (x^i(t), y^i(t)) are both position vectors:

X^i(t) = (x^i(t), y^i(t)) is the current position of particle i.
pbest^i is likewise a coordinate pair: the best position particle i has found so far.

The difference pbest^i - X^i(t) is a vector subtraction.
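A tiny sketch of that subtraction with made-up coordinates:

import numpy as np

# pbest^i - X^i(t): a vector pointing from the current position toward the
# particle's personal best (coordinates here are illustrative)
X_i     = np.array([2.0, 1.0])
pbest_i = np.array([2.5, 1.5])
print(pbest_i - X_i)   # [0.5 0.5]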
(Figure: vector subtraction)
Implementation with the scikit-opt library
pip install scikit-opt
The library wraps seven heuristic algorithms: differential evolution, genetic algorithm, particle swarm optimization, simulated annealing, ant colony optimization, artificial fish swarm, and immune optimization.
Source code:
https://github.com/guofei9987/scikit-opt/
Inputs
Parameter  Default  Meaning
func  -  objective function
n_dim  -  dimension of the objective function
size_pop  50  population size
max_iter  200  maximum number of iterations
lb  None  lower bound of each variable
ub  None  upper bound of each variable
w  0.8  inertia weight
c1  0.5  cognitive (personal) coefficient
c2  0.5  social (global) coefficient
constraint_ueq  empty tuple  inequality constraints
Outputs
pso.record_value  positions, velocities, and objective values of every generation; recorded only when pso.record_mode = True
pso.gbest_y_hist  history of the best objective value
pso.best_y  best objective value (during iteration, pso.gbest_x and pso.gbest_y are used)
pso.best_x  input values corresponding to the best objective value
import numpy as np
from sko.PSO import PSO
def demo_func(X):
    x, y = X
    return (x-3.14)**2 + (y-3.14)**2 + np.sin(3*x+3.14) + np.cos(4*y-3.14)
max_iter = 30
# lb: lower bound of each variable
# ub: upper bound of each variable
pso = PSO(func=demo_func, n_dim=2, pop=40, max_iter=max_iter, lb=[-1, -1], ub=[5, 5])
pso.record_mode = True
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
record_value = pso.record_value
X_list, V_list = record_value['X'], record_value['V']
fig, ax = plt.subplots(1, 1)
ax.set_title('title', loc='center')
line = ax.plot([], [], 'b.')
X_grid, Y_grid = np.meshgrid(np.linspace(0,5,100), np.linspace(0,5,100))
Z_grid = demo_func((X_grid, Y_grid))
ax.contour(X_grid, Y_grid, Z_grid, 10)
ax.set_xlim([0,5])
ax.set_ylim([0,5])
plt.ion()
p = plt.show()
def update_scatter(frame):
    i, j = frame // 10, frame % 10
    ax.set_title('iter = ' + str(i))
    # Interpolate between recorded positions for a smoother animation
    X_tmp = X_list[i] + V_list[i] * j / 10.0
    plt.setp(line, 'xdata', X_tmp[:, 0], 'ydata', X_tmp[:, 1])
    return line
ani = FuncAnimation(fig, update_scatter, blit=True, interval=25, frames=max_iter * 10)
plt.show()
ani.save('PSO_2.gif', writer='pillow')
Result
best_x is [2.71374964 3.14188032] best_y is [-1.77777509]
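The output table above also lists pso.gbest_y_hist; a minimal convergence-curve sketch, assuming the pso object from the run above:

import matplotlib.pyplot as plt

# Best objective value found up to each iteration (see the output table above)
plt.figure()
plt.plot(pso.gbest_y_hist)
plt.xlabel('iteration')
plt.ylabel('best objective value')
plt.show()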
Implementation with an added constraint
We draw a red ring on the plot to mark the feasible region: the ideal spot must now lie inside the red circle, so that is where the particles should go.
import numpy as np
from sko.PSO import PSO
def demo_func(X):
    x, y = X
    return (x-3.14)**2 + (y-3.14)**2 + np.sin(3*x+3.14) + np.cos(4*y-3.14)
# Nonlinear inequality constraint: (x[0] - 1) ** 2 + (x[1] - 1) ** 2 - 1 <= 0
constraint_ueq = (
    lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2 - 1,
)
max_iter = 30
# lb: lower bound of each variable
# ub: upper bound of each variable
pso = PSO(func=demo_func, n_dim=2, pop=40, max_iter=max_iter, lb=[-1, -1], ub=[5, 5], constraint_ueq=constraint_ueq)
pso.record_mode = True
pso.run()
print('best_x is ', pso.gbest_x, 'best_y is', pso.gbest_y)
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
record_value = pso.record_value
X_list, V_list = record_value['X'], record_value['V']
fig, ax = plt.subplots(1, 1)
ax.set_title('title', loc='center')
line = ax.plot([], [], 'b.')
X_grid, Y_grid = np.meshgrid(np.linspace(0,5,100), np.linspace(0,5,100))
Z_grid = demo_func((X_grid, Y_grid))
ax.contour(X_grid, Y_grid, Z_grid, 10)
ax.set_xlim([0,5])
ax.set_ylim([0,5])
draw_circle = plt.Circle((1, 1), 1, fill=False, color='r')  # feasible region of the constraint
ax.add_artist(draw_circle)
plt.ion()
p = plt.show()
def update_scatter(frame):
    i, j = frame // 10, frame % 10
    ax.set_title('iter = ' + str(i))
    X_tmp = X_list[i] + V_list[i] * j / 10.0
    plt.setp(line, 'xdata', X_tmp[:, 0], 'ydata', X_tmp[:, 1])
    return line
ani = FuncAnimation(fig, update_scatter, blit=True, interval=25, frames=max_iter * 10)
plt.show()
ani.save('PSO_2.gif', writer='pillow')
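As a quick sanity check (a sketch, not part of the original example), the reported optimum should satisfy the inequality constraint:

# The inequality (x-1)^2 + (y-1)^2 - 1 <= 0 should hold at the constrained optimum
bx, by = pso.gbest_x
print('constraint value:', (bx - 1) ** 2 + (by - 1) ** 2 - 1)   # expected to be <= 0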