2020 NCA (CCF-C): Improved Grey Wolf Optimizer RSMGWO for Large-Scale Function Optimization, with In-Depth Analysis and Performance Tests
Contents
- 1. Abstract
- 2. GWO Principle
- 3. Improvement Strategies
- 4. Results
- 5. References
- 6. Code
1. Abstract
The grey wolf optimizer (GWO) is a recent nature-inspired metaheuristic with strong local search ability, but its global search ability is weak on large-scale problems. The paper analyzed here proposes an improved grey wolf optimizer, RSMGWO, which combines random opposition-based learning, an enhanced grey wolf hierarchy, and a modified evolutionary population dynamics (EPD) operator.
2. GWO Principle
For the basic algorithm, see the companion post: [Intelligent Algorithms] Grey Wolf Optimizer (GWO): Principles and Implementation.
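For reference when reading the improvement strategies below, here is a compact sketch of the classical GWO position update in its standard textbook form (variable names are illustrative and not taken from the RSMGWO listing):

```matlab
% Classical GWO update for one wolf: move toward a blend of the three leaders.
d = 10;
Alpha_pos = rand(1,d); Beta_pos = rand(1,d); Delta_pos = rand(1,d);
X = rand(1,d);                  % current wolf position
a = 2;                          % decreases linearly from 2 to 0 over iterations
A = 2*a*rand(1,d)-a; C = 2*rand(1,d);
X1 = Alpha_pos - A.*abs(C.*Alpha_pos - X);
A = 2*a*rand(1,d)-a; C = 2*rand(1,d);
X2 = Beta_pos  - A.*abs(C.*Beta_pos  - X);
A = 2*a*rand(1,d)-a; C = 2*rand(1,d);
X3 = Delta_pos - A.*abs(C.*Delta_pos - X);
X = (X1 + X2 + X3)/3;           % averaged estimate from alpha, beta, delta
```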
3. Improvement Strategies

Enhanced grey wolf hierarchy

Inspired by particle swarm optimization (PSO), the authors build a GWO variant with a strengthened wolf-pack hierarchy (GWOSH). PSO, a classic swarm-intelligence algorithm, searches complex spaces effectively by learning from the personal best and global best positions. In GWOSH every wolf has two update modes: one searches around the global best along randomly chosen dimensions, which strengthens the pack hierarchy; the other is the classical GWO update. Each wolf chooses the mode that fits its social rank, which improves search efficiency and optimization performance. A level indicator $S_i(t)$ records each wolf's rank, taking the values 1 to 4 for $\alpha$, $\beta$, $\delta$, $\omega$ respectively, and a decision factor $DF_i(t)$ determines which mode wolf $i$ uses to update its position:

$$DF_i(t)=\frac{L-S_i(t)}{L-1}$$

where $L$ is the total number of hierarchy levels; the pack has four levels, so $L=4$.
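A minimal sketch of how the decision factor can drive mode selection; comparing $DF_i(t)$ against a uniform random number is an assumption for illustration (the paper's exact rule may differ), and all names are hypothetical:

```matlab
% Decision factor: higher-ranked wolves take the hierarchy-strengthening
% mode more often; omega wolves fall back to the classical GWO update.
L  = 4;                    % total number of hierarchy levels
S  = [1 2 3 4];            % example ranks: alpha, beta, delta, omega
DF = (L - S)/(L - 1);      % alpha -> 1, omega -> 0
for i = 1:numel(S)
    if rand <= DF(i)
        mode = 'global-best / random-dimension mode';
    else
        mode = 'classical GWO mode';
    end
    fprintf('wolf %d (rank %d): %s\n', i, S(i), mode);
end
```

Under this rule the $\alpha$ wolf always takes the hierarchy-strengthening mode and the $\omega$ wolves always take the classical update, which matches the per-rank branches in the full listing in Section 6.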
Modified EPD operator

The evolutionary population dynamics (EPD) mechanism comes from the theory of self-organized criticality (SOC), which holds that a dynamic system is most efficient at its critical state: small perturbations in a balanced population can sustain the balance without external intervention. In biological evolution, removing the worst individuals has a marked effect on the evolution of the whole population; the process of removing them and relocating them is called EPD. EPD effectively raises the population's median fitness, and it is cheap and efficient to apply in population-based metaheuristics. RSMGWO uses EPD to remove a batch of poorly performing individuals and relocate them either near the best solutions or at random positions in the search space.

(a) Relocate a worst candidate around a leader:

$$\boldsymbol{X}(t+1)=\begin{cases}\boldsymbol{X}_p-r_1\,(\boldsymbol{X}_p-\boldsymbol{lb}), & r\le 0.5\\ \boldsymbol{X}_p+r_1\,(\boldsymbol{ub}-\boldsymbol{X}_p), & r>0.5\end{cases}$$

(b) Relocate a worst candidate to a random position in the search space:

$$\boldsymbol{X}(t+1)=\boldsymbol{lb}+r\,(\boldsymbol{ub}-\boldsymbol{lb})$$

where $\boldsymbol{X}_p$ is the position of a leader ($\alpha$, $\beta$, or $\delta$ wolf) and $r$, $r_1$ are uniform random numbers in $[0,1]$.
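A short sketch of one EPD relocation mirroring Eqs. (a) and (b); the 0.75 leader-versus-restart split follows the listing in Section 6, which picks $\alpha$, $\beta$, $\delta$, or a random restart with probability 0.25 each, and the variable names are hypothetical:

```matlab
% One EPD relocation of a worst candidate solution.
d  = 10;
lb = -100*ones(1,d);  ub = 100*ones(1,d);
Xp = lb + (ub - lb).*rand(1,d);   % stand-in for a leader (alpha/beta/delta)
r  = rand;  r1 = rand;
if rand <= 0.75                   % Eq. (a): relocate around the chosen leader
    if r <= 0.5
        Xnew = Xp - r1*(Xp - lb);
    else
        Xnew = Xp + r1*(ub - Xp);
    end
else                              % Eq. (b): random restart inside the box
    Xnew = lb + r*(ub - lb);
end
```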
Random opposition-based learning strategy

To further strengthen exploration, a random opposition-based learning strategy is introduced: at each iteration one grey wolf is selected at random and its position is replaced by the opposite point of the $\alpha$ wolf with respect to the bounds:

$$X_{rs}=\boldsymbol{lb}+(\boldsymbol{ub}-X_\alpha)$$

where $X_\alpha$ is the position of the $\alpha$ wolf.
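A sketch of the strategy applied to one randomly chosen wolf; the full listing in Section 6 additionally excludes the three leaders when drawing the index, and the sphere function here is only a stand-in fitness:

```matlab
% Random opposition-based learning: replace one random wolf with the
% opposite point of the alpha wolf with respect to the search bounds.
N = 30;  d = 10;
lb = -100*ones(1,d);  ub = 100*ones(1,d);
Positions = lb + (ub - lb).*rand(N,d);   % current pack (one wolf per row)
fit = sum(Positions.^2, 2);              % stand-in fitness: sphere function
[~, ia] = min(fit);
Xalpha = Positions(ia, :);               % alpha = current best wolf
k = randi(N);                            % wolf selected at random
Positions(k, :) = lb + (ub - Xalpha);    % X_rs = lb + (ub - X_alpha)
```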
Flowchart (figure omitted)
4. Results
5. References
[1] Zhang X, Wang X, Chen H, et al. Improved GWO for large-scale function optimization and MLP optimization in cancer identification. Neural Computing and Applications, 2020, 32: 1305-1325.
6. Code
```matlab
function [Alpha_score,Cbest,Alpha_pos]=RSMGWO(f,lb,ub,d,M,N)
% Improved GWO for large-scale function optimization and MLP optimization
% in cancer identification (RSMGWO)
% Programmer: Xinming Zhang
% Date: 2018-6-26
% Initialize the alpha, beta, and delta wolves
Alpha_pos=zeros(1,d);
Alpha_score=inf;  % change this to -inf for maximization problems
Beta_pos=zeros(1,d);
Beta_score=inf;   % change this to -inf for maximization problems
Delta_pos=zeros(1,d);
Delta_score=inf;  % change this to -inf for maximization problems
% Initialize the positions of the search agents
aa=repmat(lb,N,1); bb=repmat(ub,N,1);
Positions=aa+(bb-aa).*rand(N,d);
X=Positions;
X(:,d+1)=feval(f,X(:,1:d));          % fitness stored in the last column
X=PopSort(X);                        % helper: sort ascending by fitness
Positions=X(:,1:d); s1=1; s2=N+1; s3=N+1;
fit=X(:,d+1);
for i=1:N
    if fit(i)<Alpha_score
        Alpha_score=fit(i);          % update alpha
        Alpha_pos=Positions(i,:); s1=i;
    end
    if fit(i)>Alpha_score && fit(i)<Beta_score
        Beta_score=fit(i);           % update beta
        Beta_pos=Positions(i,:); s2=i;
    end
    if fit(i)>Alpha_score && fit(i)>Beta_score && fit(i)<Delta_score
        Delta_score=fit(i);          % update delta
        Delta_pos=Positions(i,:); s3=i;
    end
end
Cbest=zeros(1,M);
Cbest(1)=Alpha_score;
a1=2;
l=0;  % loop counter
% Main loop
while l<M
    a=2-l*(2/M);  % a decreases linearly from 2 to 0
    % EPD: sort by fitness and relocate the worst quarter of the pack
    [~,index]=sort(fit);
    Positions=Positions(index,:);
    Num=round(N/4);
    for i=1:Num
        pa=rand;  % pick alpha, beta, delta, or a random restart (0.25 each)
        if pa<=0.25
            temp=Alpha_pos;
        elseif pa<=0.5
            temp=Beta_pos;
        elseif pa<=0.75
            temp=Delta_pos;
        else
            temp=[];
        end
        if isempty(temp)  % Eq. (b): random position in the search space
            Positions(N-Num+i,1:d)=lb+(ub-lb).*rand(1,d);
        elseif rand<0.5   % Eq. (a): relocate around the chosen leader
            % (a stray undefined factor 'ba' appeared here in the source
            % listing; it is dropped so the step matches Eq. (a))
            Positions(N-Num+i,1:d)=temp+(ub-temp).*rand(1,d);
        else
            Positions(N-Num+i,1:d)=temp-(temp-lb).*rand(1,d);
        end
    end
    % Randomly select one non-leader wolf for opposition-based learning
    Num=ceil(N*rand);
    while Num==s1||Num==s2||Num==s3, Num=ceil(N*rand); end
    for i=1:N
        if i==Num
            Positions(i,:)=lb+(ub-Alpha_pos);  % random opposition-based learning
        elseif i==s1  % alpha wolf
            for j=1:d
                cnum=ceil(d*rand);
                if cnum~=j
                    Positions(i,j)=Alpha_pos(cnum);
                else
                    rnum=ceil(N*rand);
                    while rnum==i, rnum=ceil(N*rand); end
                    rnum1=ceil(N*rand);
                    while rnum1==i || rnum1==rnum, rnum1=ceil(N*rand); end
                    if rand<=0.5
                        Positions(i,j)=Positions(i,j)+3.5*a*rand*(Positions(rnum,j)-Positions(rnum1,j));
                    else
                        Positions(i,j)=Beta_pos(j)-a*(2*rand-1)*abs(a1*rand*Beta_pos(j)-Positions(i,j));
                    end
                end
            end
        elseif i==s2  % beta wolf
            cc=rand(1,d)>=0.67; snum=sum(cc);
            Positions(i,cc)=Alpha_pos(cc)-a*(2*rand(1,snum)-1).*abs(a1*rand(1,snum).*Alpha_pos(cc)-Positions(i,cc));
            snum=d-snum; cnum=ceil(rand(1,snum)*d);
            Positions(i,~cc)=(Alpha_pos(cnum)+Beta_pos(cnum))/2;
        elseif i==s3  % delta wolf
            cc=rand(1,d)>=0.33; snum=sum(cc);
            X1=Alpha_pos(cc)-a*(2*rand(1,snum)-1).*abs(a1*rand(1,snum).*Alpha_pos(cc)-Positions(i,cc));
            X2=Beta_pos(cc)-a*(2*rand(1,snum)-1).*abs(a1*rand(1,snum).*Beta_pos(cc)-Positions(i,cc));
            Positions(i,cc)=(X1+X2)/2;
            snum=d-snum; cnum=ceil(d*rand(1,snum));
            Positions(i,~cc)=(Alpha_pos(cnum)+Beta_pos(cnum)+Delta_pos(cnum))/3;
        else  % all other wolves: classical GWO update
            X1=Alpha_pos-a*(2*rand(1,d)-1).*abs(a1*rand(1,d).*Alpha_pos-Positions(i,:));
            X2=Beta_pos-a*(2*rand(1,d)-1).*abs(a1*rand(1,d).*Beta_pos-Positions(i,:));
            X3=Delta_pos-a*(2*rand(1,d)-1).*abs(a1*rand(1,d).*Delta_pos-Positions(i,:));
            Positions(i,:)=(X1+X2+X3)/3;
        end
    end
    Positions=ControlBound(Positions,aa,bb);  % helper: clamp positions to [lb,ub]
    fit=feval(f,Positions);                   % evaluate fitness
    for i=1:N
        fitness=fit(i);
        % update alpha, beta, and delta
        if fitness<Alpha_score
            Alpha_score=fitness; Alpha_pos=Positions(i,:); s1=i;
        end
        if fitness>Alpha_score && fitness<Beta_score
            Beta_score=fitness; Beta_pos=Positions(i,:); s2=i;
        end
        if fitness>Alpha_score && fitness>Beta_score && fitness<Delta_score
            Delta_score=fitness; Delta_pos=Positions(i,:); s3=i;
        end
    end
    l=l+1;
    Cbest(l)=Alpha_score;
end
```
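The listing calls two helpers that are not shown: PopSort and ControlBound. The sketches below are inferred solely from how they are used (sort the population ascending by the fitness stored in the last column; clamp every position into the box) and may differ from the authors' originals; save each as its own .m file:

```matlab
function Xs = PopSort(X)
% Sort rows ascending by the fitness value stored in the last column.
[~, idx] = sort(X(:, end));
Xs = X(idx, :);
```

```matlab
function P = ControlBound(P, aa, bb)
% Clamp each position into [lb, ub]; aa and bb are lb and ub replicated N times.
P = min(max(P, aa), bb);
```

A sample call on the sphere function; the fitness handle must evaluate all rows of the population at once, as the listing assumes:

```matlab
f  = @(X) sum(X.^2, 2);                    % sphere function, row-wise
d  = 30;  N = 30;  M = 500;                % dimension, pack size, iterations
lb = -100*ones(1,d);  ub = 100*ones(1,d);
[Alpha_score, Cbest, Alpha_pos] = RSMGWO(f, lb, ub, d, M, N);
```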