Artificial gorilla troops optimizer based on double random disturbance and its application to engineering problems
-
Keywords:
- artificial gorilla troops optimizer
- Halton sequence
- adaptive position search
- double random disturbance strategy
- dimension-by-dimension update
Abstract: The traditional artificial gorilla troops optimizer (GTO) easily falls into local optima, converges slowly, and offers limited optimization accuracy. To address these problems, an artificial gorilla troops optimizer based on a double random disturbance strategy (DGTO) was proposed. Firstly, a Halton sequence was introduced to initialize the population and increase its diversity. Secondly, a multi-dimensional random number strategy was applied during the optimization stage and an adaptive position search mechanism was proposed for the exploration stage, improving the convergence speed. Thirdly, a double random disturbance strategy was proposed to counteract the grouping effect of the gorillas and enhance the algorithm's ability to escape local optima. Finally, individual positions were updated with a dimension-by-dimension update strategy, which improved the convergence accuracy of the algorithm. The optimization results on ten benchmark test functions and the Wilcoxon rank-sum test show that the improved algorithm achieves clear gains in optimization accuracy and convergence speed. In addition, a comparative experiment on a practical engineering optimization problem further verifies the superiority of the proposed algorithm in dealing with real-world engineering problems.
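To make the first improvement concrete, below is a minimal Python sketch (not the authors' code) of Halton-sequence population initialization: low-discrepancy points are generated from radical inverses in distinct prime bases and then scaled to the search bounds, which spreads the initial positions more evenly than uniform random sampling. The function names and the 30-individual, 10-dimensional example are illustrative assumptions.

import numpy as np

# First 30 primes, one base per search dimension (enough for the 10/30-dimensional tests).
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
          53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113]

def radical_inverse(index, base):
    """Van der Corput radical inverse of a positive integer in the given base."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def halton_init(pop_size, dim, lb, ub):
    """Initial population of pop_size points in [lb, ub]^dim drawn from a Halton sequence."""
    unit = np.array([[radical_inverse(i + 1, PRIMES[d]) for d in range(dim)]
                     for i in range(pop_size)])
    return lb + unit * (ub - lb)  # scale the unit hypercube to the search bounds

# Example: 30 individuals in 10 dimensions on [-100, 100] (the Sphere bounds in Table 3)
population = halton_init(30, 10, -100.0, 100.0)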
-
Table 2. Parameter settings
Algorithm  Main parameters
GTO        s = 0.03, w = 0.8, β = 3
GWO        none
DE         F = 0.5, CR = 0.3
WOA        b = 1
DGTO       s = 0.03, w = 0.8, β = 3
Table 3. Benchmark test functions
No.   Function                             Domain        Dimension  Optimum
f1    Sphere                               [−100, 100]   10/30      0
f2    Schwefel's problem 2.22              [−10, 10]     10/30      0
f3    Schwefel's problem 1.2               [−100, 100]   10/30      0
f4    Schwefel's problem 2.21              [−100, 100]   10/30      0
f5    Generalized Rosenbrock's function    [−30, 30]     10/30      0
f6    Step function                        [−100, 100]   10/30      0
f7    Generalized Schwefel's problem 2.26  [−500, 500]   10/30      −418.98 × dimension
f8    Generalized penalized function 2     [−50, 50]     10/30      0
f9    Kowalik's function                   [−5, 5]       4          0.0003
f10   Hartman's function 2                 [0, 1]        6          −3.32
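For reference, the first two benchmarks in Table 3 are sketched below in Python using their standard textbook forms (an assumption, since the paper's exact definitions are not reproduced on this page).

import numpy as np

def sphere(x):
    """f1, Sphere: sum of squares; global optimum 0 at the origin."""
    return np.sum(x ** 2)

def schwefel_2_22(x):
    """f2, Schwefel's problem 2.22: sum plus product of |x_i|; global optimum 0 at the origin."""
    ax = np.abs(x)
    return np.sum(ax) + np.prod(ax)

# Evaluate both on random points drawn from the domains listed in Table 3
print(sphere(np.random.uniform(-100, 100, size=30)))
print(schwefel_2_22(np.random.uniform(-10, 10, size=30)))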
Table 4. Comparison of optimization results of DGTO and other improved GTO algorithms
Function  Algorithm  Mean           Std. dev.
f1        GTO        0              0
f1        IGTO       0              0
f1        MGTO       0              0
f1        DGTO       0              0
f2        GTO        6.67×10^−195   0
f2        IGTO       0              0
f2        MGTO       0              0
f2        DGTO       0              0
f3        GTO        0              0
f3        IGTO       0              0
f3        MGTO       0              0
f3        DGTO       0              0
f4        GTO        1.01×10^−192   0
f4        IGTO       0              0
f4        MGTO       0              0
f4        DGTO       0              0
f5        GTO        3.17×10^0      8.22×10^0
f5        IGTO       7.46×10^−5     8.57×10^−5
f5        MGTO       2.54×10^−5     3.94×10^−5
f5        DGTO       7.42×10^−6     3.39×10^−5
f6        GTO        2.56×10^−7     4.19×10^−7
f6        IGTO       1.01×10^−7     1.15×10^−7
f6        MGTO       3.37×10^−14    3.76×10^−14
f6        DGTO       2.03×10^−32    1.66×10^−32
f7        GTO        −1.26×10^4     3.05×10^−5
f7        IGTO       −1.26×10^4     3.02×10^−5
f7        MGTO       −1.26×10^4     1.03×10^−6
f7        DGTO       −1.26×10^4     4.64×10^−12
f8        GTO        3.30×10^−3     5.12×10^−3
f8        IGTO       1.14×10^−7     3.34×10^−7
f8        MGTO       1.00×10^−7     1.43×10^−7
f8        DGTO       7.24×10^−32    2.30×10^−31
f9        GTO        4.17×10^−4     3.01×10^−4
f9        IGTO       3.07×10^−4     4.06×10^−19
f9        MGTO       3.07×10^−4     3.29×10^−19
f9        DGTO       3.07×10^−4     1.74×10^−19
f10       GTO        −3.29×10^0     5.39×10^−2
f10       IGTO       −3.31×10^0     3.63×10^−2
f10       MGTO       −3.30×10^0     4.51×10^−2
f10       DGTO       −3.32×10^0     0
Table 5. Comparison of optimization results of each algorithm
Function  Algorithm  Best            Worst           Mean            Std. dev.
f1        DGTO       0               0               0               0
f1        GTO        0               0               0               0
f1        GWO        2.62×10^−69     6.96×10^−63     3.93×10^−64     1.28×10^−63
f1        DE         6.09×10^−20     1.86×10^−18     5.05×10^−19     4.44×10^−19
f1        WOA        2.04×10^−94     5.80×10^−81     1.23×10^−82     8.19×10^−82
f2        DGTO       0               0               0               0
f2        GTO        7.39×10^−217    3.67×10^−197    7.69×10^−199    0
f2        GWO        3.25×10^−39     8.12×10^−36     5.90×10^−37     1.30×10^−36
f2        DE         1.91×10^−12     2.93×10^−11     9.68×10^−12     4.55×10^−12
f2        WOA        1.10×10^−61     5.26×10^−52     1.12×10^−53     7.43×10^−53
f3        DGTO       0               0               0               0
f3        GTO        0               0               0               0
f3        GWO        3.82×10^−37     3.02×10^−27     8.81×10^−29     4.33×10^−28
f3        DE         2.63×10^0       2.74×10^1       8.17×10^0       5.41×10^0
f3        WOA        2.73×10^−8      6.89×10^2       1.16×10^2       1.46×10^2
f4        DGTO       0               0               0               0
f4        GTO        2.01×10^−218    4.03×10^−200    1.18×10^−201    0
f4        GWO        9.71×10^−24     2.86×10^−19     1.79×10^−20     4.48×10^−20
f4        DE         2.60×10^−5      1.31×10^−4      6.72×10^−5      2.38×10^−5
f4        WOA        2.21×10^−5      2.20×10^1       1.35×10^0       4.37×10^0
f5        DGTO       7.46×10^−30     2.25×10^−11     6.34×10^−13     3.28×10^−12
f5        GTO        4.91×10^−15     1.41×10^0       1.07×10^−1      3.06×10^−1
f5        GWO        5.32×10^0       9.54×10^0       6.70×10^0       7.69×10^−1
f5        DE         1.74×10^0       1.75×10^1       7.58×10^0       2.78×10^0
f5        WOA        3.49×10^0       8.95×10^0       6.58×10^0       6.38×10^−1
f6        DGTO       0               1.23×10^−32     5.55×10^−34     1.94×10^−33
f6        GTO        9.24×10^−32     9.98×10^−23     2.18×10^−24     1.14×10^−23
f6        GWO        1.11×10^−6      6.33×10^−6      2.97×10^−6      1.13×10^−6
f6        DE         5.02×10^−20     2.93×10^−18     4.00×10^−19     4.42×10^−19
f6        WOA        2.62×10^−5      1.17×10^−3      3.12×10^−4      2.70×10^−4
f7        DGTO       −4.19×10^3      −4.19×10^3      −4.19×10^3      2.13×10^−12
f7        GTO        −4.19×10^3      −4.19×10^3      −4.19×10^3      5.29×10^−12
f7        GWO        −3.56×10^3      −2.08×10^3      −2.84×10^3      3.65×10^2
f7        DE         −7.94×10^7      −8.23×10^14     −2.09×10^13     1.20×10^14
f7        WOA        −4.19×10^3      −2.49×10^3      −3.46×10^3      5.61×10^2
f8        DGTO       1.35×10^−32     1.35×10^−32     1.35×10^−32     1.11×10^−47
f8        GTO        7.26×10^−30     7.46×10^−2      5.34×10^−3      1.25×10^−2
f8        GWO        1.15×10^−6      1.04×10^−1      1.00×10^−2      3.04×10^−2
f8        DE         4.37×10^−21     4.46×10^−19     8.94×10^−20     8.94×10^−20
f8        WOA        2.57×10^−4      1.10×10^−1      1.44×10^−2      2.57×10^−2
f9        DGTO       3.07×10^−4      3.07×10^−4      3.07×10^−4      1.74×10^−19
f9        GTO        3.07×10^−4      1.22×10^−3      4.17×10^−4      3.01×10^−4
f9        GWO        3.07×10^−4      2.04×10^−2      2.78×10^−3      6.56×10^−3
f9        DE         5.16×10^−4      1.30×10^−3      8.97×10^−4      1.48×10^−4
f9        WOA        3.08×10^−4      2.17×10^−3      6.93×10^−4      4.89×10^−4
f10       DGTO       −3.32×10^0      −3.32×10^0      −3.32×10^0      0
f10       GTO        −3.32×10^0      −3.20×10^0      −3.29×10^0      5.39×10^−2
f10       GWO        −3.32×10^0      −3.09×10^0      −3.27×10^0      7.57×10^−2
f10       DE         −3.32×10^0      −3.20×10^0      −3.31×10^0      3.17×10^−2
f10       WOA        −3.32×10^0      −2.43×10^0      −3.22×10^0      1.45×10^−1
Table 6. Mean absolute error of different algorithms
Algorithm  Mean absolute error
DGTO       3.09×10^−3
GTO        1.73×10^−2
GWO        1.36×10^2
DE         2.09×10^12
WOA        8.54×10^1
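Table 6 is most naturally read as the average absolute deviation of the best values each algorithm found from the known optima in Table 3; that reading is an assumption, and the inputs below are hypothetical. A minimal Python sketch of the metric:

import numpy as np

def mean_absolute_error(found, optima):
    """Mean absolute deviation between the best values found on each benchmark
    and the corresponding known optima (the metric reported in Table 6)."""
    return float(np.mean(np.abs(np.asarray(found) - np.asarray(optima))))

# Hypothetical example for three functions
print(mean_absolute_error([1e-3, 0.0, -3.318], [0.0, 0.0, -3.32]))  # 0.001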
Table 7. Wilcoxon rank-sum test results
Function  p-value vs GTO   p-value vs GWO   p-value vs DE    p-value vs WOA
f1        NaN              3.31×10^−20      3.31×10^−20      3.31×10^−20
f2        3.31×10^−20      3.31×10^−20      3.31×10^−20      3.31×10^−20
f3        NaN              3.31×10^−20      3.31×10^−20      3.31×10^−20
f4        3.31×10^−20      3.31×10^−20      3.31×10^−20      3.31×10^−20
f5        7.97×10^−18      7.07×10^−18      7.07×10^−18      7.07×10^−18
f6        6.35×10^−19      6.35×10^−19      6.35×10^−19      6.35×10^−19
f7        4.32×10^−12      3.43×10^−19      3.43×10^−19      3.43×10^−19
f8        1.30×10^−18      1.30×10^−18      1.30×10^−18      1.30×10^−18
f9        2.79×10^−16      5.92×10^−18      5.92×10^−18      5.92×10^−18
f10       2.71×10^−7       4.73×10^−20      6.69×10^−11      4.73×10^−20
Note: p < 5×10^−2 indicates a significant difference between DGTO and the compared algorithm; NaN means that significance cannot be judged.
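In principle, p-values like those in Table 7 are obtained by applying a two-sided Wilcoxon rank-sum test to the per-run results of DGTO and each compared algorithm. The sketch below uses scipy.stats.ranksums with hypothetical run data; the paper does not state which implementation it used.

import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical best-fitness values from 30 independent runs of two algorithms on one benchmark
dgto_runs = np.abs(rng.normal(0.0, 1e-30, size=30))
gto_runs = np.abs(rng.normal(0.0, 1e-22, size=30))

stat, p_value = ranksums(dgto_runs, gto_runs)
# Following the note under Table 7: p < 5×10^-2 is read as a significant difference,
# while identical samples (e.g., both algorithms hitting 0 in every run) give no usable ranking.
print(p_value)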
Table 8. Optimal solution of each algorithm for the spring design problem
Algorithm  d        Da       P         Optimum
DGTO       0.05172  0.35750  11.24294  0.012665252
GTO        0.05162  0.35504  11.38775  0.012665321
GWO        0.05120  0.34507  12.00899  0.012672677
DE         0.05161  0.35479  11.40314  0.012665876
WOA        0.05148  0.35182  11.58231  0.012666117
SHO[30]    0.05114  0.34375  12.09550  0.012674000
MVO[30]    0.05000  0.31596  14.22623  0.012816930
SCA[30]    0.05078  0.33478  12.72269  0.012709667
GSA[30]    0.05000  0.31731  14.22867  0.012873881
-
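Table 8 appears to correspond to the classic tension/compression spring design problem, whose objective in the metaheuristics literature is usually the spring weight (P + 2)·Da·d², with d the wire diameter, Da the mean coil diameter and P the number of active coils; this formulation is an assumption, since the paper's own statement of the problem is not reproduced here. The Python sketch below evaluates the DGTO solution from Table 8 against that objective (constraints omitted).

def spring_objective(d, Da, P):
    """Weight of a tension/compression spring: (P + 2) * Da * d**2 (standard literature form)."""
    return (P + 2.0) * Da * d ** 2

# DGTO solution reported in Table 8
print(spring_objective(0.05172, 0.35750, 11.24294))  # about 0.01266, close to the reported 0.012665252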
[1] EBERHART R, KENNEDY J. A new optimizer using particle swarm theory[C]//Proceedings of the 6th International Symposium on Micro Machine and Human Science. Piscataway: IEEE Press, 2002: 39-43.
[2] CHEN D, ZHANG S, YANG Y, et al. Optimization of character image matching based on artificial bee colony algorithm[J]. Journal of Physics: Conference Series, 2021, 2035(1): 012034.
[3] MORIN M, ABI-ZEID I, QUIMPER C G. Ant colony optimization for path planning in search and rescue operations[J]. European Journal of Operational Research, 2023, 305(1): 53-63.
[4] AL-IBRAHIM A M H. Solving travelling salesman problem (TSP) by hybrid genetic algorithm (HGA)[J]. International Journal of Advanced Computer Science and Applications, 2020, 11(6): 376-384.
[5] LI J H, LEI Y S, YANG S H. Mid-long term load forecasting model based on support vector machine optimized by improved sparrow search algorithm[J]. Energy Reports, 2022, 8: 491-497.
[6] YAN Z P, ZHANG J Z, ZENG J, et al. Three-dimensional path planning for autonomous underwater vehicles based on a whale optimization algorithm[J]. Ocean Engineering, 2022, 250: 111070.
[7] GUHA D, ROY P K, BANERJEE S. Load frequency control of interconnected power system using grey wolf optimization[J]. Swarm and Evolutionary Computation, 2016, 27: 97-115.
[8] ABDOLLAHZADEH B, GHAREHCHOPOGH F S, MIRJALILI S. Artificial gorilla troops optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems[J]. International Journal of Intelligent Systems, 2021, 36(10): 5887-5958.
[9] XIAO Y N, SUN X, GUO Y L, et al. An improved gorilla troops optimizer based on lens opposition-based learning and adaptive β-hill climbing for global optimization[J]. Computer Modeling in Engineering & Sciences, 2022, 131(2): 815-850.
[10] LIANG Q W, CHU S C, YANG Q Y, et al. Multi-group gorilla troops optimizer with multi-strategies for 3D node localization of wireless sensor networks[J]. Sensors, 2022, 22(11): 4275.
[11] WU T Y, WU D, JIA H M, et al. A modified gorilla troops optimizer for global optimization problem[J]. Applied Sciences, 2022, 12(19): 10144.
[12] ALSOLAI H, ALZAHRANI J S, MARAY M, et al. Enhanced artificial gorilla troops optimizer based clustering protocol for UAV-assisted intelligent vehicular network[J]. Drones, 2022, 6(11): 358.
[13] MOSTAFA R R, GAHEEN M A, ABD ELAZIZ M, et al. An improved gorilla troops optimizer for global optimization problems and feature selection[J]. Knowledge-Based Systems, 2023, 269: 110462.
[14] BANGYAL W H, TAYYAB H, BATOOL H, et al. An improved particle swarm optimization algorithm with Chi-square mutation strategy[J]. International Journal of Advanced Computer Science and Applications, 2019, 10(3): 481-491.
[15] SONG L Q, CHEN W J, CHEN W H, et al. Improvement and application of hybrid strategy-based sparrow search algorithm[J]. Journal of Beijing University of Aeronautics and Astronautics, 2023, 49(8): 2187-2199 (in Chinese).
[16] ZHOU L, ZHU H Q. Foot trajectory planning of creeping robot based on adaptive step fruit fly optimization algorithm[J]. Machine Design & Research, 2021, 37(3): 60-63 (in Chinese).
[17] SONG A N, BAO X Z, QUAN Y. UAVs scheduling strategy based on chaotic adaptive firefly algorithm[J]. Computer Applications and Software, 2022, 39(2): 300-306 (in Chinese).
[18] LI F C, LIU Y, WU P X, et al. A survey on recent advances in meta-learning[J]. Chinese Journal of Computers, 2021, 44(2): 422-446 (in Chinese).
[19] LI K W, LI S H, HUANG Z C, et al. Grey wolf optimization algorithm based on Cauchy-Gaussian mutation and improved search strategy[J]. Scientific Reports, 2022, 12: 18961.
[20] LIU W, ZHAO J K, LIU Y B, et al. Application research of γ energy spectrum analysis based on improved grey wolf algorithm[J]. Nuclear Techniques, 2021, 44(4): 31-36 (in Chinese).
[21] KOHLI M, ARORA S. Chaotic grey wolf optimization algorithm for constrained optimization problems[J]. Journal of Computational Design and Engineering, 2018, 5(4): 458-472.
[22] STORN R, PRICE K. Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces[J]. Journal of Global Optimization, 1997, 11(4): 341-359.
[23] MIRJALILI S, LEWIS A. The whale optimization algorithm[J]. Advances in Engineering Software, 2016, 95: 51-67.
[24] NABIL E. A modified flower pollination algorithm for global optimization[J]. Expert Systems with Applications, 2016, 57: 192-203.
[25] ZHANG X M, WANG X, KANG Q. Improved grey wolf optimizer and its application to high-dimensional function and FCM optimization[J]. Control and Decision, 2019, 34(10): 2073-2084 (in Chinese).
[26] LUO Q F, LI J, ZHOU Y Q, et al. Using spotted hyena optimizer for training feedforward neural networks[J]. Cognitive Systems Research, 2021, 65: 1-16.
[27] ABUALIGAH L. Multi-verse optimizer algorithm: a comprehensive survey of its results, variants, and applications[J]. Neural Computing and Applications, 2020, 32(16): 12381-12401.
[28] MIRJALILI S. SCA: a sine cosine algorithm for solving optimization problems[J]. Knowledge-Based Systems, 2016, 96: 120-133.
[29] YAZDANI S, NEZAMABADI-POUR H, KAMYAB S. A gravitational search algorithm for multimodal optimization[J]. Swarm and Evolutionary Computation, 2014, 14: 1-14.
[30] KAUR S, AWASTHI L K, SANGAL A L, et al. Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization[J]. Engineering Applications of Artificial Intelligence, 2020, 90: 103541.
-