Evolutionary Experience-Driven Particle Swarm Optimization with Dynamic Searching
Evolutionary Experience-Driven Particle Swarm Optimization with Dynamic Searching
Complex System Modeling and Simulation 2023, 3 (4): 307-326
Published: 07 December 2023

Particle swarm optimization (PSO) algorithms have been successfully applied to various complex optimization problems. However, balancing diversity and convergence remains an open problem that requires continued research. Therefore, an evolutionary experience-driven particle swarm optimization with dynamic searching (EEDSPSO) is proposed in this paper. To extract effective information during population evolution, an adaptive framework of evolutionary experience is presented. Based on this framework, an experience-based neighborhood topology adjustment (ENT) controls the size of the neighborhood, thereby effectively maintaining population diversity. Meanwhile, an experience-based elite archive mechanism (EEA) adjusts the weights of elite particles in the late evolutionary stage, thus enhancing the convergence of the algorithm. In addition, a Gaussian crisscross learning strategy (GCL) adopts cross-learning to further balance diversity and convergence. Finally, extensive experiments are conducted on the CEC2013 and CEC2017 benchmark suites. The results show that EEDSPSO outperforms current state-of-the-art PSO variants.
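The abstract does not give the ENT update rule, but the general idea of adjusting the neighborhood size from evolutionary "experience" can be illustrated with a small sketch. The rule below, the function names (ent_neighborhood_size, ring_neighbor_best), and the k_min/k_max bounds are assumptions for illustration only, not the paper's actual mechanism: the neighborhood shrinks when recent iterations improved the swarm's best fitness and grows when the search stagnates.

```python
import numpy as np

def ent_neighborhood_size(best_history, k, k_min=2, k_max=10):
    """Hypothetical experience-based neighborhood adjustment: shrink the ring
    neighborhood when recent iterations improved the swarm best (exploitation),
    enlarge it when they stagnated (diversity). Minimization is assumed."""
    improved = sum(cur < prev for prev, cur in zip(best_history, best_history[1:]))
    stagnated = (len(best_history) - 1) - improved
    return max(k_min, k - 1) if improved > stagnated else min(k_max, k + 1)

def ring_neighbor_best(fitness, i, k):
    """Index of the best particle within +/- k positions of particle i on a ring topology."""
    n = len(fitness)
    neighbors = [(i + d) % n for d in range(-k, k + 1)]
    return min(neighbors, key=lambda j: fitness[j])

# Usage: after each iteration, append the swarm-best fitness to a history window,
# then recompute k before the next velocity update.
fitness = np.array([3.2, 1.7, 4.8, 0.9, 2.5])
k = ent_neighborhood_size(best_history=[5.0, 4.1, 4.1, 4.0], k=4)
print(ring_neighbor_best(fitness, i=0, k=k))
```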

Dual-Stage Hybrid Learning Particle Swarm Optimization Algorithm for Global Optimization Problems
Complex System Modeling and Simulation 2022, 2 (4): 288-306
Published: 30 December 2022

Particle swarm optimization (PSO) is a swarm intelligence algorithm that is frequently used to solve global optimization problems because of its rapid convergence and ease of implementation. However, PSO still has certain deficiencies, such as a poor trade-off between exploration and exploitation and premature convergence. Hence, this paper proposes a dual-stage hybrid learning particle swarm optimization (DHLPSO). In this algorithm, the iterative process is partitioned into two stages, whose learning strategies emphasize exploration and exploitation, respectively. In the first stage, to increase population diversity, a Manhattan-distance-based learning strategy is proposed, in which each particle learns from the particle farthest from it in Manhattan distance and from a better particle. In the second stage, an excellent-example learning strategy performs local optimization on the population, in which each particle learns from the global best particle and a better particle. By utilizing a Gaussian mutation strategy, the algorithm's search ability on certain multimodal functions is significantly enhanced. DHLPSO is evaluated against existing PSO variants on the CEC 2013 benchmark functions. The comparison results clearly demonstrate that, compared with other state-of-the-art PSO variants, DHLPSO achieves highly competitive performance in handling global optimization problems.
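As a rough illustration of the first-stage exemplar selection described above, the sketch below picks, for a given particle, the swarm member farthest away in Manhattan (L1) distance plus a randomly chosen better-fitness particle, and combines them in a standard PSO-style velocity update. The function names (manhattan_exemplars, first_stage_velocity) and the coefficient values are hypothetical placeholders, not the paper's exact formulation.

```python
import numpy as np

def manhattan_exemplars(positions, fitness, i, rng):
    """Pick two exemplars for particle i: the particle farthest away in Manhattan
    (L1) distance (exploration) and a random particle with better fitness (guidance).
    Falls back to i itself when no better particle exists. Minimization is assumed."""
    l1 = np.abs(positions - positions[i]).sum(axis=1)  # Manhattan distance to every particle
    farthest = int(np.argmax(l1))
    better = np.flatnonzero(fitness < fitness[i])
    guide = int(rng.choice(better)) if better.size else i
    return farthest, guide

def first_stage_velocity(v, i, positions, farthest, guide, w=0.7, c1=1.5, c2=1.5, rng=None):
    """PSO-style velocity update driven by the two exemplars; the inertia weight
    and acceleration coefficients are placeholder values."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    return (w * v
            + c1 * r1 * (positions[farthest] - positions[i])
            + c2 * r2 * (positions[guide] - positions[i]))

# Usage on a tiny 2-D swarm with the sphere function as the objective:
rng = np.random.default_rng(0)
positions = rng.uniform(-5, 5, size=(6, 2))
velocities = np.zeros_like(positions)
fitness = (positions ** 2).sum(axis=1)
far, guide = manhattan_exemplars(positions, fitness, i=0, rng=rng)
velocities[0] = first_stage_velocity(velocities[0], 0, positions, far, guide, rng=rng)
positions[0] += velocities[0]
```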
