
A novel improvement of particle swarm optimization using an improved velocity update function based on local best murmuration particle

Abstract

Improvement of particle swarm optimization (PSO) is relevant to solving the algorithm's inherent problems of local optima entrapment and premature convergence. In this paper, a novel improvement of the particle swarm optimization is provided to curb these problems of the classical PSO. The proposed improvement modifies the updating velocity function of the PSO, using a local best murmuration particle found with the k-means clustering technique. In this contribution, each particle moves towards the global best position not only by using the personal best and global best; rather, particles are modelled to move in murmuration towards the global best using the personal best, the global best, and a local best particle known as the local best murmuration particle. The improved model was tested against the traditional PSO, two other variants of the PSO, and the genetic algorithm (GA) using 18 benchmark test functions. The proposed improvement demonstrated superior exploration abilities by achieving the best optimum values in 15 out of 18 functions, particularly in the multimodal functions, where it achieved the best optimum value in all 6 cases. It also achieved the best worst-case values in 12 out of 18 functions, especially in the variable-dimension functions, where other algorithms showed significant escalation, indicating the proposed improvement's reliability and robustness. In terms of convergence, the proposed improvement exhibited the best convergence rate in all 18 functions. These findings highlight the impressive ability of the proposed improvement to converge swiftly without compromising accuracy.

Introduction

The use of the collective behaviour of animals as evolutionary optimization algorithms for finding optimal solutions to real-life problems has increased over the past decades [1, 2]. These heuristic algorithms provide a simple step-by-step approach to solving problems that traditional linear and nonlinear programming techniques would find difficult to solve. Over the years, new metaheuristic techniques have been developed using the behaviour of several animals as additions to the popular particle swarm optimization and genetic algorithm [2,3,4,5,6,7]. Though new algorithms have emerged over the years, the efficiency of the particle swarm optimization cannot be overemphasized. PSO is a well-known and widely used algorithm because of its simplicity, high efficiency and accuracy [8]. The superiority of PSO over other algorithms has been shown in [8,9,10]. Notwithstanding the efficacy and simplicity of the PSO algorithm compared to other techniques, the PSO has the major drawbacks of sometimes converging at local optima and of premature convergence. Many researchers have over the years provided various variants of the PSO to address these challenges. The work in [11] made the two acceleration constants in the updating velocity function of the PSO adaptive. Xu et al. [12] used an adaptive weight to optimize the inertia weight, while other variants simultaneously remodel the inertia weight and acceleration constants [13, 14]. An extensive review shows that researchers have mainly focused on modifying either the inertia weight or the acceleration constants of the velocity function of the PSO in order to solve the local optima and premature convergence problems of the PSO.

The velocity function for updating particle positions in the PSO was modelled on the social behaviour of a flock of birds searching for food, where each particle (bird) moves towards the best position where there is food (global best) by looking at the position of the global best and its personal knowledge of where there is food (personal best). However, recent studies and models have shown that birds also move to their global best position in murmuration. Murmuration of birds refers to the pattern-like movement of birds towards an area in order to keep warm, stay away from predators and also create beauty [15]. According to [16], birds in an area make a 'sound of low murmur' from several wingbeats and soft flight calls in order to move in murmuration towards their proposed place of more food. The work in [17], which modelled the behaviour of birds in murmuration, discovered that each bird in a flock murmurating towards a new position adapts its direction of flight and speed to about seven of the birds that fly closest to it. Richardson and Chemero [18] also revealed that birds in a flock can influence one another, accounting for the speed variability within groups inside the flock. Smaldino [19] also discovered that when one bird changes its motion, its neighbours imitate it and the change spreads within the subgroup of the flock, altering its speed and leading to the beautiful collective display called murmuration. Evidence from biological data reiterates that there is group-to-group speed variability among birds in search of food [20]. In a flock with many birds, it is clear that not every bird will be able to keep track of all the other birds; instead, they move in subgroups [21]. Moreover, [22, 23] have shown that a 'cheerleader' bird always serves as the cornerstone for the self-organized speed variation of each bird in each subgroup inside a flock. Cavagna et al. [24] recommended considering the velocity of the murmuration as a whole when modelling the velocities of individual birds.

The above shows that each bird in an area does not move towards the global best position by just looking at its personal best position, but also imitates the movement of a leading bird in its locality. Inspired by these phenomenal behaviours of birds in choosing a local leader to murmurate towards the global best position, this contribution remodels the velocity updating function of the particle swarm optimization to include a local best murmuration particle in order to solve the local optima problem and also ensure fast convergence of the PSO.

The remaining sections of this paper are organized as follows: the concepts of particle swarm optimization and of the k-means clustering technique used to cluster particles into groups are discussed in the next two sections. The novel approach of modifying the velocity function of the PSO using a local best murmuration particle is detailed in section "Methodology", which also presents the test functions used to evaluate the efficacy of the improved PSO algorithm against the classical PSO and other variants. The results and analysis are detailed in section "Results and discussion", and section "Conclusion" concludes the paper.

Particle swarm optimization

The particle swarm optimization algorithm is one of the most widely used optimization algorithms, with applications in engineering, medical science, business, and social science, among others. The technique was put forward by two American scholars, Eberhart and Kennedy, in 1995. According to the researchers, there is an invisible communication within a flock of birds that are seen dispersed in their search for food, making them able to find the best place in their search space. Inspired by this phenomenon, Eberhart and Kennedy simulated birds' foraging behaviour and proposed particle swarm optimization. Since its proposition, the PSO has seen exponential growth in use because of its simplicity and accuracy in determining optimal solutions. The PSO, as a heuristic algorithm, optimizes a problem by iteratively improving candidate solutions known as particles. At the initialization phase of the PSO, these particles are randomly chosen within specified boundary conditions, together with two random values (r1 and r2), two acceleration constants (c1 and c2), and an inertia weight (w) that help in updating the velocity of each particle. The updating velocity function is shown in Eq. (1). Before each particle's velocity is updated, the best position each particle has found (the personal best) and the overall best position in the swarm of particles (the global best) are chosen when each particle runs through the modelled function of the given problem (the objective function). The updated velocity is used to update each particle's position using Eq. (2). This process continues until the best position is found among the particles or until a stopping criterion is met (e.g., the total number of iterations).

$$V_{i} \left( {t + 1} \right) = wV_{i} \left( t \right) + r_{1} \,c_{1} \left( {P_{{i{\text{best}}}} - X_{i} (t)} \right) + r_{2} \,c_{2} \left( {G_{{{\text{best}}}} - X_{i} (t)} \right)$$
(1)
$$X_{i} \left( {t + 1} \right) = X_{i} \left( t \right) + V_{i} \left( {t + 1} \right)$$
(2)

where

  • Vi(t) is the velocity of particle i at time t.

  • w is the inertia weight.

  • c1 and c2 are acceleration coefficients.

  • r1 and r2 are random numbers uniformly distributed between 0 and 1.

  • Xi(t) is the position of particle i at time t.

  • Pibest and Gbest are the personal best and global best positions, respectively.

Pseudocode for PSO

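The pseudocode figure from the original article is not reproduced here. In its place, the following is a minimal Python sketch of the classical PSO loop just described, Eqs. (1) and (2) with inertia-weight damping; the parameter values and the sphere-function example are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=30, max_iter=100,
        w=0.9, c1=2.0, c2=2.0, damping=0.99):
    """Minimal classical PSO for minimization (illustrative sketch)."""
    lo, hi = bounds
    X = np.random.uniform(lo, hi, (n_particles, dim))   # random initial positions
    V = np.zeros((n_particles, dim))                    # initial velocities
    pbest = X.copy()                                    # personal best positions
    pbest_cost = np.apply_along_axis(objective, 1, X)
    g = np.argmin(pbest_cost)
    gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]  # global best

    for _ in range(max_iter):
        r1 = np.random.rand(n_particles, dim)           # fresh uniform randoms
        r2 = np.random.rand(n_particles, dim)
        # Eq. (1): inertia + cognitive (pbest) + social (gbest) components
        V = w * V + r1 * c1 * (pbest - X) + r2 * c2 * (gbest - X)
        X = np.clip(X + V, lo, hi)                      # Eq. (2), kept in bounds
        cost = np.apply_along_axis(objective, 1, X)
        improved = cost < pbest_cost                    # update personal bests
        pbest[improved], pbest_cost[improved] = X[improved], cost[improved]
        g = np.argmin(pbest_cost)                       # update global best
        if pbest_cost[g] < gbest_cost:
            gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]
        w *= damping                                    # inertia weight damping
    return gbest, gbest_cost

# Example: minimize the sphere function in 5 dimensions
best_x, best_cost = pso(lambda x: float(np.sum(x**2)), dim=5, bounds=(-10, 10))
```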

K-means clustering algorithm

The k-means clustering algorithm is a hard computing or crisp clustering technique, widely used in pattern recognition and data analysis [25]. It was first introduced by MacQueen in 1967. Unlike soft computing clustering techniques such as the fuzzy c-means (FCM) algorithm, which allows each data point to belong to multiple clusters with varying degrees of membership, the k-means algorithm assigns each data point to exactly one cluster.

Given a set of d-dimensional data points, the objective function for the k-means algorithm is to minimize the sum of squared distances (SSD), which is defined as the sum of the squared Euclidean distances between each data point and its assigned cluster centre. The algorithm follows an iterative process to find optimal cluster centres and assign data points to the nearest clusters. The steps involved in the k-means clustering algorithm are as follows.

Initialization Choose the number of clusters k, and randomly initialize k cluster centres.

Assignment Step For each data point xi, calculate the squared Euclidean distance to each cluster centre (Eq. 3) and assign the data point to the cluster with the nearest centre.

$$d(p,q) = \sum\limits_{i = 1}^{n} {\left( {p_{i} - q_{i} } \right)^{2} }$$
(3)

where \(d(p,q)\) is the squared Euclidean distance between point \(p\) and centroid \(q\), \(p_i\) is the \(i\)th dimension of point \(p\), \(q_i\) is the \(i\)th dimension of centroid \(q\), and \(n\) is the number of dimensions.

Update Step Recalculate the cluster centres by taking the mean of all data points assigned to each cluster.

Iteration Repeat the assignment and update steps until convergence or a termination criterion is met. Convergence can be determined by checking if the cluster centres remain unchanged or if the objective function value falls below a certain threshold.

The termination criterion typically used is when the change in cluster centres or the SSD between iterations is below a predefined threshold. The k-means clustering algorithm is computationally efficient and widely applicable for various clustering tasks. The process is summarized in the pseudocode below:

  • Initialize the number of clusters ‘k’ and maximum number of iterations ‘MaxIt’.

  • Randomly initialize the centroids for each cluster.

Repeat the following steps until convergence or maximum iterations reached:

  • Assign each data point to the nearest centroid based on squared Euclidean distance

  • Recalculate the centroids by taking the mean of all data points assigned to each centroid.

  • If the centroids have not changed significantly or the maximum iterations have been reached, exit the loop.

  • Otherwise, go back to the first step in this loop.

Return the final centroids and the cluster assignments for each data point.
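A minimal Python sketch of these steps, using the same termination criterion, is shown below; it is an illustration, not the clustering code used in the paper.

```python
import numpy as np

def kmeans(data, k, max_it=100, tol=1e-6):
    """Minimal k-means following the steps above (illustrative sketch)."""
    # Initialization: pick k distinct data points as the initial centroids
    centroids = data[np.random.choice(len(data), k, replace=False)]
    labels = np.zeros(len(data), dtype=int)
    for _ in range(max_it):
        # Assignment step: squared Euclidean distance to every centroid (Eq. 3)
        d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: mean of the points assigned to each cluster
        new_centroids = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)])
        # Termination: centroids essentially unchanged between iterations
        if np.allclose(new_centroids, centroids, atol=tol):
            break
        centroids = new_centroids
    return centroids, labels

# Example usage on random 2-D points
points = np.random.rand(200, 2)
centres, assignments = kmeans(points, k=4)
```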

Methodology

Testing benchmark functions and PSO parameters

Eighteen benchmark functions were used to evaluate the new PSO variant [26]. To test the algorithm's ability to exploit the search space, unimodal functions (F1 to F6) were utilized. Multimodal functions (F7 to F12) were employed to assess diversity and exploration. Noisy functions with Gaussian distribution noise (F13 to F16) were used to evaluate adaptiveness and robustness. Finally, variable dimension functions (F17 and F18) were applied to test the scalability of the algorithm. These are standard benchmark functions for assessing the performance of optimization algorithms. A comparative analysis was performed involving the enhanced PSO, the original PSO [27], PSO-AWDV (which incorporates an adaptive weighted delay velocity) [12], PSOEA (a variant combining an evolutionary algorithm with PSO) [28], and the genetic algorithm (GA) [29]. Table 1 shows the equations for the functions, while Table 2 shows other specific details of the functions; two of the benchmarks are transcribed in code below for illustration.

Table 1 Function equations
Table 2 Details of benchmark functions
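The table contents are not reproduced here. For illustration, two of the listed benchmarks, Rastrigin (F8) and Rosenbrock (F17) as named in the convergence-figure captions, can be transcribed from their standard published definitions as follows.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin (F8): highly multimodal; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):
    """Rosenbrock (F17): narrow curved valley; global minimum 0 at all-ones."""
    x = np.asarray(x, dtype=float)
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

print(rastrigin(np.zeros(30)))  # 0.0 at the global optimum
print(rosenbrock(np.ones(30)))  # 0.0 at the global optimum
```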

The parameters used at the initialization phase of the PSO and for the comparison between the algorithms are shown in Table 3. Also, the random values r1, r2, and r3 used in the velocity updating function were generated using the random number generator in MATLAB.

Table 3 PSO parameters

All five algorithms were executed in a MATLAB R2021a environment on a consistent computer setup: an Intel® Core™ i7-7500U CPU running at 2.70–2.90 GHz with 12 GB of RAM. The comparative analysis was conducted using the following parameters: optimum cost, mean absolute error (MAE), and standard deviation (SD). These parameters are calculated as follows.

$$MAE = \frac{1}{S}\mathop \sum \limits_{i = 1}^{S} |X_{oi} - X_{i} |$$
(4)

S represents the total number of cost samples. \(X_{oi}\) corresponds to the benchmark value of the test function, while \(X_{i}\) denotes the computed optimum value.

$${\text{SD}} = \sqrt {\frac{{\sum \left( {X_{i} - \mu } \right)^{2} }}{S}}$$
(5)

where \(\mu\) is the mean of the cost samples.
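As a direct transcription of Eqs. (4) and (5), both statistics can be computed as below; the function and variable names are ours, for illustration.

```python
import numpy as np

def mae(benchmark, computed):
    """Eq. (4): mean absolute error over S cost samples."""
    benchmark, computed = np.asarray(benchmark), np.asarray(computed)
    return float(np.mean(np.abs(benchmark - computed)))

def sd(computed):
    """Eq. (5): population standard deviation of the cost samples."""
    x = np.asarray(computed)
    return float(np.sqrt(np.mean((x - x.mean()) ** 2)))
```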

Proposed local best murmuration particle PSO variant

Rationale behind the proposed approach

The inspiration for the proposed variant comes from the natural phenomenon of bird murmuration. Birds move in coordinated patterns, maintaining a balance between individual movement and group dynamics by following a leader within their vicinity. This behaviour allows the flock of birds to respond rapidly to changes in direction or threats, faster than if each bird moved on its own. Similarly, in the particle swarm optimization algorithm, particles (analogous to birds) can benefit from not only following their personal best (pbest) and the global best (gbest) but also by considering a local best leader within a subgroup or “murmuration cluster.”

Theoretical foundation

The original particle swarm optimization (PSO) algorithm relies on particles updating their velocity based on pbest and gbest. However, this approach can sometimes lead to slower convergence or getting trapped in local optima, especially in complex, high-dimensional search spaces. By introducing a local best murmuration particle (Mbest) within each cluster, the proposed method enhances the exploration–exploitation balance, allowing particles to follow a more dynamic path towards the global optimum. This approach aligns with swarm intelligence principles, where the collective behaviour of decentralized systems leads to improved problem-solving capabilities.

Algorithmic design

The proposed variant modifies the velocity update function to incorporate three components: personal best (pbest), global best (gbest), and local best murmuration (Mbest). The Mbest is determined using the k-means clustering technique, where particles are grouped into clusters, and the best-performing particle in each cluster is identified as the local leader. This clustering is performed at each iteration to adapt to the changing search space, ensuring that the Mbest is dynamically updated.

The modified velocity update function can be expressed as:

$$V_{i} \left( {t + 1} \right) = wV_{i} \left( t \right) + r_{1} c_{1} \left( {P_{{i{\text{best}}}} - X_{i} (t)} \right) + r_{2} c_{2} \left( {G_{{{\text{best}}}} - X_{i} (t)} \right) + r_{3} c_{3} \left( {M_{{{\text{best}}}} - X_{i} (t)} \right)$$
(6)
$$X_{i} \left( {t + 1} \right) = X_{i} \left( t \right) + V_{i} \left( {t + 1} \right)$$
(7)

where

  • Vi(t) is the velocity of particle i at time t.

  • w is the inertia weight.

  • c1, c2 and c3 are acceleration coefficients.

  • r1, r2 and r3 are random numbers uniformly distributed between 0 and 1.

  • Xi(t) is the position of particle i at time t.

  • Pibest, Gbest and Mbest are the personal best, global best, and local best positions, respectively.

In Eq. (6), the velocity of each particle is updated according to the new velocity update function: the personal best, global best, and local murmuration best are determined to update the velocity of each particle before the positions are updated using Eq. (7). The local best murmuration particle is found using the k-means clustering technique, which classifies each particle into a cluster, known in this work as a murmuration cluster; the best particle in each cluster is chosen as its local best murmuration particle. Thus, each particle in each murmuration cluster moves towards the global best using its personal best, its cluster's local murmuration best particle, and the global best of the entire swarm.

This modification is based on the idea that birds move in murmuration by following a leader in their vicinity. As a result, the 'ripple' through a flock is three times faster than could be explained if the birds were just moving individually. Scientists have offered several reasons why birds do not fly alone but follow a cheerleader to move in murmuration: birds murmur together for safety and warmth, among other reasons. Accordingly, in this contribution, a particle will not only look at its personal best position and the global best to move towards the global best position of the swarm, but will also consider a local murmuration best position in order to murmur together towards the global best position.

The introduction of the Mbest component is intended to achieve faster convergence and avoid premature convergence by enhancing the swarm's ability to explore the search space more effectively. The dynamic clustering ensures that particles adapt to local changes, reducing the likelihood of being trapped in suboptimal regions. Additionally, this approach maintains diversity within the swarm, as each cluster's particles follow a different Mbest, contributing to a more robust search process. Compared to the original PSO, the proposed local best murmuration PSO variant is expected to perform better in complex, multimodal optimization problems; the clustering-based approach helps maintain a good balance between exploration and exploitation, which is often a challenge in conventional PSO algorithms. The flowchart of the proposed model is shown in Fig. 1 and its steps are described below; a code sketch of the core update follows the flowchart description.

  1. Start: Start of the algorithm.

  2. Enter the objective function and set PSO parameters: The objective function f(x) represents the problem to be optimized. This step defines the function and its parameters, which will be used to evaluate the fitness of each particle. The parameters for the particle swarm optimization (PSO) algorithm are also defined in this step, including the number of particles, the maximum number of iterations, the inertia weight, the cognitive and social coefficients, and any constraints on the decision vector.

  3. Initialize particles, evaluate solutions and determine the gbest:

    • Each particle's position is initialized randomly within the feasible solution space.

    • Each particle's velocity is initialized randomly. This velocity dictates how a particle moves in the search space.

    • Each particle's initial position is evaluated using the objective function, determining its fitness. The particle with the best fitness is set as the initial global best (gbest).

  4. Set iteration counter: Initialize the iteration counter to 1. This counter controls the number of times the main loop will run.

  5. While iter ≤ maxIter: This loop iterates as long as the iteration counter is less than or equal to the maximum number of iterations. Within this loop:

    • Cluster particles and find the best in each group: Use k-means clustering to group particles into 'n' clusters. This step helps identify diverse solutions within different regions of the search space. For each cluster, identify the particle with the best fitness value; these particles represent the best solutions found within each cluster.

    • Update velocities using the modified velocity equation and update positions: Update each particle's velocity using the modified PSO velocity update equation, which considers each particle's personal best position, the best position in its cluster, and the global best position. Update each particle's position based on its new velocity.

    • Apply limits and evaluate solutions: Enforce position and velocity limits to remain within the defined search space. Re-evaluate the fitness of each particle at its new position.

    • Update pbest and gbest: Update each particle's personal best if its new position is better. Also, update the global best if the best particle in any cluster has a better fitness than the current global best.

    • Apply weight damping: Gradually decrease the inertia weight by multiplying the current inertia weight by the damping factor.

    • Increment iteration counter: Increase iter by 1 to continue to the next iteration.

  6. End while loop: The while loop ends.

  7. Display results: After completing the iterations, display the best solution found and its corresponding fitness value.

Fig. 1 Flowchart of proposed model
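For concreteness, the following is a minimal Python sketch of the core of step 5: the computation of Mbest via k-means clustering and the Eq. (6) velocity update. This is an assumed illustration, not the authors' MATLAB implementation; the cluster count, coefficient values, and function names are placeholders.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def murmuration_best(X, cost, n_clusters):
    """Cluster particles (k-means) and map each particle to its cluster's best position."""
    _, labels = kmeans2(X, n_clusters, minit='points')   # murmuration clusters
    mbest = np.empty_like(X)
    for j in range(n_clusters):
        members = np.where(labels == j)[0]
        if members.size == 0:                            # skip empty clusters
            continue
        leader = members[np.argmin(cost[members])]       # local best murmuration particle
        mbest[members] = X[leader]
    return mbest

def update_velocity(V, X, pbest, gbest, mbest, w, c1=2.0, c2=2.0, c3=2.0):
    n, d = X.shape
    r1, r2, r3 = np.random.rand(3, n, d)                 # uniform randoms in [0, 1)
    # Eq. (6): inertia + cognitive + social + murmuration components
    return (w * V
            + r1 * c1 * (pbest - X)
            + r2 * c2 * (gbest - X)
            + r3 * c3 * (mbest - X))
```

In a full run, each iteration would recompute murmuration_best on the current positions and costs, call update_velocity, apply Eq. (7) and the position and velocity limits, and then update pbest and gbest as in the flowchart.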

Results and discussion

Unimodal functions (F1 to F6)

Unimodal functions test the algorithm's ability to exploit the search space effectively. PSOMbest achieved the optimal solution in 3 out of the 6 unimodal functions (F4, F5, and F6), outperforming PSO, which achieved optimal values in 2 functions, and PSO-AWDV, PSOEA, and GA, which achieved optimal solutions in 1, 2, and 1 functions, respectively (see Table 4). PSOMbest also recorded the best optimum value among the five algorithms in all six functions. This superior performance in unimodal functions indicates that the proposed variant is highly effective at local search and convergence, thus addressing to some degree the issue of local entrapment.

Table 4 Optimum values (Unimodal functions)

Furthermore, in terms of MAE, PSOMbest recorded the lowest values in 3 of the 6 functions, demonstrating its consistency and precision in finding the optimal solution. Additionally, PSOMbest exhibited better standard deviation (SD) values in 3 out of the 6 unimodal functions, indicating less variability in the solutions across multiple runs (Table 5). This consistency further highlights the algorithm’s reliability in simpler landscapes where exploitation is crucial.

Table 5 MAE and SD (unimodal functions)

Multimodal functions (F7 to F12)

Multimodal functions present a greater challenge due to the presence of multiple local optima. From Table 6, PSOMbest achieved the best optimum values in all 6 multimodal functions. In comparison, PSO achieved the best results in 2 functions, PSO-AWDV in 1, PSOEA in 2, and GA in 2 functions. This gives an indication that PSOMbest’s local best murmuration mechanism enhances its ability to escape local optima, leading to superior exploration capabilities.

Table 6 Optimum values (Multimodal functions)

Moreover, in terms of MAE, PSOMbest outperformed the other algorithms in 5 of the 6 multimodal functions, which further emphasizes its effectiveness in navigating complex landscapes. PSOMbest also demonstrated better SD values in 5 out of the 6 multimodal functions (F8, F9, F10, F11, and F12) (see Table 7), which indicates a higher degree of consistency in finding the global optimum across multiple runs, reducing the risk of suboptimal convergence.

Table 7 MAE and SD (Multimodal functions)

Noisy functions (F13 to F16)

Noisy functions, characterized by Gaussian distribution noise, evaluate the robustness and adaptiveness of algorithms. PSOMbest outperformed the competing algorithms by achieving the best results in all 4 noisy functions (see Table 8).

Table 8 Optimum values (Noisy functions)

The MAE values further confirm PSOMbest’s superiority, as it achieved the lowest MAE in 3 of the 4 noisy functions, demonstrating its ability to effectively handle noisy environments. This attests to the robustness of the proposed variant. Additionally, PSOMbest exhibited the lowest SD in 3 noisy functions (F14, F15, F16), further indicating its consistent performance even when faced with noise, as well as its ability to maintain a stable search process across different runs (see Table 9).

Table 9 MAE and SD (Noisy functions)

Variable dimension functions (F17 & F18)

Variable dimension functions assess an algorithm’s scalability by testing it across different dimensional spaces. From Table 10, it can be seen that PSOMbest outperformed the other algorithms in all selected dimensions (2, 30, and 100) for both functions F17 and F18. The ability of PSOMbest to scale effectively is evident in its superior performance in both low (2D) and high-dimensional (100D) settings. Specifically, PSOMbest achieved the best optimum values in both functions at all dimensional levels, surpassing PSO, PSO-AWDV, PSOEA, and GA, which exhibited diminished performance as dimensionality increased.

Table 10 Optimum values (Variable dimension functions)

The MAE values further validate this (see Table 11), with PSOMbest achieving the lowest MAE in all dimensions, particularly in the high-dimensional cases where other algorithms struggled significantly. Moreover, PSOMbest maintained the lowest SD in both functions across all dimensions except F18 (30D), highlighting its consistency and adaptability, especially in high-dimensional spaces where the search process becomes more complex.

Table 11 MAE and SD (Variable dimension functions)

Analysis of worst-case performances

The worst values in Table 12 provide insight into the reliability of an algorithm, particularly in scenarios where it may fail to find optimal or near-optimal solutions.

Table 12 Worst values

Analysing the table, PSOMbest consistently demonstrated lower worst-case values across most benchmark functions, indicating that even in the least favourable runs, it managed to avoid significantly poor solutions. The breakdown is as follows:

  • PSOMbest achieved the lowest worst values in 3 out of 6 unimodal functions (F4, F5, and F6). This indicates the competitive, consistent performance of PSOMbest.

  • PSOMbest achieved the lowest worst values compared to the other algorithms in 4 out of 6 multimodal functions (F8, F10, F11, and F12). This confirms PSOMbest’s ability to maintain exploration without falling into deep local minima.

  • PSOMbest achieved the lowest worst values in 3 out of 4 noisy functions (F13, F14, and F15), demonstrating its robustness in noisy environments where traditional PSO and GA struggled, leading to higher worst-case errors.

  • PSOMbest achieved the lowest worst values in both variable dimension functions (F17 and F18) across all dimensions, especially in high-dimensional spaces (100D), where the worst-case values of other algorithms escalated significantly, indicating their difficulties in maintaining consistent performance as the problem complexity increased.

The ability of PSOMbest to maintain lower worst-case values across these diverse functions highlights its reliability and robustness.

Computational time

Table 13 shows the average computational time the various algorithms used in evaluating all 18 functions per run.

Table 13 Computational time

Table 13 reveals that PSOMbest has a longer average computational time per run (150.29 s) compared to other algorithms like GA (67.74 s), PSO (84.52 s), PSO-AWDV (91.86 s), and PSOEA (93.32 s). While this might seem like a disadvantage at first glance, the increased time can be viewed as a necessary trade-off for the superior accuracy, robustness, and scalability that PSOMbest offers.

In practical scenarios, especially in critical applications where the quality of the solution is paramount, the additional computational time is often justified. For example, in engineering design optimization or complex system simulations, achieving the best possible solution can significantly outweigh the cost of additional computation time. Moreover, with advancements in computational power and parallel processing, the difference in time becomes less significant.

Convergence rate

Figures 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22 and 23 show the convergence curves of the algorithms on each function. It can be observed that PSOMbest consistently achieved superior convergence rates across all 18 functions, even in cases where all algorithms converged to optimum values (as shown in Figs. 6 and 9). PSOMbest exhibited rapid convergence before the 50th iteration in the functions where it attained the optimum cost (F4, F5, F6, F7, F8, F11, F12, F17 and F18), except for F4, where convergence was achieved just after the 400th iteration. For functions F2, F9, and F10, PSOMbest also demonstrated convergence within the first 50 iterations. These findings highlight the unique ability of PSOMbest to converge swiftly without compromising accuracy.

Fig. 2 Convergence curves for rotated hyper-ellipsoid (F1)

Fig. 3 Convergence curves for Dixon-Price (F2)

Fig. 4 Convergence curves for Zakharov (F3)

Fig. 5 Convergence curves for sum of different powers (F4)

Fig. 6 Convergence curves for Bohachevsky 1 (F5)

Fig. 7 Convergence curves for Matyas (F6)

Fig. 8 Convergence curves for drop-wave (F7)

Fig. 9 Convergence curves for Rastrigin (F8)

Fig. 10 Convergence curves for Ackley (F9)

Fig. 11 Convergence curves for Levy (F10)

Fig. 12 Convergence curves for Griewank (F11)

Fig. 13 Convergence curves for Styblinski-Tang (F12)

Fig. 14 Convergence curves for noisy sphere (F13)

Fig. 15 Convergence curves for noisy Rastrigin (F14)

Fig. 16 Convergence curves for noisy Ackley (F15)

Fig. 17 Convergence curves for noisy Griewank (F16)

Fig. 18 Convergence curves for Rosenbrock (F17, 2D)

Fig. 19 Convergence curves for Rosenbrock (F17, 30D)

Fig. 20 Convergence curves for Rosenbrock (F17, 100D)

Fig. 21 Convergence curves for Powell's sum function (F18, 2D)

Fig. 22 Convergence curves for Powell's sum function (F18, 30D)

Fig. 23 Convergence curves for Powell's sum function (F18, 100D)

Conclusion

An improvement to the traditional PSO, called PSOMbest, has been made. The proposed improvement modifies the updating velocity function of the PSO, using a local best murmuration particle found with the k-means clustering technique. A comprehensive analysis of the proposed PSOMbest variant across 18 benchmark functions reveals its superiority in terms of both optimal performance and reliability compared to traditional PSO, PSO-AWDV, PSOEA, and GA. The proposed improvement demonstrated superior exploration abilities by achieving the best optimum values in 15 out of 18 functions, particularly in the multimodal functions, where it achieved the best optimum value in all 6 cases. It also achieved the best worst-case values in 12 out of 18 functions, especially in the variable-dimension functions, where other algorithms showed significant escalation, indicating the proposed improvement's reliability and robustness. In terms of convergence, the proposed improvement exhibited the best convergence rate in all 18 functions. These findings highlight the impressive ability of the proposed improvement to converge swiftly without compromising accuracy. PSOMbest consistently achieved the best or near-best optimum values, demonstrating superior exploitation capabilities in unimodal functions, robust exploration in multimodal landscapes, adaptability in noisy environments, and scalability across varying dimensionalities. The statistical analysis, including MAE, SD, and worst-case values, further reinforces PSOMbest's robustness and consistency: it exhibited the lowest MAE and SD in a majority of the functions, alongside lower worst-case values, indicating its ability to deliver reliable performance across different optimization scenarios.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. All data generated or analysed during this study are included in this published article.

References

  1. Mohapatra S, Mohapatra P (2023) American zebra optimization algorithm for global optimization problems. Sci Rep 13(1):5211

  2. Bäck TH, Kononova AV, van Stein B, Wang H, Antonov KA, Kalkreuth RT, Ye F (2023) Evolutionary algorithms for parameter optimization—thirty years later. Evol Comput 31(2):81–122

  3. Ghasemi M, Kadkhoda Mohammadi S, Zare M, Mirjalili S, Gil M, Hemmati R (2022) A new firefly algorithm with improved global exploration and convergence with application to engineering optimization. Decis Anal J 5:100125

  4. Prah II NK, Frimpong EA, Twumasi E (2022) Modified individual experience mayfly algorithm. Carpathian J Electr Eng 16(1)

  5. Gad AG (2022) Particle swarm optimization algorithm and its applications: a systematic review. Arch Comput Methods Eng 29(5):2531–2561

  6. Thomas J, Chaudhari NS (2014) A new metaheuristic genetic-based placement algorithm for 2D strip packing. J Ind Eng Int 10:47. https://doi.org/10.1007/s40092-014-0047-9

  7. Haris PA, Gopinathan E, Ali CK (2010) Performance of some metaheuristic algorithms for multiuser detection in TTCM-assisted rank-deficient SDMA-OFDM system. J Wirel Com Netw 2010:473435. https://doi.org/10.1155/2010/473435

  8. Fang J, Liu W, Chen L, Lauria S, Miron A, Liu X (2023) A survey of algorithms, applications and trends for particle swarm optimization. Int J Netw Dyn Intell 24–50

  9. Wihartiko FD, Wijayanti H, Virgantari F (2018) Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem. IOP Conf Ser Mater Sci Eng 332:012020

  10. Zhang E, Nie Z, Yang Q, Wang Y, Liu D, Jeon SW, Zhang J (2023) Heterogeneous cognitive learning particle swarm optimization for large-scale optimization problems. Inf Sci 633:321–342

  11. Vasanthi SVS, Babulal CBDC (2016) PSO with time varying acceleration coefficients for solving optimal power flow problem. J Electr Eng 16(3):10

  12. Xu L, Song B, Cao M (2021) An improved particle swarm optimization algorithm with adaptive weighted delay velocity. Syst Sci Control Eng 9(1):188–197. https://doi.org/10.1080/21642583.2021.1891153

  13. Kiani AT, Nadeem MF, Ahmed A, Khan IA, Alkhammash HI, Sajjad IA, Hussain B (2021) An improved particle swarm optimization with chaotic inertia weight and acceleration coefficients for optimal extraction of PV models parameters. Energies 14(11):2980

  14. Yang W, Zhou X, Luo Y (2021) Simultaneously optimizing inertia weight and acceleration coefficients via introducing new functions into PSO algorithm. J Phys Conf Ser 1754(1):012195

  15. Farine DR (2022) Collective action in birds. Curr Biol 32(20):R1140–R1144

  16. Vallee M (2021) Animal, body, data: starling murmurations and the dynamic of becoming in-formation. Body Soc 27(2):83–106

  17. Azvolinsky A (2013) Birds of a feather… track seven neighbours to flock together. News at Princeton

  18. Richardson MJ, Chemero A (2014) Complex dynamical systems and embodiment. In: The Routledge handbook of embodied cognition, pp 39–50

  19. Smaldino P (2023) Modeling social behaviour: mathematical and agent-based models of social dynamics and cultural evolution. Princeton University Press, Princeton

  20. Papageorgiou D, Farine DR (2020) Group size and composition influence collective movement in a highly social terrestrial bird. Elife 9:e59902

  21. Perinot E, Fritz J, Fusani L, Voelkl B, Nobile MS (2021) Characterizing the flying behaviour of bird flocks with fuzzy reasoning. In: WILF

  22. Bajec IL, Heppner FH (2009) Organized flight in birds. Anim Behav 78(4):777–789

  23. Benjamin D, Komlos D (2020) What leaders can learn from a flock of birds about the balance between leading and following. Forbes. https://www.forbes.com/sites/benjaminkomlos/2020/04/29/what-leaders-can-learn-from-a-flock-of-birds-about-the-balance-between-leading-and-following/?sh=3ca81a61c8a2

  24. Cavagna A, Cimarelli A, Giardina I, Parisi G, Santagati R, Stefanini F, Viale M (2010) Scale-free correlations in starling flocks. Proc Natl Acad Sci 107(26):11865–11870

  25. Hamdan Ali H, Emad Kadhum L (2017) K-means clustering algorithm applications in data mining and pattern recognition. Int J Sci Res 6(8):1577–1584. https://doi.org/10.21275/ART20176024

  26. Optimization test functions and datasets. https://www.sfu.ca/~ssurjano/optimization.html. Accessed 28 June 2023

  27. Slowik A (2011) Particle swarm optimization. In: The industrial electronics handbook—five volume set, pp 1942–1948. https://doi.org/10.1007/978-3-319-46173-1_2

  28. Chansamorn S, Somgiat W (2022) Improved particle swarm optimization using evolutionary algorithm. In: 2022 19th international joint conference on computer science and software engineering (JCSSE), pp 1–5. https://doi.org/10.1109/JCSSE54890.2022.9836238

  29. Thengade A, Dondal R (2012) Genetic algorithm—survey paper. In: MPGI national multi conference, pp 975–8887


Acknowledgements

Not applicable.

Funding

Not applicable.

Author information


Contributions

Elvis Twumasi conceptualized the research idea, wrote the paper, and was involved in the simulation and analysis of the results. Emmanuel Asuming Frimpong was involved in the writing, reviewing, and editing of the work and fine-tuning the methodology. Nicholas Kwesi Prah II did the MATLAB simulation of the algorithm on the benchmark functions using the improved algorithms. David Boah Gyasi was involved in the coding of the improved particle swarm optimization algorithm.

Corresponding author

Correspondence to Elvis Twumasi.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial or non-financial interests that could have appeared to influence the work reported in this research.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Twumasi, E., Frimpong, E.A., Prah, N.K. et al. A novel improvement of particle swarm optimization using an improved velocity update function based on local best murmuration particle. Journal of Electrical Systems and Inf Technol 11, 42 (2024). https://doi.org/10.1186/s43067-024-00168-8
