Hunter–Prey Optimization Algorithm: a review

The Hunter–Prey Optimization Algorithm (HPO) is a nature-inspired optimization technique modeled on the predator–prey relationships observed in nature. Over the years, HPO has gained attention as a promising method for solving complex optimization problems. This review article provides a comprehensive analysis and a bibliographic study of the Hunter–Prey Optimization Algorithm. It explores its origins, underlying principles, applications, strengths, weaknesses, and recent developments in detail. By delving into various facets of HPO, this review aims to shed light on its effectiveness and potential, inspiring researchers to address real-world optimization challenges.


Introduction
Low-emission technologies such as distributed generation and electric vehicles have seen increased adoption in distribution networks. These technologies are widely promoted to meet the growing power demand and to address economic and environmental factors. Various control schemes have been embraced in active distribution networks to overcome the power quality issues these penetrations introduce and to determine the suitable site and size of penetration. Mathematical optimization is the broadly implemented key technology for validating such control schemes. Optimization has remained in the limelight and is a major focus of researchers across many fields of engineering. Rapid depletion of existing sources and profit maximization could be the reasons for this [1]. This work highlights the role of one such optimization algorithm, the Hunter-Prey Optimization Algorithm (HPO), in addressing the challenges posed by these low-emission technologies.

Background classification of optimization methods
The types of optimization are studied under four categories, and some of the optimization techniques developed to address the earlier discussed issues [2], as depicted in Fig. 1, are as follows:
• Multi-objective optimization: This approach simultaneously considers two or more divergent objectives, such as power loss reduction and voltage stability enhancement. Multi-objective optimization can yield a series of optimal solutions that constitute a trade-off between the individual objectives, letting decision-makers opt for the solution that best fits their needs.
• Hybrid optimization: This technique combines various meta-heuristic algorithms to overcome individual deficiencies and improve the algorithm's strength, convergence speed, and accuracy.
• Machine learning optimization: This technique is widely used for large-scale systems to extract data and predict a much more accurate optimal solution at a faster rate.
• Robust optimization: This technique considers the sensitivity of system parameters. It deals with uncertainties in the optimization problem such as renewable energy introduction, sudden load demand, etc.
Sensitive index methods involve the calculation of sensitivity index factors for optimization solutions. LFI (Loss Factor Index), PLI (Power Loss Index), LSI (Loss Sensitivity Index), and voltage indices like VSI (Voltage Stability Index) and AVDI (Average Voltage Deviation Index) are a few of them that aid in finding an optimization solution considering the sensitivity parameters.
Mixed methods combine traditional mathematical and meta-heuristic methods with a sensitivity indices approach for a better solution than individual performances. Hybrid works of PSO-GSA [10], GA-GSA [11], fuzzy-GA [12], and fuzzy-DE [13] are a few of them.
Meta-heuristic optimization methods can be classified into the following categories:

Biology-based optimization
These types of methods, which are inspired by biological and natural activities [14], can be further classified into: a. Evolutionary algorithms b. Swarm-based algorithms

(a) Evolutionary algorithms
Evolutionary algorithms include the Genetic Algorithm (GA), a global adaptive search optimization algorithm based on natural selection [15]; Evolutionary Programming (EP), put forward by D. B. Fogel in 1990 [16,17]; Evolution Strategy (ES), inspired by the biological principles of evolution [18]; and the Differential Evolution (DE) algorithm, developed by Storn et al. [19], a widely accepted algorithm that swiftly handles nonlinear, multimodal, non-differentiable cost functions [20]. GA employs binary coding to represent the problem parameters, while DE uses floating point, making it more accurate with more adaptive and control parameters [21,22].

Physics-based optimization algorithms
These optimization algorithms are inspired by the laws of physics, the physical behavior of matter, or its physical properties. The Gravitational Search Algorithm (GSA), rooted in the law of gravitation and the physical laws of motion, by Rashedi et al. [44]; Simulated Annealing (SA), which follows the physical annealing process used for crystallization [45,46]; the Magnetic Optimization Algorithm (MOA), based on the attraction and repulsion principle of magnets, by Tayarani et al. [47]; and the Intelligent Water Drop (IWD) algorithm, inspired by river flow and introduced by Hosseini [48], can be classified under the physics-based algorithms. Multiverse Optimization (MVO) [49], the Atom Search Optimization (ASO) algorithm [50], Curved Space Optimization (CuSO) [51], the Galaxy-Based Search Algorithm (GBSA) [52], the Water Cycle Algorithm (WCA) [53], the Black Hole (BH) algorithm [54], and the Harmony Search (HS) Algorithm [55] add to the list.

Geography-based algorithms
The Imperialistic Competition Algorithm (ICA), pioneered by Gargari et al. [56], treats countries' colonies and imperialists as the population; Tabu Search (TS), which follows a search-escape pattern and was put forward by Glover [57], also comes under the geography-based algorithms.
Figure 2a presents the classification of algorithms, and Fig. 2b gives another classification based on the nature of inspiration. Since nature-inspired optimization algorithms draw on the behavior of groups or flocks of animals, they can conveniently be classified into a) animal inspired, b) bird inspired, c) insect inspired, d) plant inspired, and e) human inspired.

Literature on Hunter-Prey Optimization
HPO is a nature-inspired, population-based optimization algorithm proposed by Naruei et al. [60] in 2022 to address optimization problems in different engineering fields. Many researchers have employed the algorithm to solve various issues. In [68], the authors applied the HPO algorithm for the optimal positioning of PV-STATCOM with energy loss reduction and voltage profile improvement. Active power loss minimization, greenhouse gas emission reduction, and improving the hosting capacity of PV and the voltage profile are the objectives in [69], applying the HPO algorithm. Article [70] studies the algorithm for optimal PV placement with real power loss reduction and voltage profile enhancement. A combined HPO-HDL algorithm is put forward in [71] for fake news detection; HDL stands for Hybrid Deep Learning, an AI technique. HPO is used to identify the parameters of the solar PV cells of the R.T.C. France and STM-6/120 models [72]. A tabularized representation of these works is shown in Table 1.
There are also articles on improved versions of the standard HPO (IHPO). [73] defined an enhanced HPO algorithm for optimal FACTS and wind DG placement with power loss reduction, cost reduction, and voltage enhancement as objectives. IHPO combined with an extreme learning machine to predict wind power output with improved accuracy is studied in [74]. A hybrid combination of IHPO and Convolutional Neural Networks (CNN) to detect structural damage in buildings and construction is proposed in [75]. [76] uses the IHPO to plan a robot path-finding algorithm in unknown surroundings. A table representing the discussed works is depicted in Table 2.

HPO motivation
HPO's elegance lies in its relative simplicity. It uses a small set of intuitive rules to traverse and exploit the search space effectively, making it computationally efficient and potentially applicable to many optimization problems. Like other algorithms, HPO has exploration and exploitation phases after initialization; the difference lies in how effectively it balances the two phases. Furthermore, reactive power can be optimally dispatched by finding the prime locations of the desired devices using the algorithm. Hence, applying this newly emerging algorithm in the engineering field to handle optimization problems motivates this article.

Research gap and challenges
Several optimization algorithms have been proposed recently; many are successfully in use, while many are still in the developing and testing stages. At this point, developing a new algorithm and showcasing its supremacy could be challenging. HPO also applies to other power system optimizations, such as optimal power flow, synchronous motor design, AC-DC power system operation, and combined heat and power dispatch. As an answer to the question 'What is the need for a new algorithm?', the NFL theorem has been put forward by [77]. According to the No Free Lunch (NFL) theorem, no single algorithm can solve all optimization problems, and researchers are therefore encouraged to suggest new optimization algorithms or improve existing techniques to solve subsets of problems in various fields. The new algorithm HPO can handle optimization problems in different engineering and non-engineering fields.

Contribution of the article
• This article reviews the HPO algorithm, demonstrating its working phases (initialization, exploration, and exploitation) along with the parameters governing the algorithm.
• The paper also details the newer versions of the algorithm, explaining the improvements made to the standard algorithm.
• The application of HPO in electrical engineering for optimal DG and capacitor bank placement is showcased.
Overall, the paper briefs the new nature-inspired HPO algorithm, its variants, and applications.
Further, the paper is organized as follows: Section 2 details the standard HPO, Section 3 the improved versions of HPO, Section 4 the discussions, and Section 5 the conclusions.

Inspiration
Hunter-Prey Optimization (HPO) takes a captivating approach to problem-solving, drawing inspiration from the dynamic world of predator-prey interactions. It mimics the strategies predators use to hunt and capture their prey. In the modeled scenario, a hunter searches for prey, and since prey usually move in groups, the hunter chooses a prey that is far from the flock (the average herd position). After the hunter finds its prey, it chases and hunts it. At the same time, the prey searches for food, escapes the predator's attack, and tries to reach a safe place; this safe place corresponds to the fitness optimum. HPO is a swarm intelligence algorithm and falls under the broader category of meta-heuristic algorithms used for optimization problems.

Mathematical model of the algorithm
Naruei et al. [60] proposed HPO, a new intelligent optimization algorithm with fast convergence and high optimization potential. As in most algorithms, the procedure begins by initializing the population; the objective function is computed for every member, and the positions of the hunter and the prey are updated at every iteration, re-evaluating the objective function until the algorithm stops. The initial position of a member is given by Eq. (1) from [60]:

x_i = rand(1, d) × (u − l) + l    (1)

where x_i is the initial position of the hunter or prey, rand(1, d) is a vector of random numbers in [0, 1], u and l are the upper and lower boundaries, and d is the dimension of the problem. The objective function is then evaluated as OF = f(x). Exploration and exploitation are the stages following initialization; these stages involve a search mechanism that pilots the search agents toward the optimal solution. Equation (2) defines the hunter's position update:

x_{i,j}(t + 1) = x_{i,j}(t) + 0.5 [(2 C Z P_pos − x_{i,j}(t)) + (2 (1 − C) Z μ − x_{i,j}(t))]    (2)

where x_{i,j}(t) is the current position of the hunter and x_{i,j}(t + 1) its position at the next iteration, P_pos represents the position of the prey, and C and Z are the balance and adaptive parameters, respectively. μ is the mean of all positions and is evaluated using Eq. (3):

μ = (1/n) Σ_{i=1}^{n} x_i    (3)

where n is the number of search agents and x_i gives the position of member i.
A random vector P with 0 and 1 values is generated from a random vector R_1 as P = (R_1 < C), and IDX gives the indices of the R_1 vector where P == 0 (Eq. (4)). The adaptive parameter Z can then be evaluated from Eq. (5):

Z = R_2 ⊗ IDX + R_3 ⊗ (∼IDX)    (5)

where R_2 and R_3 are random vectors in [0, 1] and ⊗ denotes element-wise multiplication.
The balance parameter C between the exploration and exploitation stages can be obtained from Eq. (6):

C = 1 − itr (0.98 / itr_max)    (6)

where itr is the current iteration and itr_max is the maximum iteration count. Over the iterations, the value of C decreases from 1 to 0.02. The prey's position P_pos is calculated using the mean μ from Eq. (3) and the Euclidean distance obtained from Eq. (7):

D_euc(i) = ( Σ_{j=1}^{d} (x_{i,j} − μ_j)^2 )^{1/2}    (7)
From Eq. (8), the member at the maximum distance from the mean is considered the prey:

P_pos = x_i | i = argmax D_euc    (8)
The hunter easily takes down the animal straying far from the group, then goes for the next, and this continues. However, convergence would be delayed if the search agent farthest from the mean were considered every time. To avoid this situation, Eq. (9) is defined for N search agents:

kbest = round(C × N)    (9)

Now, the new prey position can be given by Eq. (10):

P_pos = x_i | i = sorted(D_euc)(kbest)    (10)

The kbest value equals N at the start but gradually decreases with the iterations until it equals the first member. This is because the hunter chooses prey at a farther distance each time, and thus the kbest value decreases at each iteration. In the hunting scenario, the prey tries to escape from the attacker and reach its herd; hence, the safe position of the prey corresponds to the optimal solution. The prey escape phase is given by Eq. (11):

x_{i,j}(t + 1) = T_pos + C Z cos(2π R_4) × (T_pos − x_{i,j}(t))    (11)

Here, x_{i,j}(t) and x_{i,j}(t + 1) are the current and next prey positions, Z and C are calculated from Eqs. (5) and (6), respectively, R_4 is a random vector in [−1, 1], and T_pos is the global optimum position. The significance of the cos function is that it decides the next prey location relative to the global optimum at various radii and angles. To distinguish the hunter and prey mathematically, Eqs. (2) and (11) are combined using another random vector R_5 in [0, 1] and a regulating parameter β = 0.1: Eq. (12a) gives the next position of the hunter for R_5 < β, and Eq. (12b) updates the prey's position for R_5 > β. The flow chart of HPO is shown in Fig. 3.
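The update rules above can be sketched in code. The following is a minimal, illustrative Python implementation assembled from Eqs. (1)–(12) as described; function and variable names are our own, and details such as boundary clipping are assumptions rather than part of [60]:

```python
import numpy as np

def hpo(obj, lb, ub, dim, n_pop=30, max_iter=200, beta=0.1, seed=0):
    """Minimal sketch of standard HPO (minimization), per Eqs. (1)-(12)."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_pop, dim)) * (ub - lb) + lb              # Eq. (1)
    fit = np.array([obj(x) for x in X])
    g = int(fit.argmin())
    T_pos, T_fit = X[g].copy(), fit[g]                         # best so far (safe place)
    for it in range(1, max_iter + 1):
        C = 1 - it * (0.98 / max_iter)                         # Eq. (6): 1 -> 0.02
        kbest = max(1, round(C * n_pop))                       # Eq. (9)
        for i in range(n_pop):
            P = rng.random(dim) < C                            # Eq. (4)
            Z = np.where(~P, rng.random(), rng.random(dim))    # Eq. (5)
            if rng.random() < beta:                            # hunter, Eq. (12a)
                mu = X.mean(axis=0)                            # Eq. (3)
                d = np.linalg.norm(X - mu, axis=1)             # Eq. (7)
                prey = X[np.argsort(d)[::-1][kbest - 1]]       # Eqs. (8)-(10)
                X[i] += 0.5 * ((2 * C * Z * prey - X[i])
                               + (2 * (1 - C) * Z * mu - X[i]))  # Eq. (2)
            else:                                              # prey escape, Eq. (12b)
                R4 = 2 * rng.random(dim) - 1
                X[i] = T_pos + C * Z * np.cos(2 * np.pi * R4) * (T_pos - X[i])  # Eq. (11)
            X[i] = np.clip(X[i], lb, ub)                       # boundary handling (assumed)
            f = obj(X[i])
            if f < T_fit:
                T_pos, T_fit = X[i].copy(), f
    return T_pos, T_fit
```

On a simple sphere function this sketch converges toward the origin; it is meant to mirror the structure of the algorithm, not to reproduce the reference implementation.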
The following are the suppositions made by the authors [60] for the HPO algorithm to provide appropriate solutions:
• Random selection of hunter and prey assures the exploration of the search space; there is also a low probability of getting stuck in a local maximum.
• Search-space exploration by selecting the farthest member as prey, together with the mechanism of reducing the mean distance at every iteration, ensures the convergence of the algorithm and its exploitation.
• The adaptive parameter escalates population divergence, moderates the severity of the hunter and prey position updates, and guarantees the algorithm's convergence.
• The HPO algorithm has few adjustment parameters and is a non-gradient algorithm.

Parameters of HPO
The parameters of HPO can be divided into population-related and movement-related parameters. Population size and maximum iterations (itr_max) are population-related parameters, and the regulating parameter β and balance parameter C are the movement-related parameters. The fitness function can be regarded as another parameter of HPO. Table 3 summarizes the parameters of a few referred algorithms.
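As a concrete illustration, the movement-related balance parameter C follows the linear decay of Eq. (6). In the sketch below, β = 0.1 is the value reported in [60], while the population size and iteration budget are merely example choices:

```python
# Illustrative HPO parameter set; beta = 0.1 comes from [60], while
# population size and iteration budget are example choices only.
HPO_PARAMS = {"population_size": 30, "max_iterations": 200, "beta": 0.1}

def balance_C(itr, itr_max):
    # Eq. (6): C decays linearly from ~1 at the start to 0.02 at itr_max,
    # shifting the search from exploration toward exploitation.
    return 1 - itr * (0.98 / itr_max)
```

Plotting balance_C over the iterations shows the steady hand-over from exploration to exploitation that the text describes.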

Complexity analysis of HPO
The complexity of an optimization algorithm refers to the computational resources it requires to solve a problem.There are two main aspects to consider:

Time complexity
This measures the time the algorithm takes to execute, typically expressed in terms of the number of basic operations it performs. Common influencing factors include the population size (N), the number of iterations (T), the problem dimensionality (D), and the complexity of the objective function.

Space complexity
This refers to the amount of memory the algorithm requires to run. The following typically contribute to space complexity:
• Storing solutions: The algorithm needs to store information about each solution in the population, including its position in the search space. This memory usage scales with the population size (N) and problem dimension (D).
• Additional data structures: Some algorithms use additional data structures such as sorting mechanisms or temporary variables. These contribute to the overall space complexity, but their impact is usually smaller than that of storing solutions.
The space complexity is computed similarly for most algorithms, typically O(N × D), where N is the population size and D is the dimension. The notation O(N) denotes linear complexity, meaning the memory or execution cost grows linearly with the population size or number of iterations.
Typically, the complexity of the HPO algorithm depends on four components: initialization, updating the hunter, updating the prey, and fitness evaluation. The computational complexity of the initialization process with N search agents is O(N). In the position-update process, T denotes the maximum number of iterations (earlier referred to as itr_max), D denotes the number of problem variables, and β is the regulatory parameter with a value of 0.1 [60]: a fraction (1 − β) of updates follows the prey rule over all D dimensions, while a fraction β follows the hunter rule, which involves the whole population. Hence, the total complexity is O(N × (T + (1 − β)TD + βTN + 1)).
The complexity of PSO is O(D × N × iter_max); the computation time grows linearly with the problem dimensionality D, the swarm size N, and the maximum iterations iter_max. Compared to HHO, SCA avoids the additional complexity of interactions between solutions (like hunting behavior), making it slightly more efficient. Tabu Search, in general, is not considered to have a polynomial time complexity, as determining a specific time complexity for Tabu Search is complicated.

Improved versions of the Hunter-Prey Optimization Algorithm
This section discusses various recent works with improved versions of the HPO algorithm in varied fields of engineering.
Improving the regulating parameter
[76] proposes an improved HPO for robot path planning with an upgraded adjusting (regulating) parameter β and a newly introduced changing parameter (CP).

The new β is given as
The changing parameter addresses the absence of a transfer parameter in the standard HPO. It increases the exploration speed and thus leads to a faster transition to exploitation. Equation (14) gives the CP value and is followed by the refined position-defining equations.
In this version of IHPO, the authors upgrade the regulatory parameter for randomization, preventing early convergence and helping alter the search direction. The proposed algorithm is compared with other contemporary algorithms (Particle Swarm Optimization (PSO), Salp Swarm Algorithm (SSA), Fitness-Dependent Optimizer (FDO), and the conventional COOT and HPO algorithms) on 13 widely used benchmark criteria and 30-dimensional functions, and the IHPO performs best among them. The work uses the nature-inspired algorithm together with local search and block (obstacle) detection strategies for robot path planning in an unrecognized environment. The authors give a future scope of testing on a real Turtlebot robot. There is also scope to implement the algorithm in electrical distribution networks with renewable source integration.
Improving the initialization phase
[75] presents a developed version of HPO that is upgraded at the initialization stage. Instead of the random initialization of the conventional method, this work uses tent chaotic mapping for initialization and the Cauchy distribution for random variables to clear the periodic points.
Equations (16) and (17) give the tent mapping and Cauchy's distribution with the chaotic parameter η = 2; initialization Eq. (1) is then modified accordingly. The author also suggests a linear combination of the prey position, global optimum, and mean position, updating Eq. (12) with this combination. The performance of the IHPO is compared to Differential Evolution (DE), the Cuckoo Search Algorithm (CSA), Particle Swarm Optimization (PSO), the Gray Wolf Optimizer (GWO), and Moth-Flame Optimization (MFO); the convergence speed of the improved HPO is much faster than the rest. It also surpasses the other algorithms in convergence efficiency, accuracy, and optimization ability. In the referred article, the algorithm and a Convolutional Neural Network (CNN) are implemented to identify structural damage in buildings and constructions.
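A short Python sketch may clarify tent-chaotic initialization. The exact map and the Cauchy-based escape from periodic points used in [75] are not reproduced in the text reviewed, so the tent map x' = η·min(x, 1 − x) with η = 2 and a simple uniform restart at fixed points are assumptions:

```python
import numpy as np

def tent_chaotic_init(n_pop, dim, lb, ub, eta=2.0, seed=0):
    # Iterate the tent map per dimension to generate a well-spread initial
    # population, then scale each chaotic value from [0, 1] into [lb, ub].
    rng = np.random.default_rng(seed)
    x = rng.random(dim)                          # chaotic seed per dimension
    pop = np.empty((n_pop, dim))
    for i in range(n_pop):
        x = eta * np.minimum(x, 1.0 - x)         # tent map step
        # restart any coordinate that hits a fixed/periodic point
        # ([75] uses a Cauchy draw here; a uniform restart is assumed)
        x = np.where((x <= 1e-12) | (x >= 1.0 - 1e-12), rng.random(dim), x)
        pop[i] = lb + x * (ub - lb)
    return pop
```

The chaotic iteration tends to cover the unit interval more evenly than independent uniform draws, which is the diversity benefit the paragraph above describes.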
In [74], the standard HPO is likewise advanced to an IHPO by upgrading the initialization phase. To increase population diversity, a stochastic reverse learning technique is introduced: l and u are the lower and upper boundaries, r is an arbitrary value in [0, 1], X belongs to [l, u], and X_rand gives the random reverse solution. Stochastic reverse learning produces a random inverse solution from the present iteration during the population search, compares the objective function values of the two solutions, and then picks the better solution to proceed to the next iteration. The paper also introduces a weighting function ω to improve the prey's position equation; Eq. (11) is updated as Eq. (20).
Itr_max gives the maximum number of iterations, t is the current iteration, c is the adjustment parameter, and ω_min and ω_max are the weight regulating parameters. A similar parameter was introduced in an earlier referred work to improve the prey's position equation. The IHPO is combined with an Extreme Learning Machine (ELM) to estimate wind power and is found effective, providing scope to improve wind power prediction accuracy. The scope can be further extended to a hybrid DG placement.
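The stochastic reverse (opposition-based) learning step described above can be sketched as follows. The reverse formula X_rand = l + u − r·X is one common form and is our assumption, since the exact expression from [74] is not reproduced in the text:

```python
import numpy as np

def stochastic_reverse_step(pop, fit, obj, lb, ub, seed=0):
    # For each member, build a random reverse solution, evaluate it, and
    # keep whichever of the pair is fitter (minimization), as described above.
    rng = np.random.default_rng(seed)
    rev = np.clip(lb + ub - rng.random(pop.shape) * pop, lb, ub)   # assumed form
    rev_fit = np.array([obj(x) for x in rev])
    better = rev_fit < fit
    new_pop = np.where(better[:, None], rev, pop)
    return new_pop, np.minimum(rev_fit, fit)
```

Because the step only ever replaces a member with a fitter reverse solution, the population's fitness can never worsen, which is why it is a cheap diversity booster at initialization.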

Updating the step size
The step size is upgraded in two stages, high-velocity and low-velocity ratios inspired by the Marine Predator Algorithm (MPA) [73,80], to avoid the optimum values being trapped in local maxima. The first stage is defined for a higher step size, mathematically reflecting Brownian motion: for P = 0.5, R_B is the vector representing Brownian motion, E is the fittest-solution matrix, it is the iteration counter, and maxit the maximum number of iterations. In the second stage, for a lower step size, the vector R_L implements the Levy method. The optimum solution matrix E is defined for n search agents and d dimensions, with X_b the best solution. The optimal power flow problem is addressed for wind energy and FACTS-integrated distribution systems using the Enhanced HPO (EHPO). The works conclude that EHPO is an efficient tool for solving real-world complex power system problems, with scope for analyzing the same for large-scale systems and systems incorporating PV-DG and EV technologies.
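The two step-size generators can be sketched as below. [73] does not spell out the generators in the text reviewed, so standard choices are assumed: Gaussian samples for the Brownian vector R_B, and Mantegna's algorithm with exponent 1.5 for the Levy vector R_L:

```python
import numpy as np
from math import gamma, sin, pi

def step_vectors(shape, levy_beta=1.5, seed=0):
    # R_B: standard-normal samples, modeling Brownian motion (first stage).
    # R_L: heavy-tailed Levy samples via Mantegna's algorithm (second stage),
    # producing mostly small steps with occasional long jumps.
    rng = np.random.default_rng(seed)
    RB = rng.standard_normal(shape)
    sigma = (gamma(1 + levy_beta) * sin(pi * levy_beta / 2)
             / (gamma((1 + levy_beta) / 2) * levy_beta
                * 2 ** ((levy_beta - 1) / 2))) ** (1 / levy_beta)
    u = rng.standard_normal(shape) * sigma
    v = rng.standard_normal(shape)
    RL = u / np.abs(v) ** (1 / levy_beta)
    return RB, RL
```

The contrast between the two distributions is exactly what the high-velocity/low-velocity split exploits: broad Gaussian steps early for exploration, Levy jumps later to escape local traps without losing exploitation.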
Improved HPO
[81] proposes an improved version of HPO, IHPO, to overcome the local maximum trap and improve the accuracy of the conventional algorithm. Two steps are involved: first, the initial population set is generated using tent chaotic mapping as in [75]; this is followed by an Enhanced Sine Cosine Algorithm (ESCA) adaptation and Cauchy's mutation strategy for the exploration and exploitation stages.

Improved initialization phase
The initialization stage is reformed using chaotic mapping, which increases population diversity. Mathematically, Eq. (16) is applied with u = 0.5. The tent mapping provides controllable randomness of variables, avoiding unstable fixed points and trapping into small periods during iterations, for d dimensions and N search agents. Hence, the initialization equation is updated accordingly.

Adaptation of ESCA
The better global search ability of the SCA is implemented into the standard HPO. P_v gives the conversion probability, with t being the ongoing iteration number, T the maximum number of iterations, and μ the conversion factor. For rand < P_v, the population position is updated using the ESCA; if rand > P_v, the position is updated using HPO. μ = 0.01 in [82], and μ = −10 in [81]. The standard SCA is enhanced by introducing a hyperbolic sine regulating factor and a dynamic cosine wave weight coefficient [82].
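The switching rule can be sketched as follows. This is a heavily hedged illustration: the exact P_v schedule and the ESCA weighting of [81,82] are not given in the text reviewed, so P_v = exp(μ·t/T) and the standard SCA move are assumptions made here for demonstration only:

```python
import numpy as np

def esca_or_hpo_switch(x, best, t, T, mu=-10.0, seed=0):
    # With probability P_v, refresh the position by a sine-cosine move toward
    # the best solution; otherwise defer to the regular HPO hunter/prey update
    # (not repeated here). P_v = exp(mu * t / T) is an assumed schedule.
    rng = np.random.default_rng(seed)
    Pv = np.exp(mu * t / T)
    if rng.random() >= Pv:
        return x, False                      # fall through to the HPO update
    r1 = 2.0 * (1 - t / T)                   # standard SCA amplitude decay
    r2 = 2 * np.pi * rng.random(x.shape)
    r3 = 2 * rng.random(x.shape)
    wave = np.sin(r2) if rng.random() < 0.5 else np.cos(r2)
    return x + r1 * wave * np.abs(r3 * best - x), True
```

With μ negative, P_v decays as iterations progress, so the sine-cosine exploration dominates early and the HPO update takes over later, matching the intent described above.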

Cauchy's strategy
Cauchy's mutation strategy is introduced to improve the convergence speed and accuracy and to avoid local maximum trapping. The updated population position after Cauchy mutation uses a random value r_1 ranging in [0, 1]. The Cauchy mutation is applied to avoid premature convergence. It is adopted from [83], where the condition for mutation is defined by stdY(t) and Cstd: for stdY(t) > Cstd the algorithm converges at a good rate, while for stdY(t) ≤ Cstd there is a chance of a local maxima trap and Eq. (33) comes into play. Cstd is the maximum value of the variation coefficient, and the mutation strategy is applied every T/5 iterations (T is the maximum iteration count); stdY(t) is the standard deviation over three consecutive iterations. The efficacy of the proposed algorithm is tested on eight classic test functions against Wild Horse Optimization (WHO), Gray Wolf Optimization (GWO), the Sine Cosine Algorithm (SCA), and HPO. The practical application of the algorithm is the proposed scope of the referred article. Figure 4 illustrates the flow chart of the improved HPO.
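A hedged sketch of the stagnation test and the Cauchy perturbation follows; the exact update of [81] (Eq. (33), with r_1) is not reproduced in the text, so the perturbation form below is illustrative only:

```python
import numpy as np

def stagnating(best_last3, cstd):
    # stdY(t): standard deviation of the best objective value over three
    # consecutive iterations; a value <= Cstd signals a possible local trap.
    return float(np.std(best_last3)) <= cstd

def cauchy_mutate(x, seed=0):
    # Perturb with heavy-tailed standard Cauchy noise so occasional large
    # jumps can pull the search out of a local maximum (illustrative form).
    rng = np.random.default_rng(seed)
    return x + rng.standard_cauchy(x.shape) * x
```

The heavy tail of the Cauchy distribution is the point of the strategy: unlike Gaussian noise, it regularly produces jumps far larger than the current position scale, which is what breaks stagnation.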

Inferences and discussions
From the above literature, HPO is a new nature-inspired meta-heuristic algorithm widely implemented in different engineering fields as an optimization technique. HPO is applicable in complex systems, construction fields, and image processing because it handles multimodal and non-convex optimization problems. Figure 5 gives a pictorial representation of the applications of the HPO algorithm.

Electrical engineering applications
HPO finds application in electrical distribution systems in determining the optimal placements of capacitor banks, custom power devices, and renewable energy devices for loss reduction and voltage enhancement. The following figures accentuate the supremacy of HPO in determining the optimal locations of the preferred devices with active power and energy loss reduction. [84] studies HPO for optimal capacitor bank placement in the IEEE 33- and 69-bus systems. Figures 6 and 7 portray the active loss reduction comparison using different algorithms for the IEEE 33- and 69-bus systems with optimal capacitor bank placement. [69] addresses optimal PV placement for active loss reduction; Fig. 8 shows the same for prime locations of PV in an IEEE 33-bus system. From the graphical analysis in these figures, HPO outperforms the other algorithms and is considered effective in tackling electrical engineering problems. Some referred works focus on applying HPO in established fields other than electrical engineering.

Journalism
[71] applies HPO in journalism for fake news detection. The authors propose HPO with hybrid deep learning (HDL) to identify false news from the obtained information.

Civil engineering
An improved version of HPO is used to identify multi-storey building damages in [75]. The damage detection by HPO is followed by a neural network technique, CNN. The two-stage approach is found effective in the respective field.

Robotics
[76] puts forward an improved HPO for robot path planning. The work highlights using the proposed algorithm combined with a search strategy to help robots find an obstacle-free path in unknown surroundings.

Rotor dynamics
[85] uses HPO to optimize the squeeze film damper's key parameters to reduce the rotor's vibration. The paper highlights the superiority of HPO over PSO in convergence speed and accuracy.

Cloud security
HPO finds application in cloud security in [86], where the algorithm is used to enhance cloud data security by optimizing a multi-objective function with the information preservation ratio, the hiding ratio, and the modification degree performed at the critical generation stage as objectives.

Medicine
A hybrid combination of HPO with ladybug beetle optimization algorithm is used in ophthalmoscopy to identify Diabetic Retinopathy (DR) [87].
HPO, along with other methodologies or in its improved versions, is used in vast fields to better the desired output. Many works continue to explore the algorithm's efficacy; the advantages and disadvantages of the algorithm are discussed below.

Highlights of HPO
• Diverse exploration: The predator-prey dynamics encourage the algorithm to cover multiple regions of the solution space simultaneously, aiding in escaping local optima.
• Flexibility: HPO can quickly adapt to different problem domains by tuning its parameters and mechanisms.
• Convergence speed: The algorithm often demonstrates faster convergence than other existing algorithms due to the dynamic interactions between hunters and prey [60].

Drawbacks of HPO
• Parameter sensitivity: Effective parameter tuning is crucial for optimal performance, and improper settings can lead to premature convergence or stagnation. This problem is addressed in [75], which introduces Cauchy's distribution to avoid it.
• Local maxima trap: The standard algorithm can become entrapped in local maxima; improved versions addressing this issue are referred to in the literature.

Conclusion
The Hunter-Prey Optimization Algorithm presents an intriguing approach to optimization inspired by nature's predator-prey interactions. While it has demonstrated success in various applications, it is essential to recognize its strengths and weaknesses when considering its implementation. As research in this field continues to evolve, a deeper understanding of the algorithm's behavior and its potential to solve complex optimization problems will emerge. In summary, this comprehensive analysis highlights the significance of the Hunter-Prey Optimization Algorithm as a promising optimization technique; Fig. 11 gives an overview of the HPO algorithm. Inspired by nature, its unique approach to exploration and exploitation provides a fresh perspective on optimization. The benefits of this review are addressed below.
For researchers:
• Consolidated knowledge: The paper summarizes the current state of the art and key concepts of the HPO algorithm.
• Future research directions: By highlighting areas for future work, as suggested previously, the paper can guide researchers toward promising avenues for further development and application of HPO.
• Comparative analysis: A well-structured review paper can compare HPO with other optimization algorithms, highlighting its advantages, limitations, and suitable use cases. This can help researchers choose the most appropriate optimization technique for their problem.
For developers and practitioners:
• Understanding HPO's potential: The paper showcases the capabilities of HPO for solving various optimization problems, which can encourage developers to explore its application in their projects.
• Practical implementation guidance: The review provides practical insights for implementing HPO with existing algorithms, which can save developers time and effort during implementation.
For the field of optimization:

Fig. 2 a Classification of optimization algorithms. b Classification based on mode of inspiration

Fig. 3 Flow chart for HPO

The common factors influencing time complexity are:
• Population size (N): Many optimization algorithms work with a population of solutions. The complexity often increases as the population grows, as the algorithm must evaluate and update each solution.
• Number of iterations (T): Most algorithms iterate through a loop, refining their solutions. The complexity increases with the number of iterations required for convergence.
• Problem dimensionality (D): This refers to the number of optimized variables. For some algorithms, the complexity increases with the number of variables as the search space grows.
• Function complexity: The complexity of the objective function (what the algorithm is trying to optimize) also plays a role; evaluating a more complex function in each iteration increases the overall time complexity.
Lahari and Janamala, Journal of Electrical Systems and Inf Technol (2024) 11:19

The complexity of the standard GWO algorithm is O(N × d × Tmax), where N is the population size, d the dimension, and Tmax the maximum number of iterations. ALO's time complexity is generally considered to be O(D × N × T), where D represents the dimensionality of the problem (number of variables being optimized), N the population size (number of antlions), and T the maximum number of iterations. The total time complexity of WOA is generally considered to be O(M × N × D + f(N)), where M represents the maximum number of iterations, N the population size, D the number of dimensions, and f(N) the time required to evaluate the fitness function for N individuals. The complexity of HHO is generally considered to be O(N(D + T)), where T is the maximum iteration count, D the dimension, and N the population size. O(N × T) gives the complexity of SCA.

Fig. 6 Graphical representation of active loss reduction by capacitor bank placement in the IEEE 33-bus system using HPO

Fig. 9 Graphical depiction of energy loss using various algorithms for an IEEE 33-bus system

Fig. 11 Overview of the HPO algorithm. Inspiration from nature provides a powerful framework for solving complex optimization problems in electrical engineering by mimicking the successful hunting strategies observed in the animal kingdom. Key aspects: combines inspiration from nature with optimization principles, making it a promising tool for solving complex problems in electrical engineering and potentially other domains; utilizes two populations, hunters and prey, and balances exploration and exploitation. Applications: different fields of engineering, media and journalism, medicine, robotics, etc. Future scope: research to improve its exploration-exploitation balance, hybridize it with other algorithms, and handle multi-objective problems.

Table 3 Parameters of various algorithms