
International Journal of Electrical and Power Engineering

ISSN: Online 1993-6001
ISSN: Print 1990-7958

A Particle Swarm Optimization with Random Particles and Fine-Tuning Mechanism for Nonconvex Economic Dispatch

Chin-Wei Yen and Ming-Tang Tsai
Page: 35-41 | Received 21 Sep 2022, Published online: 21 Sep 2022


Abstract

This study presents a new approach to economic dispatch problems with valve-point effects. The practical economic dispatch problem has a nonconvex cost function with equality and inequality constraints, so it is difficult to find the optimal solution using purely mathematical approaches. A Particle Swarm Optimization (PSO) with Random Particles and Fine-Tuning mechanism (PSO-RPFT) is proposed to solve the economic dispatch problem. The proposed method is developed in such a way that PSO with Constriction Factor (PSO-CF) is applied as a base-level search, which gives a good direction toward the globally optimal region. Random particles and a fine-tuning mechanism are then used to determine the optimal solutions in the final stage. The effectiveness of the proposed method is demonstrated on three example systems and compared with that of SA, GA and EP. Results show that the proposed method is more effective in solving the economic dispatch problem.


INTRODUCTION

Typical Economic Dispatch (ED) minimizes the total fuel cost subject to several unit and system constraints. In the traditional ED problem, the generator cost functions were mostly approximated by piece-wise linear functions (Lin and Viviani, 1984). For units with valve-point effects, the conventional process either ignores or flattens out these portions, which can induce inaccurate results.

Recently, stochastic optimization techniques such as Simulated Annealing (SA) (Wong and Fung, 1993; Mantawy et al., 1998), Genetic Algorithms (GA) (Walters and Sheble, 1993; Bakirtzis et al., 1994), Evolutionary Programming (EP) (Yang et al., 1996; Jayabarathi et al., 2005; Sinha et al., 2003), Particle Swarm Optimization (PSO) (Selvakumar and Thanushkodi, 2007; Victoire and Jeyakumar, 2004; Park et al., 2005) and the Taguchi method (Liu and Cai, 2005) have been applied to solve this problem. However, for highly complex problems these applications involve a wide solution space, require a large number of searches and iterations and are susceptible to the related control parameters.

Their efficiency is thus affected and degraded. Particle Swarm Optimization (PSO), based on the analogy of bird flocking and fish schooling, was developed by Kennedy and Eberhart (1995). In PSO, each individual searches the space by adjusting the trajectory of a moving point in a multi-dimensional space and exchanges previous experiences to find a better direction. PSO can generate high-quality solutions within a shorter calculation time and with stable convergence characteristics. In the past, PSO has been successfully applied to various fields of power system optimization (El-Dib et al., 2006; Wang and Singh, 2008; Kannan et al., 2004; Yuan et al., 2007). The disadvantage of the original PSO is that it is easily trapped in a local optimum and takes a long computation time to find the solution.

Shi and Eberhart (1998) proposed Particle Swarm Optimization with Inertia Weight (PSO-IW) to promote the convergence speed and the searching performance. Angeline (1998) combined a selection mechanism with PSO to improve the search speed and the location of particles for finding better solutions. Particle Swarm Optimization with Constriction Factor (PSO-CF) (Clerc and Kennedy, 2000) uses a constriction factor to control the trace of the particles without considering the velocity of the particles.

The method of Shi and Eberhart (1998) has a low fine-tuning ability: the solution converges coarsely at the beginning of the run and only a local search is performed near the end of the run. Therefore, it is more likely to settle in a local optimum if the problem has many local optima. In this study, a Particle Swarm Optimization with Random Particles and Fine-Tuning mechanism (PSO-RPFT) is proposed to overcome this drawback. PSO-RPFT introduces two operators, random particles and fine-tuning, into the PSO algorithm to increase the search ability.

The random particle process adds a proper number of random particles to the population while the solution is searched in each generation. The fine-tuning procedure regulates the best position of the group when the search reaches the late period of the PSO algorithm.

Employing fine-tuning in the algorithm makes the search more efficient at the end of the run and increases the success rate of finding the global optimum. The effectiveness of the proposed method is demonstrated on three example systems and compared with that of SA, GA, EP and previous publications. Results show that the proposed method is more effective in solving the economic dispatch problem.

PROBLEM FORMULATION

The ED problem can be modeled as an optimization process with the following objective function and constraints:

Min CT = Σ_{i=1}^{N} Ci(Pgi)    (1)

subject to the power balance constraint:

Σ_{i=1}^{N} Pgi = Pload    (2)

and the generating capability constraints:

Pgi,min ≤ Pgi ≤ Pgi,max,  i = 1, 2, ..., N    (3)

Where:

CT = Total production cost ($ h^-1)
N = Number of generating units
Ci(Pgi) = Generation cost of power for the ith unit ($ h^-1)
Pload = Total load demand (MW)
Pgi,min = Lower limit of the real power of the ith unit (MW)
Pgi,max = Upper limit of the real power of the ith unit (MW)

The fuel cost functions of the generating units are characterized as (Sinha et al., 2003):

Ci(Pgi) = ai + bi·Pgi + ci·Pgi² + |ei·sin(fi·(Pgi,min − Pgi))|    (4)

Here, ai, bi and ci are the fuel cost coefficients of the ith unit and ei, fi are the fuel cost coefficients associated with the valve-point effects. Basically, these fuel cost functions of the generating units are nonconvex: the associated incremental costs are not continuous or monotonically increasing and multiple local optima exist in the objective function.
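As a numerical illustration of the nonconvex cost model of Eq. 1 and 4, the sketch below evaluates the valve-point cost of each unit and the total production cost. The coefficient values are placeholders for two hypothetical units, not data from the paper's test cases:

```python
import math

def unit_cost(p, a, b, c, e, f, p_min):
    """Fuel cost of one unit with valve-point effects (Eq. 4):
    C(P) = a + b*P + c*P^2 + |e * sin(f * (P_min - P))|."""
    return a + b * p + c * p ** 2 + abs(e * math.sin(f * (p_min - p)))

def total_cost(p_list, coeffs):
    """Total production cost CT (Eq. 1): sum of the unit costs."""
    return sum(unit_cost(p, *cf) for p, cf in zip(p_list, coeffs))

# Placeholder coefficients (a, b, c, e, f, P_min) for two hypothetical units
coeffs = [(550.0, 8.1, 0.00028, 300.0, 0.035, 100.0),
          (309.0, 8.1, 0.00056, 200.0, 0.042, 50.0)]
print(total_cost([300.0, 200.0], coeffs))
```

The rectified sine term is what makes the function nonconvex: it adds a ripple to the otherwise quadratic cost, so the incremental cost is neither continuous nor monotone.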

RANDOM PARTICLES AND FINE-TUNING MECHANISM

Brief of particle swarm optimization: In a PSO system, a flock of birds (particles) optimizes a certain objective function. Each agent knows its best value so far (pbest) and its position; this information is an analogy of the personal experience of each agent. Moreover, each agent knows the best value found so far in the group (gbest) among all agents. In this study, PSO-CF is selected to trace the pbest and gbest values. In PSO-CF, the velocity is updated by Eq. 5; this velocity gradually moves the position of each individual closer to pbest and gbest. The current position is then modified by Eq. 6:

v_gi^(j+1) = K·[v_gi^j + c1·rand()·(pbest_gi^j − x_gi^j) + c2·rand()·(gbest_g^j − x_gi^j)]    (5)

where K = 2/|2 − φ − sqrt(φ² − 4φ)| is the constriction factor and φ = c1 + c2, φ > 4.

x_gi^(j+1) = x_gi^j + v_gi^(j+1)    (6)

Where:

c1, c2 = Acceleration constants (c1 = c2 = 2.05)
rand() = Uniform random value in the range [0, 1]
x_gi^j = Dimension g of the position of particle i at iteration j
v_gi^j = Dimension g of the velocity of particle i at iteration j
pbest_gi^j = Dimension g of the own best position of particle i at iteration j
gbest_g^j = Dimension g of the best particle in the swarm at iteration j
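A minimal sketch of the PSO-CF update of Eq. 5 and 6 for one particle, with the standard Clerc-Kennedy constriction factor computed from c1 = c2 = 2.05 (variable names are illustrative):

```python
import math
import random

C1 = C2 = 2.05
PHI = C1 + C2                                               # phi = 4.1 > 4
K = 2.0 / abs(2.0 - PHI - math.sqrt(PHI ** 2 - 4.0 * PHI))  # constriction factor, ~0.729

def pso_cf_step(x, v, pbest, gbest):
    """One PSO-CF update (Eq. 5 and 6) for a single particle,
    applied dimension by dimension."""
    new_v = [K * (v[g]
                  + C1 * random.random() * (pbest[g] - x[g])
                  + C2 * random.random() * (gbest[g] - x[g]))
             for g in range(len(x))]
    new_x = [x[g] + new_v[g] for g in range(len(x))]
    return new_x, new_v
```

With c1 = c2 = 2.05 the factor K is about 0.729, which damps the velocity and removes the need for an explicit velocity limit.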

Random particle: In the PSO procedure, a proper ratio of the particles is selected from the population as random particles. These random particles are independent of the vectors or positions obtained from the individual or population particles; their search is conducted randomly over the whole space. Since the random particles have the ability of global search over the entire design space, they are expected to search spaces not reached by the population. Apart from the random particles, the remaining particles in the population update their information according to their velocities and positions. This study does not adopt pbest and gbest when updating the positions of the random particles; these particles thus conduct a random search in the design space. The formulation of a random particle is expressed as follows:

x_gi^(j+1) = x_g,min + rand()·(x_g,max − x_g,min)    (7)

In this study, the initial population is set to 30 particles and the number of random particles to 5. If the random particles cannot find better fitness values when searching the solution space, the remaining particles of the population keep searching according to the velocity and position updating rules. Thus, if the random particles are selected properly, the original search ability of the population is not affected.
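The random-particle operator can be sketched as below, reading Eq. 7 as a uniform redraw within the design-space bounds (the function name and the uniform sampling rule are assumptions; the random particles ignore pbest and gbest entirely, as stated above):

```python
import random

def inject_random_particles(positions, bounds, n_random=5):
    """Replace n_random randomly chosen particles with positions drawn
    uniformly in the design space, independent of pbest/gbest."""
    idx = random.sample(range(len(positions)), n_random)
    for i in idx:
        positions[i] = [lo + random.random() * (hi - lo) for lo, hi in bounds]
    return idx  # indices of the replaced particles

# 30 particles, 3 dimensions with per-dimension bounds (placeholder values)
bounds = [(100.0, 600.0), (50.0, 200.0), (100.0, 400.0)]
swarm = [[sum(b) / 2 for b in bounds] for _ in range(30)]
replaced = inject_random_particles(swarm, bounds, n_random=5)
```

The remaining 25 particles would continue with the PSO-CF velocity and position updates, so a poor random draw never degrades the population's original search ability.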

Fine-tuning mechanism: Near the end of the search period, the particle swarm population moves toward the optimal solution slowly because of the increasing similarity between pbest and gbest, which lengthens the operating time. A fine-tuning mechanism can therefore be employed to make the search more efficient at the end of the run and to increase the success rate of the algorithm in finding the optimal solution.

Execution discriminant of fine-tuning mechanism: The fine-tuning mechanism need not be executed in every iteration; the fine-tuning operation is carried out only when the preset discriminant value (Pdfit) of the directional function and the iteration interval s are met. In this study, the directional function value (Pgifit) of dimension g at the ith iteration is adopted as the discrimination condition. The directional function value Pgifit can be expressed as follows:

(8)

where f_gi^j is the fitness function of dimension g of the position of particle i at iteration j. Before executing the fine-tuning mechanism, the value Pdfit and the iteration interval s are preset as the discrimination condition for starting the fine-tuning. After every s iterations, it is determined whether the directional function value Pgifit is less than or equal to the preset value Pdfit; if so, the population skips the normal iteration process and turns to the fine-tuning mechanism for the fine-tuning operation; otherwise, it resumes the original iterative process.
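The paper's exact directional function (Eq. 8) is not reproduced above, so the sketch below uses an illustrative stand-in: trigger fine-tuning when the gbest fitness has improved by no more than the threshold over the last s iterations. The function name and this stagnation criterion are assumptions:

```python
def should_fine_tune(gbest_history, s, p_dfit):
    """Stand-in discriminant for switching to fine-tuning: trigger when
    the improvement of the gbest fitness over the last s iterations is
    at or below the preset threshold p_dfit (Eq. 8 itself is a
    directional function; this is only an illustrative proxy)."""
    if len(gbest_history) <= s:
        return False
    improvement = abs(gbest_history[-1 - s] - gbest_history[-1])
    return improvement <= p_dfit
```

Checking the discriminant only every s iterations keeps the overhead negligible compared with the fitness evaluations of the main loop.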

Fine-tuning operational method: In the fine-tuning mechanism, gbest is treated as the object of fine-tuning in order to find solutions better than gbest in the fine-tuning region near gbest. The formulation of the fine-tuning mechanism is expressed as follows:

(9)

n: dimension of

Each fine-tuning particle Pgt is evaluated to find the adaptive value (fgi) of the objective function and all fine-tuning particles are sorted. The optimal fine-tuning particle Pgi,min and its corresponding optimal fine-tuning adaptive value fgi,min are selected. If fgi,min is better than the optimal adaptive value fgi of the original population, then the optimal adaptive value fgi of the original population and the optimal design value gbest are replaced by fgi,min and the position Pgt of the corresponding optimal fine-tuning particle in the solution space.
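Since Eq. 9 is not reproduced above, the sketch below illustrates the fine-tuning step generically: sample a handful of probe particles in a small region around gbest, keep the best probe if it improves on gbest. Both the sampling rule and the radius parameter are assumptions, not the paper's exact formulation:

```python
import random

def fine_tune(gbest, fitness_fn, bounds, n_probes=10, radius=0.01):
    """Fine-tuning sketch: sample n_probes particles in a small region
    around gbest (radius is a fraction of each dimension's range) and
    return the best probe if it improves on gbest (minimization)."""
    best_x, best_f = list(gbest), fitness_fn(gbest)
    for _ in range(n_probes):
        probe = [min(hi, max(lo, g + (random.random() - 0.5) * 2 * radius * (hi - lo)))
                 for g, (lo, hi) in zip(gbest, bounds)]
        f = fitness_fn(probe)
        if f < best_f:
            best_x, best_f = probe, f
    return best_x, best_f

# Usage: refine a candidate for a simple quadratic fitness
best_x, best_f = fine_tune([1.0, 1.0], lambda p: sum(t * t for t in p),
                           [(-5.0, 5.0)] * 2, n_probes=20)
```

Because the probes never replace gbest unless they are strictly better, the fine-tuning operation can only improve, never degrade, the population's best solution.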

The procedure of the proposed algorithm: A flowchart of the PSO-RPFT is shown in Fig. 1. The details of PSO-RPFT can be described as follows. Step 1: Set the particle population, the random particle population, the number of iterations, the iteration interval s and the decision value of the directional function Pdfit. Step 2: During each iteration, check the discrimination condition for starting the fine-tuning operation: after every s iterations, judge whether the directional function value Pgifit ≤ Pdfit; if yes, the population skips the normal iteration process and turns to Step 7 for the fine-tuning operation; otherwise, it resumes the original iterative process.

Step 3: Calculate the objective function adaptive value (fiti) of each particle in the design space. Step 4: Compare the adaptive value (fiti) obtained by the particle with the pbest adaptive value (fiti,pbest) searched so far in the design space; if fiti is better than fiti,pbest, a replacement is performed; otherwise, this step is skipped. Step 5: After Step 4 and any replacement, compare with the gbest adaptive value (fiti,gbest) which has been searched in the design space.

Fig. 1: The flowchart of the PSO-RPFT

If the optimal adaptive value obtained through Step 4 is better than fiti,gbest, a replacement is carried out; otherwise, this step is skipped.

Step 6: The random particle population updates its position according to Eq. 7 and the other particles update their velocities and positions according to Eq. 5 and 6. Step 7: If the PSO-RPFT algorithm meets the condition of Step 2, gbest is regarded as the object of the fine-tuning operation and the fine-tuning particle population calculates the adaptive values according to Eq. 9.

Step 8: Compare the adaptive value obtained from the fine-tuning operation of Step 7 with the adaptive value prior to fine-tuning; if the former is better, a replacement is carried out; otherwise, it is skipped.

Step 9: The number of iterations is the stop condition; when this number is reached, the iteration stops and the optimal objective function adaptive value is exported; otherwise, return to Step 2 and continue the iterations.
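The whole procedure above can be assembled into one self-contained loop. This is only an illustrative sketch on a generic minimization problem: the random particles are redrawn uniformly, the fine-tuning discriminant is gbest stagnation over s iterations and the fine-tuning perturbs gbest locally; these details stand in for Eq. 7-9, which are not reproduced here:

```python
import math
import random

C1 = C2 = 2.05
PHI = C1 + C2
K = 2.0 / abs(2.0 - PHI - math.sqrt(PHI ** 2 - 4.0 * PHI))  # PSO-CF constriction

def pso_rpft(fitness, bounds, n_particles=30, n_random=5,
             iters=100, s=10, p_dfit=1e-6, seed=0):
    """Sketch of the Steps 1-9 loop (minimization), with assumed
    stand-ins for the paper's Eq. 7-9 as noted in the lead-in."""
    rng = random.Random(seed)
    dim = len(bounds)
    uniform = lambda: [lo + rng.random() * (hi - lo) for lo, hi in bounds]
    # Step 1: initialize population, velocities, pbest and gbest
    x = [uniform() for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_f = [fitness(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    history = [gbest_f]
    for _ in range(iters):
        # Step 2: discriminant (stand-in: gbest stagnation over s iterations)
        stagnant = len(history) > s and abs(history[-1 - s] - history[-1]) <= p_dfit
        if stagnant:
            # Steps 7-8: fine-tuning probes in a small region around gbest
            for _ in range(n_particles):
                probe = [min(hi, max(lo, gb + (rng.random() - 0.5) * 0.02 * (hi - lo)))
                         for gb, (lo, hi) in zip(gbest, bounds)]
                f = fitness(probe)
                if f < gbest_f:
                    gbest, gbest_f = probe, f
        else:
            rand_idx = set(rng.sample(range(n_particles), n_random))
            for i in range(n_particles):
                if i in rand_idx:
                    x[i] = uniform()           # Step 6 / Eq. 7: random particle
                else:
                    v[i] = [K * (v[i][d]       # Eq. 5 and 6: PSO-CF update
                                 + C1 * rng.random() * (pbest[i][d] - x[i][d])
                                 + C2 * rng.random() * (gbest[d] - x[i][d]))
                            for d in range(dim)]
                    x[i] = [min(hi, max(lo, x[i][d] + v[i][d]))
                            for d, (lo, hi) in enumerate(bounds)]
                f = fitness(x[i])
                if f < pbest_f[i]:             # Step 4: update pbest
                    pbest[i], pbest_f[i] = list(x[i]), f
                    if f < gbest_f:            # Step 5: update gbest
                        gbest, gbest_f = list(x[i]), f
        history.append(gbest_f)                # Step 9: loop until iters reached
    return gbest, gbest_f
```

On a smooth test function such as the 2-D sphere, this loop converges near the origin within a few hundred iterations; the ED cost of Eq. 4 would simply be plugged in as `fitness` with a power-balance penalty.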

CASE STUDY

PSO-RPFT is numerically tested on three cases of the ED problem. The associated fuel cost coefficients are adapted from Sinha et al. (2003). Each optimization method (PSO-RPFT, PSO, SA, GA and EP) is implemented in the Matlab language on a Pentium IV 2.6 GHz computer with 512 MB RAM.

Three generating units: Case-1 includes three generating units with nonconvex cost functions. In this case, the load demand is 850 MW and 100 generations are set as the stopping criterion. To verify the performance of PSO-RPFT, the PSO, SA, GA and EP methods are applied to the same case, as shown in Table 1. 100 test runs are conducted for each method. From Table 1, PSO-RPFT has a higher probability of reaching the optimal solution in each trial test, whereas the other evolutionary algorithms often converge to different optima. The number of trials reaching the optimum and the average execution time of each trial test are 32 and 0.3268 sec, respectively. It is obvious that the performance of PSO-RPFT is better than the other evolutionary algorithms. The convergence characteristics of the different methods are shown in Fig. 2.

Table 1: Simulation results of 3 generating units with different methods

 

Table 2: Robustness test of 13 generating units with different methods

 

 

Table 3: The generation output and the corresponding cost

 

 

Fig. 2: The convergence characteristics of the different methods

 

13 generating units: Case-2 consists of 13 generating units with nonconvex functions and valve-point effects. The load demand is 1800 MW and 800 generations are set as the stopping criterion. Table 2 shows the robustness test for all methods; each method is executed for 100 trials. The best converged cost of PSO-RPFT is $17976.015 and the average execution time of each test is 5.7441 sec. In terms of average execution time and the number of trials reaching the optimum, PSO-RPFT has an obviously better searching efficiency than the other evolutionary algorithms.

Fig. 3: The convergence characteristics of the different methods

Because the converged cost reaches a smaller value than the other evolutionary algorithms, there is a better probability of guaranteeing a global optimum. From Table 2, it can be seen that PSO-RPFT is a better performer in terms of solution quality, execution time and searching performance. The generation output and the corresponding cost of the best solution are shown in Table 3. Figure 3 shows the convergence characteristics of the different methods.

40 generating units: Case-3 has 40 generating units, a large system with more nonlinearity. The load demand is 10500 MW and 1000 generations are set as the stopping criterion. Table 4 shows the robustness test with the different methods.

Each method is also executed for 100 trials. The average cost of PSO-RPFT is $122813.371 and the average execution time of each test is 29.9832 sec. Table 5 compares the results of PSO-RPFT with those of previous algorithms by Sinha et al. (2003), Park et al. (2005) and Liu and Cai (2005), namely Classical EP (CEP), Fast EP (FEP), Modified FEP (MFEP), Improved FEP (IFEP), Modified PSO (MPSO) and the Taguchi Method (TM).

Table 4: The robustness test for all methods

 

Table 5: Comparison of simulation results

 

 

Table 6: The number of convergent cost ranges for each trial test

 

 

Fig. 4: The convergence characteristics of the different methods

 

Although the solution is not guaranteed to be the global optimum, the superiority of PSO-RPFT is still shown in Table 5. To compare the results with previous algorithms in a statistical manner, the relative probability of convergence is provided for each range of cost, as shown in Table 6. From Table 6, PSO-RPFT exhibits more robustness than the other evolutionary algorithms. Figure 4 shows the convergence characteristics of the different methods.

CONCLUSION

A PSO-RPFT for solving the economic dispatch problem with valve-point effects is presented in this study. Many nonlinear characteristics of units can be handled properly in the PSO-RPFT procedure within a reasonable time. Instead of deterministic rules, PSO-RPFT uses a random particle and fine-tuning mechanism to enhance the searching performance, which leads to a higher probability of obtaining the global optimum. Test results reveal that the solutions reach a better value in each trial test. Combining the advantages of both heuristic ideas and AI, PSO-RPFT solves the complicated problem with better performance than AI methods and a higher likelihood of reaching a global optimum than heuristic methods. PSO-RPFT is first developed and applied here to the ED problem; it has great potential to be further applied to many ill-conditioned problems in power system planning and operation.

How to cite this article:

Chin-Wei Yen and Ming-Tang Tsai. A Particle Swarm Optimization with Random Particles and Fine-Tuning Mechanism for Nonconvex Economic Dispatch.
DOI: https://doi.org/10.36478/ijepe.2011.35.41
URL: https://www.makhillpublications.co/view-article/1990-7958/ijepe.2011.35.41