Original Article  |  Open Access  |  4 Jan 2024

A cluster-based co-evolutionary optimization method for bilevel multi-objective optimization

J Smart Environ Green Comput 2024;4:4-21.
10.20517/jsegc.2023.18 |  © The Author(s) 2024.

Abstract

The research motivation for multi-objective bilevel optimization mainly stems from the need to solve practical problems and improve decision-making efficiency. On the one hand, bilevel optimization helps to handle the complexity and uncertainty of real-life decisions, thereby improving their efficiency and robustness. On the other hand, by promoting the development and application of AI technology, bilevel optimization also provides support for sustainable development. Although bilevel optimization has proven beneficial in addressing various real-life problems, recent studies indicate that achieving both high speed and high quality with existing algorithms remains challenging. This difficulty arises from the NP-hard nature of the bilevel optimization problem. The nested structure method commonly used to tackle it performs an independent lower level optimization for each upper level solution, which greatly inflates the number of lower level evaluations. To address this issue, our proposed method leverages the similarity among lower level optimization tasks to group upper level solutions, enabling co-evolution of the lower level solutions within the same group. Consequently, this approach substantially reduces the number of lower level evaluations required. Additionally, our method pairs parents with offspring so that the optimized lower level solutions of a parent can seed the lower level optimization of its offspring, further accelerating the lower level search. To validate the effectiveness of our algorithm, we have applied it to a suite of test problems, demonstrating satisfactory performance.

Keywords

Bilevel multi-objective optimization, multi-task optimization, evolutionary algorithms, multi-objective optimization

1. INTRODUCTION

Today's society faces two major challenges: sustainable development and optimal use of resources. Green computing aims to reduce the environmental impact of computers and information technology, for example by lowering energy consumption and e-waste. Bilevel multi-objective optimization is an optimization framework that considers multiple objective functions simultaneously to find a good balance among them. By incorporating green computing principles, such as minimizing energy consumption and environmental impact, as additional objectives or constraints within the optimization framework, we can effectively prioritize sustainability alongside other performance metrics. The bilevel optimization problem (BLOP) is a nested problem in which a lower level optimization problem is embedded within the upper level optimization problem [1]. Bilevel optimization has its roots in game theory [2]. It has gained increasing attention due to its potential in various fields, such as economic management, resource allocation, energy sustainability, transportation planning, machine learning, and medical engineering [3-9]. In some cases, both the upper level and the lower level optimization problems contain multiple conflicting objective functions, leading to multi-objective bilevel optimization problems (MBOPs). Without loss of generality, an MBOP can be expressed as follows:

$$ \begin{equation} \begin{split} \min\limits_{\mathbf{x}^u, \mathbf{x}^l}\; &\mathbf{f}^u=\{f_1^u(\mathbf{x}^u, \mathbf{x}^l), \ldots, f_{n^u}^u(\mathbf{x}^u, \mathbf{x}^l)\}, \\ \mathrm{s.t.}\; &g_j^u(\mathbf{x}^u, \mathbf{x}^l)\leq 0, j = 1, \ldots, p^u, \\ &\mathbf{x}^l\in \mathop{\arg\min}_{\mathbf{x}^l} \mathbf{f}^l=\{f_1^l(\mathbf{x}^u, \mathbf{x}^l), \ldots, f_{n^l}^l(\mathbf{x}^u, \mathbf{x}^l)\}, \\ &\quad\quad\; \mathrm{s.t.}\; g_j^l(\mathbf{x}^u, \mathbf{x}^l)\leq 0, j = 1, \ldots, p^l, \\ \end{split} \end{equation} $$

where $$ \mathbf{f}^u $$ and $$ \mathbf{f}^l $$ are the upper level and lower level objective functions, respectively; $$ \mathbf{x}^u $$ and $$ \mathbf{x}^l $$ refer to the upper level and lower level solutions, respectively; $$ n^u $$ and $$ n^l $$ indicate the number of upper level and lower level objective functions, respectively; $$ {{g_{j}^u}\left( {\mathbf{x}^u, \mathbf{x}^l} \right)} $$ and $$ {{g_{j}^l}\left( {\mathbf{x}^u, \mathbf{x}^l} \right)} $$ are the $$ j $$th upper level and lower level constraints, respectively; and $$ p^u $$ and $$ p^l $$ denote the number of upper level and lower level constraints, respectively. A solution $$ \mathbf{x}={({\mathbf{x}^u, \mathbf{x}^l})} $$ is feasible for an MBOP if and only if the upper level constraints are satisfied and $$ \mathbf{x}^l $$ is a Pareto-optimal solution to the lower level optimization problem for the given $$ \mathbf{x}^u $$. The goal of solving an MBOP is to find a set of widely distributed feasible solutions with respect to the upper level optimization problem. Due to the nested structure, traditional optimization methods usually have difficulty solving MBOPs effectively. Evolutionary algorithms (EAs) have been used to solve MBOPs [10, 11] because they do not depend on the mathematical characteristics of MBOPs. The existing EAs for solving MBOPs can be classified into three types: single-level reduction methods [12-17], surrogate-model-based methods [6, 18, 19], and nested methods [10, 20-22].
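To make the feasibility condition above concrete, the following minimal Python sketch checks whether a pair $$ (\mathbf{x}^u, \mathbf{x}^l) $$ is bilevel-feasible against a finite set of lower level candidates. The objective functions f_upper and f_lower are hypothetical stand-ins chosen only for illustration, not the benchmark problems used later in this paper.

```python
import numpy as np

# A minimal sketch of the nested structure above, with hypothetical
# objective functions standing in for f^u and f^l.
def f_upper(xu, xl):
    # two illustrative upper level objectives f_1^u, f_2^u
    return np.array([xu[0]**2 + np.sum(xl**2),
                     (xu[0] - 1)**2 + np.sum((xl - 1)**2)])

def f_lower(xu, xl):
    # two illustrative lower level objectives f_1^l, f_2^l
    return np.array([np.sum((xl - xu[0])**2),
                     np.sum((xl + xu[0])**2)])

def is_bilevel_feasible(xu, xl, lower_candidates, tol=1e-9):
    """(xu, xl) is feasible for the MBOP only if xl is (approximately)
    Pareto-optimal for the lower level problem induced by xu; here we
    test xl against a finite set of candidate lower level solutions."""
    fl = f_lower(xu, xl)
    for other in lower_candidates:
        fo = f_lower(xu, other)
        if np.all(fo <= fl) and np.any(fo < fl - tol):
            return False  # 'other' dominates xl at the lower level
    return True

# Upper level quality is then judged by f_upper(xu, xl) over feasible
# pairs only, which is what makes the problem nested and expensive.
```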

The single-level reduction approach converts an MBOP into a single-level optimization problem and then applies EAs to solve it. For instance, Li et al. used adaptive weighted-sum scalarization and the Karush-Kuhn-Tucker (KKT) conditions to transform an MBOP into a multi-objective optimization problem (MOP)[12], and proposed an effective smoothing technique to cope with the complementarity constraints. Li et al. [14] improved the algorithm presented in [12]: they first used the KKT conditions to transform an MBOP into an MOP involving complementarity constraints and then proposed a decomposition-based constrained multi-objective differential EA. Jia et al. converted an MBOP into an MOP based on primal-dual theory and used multi-objective metaheuristics and constraint-handling techniques for optimization[13]. Converting an MBOP into a single-level optimization problem is a mature approach in the field of bilevel optimization, but such methods usually require strict mathematical assumptions that are difficult to satisfy in the real world.

The nested method solves the MBOP directly by performing the lower level optimization independently for each upper level solution. For example, Deb et al. used the elitist nondominated sorting genetic algorithm (NSGA-II) for both the upper and lower level optimization[10]. This algorithm was subsequently upgraded by Sinha et al. [22]: their improved version evaluates the upper level only once per iteration, which significantly reduces the number of evaluations, and it also allows archive members to participate in crossover, which improves performance. Deb et al. proposed a hybrid EA combined with a local search strategy[23]. Cai et al. proposed a divide-and-conquer strategy in which all variables (of both levels) are divided into multiple mutually exclusive groups and optimized separately[20]. Although these methods are successful, due to the structural characteristics of bilevel optimization, they cannot afford the large number of lower level evaluations required when the number of upper level solutions is large. To solve more complex MBOPs, many new algorithms adopt an EA at both levels; for example, particle swarm optimization at both the upper and lower levels, or differential evolution (DE) at both levels.

The surrogate-model-based methods use a surrogate model to approximate the constraints and objective functions of the lower level, with the aim of reducing the number of lower level fitness evaluations (FEs). For example, Sinha et al. proposed an approximated-set-mapping approach that uses quadratic functions to address the lower level optimization problem[18]. In the algorithm named m-BLEAQ, Sinha et al. modeled the lower level decision variables with a value function, using a quadratic function to estimate unknown lower level decision variables[19]. It is essential to note that the accuracy of the surrogate model has a significant impact on the quality of the final solutions.

In the real world, we often face complex problems that require multiple goals to be considered simultaneously, where the goals may constrain one another. Bilevel optimization algorithms provide an effective way to handle such problems by decomposing them into two levels: the upper level generates a set of feasible solutions, and the lower level finds, for this set, the lower level optimal solutions that satisfy the constraints. This decomposition enables bilevel optimization algorithms to consider multiple aspects of a problem comprehensively and thus obtain more comprehensive, higher-quality solutions. Most current work on MBOPs takes the perspective of game theory, using objective function values and the concept of Pareto optimality. In contrast, Gu et al. were the first to study the convergence characteristics of MBOPs from the perspective of traditional optimization; they consider a min-max robust version of the multi-objective problem that weighs the optimality of the different objectives, ensuring that each objective obtains a single optimal solution rather than generating multiple Pareto-optimal solutions[24]. Recently, Wang et al. designed a lower level environmental selection strategy and an upper level solution regeneration strategy to improve search efficiency[25]. Inspired by the biological classification of species, Mejía-de-Dios et al. proposed a novel evolutionary framework based on the concept of family[26]; they adopt this taxonomic concept to promote the diversity of solutions.

However, the implementation and application of bilevel optimization algorithms still face many challenges. First, the complexity of bilevel optimization problems tends to be high, making the solution process very difficult. Second, existing bilevel optimization algorithms often struggle to balance solution speed against solution quality. To address these problems, new bilevel optimization algorithms are needed that improve both the speed and the quality of solutions.

In bilevel optimization, the lower level optimization tasks are usually similar for upper level solutions that are close to each other. Inspired by this observation, we design a new algorithm called CCBMO, which utilizes matching and clustering. To our knowledge, this is the first time that the similarity between the lower level optimization tasks of parents and offspring has been exploited in bilevel optimization. The main contributions of this article are summarized as follows:

1. Initially, in the upper level decision space, we match each offspring to a parent based on the Euclidean distance. This makes it possible to select some of the parent's lower level solutions as initial solutions when performing the lower level optimization for the offspring, greatly reducing the number of lower level evaluations required.

2. Then, k-means is used to cluster the upper level offspring into $$ {N_u}/2 $$ groups. When the solutions in the same group undergo lower level optimization, we use co-evolution.

3. Following this, environmental selection is performed based on the upper level objective values, and the selected upper level solutions proceed to the next iteration.

The remainder of this paper is structured as follows: Section 2 presents the proposed bilevel optimization algorithm. Section 3 provides a comprehensive experimental study, analyzing and discussing the results. Finally, Section 4 summarizes the findings and the potential for further research.

2. THE PROPOSED ALGORITHM

2.1. General framework of proposed algorithm

The framework of the CCBMO algorithm is given in Algorithm 1. Initially, a set of $$ N_u $$ upper level solutions $$ {\mathcal{X}^u} = \left\{ {\mathbf{x}_1^u, \mathbf{x}_2^u, ..., \mathbf{x}_{N_u}^u} \right\} $$ is randomly generated. Subsequently, the lower level optimization is conducted for each $$ \mathbf{x}_{i}^u $$ individually via Algorithm 2. The obtained lower level solutions and their corresponding upper level solutions are stored in $$ \mathcal{L}^p $$ (line 2). The solutions in $$ \mathcal{L}^p $$ are then ranked based on the upper level objective values and the degree of constraint violation, and the nondominated solutions are stored in $$ {\mathcal{X}^o} $$ (line 3).

Algorithm 1: Proposed algorithm

Algorithm 2: Lower level optimization

During the main loop, we execute DE on $$ {\mathcal{X}^u} $$ and store the resulting offspring in $$ \mathcal{X}^{uo}=\left\{ {\mathbf{x}_{1}^{uo}, ..., \mathbf{x}_{{N_u}}^{uo}} \right\} $$ (line 5). For each offspring $$ \mathbf{x}_{i}^{uo} $$, the closest solution in $$ {\mathcal{X}^u} $$ is denoted as $$ \mathbf{x}_{i}^{um} $$ (lines 6-8). Several types of matches can occur. As shown in Figure 1B, in the first case, $$ \mathbf{x}_{1}^{uo} $$ is matched with its closest parent upper level solution $$ \mathbf{x}_{1}^{u} $$; therefore, $$ {\mathbf{x}}_1^{um} = {\mathbf{x}}_1^u $$. In the second case, $$ \mathbf{x}_{2}^{uo} $$ and $$ \mathbf{x}_{5}^{uo} $$ are both matched with $$ \mathbf{x}_{2}^{u} $$, as it is the closest to both; therefore, $$ {\mathbf{x}}_2^{um} = {\mathbf{x}}_2^u $$ and $$ {\mathbf{x}}_5^{um} = {\mathbf{x}}_2^u $$. In the third case, no offspring are closest to $$ \mathbf{x}_{6}^{u} $$ or $$ \mathbf{x}_{7}^{u} $$, so these parents match no offspring.
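The matching step (lines 6-8 of Algorithm 1) can be sketched as follows; this is an illustrative NumPy implementation under our reading of the text, with array names of our own choosing, not the authors' reference code.

```python
import numpy as np

def match_parents(parents_u, offspring_u):
    """For each upper level offspring, return the index of the closest
    parent in Euclidean distance (lines 6-8 of Algorithm 1).
    parents_u, offspring_u: arrays of shape (N_u, D_u)."""
    # dist[i, j] = ||offspring_u[i] - parents_u[j]||
    dist = np.linalg.norm(offspring_u[:, None, :] - parents_u[None, :, :], axis=2)
    return np.argmin(dist, axis=1)

# match[i] = j means x_i^{um} = x_j^u; several offspring may share one
# parent, and some parents may be matched to no offspring -- exactly
# the three cases illustrated in Figure 1B.
```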


Figure 1. An example of matching and clustering. (A) Clustering of $$ \mathcal{X}^{uo} $$ via k-means; (B) matching the solutions in $$ \mathcal{X}^{u} $$ with each solution in $$ \mathcal{X}^{uo} $$ based on the Euclidean distance.

Then $$ \mathcal{X}^{uo} $$ is clustered into $$ N_u/2 $$ groups using k-means, denoted as $$ \mathcal{G}_1, \dots, \mathcal{G}_{N_u/2} $$ (line 9). K-means is a Euclidean-distance-based clustering method: it divides N objects into K groups so as to minimize the sum of the distances from each object to its nearest cluster center, and objects in the same group are considered similar. The indices of the solutions of the $$ i $$th group in $$ \mathcal{X}^{uo} $$ are stored in $$ \mathcal{G}_i $$. After k-means clustering, the solutions in the same cluster lie close together, suggesting that their corresponding lower level optimization tasks are similar. Therefore, the lower level optimization is performed simultaneously on the solutions in the same group via Algorithm 2. The optimized lower level solutions and their corresponding upper level solutions are then stored in $$ \mathcal{L}^s $$ (lines 11-14). The parents in $$ \mathcal{L}^p $$ and offspring in $$ \mathcal{L}^s $$ are combined into $$ \mathcal{L} $$. Then, $$ N_u $$ distinct upper level solutions are chosen to replace the original solutions in $$ \mathcal{X}^u $$, based on the nondominated sorting and crowding distance of the upper level environmental selection. The selected upper level solutions and their corresponding lower level solutions are stored in $$ \mathcal{L}^p $$. Finally, the nondominated solutions in $$ \mathcal{L} $$ are merged into $$ \mathcal{X}^o $$ to update it.
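A minimal sketch of this grouping step (line 9 of Algorithm 1), using scikit-learn's KMeans; the function name and its parameters are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_offspring(offspring_u, n_groups):
    """Cluster the upper level offspring into n_groups (= N_u/2) groups;
    offspring in the same group are treated as having similar lower
    level optimization tasks."""
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(offspring_u)
    # G[i] holds the indices (into offspring_u) of the i-th group
    return [np.where(labels == i)[0] for i in range(n_groups)]

# usage: groups = group_offspring(offspring_u, len(offspring_u) // 2)
```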

In order to select better solutions in each iteration, the selection operator of NSGA-II is applied to $$ \mathcal{L} $$ to obtain $$ \mathcal{X}^u $$, which stores $$ N_u $$ distinct upper level solutions. This selection is based on the upper level objective values and the upper level constraint violation. Specifically, the solutions are first divided into different nondominated sets using the constrained-domination principle. According to this principle, given two solutions $$ \mathbf{x}_1 $$ and $$ \mathbf{x}_2 $$, $$ \mathbf{x}_1 $$ is considered better than $$ \mathbf{x}_2 $$ if any of the following conditions is met (a minimal comparator sketch is given after this list):

1. $$ cv\left( {{{\mathbf{x}}_1}} \right) = 0 $$ and $$ cv\left( {{{\mathbf{x}}_2}} \right) = 0 $$, and $$ \mathbf{x}_1 $$ Pareto dominates $$ \mathbf{x}_2 $$;

2. $$ cv\left( {{{\mathbf{x}}_1}} \right) = 0 $$ and $$ cv\left( {{{\mathbf{x}}_2}} \right) > 0 $$;

3. $$ cv\left( {{{\mathbf{x}}_1}} \right) > 0 $$ and $$ cv\left( {{{\mathbf{x}}_2}} \right) > 0 $$, and $$ cv\left( {{{\mathbf{x}}_1}} \right) < cv\left( {{{\mathbf{x}}_2}} \right) $$.

Then, the best half of the upper level solutions are added to $$ \mathcal{X}^u $$.
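A minimal comparator implementing the three conditions above might look as follows, assuming $$ cv(\mathbf{x}) $$ denotes the upper level constraint violation; this is an illustrative sketch, not the authors' code.

```python
import numpy as np

def constrained_dominates(f1, cv1, f2, cv2):
    """True if solution 1 is better than solution 2 under the
    constrained-domination principle; f1, f2 are upper level objective
    vectors and cv1, cv2 the corresponding constraint violations."""
    if cv1 == 0 and cv2 == 0:
        # condition 1: both feasible, use plain Pareto dominance
        return bool(np.all(f1 <= f2) and np.any(f1 < f2))
    if cv1 == 0 and cv2 > 0:
        return True  # condition 2: feasible beats infeasible
    if cv1 > 0 and cv2 > 0:
        return cv1 < cv2  # condition 3: smaller violation wins
    return False
```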

2.2. Lower level optimization

Algorithm 2 provides the process of lower level optimization. First, an empty set $$ \mathcal{S} $$ is created (line 1). We use $$ k $$ to denote the index of the $$ j $$th element in $$ \mathcal{G}_{i} $$. Since a solution $$ \mathbf{x}_{i}^{um} $$ is matched to each $$ \mathbf{x}_{i}^{uo} $$ (lines 6-8 of Algorithm 1), we can find the $$ \mathbf{x}_{k}^{um} $$ corresponding to $$ \mathbf{x}_{k}^{uo} $$ and store its lower level solutions in $$ \mathcal{S} $$. As shown in Figure 1A, the solutions $$ \mathbf{x}_{1}^{uo}, ..., \mathbf{x}_{8}^{uo} $$ are divided into four groups by k-means; for example, $$ \mathbf{x}_{1}^{uo} $$ forms a group of its own. Based on Figure 1B, $$ \mathbf{x}_{1}^{uo} $$ is matched with $$ \mathbf{x}_{1}^{u} $$, so the lower level solutions whose upper level solution is $$ \mathbf{x}_{1}^{u} $$ are stored in $$ \mathcal{S} $$. $$ \mathbf{x}_{2}^{uo} $$ and $$ \mathbf{x}_{3}^{uo} $$ belong to the same group but are matched with $$ \mathbf{x}_{2}^{u} $$ and $$ \mathbf{x}_{3}^{u} $$, respectively; therefore, the lower level solutions whose upper level solutions are $$ \mathbf{x}_{2}^{u} $$ and $$ \mathbf{x}_{3}^{u} $$ are both placed in $$ \mathcal{S} $$.


Figure 2. Comparison of the average FEs statistical results of the three algorithms on TP1-TP2 and DS1-DS5. FEs: Fitness evaluations.

After removing duplicate values from $$ \mathcal{S} $$, $$ \min \left\{ {{N_l}/2, \left| \mathcal{S} \right|} \right\} $$ solutions are randomly selected from $$ \mathcal{S} $$ and stored in $$ \mathcal{X}^{l}_1 $$. Additionally, $$ {N_l}-\min \left\{ {{N_l}/2, \left| \mathcal{S} \right|} \right\} $$ lower level solutions are randomly generated and stored in $$ \mathcal{X}^{l}_2 $$. Then, $$ \mathcal{X}^{l}_1 $$ and $$ \mathcal{X}^{l}_2 $$ are combined into $$ \mathcal{X}^{l} $$ (lines 7-10 of Algorithm 2). During the main loop of Algorithm 2, we execute DE on $$ {\mathcal{X}^l} $$ and store the resulting offspring in $$ \mathcal{X}^{lo} $$ (line 12). We merge $$ {\mathcal{X}^l} $$ and $$ \mathcal{X}^{lo} $$ into $$ {\mathcal{C}} $$. For the solutions in $$ {\mathcal{G}_i} $$, we choose the $$ N_l $$ best lower level solutions from $$ {\mathcal{C}} $$, using the same environmental selection as the upper level, and store them in $$ \mathcal{A}_1, ..., \mathcal{A}_{|{\mathcal{G}_i}|} $$. These lower level solutions are then merged to replace the original $$ {\mathcal{X}^l} $$ (lines 12-18). Upon meeting the termination condition of the lower level optimization, we store $$ \mathcal{A}_1, ..., \mathcal{A}_{|{\mathcal{G}_i}|} $$ and their corresponding upper level solutions in $$ \mathcal{L}^{s}_1, ..., \mathcal{L}^{s}_{|{\mathcal{G}_i}|} $$.
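The construction of the initial lower level population described here (lines 7-10 of Algorithm 2) can be sketched as follows; the bounds, names, and random generator are illustrative assumptions.

```python
import numpy as np

def init_lower_population(S, N_l, D_l, lb, ub, rng=None):
    """Build the initial lower level population for one group:
    up to N_l/2 solutions inherited from the matched parents' lower
    level solutions (S, duplicates already removed), the rest random.
    lb, ub: lower level variable bounds."""
    rng = rng or np.random.default_rng()
    n_inherit = min(N_l // 2, len(S))
    if n_inherit > 0:
        # X1: solutions inherited from the matched parents
        X1 = S[rng.choice(len(S), size=n_inherit, replace=False)]
    else:
        X1 = np.empty((0, D_l))
    # X2: randomly generated solutions filling the remaining slots
    X2 = rng.uniform(lb, ub, size=(N_l - n_inherit, D_l))
    return np.vstack([X1, X2])
```

Seeding half of the population from the matched parent is what lets the offspring's lower level search start near an already optimized region instead of from scratch.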

2.3. Termination criterion

Termination conditions for both the upper and lower levels are based on the hypervolume (HV). The rate of convergence of HV at each level is called the H-metric and is calculated as follows:

$$ \begin{equation} {H_u} = \frac{{HV_u^{\max } - HV_u^{\min }}}{{HV_u^{\max } + HV_u^{\min }}}, \quad {H_l} = \frac{{HV_l^{\max } - HV_l^{\min }}}{{HV_l^{\max } + HV_l^{\min }}}, \end{equation} $$

where $$ HV^{\max } $$ and $$ HV^{\min } $$ denote the maximum and minimum hypervolume values among the output solutions, respectively, and the subscripts $$ u $$ and $$ l $$ distinguish the upper and lower levels. We use the maximum objective value of the nondominated solutions as the reference point to calculate the HV.

When the number of iterations exceeds $$ \sigma $$ and $$ H<\varepsilon $$, the optimization terminates. We set the termination parameters as $$ {\varepsilon _l} = 0.001 $$, $$ {\varepsilon _u} = 0.001 $$, $$ {\sigma _l} = 10 $$, and $$ {\sigma _u} = 40 $$. In addition, for the upper level optimization, if the number of generations exceeds $$ {\sigma_u} $$ and $$ {H_u} $$ remains constant for six consecutive generations, the algorithm terminates. Since we optimize the lower level solutions simultaneously for multiple upper level solutions, each lower level optimization terminates if the number of generations exceeds $$ {\sigma _l} $$ and $$ {H_l} $$ remains unchanged for five consecutive generations.
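Under the description above, the termination test can be sketched as follows; hv_history is assumed to hold the hypervolume of the nondominated set at each generation, and the caller tracks how many consecutive generations the H-metric has stayed constant.

```python
def h_metric(hv_history):
    """Convergence rate of the hypervolume: the spread between the
    maximum and minimum HV observed, normalized by their sum."""
    hv_max, hv_min = max(hv_history), min(hv_history)
    return (hv_max - hv_min) / (hv_max + hv_min)

def should_terminate(hv_history, sigma, eps, n_flat, flat_window):
    """Terminate once the generation count exceeds sigma and either
    H < eps or H has stayed constant for flat_window consecutive
    generations (n_flat is that running count)."""
    if len(hv_history) <= sigma:
        return False
    return h_metric(hv_history) < eps or n_flat >= flat_window
```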

3. EXPERIMENTAL STUDIES

In this section, we present the experimental study investigating the performance of the proposed algorithm. First, we introduce the test problems and parameter settings. Then we briefly describe the compared algorithms. Afterward, we introduce the performance metrics used to evaluate algorithm performance. Finally, we give the experimental results and analyze them.

3.1. Test problems

Two benchmark test suites, TP and DS [23], were selected for the experimental studies. From the TP test suite, we chose two test problems, TP1 and TP2; the DS test suite includes five test problems (denoted as DS1-DS5).

The detailed settings of the TP and DS test sets are provided in Table 1, where $$ N^u $$ and $$ N^l $$ are the upper and lower level population sizes, and K and L determine the dimensions $$ {D^u} $$ and $$ {D^l} $$ of the upper and lower level decision variables, respectively. For DS1-DS3, the dimensionality of both the upper and lower level decision variables is K = 5. For DS4-DS5, the dimensionality of the upper level decision variables is K = 1, and that of the lower level decision variables is K + L = 10. The other parameters of the compared algorithms remain the same as in their original papers.

Table 1. Settings of the test problems

| Test problem | $$ N^u $$ | $$ N^l $$ | K | L |
| TP1 | 20 | 20 | - | - |
| TP2 | 20 | 20 | - | - |
| DS1 | 20 | 20 | 5 | 5 |
| DS2 | 20 | 20 | 5 | 5 |
| DS3 | 20 | 20 | 5 | 5 |
| DS4 | 5 | 40 | 1 | 9 |
| DS5 | 5 | 40 | 1 | 9 |

3.2. Compared algorithms and parameter settings

In order to evaluate the performance of the algorithms, two algorithms were chosen for comparison: BLMOCC [20] and MOBEA-DPL [27].

(1) BLMOCC: an algorithm that uses a knowledge-based variable decomposition strategy to solve MBOPs. This strategy is applied throughout the optimization process: according to the correlations between the variables of the two levels, all variables are divided into three groups, and different optimization methods are applied independently to each group.

(2) MOBEA-DPL: an algorithm developed within the nested bilevel multi-objective optimization framework. It uses a dual-population strategy to improve the solution of the lower level optimization problem: the first population stores nondominated solutions of the lower level, and the second stores upper level solutions that are not dominated by solutions in the first population. Additionally, to increase the effectiveness of the search, the offspring of the upper level solutions are generated from the neighborhood of the best existing solutions.

3.3. Performance metrics

To facilitate comparing the number of true evaluations across algorithms, we limit the maximum FEs of the upper and lower levels to 50,000 and 1,000,000, respectively. $$ F{E^u} $$ and $$ F{E^l} $$ denote the numbers of evaluations at the upper and lower levels, respectively. In addition, we compare the sum of the actual numbers of evaluations at both levels (i.e., $$ F{E^u} + F{E^l} $$).

Without loss of generality, we use the inverted generational distance (IGD) [28] and hypervolume (HV) [29] to measure the performance of the algorithms. Both metrics capture the diversity and convergence of the obtained solutions. A smaller IGD value means better performance, whereas a larger HV value means better performance.
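For reference, a minimal computation of IGD under its standard definition (the mean distance from each point of the reference Pareto front to its nearest obtained solution); this follows the definition in [28] and is not code from the compared papers.

```python
import numpy as np

def igd(reference_front, obtained_front):
    """Inverted generational distance: smaller values mean the obtained
    set approximates and covers the reference Pareto front better."""
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_front, dtype=float)
    # d[i, j] = distance from reference point i to obtained solution j
    d = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```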

3.4. Comparison between CCBMO and other algorithms on each index

Here, we measure whether each algorithm can converge well with a small number of FEs. We recorded the average FEs (upper level, lower level, and their sum), the IGD, and the HV for all test problems; each algorithm was run independently 21 times.

From Table 2, it can be seen that CCBMO requires fewer lower level average FEs than BLMOCC and MOBEA-DPL on TP1-TP2, DS1, and DS3-DS5, while requiring more upper level average FEs than BLMOCC on most problems. This is because, in the lower level optimization, we introduce the idea of co-optimization, which handles similar lower level optimization problems simultaneously and thus greatly reduces the number of lower level optimizations and evaluations. Moreover, as shown in Figure 2, owing to CCBMO's clear advantage in lower level average FEs, it achieves the best sum of upper and lower level average FEs on all problems except DS2, where MOBEA-DPL outperformed CCBMO and BLMOCC. This is due to the improved lower level optimization strategy.

Table 2. Performance comparison between BLMOCC, MOBEA-DPL and CCBMO regarding the average values of FEs on TP and DS. Each cell gives $$ F{E^u} $$ / $$ F{E^l} $$ / $$ F{E^u}+F{E^l} $$. FEs: Fitness evaluations.

| Problem | $$ N_{u}+N_{l} $$ | BLMOCC | MOBEA-DPL | CCBMO |
| TP1 | 20+20 | 2,536 / 320,281 / 322,818 | 53,499 / 768,080 / 821,580 | 40,615 / 250,271 / 290,885 |
| TP2 | 20+20 | 38,736 / 1,128,305 / 1,167,041 | 53,499 / 768,080 / 821,580 | 36,907 / 230,748 / 267,654 |
| DS1 (5+5) | 20+20 | 30,553 / 561,769 / 592,322 | 50,913 / 634,383 / 685,295 | 36,763 / 278,767 / 315,530 |
| DS2 (5+5) | 20+20 | 36,466 / 650,363 / 686,829 | 55,212 / 297,359 / 352,571 | 66,986 / 291,608 / 358,594 |
| DS3 (5+5) | 20+20 | 4,121 / 657,513 / 661,633 | 65,183 / 1,243,679 / 1,308,862 | 33,257 / 298,037 / 331,293 |
| DS4 (1+9) | 5+40 | 19,492 / 490,870 / 510,362 | 460,000 / 2,070,000 / 2,530,000 | 23,391 / 99,774 / 123,165 |
| DS5 (1+9) | 5+40 | 14,339 / 362,115 / 376,454 | 352,000 / 1,590,000 / 1,942,000 | 56,271 / 99,959 / 156,230 |

Figure 3. Comparison of the statistical results of the average IGD values of the three algorithms on TP1-TP2 and DS1-DS5. IGD: Inverted generational distance.

Tables 3 and 4 provide the comparative results of the three algorithms on the seven test problems in terms of IGD and HV. Figures 3 and 4 show that, overall, CCBMO outperformed both BLMOCC and MOBEA-DPL. Next, we analyze these experimental results in detail.

Table 3. Performance comparison between BLMOCC, MOBEA-DPL and CCBMO regarding the IGD values on TP1-TP2 and DS1-DS5. IGD: Inverted generational distance.

| Problem | $$ N_{u}+N_{l} $$ | BLMOCC | MOBEA-DPL | CCBMO |
| TP1 | 20+20 | 0.1426 | 0.0126 | 0.0128 |
| TP2 | 20+20 | 0.0310 | 0.0235 | 0.0178 |
| DS1 (5+5) | 20+20 | 14.4842 | 0.1123 | 1.0858 |
| DS2 (5+5) | 20+20 | 25.5732 | 4.3112 | 0.1854 |
| DS3 (5+5) | 20+20 | 3.9562 | 0.2058 | 1.9772 |
| DS4 (1+9) | 5+40 | 0.0768 | 0.1680 | 0.0607 |
| DS5 (1+9) | 5+40 | 0.3582 | 0.1820 | 0.0326 |
Table 4. Performance comparison between BLMOCC, MOBEA-DPL and CCBMO regarding the HV values on TP1-TP2 and DS1-DS5. HV: Hypervolume.

| Problem | $$ N_{u}+N_{l} $$ | BLMOCC | MOBEA-DPL | CCBMO |
| TP1 | 20+20 | 0.7723 | 0.7026 | 0.6856 |
| TP2 | 20+20 | 0.4029 | 0.4074 | 0.4277 |
| DS1 (5+5) | 20+20 | 0.0000 | 0.3092 | 0.2813 |
| DS2 (5+5) | 20+20 | 0.0000 | 0.0028 | 0.2443 |
| DS3 (5+5) | 20+20 | 0.0000 | 0.2342 | 0.0000 |
| DS4 (1+9) | 5+40 | 2.3110 | 0.1680 | 2.3495 |
| DS5 (1+9) | 5+40 | 1.1756 | 0.1820 | 2.2105 |

Figure 4. Comparison of the statistical results of the average HV value of the three algorithms on TP1-TP2 and DS1-DS5. HV: Hypervolume.

Both TP1 and TP2 are non-scalable problems. MOBEA-DPL outperformed the other algorithms in terms of IGD on TP1 thanks to its lower level dual-population strategy, but this inevitably increased the lower level FEs. CCBMO performs slightly worse than MOBEA-DPL in terms of IGD. Among all compared algorithms, BLMOCC obtained the best HV value but the worst IGD value. On TP2, CCBMO achieved the best IGD and HV, followed by MOBEA-DPL; BLMOCC performed the worst.

DS1 is often used to evaluate algorithm performance and the ability of an algorithm to coordinate the upper and lower level tasks. On DS1, MOBEA-DPL achieved the best IGD and HV. CCBMO performed slightly worse than MOBEA-DPL in both metrics but significantly outperformed BLMOCC.

DS2 assesses an algorithm's capability to conduct extensive searches before converging to the front. On DS2, CCBMO significantly outperformed the other algorithms: compared with BLMOCC and MOBEA-DPL, it reduced the IGD value by about two and one orders of magnitude, respectively, and increased the HV value by two orders of magnitude. BLMOCC and MOBEA-DPL had difficulty achieving satisfactory results on DS2.

DS3 involves discrete variables, and as the number of variables increases, the problem becomes increasingly challenging for traditional algorithms. MOBEA-DPL achieved the best IGD and HV on DS3. CCBMO performed worse than MOBEA-DPL because it addresses this problem with a traditional nested method. BLMOCC had the worst IGD and HV.

Both DS4 and DS5 evaluate an algorithm's capability to search for the lower level front corresponding to the upper level front. Additionally, DS4 requires identifying a specific point of the lower level front for each upper level solution. On DS4, CCBMO achieved the best IGD and HV values; in particular, its HV value is about one order of magnitude better than that of MOBEA-DPL. On DS5, CCBMO showed clear superiority over the comparison algorithms: its IGD value is roughly an order of magnitude better than BLMOCC's and several times better than MOBEA-DPL's, and its HV value is an order of magnitude better than that of MOBEA-DPL. These experiments demonstrate the effectiveness of CCBMO in solving MBOPs.

3.5. Comparison of CCBMO with its variants

To illustrate the effectiveness of the mechanisms in CCBMO, we compared it with two variants, CCBMO-M and CCBMO-C, where CCBMO-M removes the k-means clustering and CCBMO-C removes the matching between the upper level offspring and parents. Tables 5-7 present the average FEs, IGD, and HV of CCBMO, CCBMO-M, and CCBMO-C over 21 runs on TP1-TP2 and DS1-DS5, respectively.

Table 5. FEs statistics of CCBMO and its variants on TP1-TP2 and DS1-DS5. FEs: Fitness evaluations.

| Problem | $$ N_{u}+N_{l} $$ | CCBMO | CCBMO-M | CCBMO-C |
| TP1 | 20+20 | 290,885 | 292,893 | 334,300 |
| TP2 | 20+20 | 267,654 | 290,721 | 358,603 |
| DS1 (5+5) | 20+20 | 315,530 | 312,516 | 363,501 |
| DS2 (5+5) | 20+20 | 358,594 | 367,339 | 379,283 |
| DS3 (5+5) | 20+20 | 331,293 | 319,998 | 406,935 |
| DS4 (1+9) | 5+40 | 123,165 | 126,272 | 147,857 |
| DS5 (1+9) | 5+40 | 156,230 | 161,888 | 119,948 |
Table 6. IGD statistics of CCBMO and its variants on TP1-TP2 and DS1-DS5. IGD: Inverted generational distance.

| Problem | $$ N_{u}+N_{l} $$ | CCBMO | CCBMO-M | CCBMO-C |
| TP1 | 20+20 | 0.0128 | 0.0156 | 0.0192 |
| TP2 | 20+20 | 0.0178 | 0.1199 | 0.6835 |
| DS1 (5+5) | 20+20 | 1.0858 | 1.0070 | 3.7125 |
| DS2 (5+5) | 20+20 | 0.1854 | 0.2088 | 0.3523 |
| DS3 (5+5) | 20+20 | 1.9772 | 2.1425 | 2.9403 |
| DS4 (1+9) | 5+40 | 0.0607 | 0.0684 | 0.0635 |
| DS5 (1+9) | 5+40 | 0.0326 | 0.0360 | 0.0571 |
Table 7. HV statistics of CCBMO and its variants on TP1-TP2 and DS1-DS5. HV: Hypervolume.

| Problem | $$ N_{u}+N_{l} $$ | CCBMO | CCBMO-M | CCBMO-C |
| TP1 | 20+20 | 0.6856 | 0.6298 | 0.6102 |
| TP2 | 20+20 | 0.4277 | 0.4401 | 0.0239 |
| DS1 (5+5) | 20+20 | 0.2813 | 0.3559 | 0.0000 |
| DS2 (5+5) | 20+20 | 0.2443 | 0.1919 | 0.0759 |
| DS3 (5+5) | 20+20 | 0.0000 | 0.0000 | 0.0000 |
| DS4 (1+9) | 5+40 | 2.3495 | 2.2879 | 2.3383 |
| DS5 (1+9) | 5+40 | 2.2105 | 2.2082 | 2.1234 |

Table 5 compares CCBMO and its variants in terms of average FEs. CCBMO consumed the fewest evaluations on TP1-TP2, DS2, and DS4. CCBMO-M performed slightly worse, consuming the fewest evaluations on DS1 and DS3. CCBMO-C consumed the fewest evaluations only on DS5.

Tables 6 and 7 compare the IGD and HV values of CCBMO and its variants (CCBMO-M and CCBMO-C), respectively. On TP1 and DS2-DS5, CCBMO outperformed CCBMO-M and CCBMO-C in both IGD and HV (on DS3, all three obtained an HV of zero). On TP2, CCBMO obtained the best IGD value, but its HV value was slightly worse than that of CCBMO-M. CCBMO-M had the best IGD and HV values on DS1. CCBMO-C performed the worst, never obtaining the best IGD or HV value on any problem. These experiments demonstrate the effectiveness of the matching and clustering incorporated in CCBMO.

4. CONCLUSION

Our work shows that existing bilevel multi-objective optimization algorithms ignore the similarity between lower level optimization tasks and thus consume a large number of FEs when performing the lower level optimization. Based on these findings, we have proposed a new optimization framework, called CCBMO, which exploits the similarity of the lower level optimization tasks of parents and offspring to accelerate the lower level optimization of the offspring. CCBMO mainly includes two stages: a matching stage and a clustering stage. In the matching stage, the upper level solutions of parents and offspring are matched based on Euclidean distance; each offspring selects the closest parent and inherits its corresponding lower level solutions. The clustering stage then decomposes the offspring into $$ {N_u}/2 $$ groups using k-means, and co-evolution is performed on the lower level solutions within the same group. CCBMO was tested on seven problems against two algorithms (i.e., BLMOCC and MOBEA-DPL), and the results demonstrate its effectiveness. At present, research on bilevel optimization is still limited, and rather complex methods are usually employed to solve the bilevel optimization problem. In this paper, a traditional genetic algorithm is used to solve the upper and lower level optimization problems, and the correlation between them is exploited, so that the bilevel optimization problem can be solved by a traditional nested method. On the theoretical side, we will continue to study how NP-hard bilevel optimization problems can be solved within a simple genetic algorithm framework. In addition, we will further explore real-life applications of multi-objective bilevel optimization.

Although CCBMO performed well on most problems, we found that it did not converge well on test problems with a large number of decision variables. Therefore, further efforts are needed to develop an efficient selection strategy for larger-scale MBOPs.

DECLARATIONS

Acknowledgments

Thanks to Yaochu Jin, Xingyi Zhang, Ran Cheng, Ye Tian, Xilu Wang, Dan Guo, Dimo Brockhoff, Handing Wang, Qingfu Zhang and Chaoli Sun for providing their source code.

Authors' contributions

Proposal and design of research propositions, drafting or final revision of the paper: Hu WY, Huang PQ

Performed data collection: Hu WY

Technical and material support provided: Li F, Ge EQ

Availability of data and materials

Not applicable.

Financial support and sponsorship

This work was supported by the National Natural Science Foundation of China (Grant 61903003); the Anhui Provincial Natural Science Foundation (Grants 2308085MF199 and 2008085QE227); Scientific Research Projects in Colleges and Universities of Anhui Province (Grant 2023AH051124); the Open Project of the Anhui Province Engineering Laboratory of Intelligent Demolition Equipment (Grant APELIDE2022A007); and the Open Project of the Anhui Province Key Laboratory of Special and Heavy Load Robot (Grant TZJQR001-2021).

Conflicts of interest

All authors declare that they are bound by confidentiality agreements that prevent them from disclosing their conflicts of interest in this work.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2024.

REFERENCES

1. Dempe S, Dinh N, Dutta J, Pandit T. Simple bilevel programming and extensions. Math Program 2021;188:227-53.

2. von Stackelberg H. Market structure and equilibrium. Heidelberg: Springer Berlin; 2010.

3. Fortuny-Amat J, McCarl B. A representation and economic interpretation of a two-level programming problem. J Oper Res Soc 1981;32:783-92.

4. Dempe S, Kalashnikov V, Pérez-Valdés GA, Kalashnykova N. Bilevel programming problems. Heidelberg: Springer Berlin; 2015.

5. Wogrin S, Pineda S, Tejada-Arango DA. Applications of bilevel optimization in energy and electricity markets. In: Dempe S, Zemkoho A, editors. Bilevel optimization. Cham: Springer; 2020. pp. 139-68.

6. Yin Y. Multiobjective bilevel optimization for transportation planning and management problems. J Adv Transp 2002;36:93-105.

7. Liu R, Gao J, Zhang J, Meng D, Lin Z. Investigating bi-level optimization for learning and vision from a unified perspective: a survey and beyond. IEEE Trans Pattern Anal Mach Intell 2022;44:10045-67.

8. Eichfelder G. Multiobjective bilevel optimization. Math Program 2010;123:419-49.

9. Halter W, Mostaghim S. Bilevel optimization of multi-component chemical systems using particle swarm optimization. In: 2006 IEEE International Conference on Evolutionary Computation; 2006 Jul 16-21; Vancouver, Canada. IEEE; 2006. pp. 1240-47.

10. Deb K, Sinha A. Solving bilevel multi-objective optimization problems using evolutionary algorithms. In: Ehrgott M, Fonseca CM, Gandibleux X, Hao JK, Sevaux M, editors. Lecture notes in computer Science. Heidelberg: Springer Berlin; 2009. pp. 110-24.

11. Jia L, Wang Y. A genetic algorithm for multiobjective bilevel convex optimization problems. In: 2009 International Conference on Computational Intelligence and Security; 2009 Dec 11-14; Beijing, China. IEEE; 2009. pp. 98-102.

12. Li H, Zhang L. An efficient solution strategy for bilevel multiobjective optimization problems using multiobjective evolutionary algorithm. Soft Comput 2021;25:8241-61.

13. Jia L, Wang Y. Genetic algorithm based on primal and dual theory for solving multiobjective bilevel linear programming. In: 2011 IEEE Congress of Evolutionary Computation (CEC); 2011 Jun 5-8; New Orleans, LA, USA. IEEE; 2011. pp. 558-65.

14. Li H, Zhang Q, Chen Q, Zhang L, Jiao YC. Multiobjective differential evolution algorithm based on decomposition for a type of multiobjective bilevel programming problems. Knowl Based Syst 2016;107:271-88.

15. Pieume CO, Marcotte P, Fotso LP, Siarry P. Generating efficient solutions in bilevel multi-objective programming problems. Am J Oper Res 2013;3:289-98.

16. He Q, Lv Y. Particle swarm optimization based on smoothing approach for solving a class of bi-level multiobjective programming problem. Cybern Inf Technol 2017;17:59-74.

17. Alves MJ, Dempe S, Júdice JJ. Computing the Pareto frontier of a bi-objective bi-level linear problem using a multiobjective mixed-integer programming algorithm. Optimization 2010;61:335-58.

18. Sinha A, Malo P, Deb K. Approximated set-valued mapping approach for handling multiobjective bilevel problems. Comput Oper Res 2017;77:194-209.

19. Sinha A, Malo P, Deb K. Towards understanding bilevel multi-objective optimization with deterministic lower level decisions. In: Gaspar-Cunha A, Henggeler Antunes C, Coello C, editors. Lecture notes in computer science. Cham: Springer; 2015. pp. 426-43.

20. Cai X, Sun Q, Li Z, et al. Cooperative coevolution with knowledge-based dynamic variable decomposition for bilevel multiobjective optimization. IEEE Trans Evol Comput 2022;26:1553-65.

21. Islam MM, Singh HK, Ray T. A nested differential evolution based algorithm for solving multi-objective bilevel optimization problems. In: Ray T, Sarker R, Li X, editors. Lecture notes in computer science. Cham: Springer; 2016. pp. 101-12.

22. Sinha A, Deb K. Towards understanding evolutionary bilevel multi-objective optimization algorithm. IFAC Proc Vol 2009;42:338-43.

23. Deb K, Sinha A. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm. Evol Comput 2010;18:403-49.

24. Gu A, Lu S, Ram P, Weng TW. Min-max multi-objective bilevel optimization with applications in robust machine learning. arXiv [Preprint]. Mar 7, 2023 [accessed 2023 Dec 21]. Available from: https://arxiv.org/abs/2203.01924.

25. Wang W, Liu HL. A multi-objective bilevel optimization evolutionary algorithm with local search. In: 2021 17th International Conference on Computational Intelligence and Security (CIS); 2021 Nov 19-22; Chengdu, China. IEEE; 2021. pp. 408-12.

26. Mejía-de-Dios JA, Rodríguez-Molina A, Mezura-Montes E. A novel evolutionary framework based on a family concept for solving multi-objective bilevel optimization problems. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. New York, NY, USA: Association for Computing Machinery; 2022. pp. 348–51.

27. Wang W, Liu HL, Shi H. A multi-objective bilevel optimisation evolutionary algorithm with dual populations lower-level search. Conn Sci 2022;34:1556-81.

28. Bosman PAN, Thierens D. The balance between proximity and diversity in multiobjective evolutionary algorithms. IEEE Trans Evol Comput 2003;7:174-88.

29. Zitzler E, Thiele L. Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 1999;3:257-71.


About This Article

© The Author(s) 2024. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
