Department of Automation, Tsinghua University, Beijing 100084, China.
Abstract
To reduce the intermediate levels of the splitting process and enhance sampling accuracy, a multilevel splitting algorithm for quick sampling is proposed in this paper. First, the selection area of the elite set is expanded to maintain the diversity of the samples. Second, the combined use of an adaptive differential evolution algorithm and a local searching algorithm is proposed for the splitting procedure. Finally, a suite of benchmark functions is used for performance testing. The results indicate that the convergence rate and stability of this algorithm are superior to those of the classical importance splitting algorithm and an adaptive multilevel splitting algorithm.
Wang L, Fan W. A Multilevel Splitting Algorithm for Quick Sampling. Tsinghua Science and Technology, 2021, 26(4): 417-425. https://doi.org/10.26599/TST.2020.9010006
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Figure 1: Schematic diagram of the multilevel splitting algorithm in 3-D space.
Figure 2: Flow diagram of the SADE-L algorithm.
3 SADE-L Algorithm
The ISp and ADAM algorithms can resolve the rare event problem well. However, with increased complexity, the number of intermediate levels increases greatly, which slows the process. To address this problem, in this paper, we propose the SADE-L algorithm, which extends the selection range of the elite set to improve the diversity of the start states. The splitting process is executed using an adaptive differential evolution algorithm and a local searching algorithm, which reduces the number of intermediate levels. The details of this process are shown below.
First, randomly generate a population, where the splitting level starts at zero and each sample is a multi-dimensional decision vector constrained by upper and lower bound vectors, and evaluate the corresponding objective function. Then determine whether the terminal condition is met: the process ends when the probability is less than a specified value or the splitting level exceeds the maximum level; otherwise, the process continues to the next step.
Next, evaluate the objective function values and sort them in increasing order. From the samples collected at the current and previous levels, choose the top-ranked individuals for membership in the elite set. Details are provided in Section 3.1.
Thirdly, execute the splitting process using the FE scheme, which combines adaptive differential evolution with local searching, and update the decision vector dimension by dimension.
We use an alternating approach to randomly assign the sampling method for each vector. In the adaptive DE algorithm, we allocate a mutation factor and a cross-over factor for each dimension, each assigned an initial value. If the intermediate point is inferior to the start point, then update the two factors using Eqs. (6) and (9), respectively. In the local searching procedure, we impose a disturbance on the start point in the positive or negative direction to search for points that are spatially closer to the rare set. More details are discussed in Section 3.2. As the splitting process progresses, a rare set is ultimately obtained.
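The level loop described above can be summarized in a short sketch. This is an illustrative Python skeleton only: the function names, the Gaussian disturbance used in place of the full DE/local-search splitting step, and the termination constants are our assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sade_l(f, lower, upper, n_pop=100, n_elite=20, max_levels=50, p_min=1e-12):
    """Illustrative skeleton of the SADE-L level loop (names are hypothetical)."""
    d = len(lower)
    # Level 0: random initial population in the box [lower, upper].
    pop = rng.uniform(lower, upper, size=(n_pop, d))
    prev = np.empty((0, d))
    prob = 1.0
    for level in range(max_levels):
        # Rank current- and previous-level samples together by objective value
        # and keep the best as the elite set (Section 3.1).
        pool = np.vstack([pop, prev])
        order = np.argsort([f(x) for x in pool])
        elite = pool[order[:n_elite]]
        prob *= n_elite / len(pool)      # conditional probability of this level
        if prob < p_min:                 # terminal condition on the probability
            break
        prev = pop
        # Splitting: each offspring starts from a random elite member; here a
        # simple Gaussian disturbance stands in for the adaptive DE / local
        # search combination of the full algorithm.
        starts = elite[rng.integers(0, n_elite, size=n_pop)]
        pop = np.clip(starts + rng.normal(0, 0.1, size=(n_pop, d)) * (upper - lower),
                      lower, upper)
    return elite, prob
```

On a simple objective such as the sphere function, the elite set contracts toward the minimizer while the estimated probability shrinks level by level until the threshold is reached.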
Figure 2 shows a flow diagram of the SADE-L algorithm.
The pseudo-code of the SADE-L algorithm is presented in Algorithm 1.
3.1 Construction of elite set
As the initial state of the candidates, the samples of the elite set are crucial in the framework of the multilevel splitting algorithm. As mentioned above, the selection of the elite set is usually performed as follows: for each level, rank the samples by objective function value and reserve a predetermined number of samples with the best values. Experimental research has determined that when the objective function is more complex, the reserved samples might overlap as the population approaches the rare set, which slows the convergence speed. Therefore, we expand the elite set selection range beyond the current collection to also involve the samples of the previous level. That is, we choose the top samples from the pooled set to construct the elite set; at the initial level, where no previous samples exist, the elite set is chosen from the current population alone. This procedure is presented below:
• At the initial level, randomly generate the population, rank the samples by objective function value, and choose the best ones to form the elite set;
• At subsequent levels, pool the current samples with those of the previous level, sort their objective function values, and use the best samples to construct the elite set, where the selection follows the subscript index of the sorted order.
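The pooled selection above can be sketched directly. This is a minimal illustration under our own naming; the paper's symbols for the elite set and sample sequences are not reproduced here.

```python
import numpy as np

def build_elite(f, current, previous, n_elite):
    """Elite set from the union of current- and previous-level samples.

    Sketch of the Section 3.1 idea: instead of ranking only the current
    level, rank the pooled samples and keep the n_elite best, which
    preserves diversity when samples start to overlap near the optimum.
    """
    pool = np.vstack([current, previous]) if len(previous) else current
    scores = np.asarray([f(x) for x in pool])
    idx = np.argsort(scores)[:n_elite]   # indices of the best objective values
    return pool[idx]
```

At the initial level the previous-sample array is simply empty, so the elite set falls back to the best of the current population, matching the first bullet above.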
3.2 Splitting mechanism of the SADE-L algorithm
In previous work, we integrated the DE algorithm into the splitting framework to produce the next generation and obtained a good global optimization result. On this basis, we propose a novel splitting mechanism that combines an adaptive differential evolution algorithm with a local searching algorithm. We update the intermediate point using a dimension-by-dimension approach to improve the diversity.
(1) Adaptive differential evolution algorithm
Define the elite population as the elite set constructed in Section 3.1. The main steps of the adaptive DE algorithm in each iteration are as follows:
• Mutation. Let i index the target points, with the size of the elite set and the splitting level as defined above. The mutation point of the DE/rand/1/bin strategy is
v = x_r1 + F (x_r2 − x_r3),
where x_r1, x_r2, and x_r3 are three distinct individuals drawn uniformly at random from the elite set, and F is a magnification factor.
To enhance the validity of the offspring population, we propose a novel adaptive DE algorithm that chooses the control parameter F adaptively. F is assigned an initial value to start. Then, Eq. (3) can be rewritten as follows:
If the mutation point exceeds the range of the decision variable, it is reset with respect to the upper and lower bounds of the corresponding dimension.
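The mutation step with bound repair can be sketched as follows. The names are ours, and the repair rule shown (uniform re-sampling inside the box) is one common choice; the paper's exact reset formula is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutate(elite, F, lower, upper):
    """DE/rand/1 mutation over the elite set with per-dimension bound repair.

    Three distinct elite members r1, r2, r3 are drawn at random and
    combined as v = x_r1 + F * (x_r2 - x_r3); any component that leaves
    [lower, upper] is re-sampled uniformly inside the box.
    """
    r1, r2, r3 = rng.choice(len(elite), size=3, replace=False)
    v = elite[r1] + F * (elite[r2] - elite[r3])
    out = (v < lower) | (v > upper)            # out-of-range components
    v[out] = rng.uniform(lower[out], upper[out])
    return v
```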
In this process, if the mutation point is worse than the target point, update the factor in the same dimension for the next generation as shown in Eq. (6), and reset it when it is less than 0 or greater than 1. Otherwise, reserve the factor for the next generation.
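This update rule can be illustrated with a small helper. The perturbation and the reset value below are our assumptions for illustration; the paper's Eq. (6) defines the actual update.

```python
import numpy as np

rng = np.random.default_rng(4)

def update_F(F, improved, perturb=0.1):
    """Per-dimension adaptive update of the mutation factor F (sketch).

    When the mutation point failed to improve on the target, F is
    perturbed to try a new step size; any value leaving [0, 1] is reset
    to a default (0.5 here, an assumed value).
    """
    if not improved:
        F = F + rng.uniform(-perturb, perturb)   # re-tune the failed factor
    if F < 0.0 or F > 1.0:
        F = 0.5                                  # reset out-of-range factor
    return F
```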
• Cross-over. Diversity can be improved with a cross-over factor. For simplicity, this is set to a constant in the range of [0, 1] (e.g., 0.8) that determines whether the trial point inherits the value of the mutation point or remains unchanged.
In SADE-L, the cross-over factor is assigned an initial value at the start, and Eq. (7) can be rewritten as follows:
Generate a new cross-over factor as shown in Eq. (9) if the candidate fails, and reset it to 0.8 when it goes beyond the boundaries. Otherwise, reserve it for the next generation.
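The binomial cross-over that gives DE/rand/1/bin its "bin" suffix can be sketched as below; this is the standard formulation, with our variable names.

```python
import numpy as np

rng = np.random.default_rng(2)

def crossover(target, mutant, cr=0.8):
    """Binomial (bin) cross-over of DE/rand/1/bin.

    Each component of the trial point inherits the mutant's value with
    probability cr and otherwise keeps the target's value; one randomly
    chosen index is forced from the mutant so the trial always differs
    from the target in at least one component.
    """
    d = len(target)
    mask = rng.random(d) < cr
    mask[rng.integers(d)] = True      # guarantee at least one mutant gene
    return np.where(mask, mutant, target)
```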
• Selection. Compute the objective function values of the target and trial points, then compare them to choose the solution that survives to the next level.
(2) Local searching algorithm
For global searching, we use an adaptive differential evolution algorithm. In addition, a local searching algorithm is utilized to obtain good solutions in a neighborhood. In this step, the intermediate point is generated by a local displacement of the start point in each dimension, and the vector with the superior objective function value is retained for the next level.
• Searching. For the i-th target point, the candidate point is calculated by disturbing each dimension with a reduction factor applied to the difference between the decision vectors that attain the best and worst objective function values in that dimension at the current level.
• Selection. Compare the candidate vector with the target vector and select the better of the two, as shown in Eq. (12).
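The dimension-wise local search can be sketched as one step. Variable names, the step construction from the per-dimension best/worst difference, and the sign choice follow the description above, but this is our illustrative reading, not the paper's Eq. (12).

```python
import numpy as np

rng = np.random.default_rng(3)

def local_search_step(x, best, worst, f, r=0.5):
    """One dimension-wise local disturbance, kept only if it improves f.

    Each coordinate of the start point x is nudged in a random positive
    or negative direction by a reduction factor r applied to the
    best-minus-worst difference in that dimension; a disturbed candidate
    replaces the current point only when its objective value is better.
    """
    trial = x.copy()
    for j in range(len(x)):
        sign = rng.choice([-1.0, 1.0])
        step = r * (best[j] - worst[j])
        cand = trial.copy()
        cand[j] = trial[j] + sign * step
        if f(cand) < f(trial):            # selection: keep the better point
            trial = cand
    return trial
```

Because the disturbance is accepted per dimension only on improvement, the returned point is never worse than the start point.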
Figure 3: Evolutions of the SADE-L, ADAM, and ISp algorithms for a benchmark function in 30 dimensions.
Figure 4: Evolutions of the SADE-L, ADAM, and ISp algorithms for a benchmark function in 50 dimensions.
Table 1. Parameter settings of the three algorithms.
Part 1: population size 1000, dimension 30, parameter values 0.5, 0.5, 0.5
Part 2: population size 1500, dimension 50, parameter values 0.5, 0.5, 0.5
Table 2. Functions used in performance testing.
Shifted Sphere Function
Shifted Schwefel's Problem 1.2
Shifted and Rotated High Conditioned Elliptic Function
Shifted Schwefel's Problem 1.2 with Noise in Fitness
Schwefel's Problem 2.6 with Global Optimum on Bounds
Shifted Rosenbrock's Function
Shifted Rotated Griewank's Function without Bounds
Shifted and Rotated Ackley's Function with Global Optimum on Bounds
Shifted Rastrigin's Function
Shifted and Rotated Rastrigin's Function
Shifted Rotated Weierstrass Function
Schwefel's Problem 2.13
Expanded Extended Griewank's plus Rosenbrock's Function
Expanded Rotated Extended Scaffer's F6
Hybrid Composition Function 1
Rotated Hybrid Composition Function 1
Rotated Hybrid Composition Function 1 with Noise in Fitness
Rotated Hybrid Composition Function 2
Rotated Hybrid Composition Function 2 with a Narrow Basin for the Global Optimum
Rotated Hybrid Composition Function 2 with the Global Optimum on the Bounds
Rotated Hybrid Composition Function 3
Rotated Hybrid Composition Function 3 with High Condition Number Matrix
Non-Continuous Rotated Hybrid Composition Function 3
Rotated Hybrid Composition Function 4
Table 3. Means of the rare event probability over 10 runs, the number of intermediate levels T, and the relative error RE of the three algorithms, with dimension 30 and population size 1000.
73 900 | probability: 6.80, 2.29, 8.03 | T: 7, 8, 13 | RE (%): 20.68, 6.51, 83.21
100 000 | probability: 2.62, 4.27, 7.62 | T: 6, 8, 27 | RE (%): 4.68, 5.32, 80.27
2.65 | probability: 1.19, 1.94, 2.61 | T: 3, 6, 32 | RE (%): 2.81, 3.34, 28.97
110 000 | probability: 6.30, 3.86, 1.94 | T: 7, 13, 32 | RE (%): 13.60, 20.82, 99.13
53 500 | probability: 8.60, 2.84, 2.79 | T: 2, 4, 14 | RE (%): 1.07, 4.02, 95.37
3.78 | probability: 6.82, 2.64, 4.70 | T: 5, 13, 18 | RE (%): 5.74, 88.24, 99.99
3860 | probability: 8.07, 6.62, 6.90 | T: 2, 7, 17 | RE (%): 3.33, 4.92, 100
21.03 | probability: 2.34, 1.30, 2.24 | T: 15, 18, 18 | RE (%): 45.39, 72.94, 89.39
680 | probability: 2.45, 3.16, 2.96 | T: 8, 11, 18 | RE (%): 24.11, 9.60, 79.62
680 | probability: 2.39, 2.87, 7.36 | T: 8, 12, 16 | RE (%): 27.38, 5.86, 49.04
42 | probability: 1.59, 2.90, 3.05 | T: 13, 14, 15 | RE (%): 19.64, 6.70, 100
1.4 | probability: 1.20, 1.72, 1.36 | T: 8, 12, 16 | RE (%): 13.05, 14.57, 85.01
30 | probability: 1.03, 6.70, 1.09 | T: 12, 36, 19 | RE (%): 4.76, 7.49, 93.70
13.50 | probability: 3.27, 2.53, 7.20 | T: 11, 18, 25 | RE (%): 15.96, 11.83, 99.99
1230 | probability: 7.10, 2.71, 6.14 | T: 6, 12, 20 | RE (%): 18.11, 8.08, 99.99
1250 | probability: 1.28, 5.93, 8.44 | T: 5, 11, 20 | RE (%): 21.30, 17.37, 85.92
1630 | probability: 1.95, 6.87, 3.81 | T: 5, 6, 21 | RE (%): 3.23, 8.83, 99.99
900.20 | probability: 1.90, 3.78 | T: 8, 34 | RE (%): 25.36, 27.42
900.30 | probability: 1.55, 3.37 | T: 8, 31 | RE (%): 15.54, 54.52
900.40 | probability: 3.86, 9.76 | T: 8, 30 | RE (%): 1.17, 35.88
1381 | probability: 5.68, 6.86, 7.35 | T: 11, 13, 16 | RE (%): 11.86, 7.32, 73.00
1800 | probability: 1.16, 1.43, 7.65 | T: 6, 7, 29 | RE (%): 15.48, 4.87, 99.52
1370 | probability: 7.75, 2.01, 3.42 | T: 10, 14, 21 | RE (%): 25.19, 5.98, 100
1450 | probability: 1.20, 1.39, 4.05 | T: 8, 15, 33 | RE (%): 12.85, 14.22, 99.91
Table 4. Means of the rare event probability over 10 runs, the number of intermediate levels T, and the relative errors RE of the three algorithms, with dimension 50 and population size 1500.