ISSN 1547-4771, Physics of Particles and Nuclei Letters, 2007, Vol. 4, No. 1, pp. 73–80. © Pleiades Publishing, Ltd., 2007.

PHYSICS OF ELEMENTARY PARTICLES AND ATOMIC NUCLEUS. EXPERIMENT

Finding Two-Dimensional Peaks¹

Z. K. Silagadze
Budker Institute of Nuclear Physics, Novosibirsk, Russia
Received March 16, 2006

¹ The text was submitted by the author in English.

Abstract—A two-dimensional generalization of the original peak finding algorithm suggested earlier is given. The ideology of the algorithm emerged from the well-known quantum mechanical tunneling property which enables small bodies to penetrate through narrow potential barriers. We merge this "quantum" ideology with the philosophy of Particle Swarm Optimization to get a global optimization algorithm which can be called Quantum Swarm Optimization. The functionality of the newborn algorithm is tested on some benchmark optimization problems.

PACS numbers: 02.60.Pn
DOI: 10.1134/S154747710701013X

INTRODUCTION

Some time ago we suggested a new algorithm for automatic photopeak location in gamma-ray spectra from semiconductor and scintillator detectors [1]. The algorithm was inspired by the quantum mechanical property of small balls to penetrate through narrow barriers and find their way down to the bottom of a potential well even in the case of an irregular potential shape. In the one-dimensional case, the idea was realized by means of a finite Markov chain and its invariant distribution [1]. States of this Markov chain correspond to channels of the original histogram. The only nonzero transition probabilities are those which connect a given state to its closest left and right neighbor states. Therefore, the transition probability matrix of our Markov chain has the form

$$P = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & \cdots \\
P_{21} & 0 & P_{23} & 0 & 0 & \cdots \\
0 & P_{32} & 0 & P_{34} & 0 & \cdots \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
\cdots & & & 0 & 1 & 0
\end{pmatrix}.$$

As for the transition probabilities, the following expressions were used:

$$P_{i,i\pm 1} = \frac{Q_{i,i\pm 1}}{Q_{i,i-1} + Q_{i,i+1}} \tag{1}$$

with

$$Q_{i,i\pm 1} = \sum_{k=1}^{m} \exp\left(\frac{N_{i\pm k} - N_i}{N_{i\pm k} + N_i}\right). \tag{2}$$

The number m is a parameter of the model which mimics the (inverse) mass of the quantum ball and therefore allows us to govern its penetrating ability.

The invariant distribution for the above-described Markov chain is given by a simple analytic formula [2]:

$$u_2 = \frac{P_{12}}{P_{21}}\,u_1, \quad u_3 = \frac{P_{12}P_{23}}{P_{32}P_{21}}\,u_1, \quad \ldots, \quad u_n = \frac{P_{12}P_{23}\cdots P_{n-1,n}}{P_{n,n-1}P_{n-1,n-2}\cdots P_{21}}\,u_1,$$

where $u_1$ is defined from the normalization condition

$$\sum_{i=1}^{n} u_i = 1.$$

Local maximums in the original spectrum are translated into very sharp peaks in the invariant distribution, and therefore their location is facilitated. The algorithm proved helpful in uniformity studies of NaJ(Tl) crystals for the SND detector [3]. Another application of this "peak amplifier," to refine the amplitude fit method in ATLAS $B_s$-mixing studies, was described in [4]. In this paper we will also try to extend the method to the two-dimensional case.
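To make the one-dimensional prescription concrete, here is a minimal sketch that builds the transition probabilities of Eqs. (1) and (2) and evaluates the invariant distribution by the product formula above. It is our illustration rather than the code of [1]: the function name, the truncation of Eq. (2) at the histogram edges, and the assumption of strictly positive channel contents N_i are ours.

```python
import numpy as np

def invariant_distribution_1d(counts, m=3):
    """Sketch of the 1D 'peak amplifier' of Eqs. (1)-(2).

    counts : 1D array of histogram contents N_i (assumed strictly positive).
    m      : penetrating ability (inverse 'mass' of the quantum ball).
    Returns the invariant distribution u_i, normalized to sum to 1.
    """
    n = len(counts)

    def Q(i, sign):
        # Q_{i,i+-1} = sum_{k=1..m} exp((N_{i+-k} - N_i)/(N_{i+-k} + N_i)),
        # keeping only neighbours that fall inside the histogram (our choice).
        total = 0.0
        for k in range(1, m + 1):
            j = i + sign * k
            if 0 <= j < n:
                total += np.exp((counts[j] - counts[i]) / (counts[j] + counts[i]))
        return total

    # Transition probabilities P_{i,i+-1}; the end channels jump inward
    # with probability one, as in the matrix P of the text.
    P_right = np.zeros(n)
    P_left = np.zeros(n)
    P_right[0], P_left[-1] = 1.0, 1.0
    for i in range(1, n - 1):
        qm, qp = Q(i, -1), Q(i, +1)
        P_left[i], P_right[i] = qm / (qm + qp), qp / (qm + qp)

    # Analytic invariant distribution: u_{i+1}/u_i = P_{i,i+1}/P_{i+1,i}.
    u = np.ones(n)
    for i in range(n - 1):
        u[i + 1] = u[i] * P_right[i] / P_left[i + 1]
    return u / u.sum()
```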

1. TWO-DIMENSIONAL GENERALIZATION

The following two-dimensional generalization seems straightforward. For two-dimensional n × n histograms, the corresponding Markov chain states will also form a two-dimensional array (i, j). Let $u_{ij}^{(k)}$ be the probability for the state (i, j) to be occupied after k steps of the Markov process. Then

$$u_{lm}^{(k+1)} = \sum_{i,j=1}^{n} P_{ij;\,lm}\, u_{ij}^{(k)},$$

where $P_{ij;\,lm}$ is the transition probability from the state (i, j) to the state (l, m). We will assume that the only nonzero transition probabilities are those which connect a given state to its closest left, right, up, or down neighbor states. Then the generalization of Eqs. (1) and (2) is almost obvious. Namely, for the transition probabilities we will take

$$P_{ij;\,i,j\pm 1} = \frac{Q_{ij;\,i,j\pm 1}}{Q_{ij;\,i,j-1} + Q_{ij;\,i,j+1} + Q_{ij;\,i-1,j} + Q_{ij;\,i+1,j}}, \qquad
P_{ij;\,i\pm 1,j} = \frac{Q_{ij;\,i\pm 1,j}}{Q_{ij;\,i,j-1} + Q_{ij;\,i,j+1} + Q_{ij;\,i-1,j} + Q_{ij;\,i+1,j}}, \tag{3}$$

with

$$Q_{ij;\,i,j\pm 1} = \sum_{k=1}^{m}\sum_{l=-k}^{k} \exp\left(\frac{N_{i+l,\,j\pm k} - N_{ij}}{N_{i+l,\,j\pm k} + N_{ij}}\right), \qquad
Q_{ij;\,i\pm 1,j} = \sum_{k=1}^{m}\sum_{l=-k}^{k} \exp\left(\frac{N_{i\pm k,\,j+l} - N_{ij}}{N_{i\pm k,\,j+l} + N_{ij}}\right). \tag{4}$$

We are interested in the invariant distribution $u_{ij}$ for this Markov chain, such that

$$\sum_{i,j=1}^{n} P_{ij;\,lm}\, u_{ij} = u_{lm}. \tag{5}$$

Unfortunately, unlike the one-dimensional case, this invariant distribution cannot be given by a simple analytic formula. But there is a way out: having at hand the transition probabilities $P_{ij;\,lm}$, we can simulate the corresponding Markov process starting with some initial distribution $u_{ij}^{(0)}$. Then, after a sufficiently large number of iterations, we end up with an almost invariant distribution irrespective of the initial choice of $u_{ij}^{(0)}$. For example, in the role of $u_{ij}^{(0)}$ one can take the uniform distribution

$$u_{ij}^{(0)} = \frac{1}{n^2}.$$

For a practical realization of the algorithm, it is desirable to give a precise meaning to the words "sufficiently large number of iterations." In our first tests, the following stopping criterion was used: one stops at the k-th iteration if the averaged relative difference between the $u_{ij}^{(k)}$ and $u_{ij}^{(k-1)}$ probability distributions is less than the desired accuracy $\epsilon$:

$$\sum_{u_{ij}^{(k)} \neq 0} 2\,\frac{\bigl|u_{ij}^{(k)} - u_{ij}^{(k-1)}\bigr|}{u_{ij}^{(k)} + u_{ij}^{(k-1)}}\; u_{ij}^{(k)} < \epsilon.$$
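A direct, unoptimized transcription of this procedure (Eqs. (3) and (4), the uniform start, the iteration of the Markov process, and the stopping criterion above) might look as follows. The identifier names and the truncation of the sums in Eq. (4) at the histogram boundary are our own assumptions, not a statement about the original implementation.

```python
import numpy as np

def quantum_peak_2d(N, m=3, eps=1e-3, max_iter=1000):
    """Sketch of the 2D 'peak amplifier', Eqs. (3)-(5).

    N : 2D array of histogram contents N_ij (assumed strictly positive).
    Returns the (almost) invariant distribution u_ij.
    """
    n1, n2 = N.shape

    def Q(i, j, di, dj):
        # Q_{ij; i+di, j+dj} of Eq. (4): the jump direction is offset by +-k,
        # while l = -k..k sweeps the transverse direction.
        total = 0.0
        for k in range(1, m + 1):
            for l in range(-k, k + 1):
                if dj != 0:           # up/down jump: row varies by l
                    a, b = i + l, j + dj * k
                else:                 # left/right jump: column varies by l
                    a, b = i + di * k, j + l
                if 0 <= a < n1 and 0 <= b < n2:
                    total += np.exp((N[a, b] - N[i, j]) / (N[a, b] + N[i, j]))
        return total

    # Transition probabilities to the four nearest neighbours, Eq. (3).
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    P = np.zeros((n1, n2, 4))
    for i in range(n1):
        for j in range(n2):
            q = np.array([Q(i, j, di, dj) for di, dj in moves])
            P[i, j] = q / q.sum()

    u = np.full((n1, n2), 1.0 / (n1 * n2))       # uniform start u^(0) = 1/n^2
    for _ in range(max_iter):
        u_new = np.zeros_like(u)
        for i in range(n1):
            for j in range(n2):
                for (di, dj), p in zip(moves, P[i, j]):
                    a, b = i + di, j + dj
                    if 0 <= a < n1 and 0 <= b < n2:
                        u_new[a, b] += p * u[i, j]
        # Averaged relative one-iteration variability (stopping criterion).
        mask = u_new != 0
        diff = np.sum(2 * np.abs(u_new[mask] - u[mask])
                      / (u_new[mask] + u[mask]) * u_new[mask])
        u = u_new
        if diff < eps:
            break
    return u
```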

The performance of the algorithm is illustrated by Fig. 1 for a 100 × 100 histogram representing three overlapping Gaussians with different widths. As expected, it works much like its one-dimensional cousin: the invariant probability distribution shows sharp peaks at the locations where the initial data have quite broad local maximums. Note that in this concrete example the one-iteration variability $\epsilon = 10^{-3}$ was reached after 258 iterations for m = 3 and after 113 iterations for m = 30.

Convergence to the invariant distribution can be slow. In the example of Fig. 1, the convergence is indeed slow for small penetrating abilities. If we continue the iterations for m = 3, the side peaks slowly decay in favor of the main peak corresponding to the global maximum; for m = 3 it takes too many iterations to reach the invariant distribution. However, as Fig. 1 indicates, the remarkable property of developing sharp peaks at the locations of local maximums of the initial histogram is already revealed by $u_{ij}^{(k)}$ when the number of iterations k is of the order of 300.

One can make the algorithm emphasize minimums, not maximums, by simply reversing the sign in the exponents:

$$\exp\left(\frac{N_{lm} - N_{ij}}{N_{lm} + N_{ij}}\right) \;\longrightarrow\; \exp\left(-\frac{N_{lm} - N_{ij}}{N_{lm} + N_{ij}}\right).$$

This is illustrated by Fig. 2. Here the initial histogram is generated by using a variant of the Griewank function [5]

$$F(x, y) = \frac{(x - 100)^2 + (y - 100)^2}{4000} - \cos(x - 100)\cos\frac{y - 100}{\sqrt{2}} + 1. \tag{6}$$

This function has the global minimum at the point x = 100, y = 100 and, in the histogrammed interval 50 ≤ x ≤ 150, 50 ≤ y ≤ 150, exhibits nearly a thousand local minimums. Many of them are still visible in the probability distribution for penetrating ability m = 3, but for m = 30 only one peak, corresponding to the global minimum, remains.
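For concreteness, the following sketch shows one way an input histogram like that of Eq. (6) could be prepared on a 100 × 100 grid. The paper does not specify how the histogram was filled, so the sampling at bin centres is purely illustrative; the sign flip needed for minimum hunting is indicated only as a comment.

```python
import numpy as np

def griewank_variant(x, y):
    """The test surface of Eq. (6)."""
    return ((x - 100)**2 + (y - 100)**2) / 4000.0 \
        - np.cos(x - 100) * np.cos((y - 100) / np.sqrt(2.0)) + 1.0

# Sample Eq. (6) at the centres of a 100 x 100 grid covering
# 50 <= x, y <= 150; this array plays the role of the histogram N_ij.
edges = np.linspace(50.0, 150.0, 101)
centres = 0.5 * (edges[:-1] + edges[1:])
X, Y = np.meshgrid(centres, centres, indexing="ij")
N = griewank_variant(X, Y)

# To hunt for minimums instead of maximums, the only change to the
# Markov-chain construction is the sign reversal
#   exp(+(N_lm - N_ij)/(N_lm + N_ij))  ->  exp(-(N_lm - N_ij)/(N_lm + N_ij)),
# i.e. multiply the argument of every exponential in Eq. (4) by -1.
```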


Fig. 1. (a, b) The initial data in the contour and lego formats, respectively; (c, d) the corresponding probability distribution after 258 iterations for the penetrating ability m = 3; (e, f) the invariant distribution for the penetrating ability m = 30.

Fig. 2. The probability distribution for the Griewank function: (a, b) histograms for m = 3; (c, d) histograms for m = 30.

2. QUANTUM SWARM OPTIMIZATION

The above discussion was focused on two-dimensional histograms, while in practice a more common problem is to find global optimums of nonlinear functions. The algorithm in the form discussed so far is not suitable for this latter problem. However, it is possible to merge this ideology with that of Particle Swarm Optimization [6–8] to get a workable tool.

Particle Swarm Optimization was inspired by the intriguing ability of bird flocks to find spots with food, even though the birds in the flock had no previous knowledge of their location and appearance. "This algorithm belongs ideologically to that philosophical school that allows wisdom to emerge rather than trying to impose it, that emulates nature rather than trying to control it, and that seeks to make things simpler rather than more complex" [6]. This charming philosophy is indeed very attractive, so we attempted to develop Quantum Swarm Optimization, in which each particle in the swarm mimics quantum behavior. The algorithm that emerged goes as follows:

• Initialize a swarm of $n_p$ particles at random positions in the search space $x_{min} \le x \le x_{max}$, $y_{min} \le y \le y_{max}$.

• Find the particle $i_b$ in the swarm with the best position $(x_b, y_b)$, so that the function F(x, y) under investigation has the most optimal value for the swarm at $(x_b, y_b)$.

• For each particle in the swarm, find the distance from the best position, $d = \sqrt{(x - x_b)^2 + (y - y_b)^2}$. For the best particle, instead take the maximal value of these distances from the previous iteration (or, for the first iteration, take $d = \sqrt{(x_{max} - x_{min})^2 + (y_{max} - y_{min})^2}$).

• Generate a random number r uniformly distributed in the interval 0 ≤ r ≤ 1 and find the random step h = rd.

• Check the closest left, right, up, and down neighbor states, with the step h, for a better optimum. If the result is positive, change $i_b$ and $(x_b, y_b)$ accordingly. Otherwise:


• Move the particle to the left, right, up, or down by the step h according to the corresponding probabilities of such jumps:

$$p_L = \frac{q_L}{q_L + q_R + q_U + q_D}, \quad p_R = \frac{q_R}{q_L + q_R + q_U + q_D}, \quad p_U = \frac{q_U}{q_L + q_R + q_U + q_D}, \quad p_D = \frac{q_D}{q_L + q_R + q_U + q_D}, \tag{7}$$

where

$$q_L = \sum_{y' = y_u, y, y_d} \exp\left(I_s\,\frac{F(x_d, y') - F(x, y)}{h}\right), \qquad
q_R = \sum_{y' = y_u, y, y_d} \exp\left(I_s\,\frac{F(x_u, y') - F(x, y)}{h}\right),$$
$$q_D = \sum_{x' = x_u, x, x_d} \exp\left(I_s\,\frac{F(x', y_d) - F(x, y)}{h}\right), \qquad
q_U = \sum_{x' = x_u, x, x_d} \exp\left(I_s\,\frac{F(x', y_u) - F(x, y)}{h}\right), \tag{8}$$

and

$$x_u = \min(x + h,\, x_{max}), \quad x_d = \max(x - h,\, x_{min}), \quad
y_u = \min(y + h,\, y_{max}), \quad y_d = \max(y - h,\, y_{min}). \tag{9}$$

Here $I_s = 1$ if optimization means finding the global maximum, and $I_s = -1$ if the global minimum is being searched.

• Do not stick at walls. If the particle is at the boundary of the search space, it jumps away from the wall with probability equal to one (that is, the probabilities of the other three jumps are set to zero).

• Check whether the new position of the particle leads to a better optimum. If yes, change $i_b$ and $(x_b, y_b)$ accordingly.

• Do not move the best particle if it is not profitable.

• When all particles of the swarm have made their jumps, the iteration is finished. Repeat for a prescribed number of iterations or until some other stopping criterion is met.

To test the algorithm's performance, we tried it on some benchmark optimization test functions. For each test function and for each number of iterations, one thousand independent numerical experiments were performed and the success rate of the algorithm was calculated. The criterion of success was the following:

$$|x_f - x_m| \le \begin{cases} 10^{-3}\,|x_m|, & \text{if } |x_m| > 10^{-3}, \\ 10^{-3}, & \text{if } |x_m| \le 10^{-3}, \end{cases} \qquad
|y_f - y_m| \le \begin{cases} 10^{-3}\,|y_m|, & \text{if } |y_m| > 10^{-3}, \\ 10^{-3}, & \text{if } |y_m| \le 10^{-3}, \end{cases} \tag{10}$$

where $(x_m, y_m)$ is the true position of the global optimum and $(x_f, y_f)$ is the position found by the algorithm.
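The following sketch is one possible reading of the recipe above for the minimization case (I_s = -1). Details that the list leaves open, for example when exactly the best index i_b is refreshed within an iteration and how overflow in the exponentials is avoided, are resolved here by our own choices, and all identifier names are ours.

```python
import numpy as np

rng = np.random.default_rng()

def qso_minimize(F, xlim, ylim, n_p=20, n_iter=300):
    """One reading of Quantum Swarm Optimization (Eqs. (7)-(9)), I_s = -1."""
    Is = -1.0
    (xmin, xmax), (ymin, ymax) = xlim, ylim
    x = rng.uniform(xmin, xmax, n_p)
    y = rng.uniform(ymin, ymax, n_p)
    f = np.array([F(a, b) for a, b in zip(x, y)])
    ib = int(np.argmin(f))                          # best particle of the swarm
    d_best = np.hypot(xmax - xmin, ymax - ymin)     # its step scale on the first iteration

    for _ in range(n_iter):
        d = np.hypot(x - x[ib], y - y[ib])          # distances from the best position
        d[ib], d_best = d_best, d.max()             # best particle uses last iteration's maximum
        for i in range(n_p):
            h = rng.uniform() * d[i]                # random step h = r d
            if h == 0.0:
                continue
            xu, xd = min(x[i] + h, xmax), max(x[i] - h, xmin)        # Eq. (9)
            yu, yd = min(y[i] + h, ymax), max(y[i] - h, ymin)
            steps = [(xd, y[i]), (xu, y[i]), (x[i], yd), (x[i], yu)]  # L, R, D, U
            # First check the four neighbour points for a better optimum.
            ft = [F(a, b) for a, b in steps]
            if min(ft) < f[ib]:
                k = int(np.argmin(ft))
                x[i], y[i], f[i], ib = steps[k][0], steps[k][1], ft[k], i
                continue
            if i == ib:
                continue                            # do not move the best particle
            # Otherwise jump at random with the probabilities of Eqs. (7)-(8).
            f0 = F(x[i], y[i])
            def q(points):
                z = np.array([Is * (F(a, b) - f0) / h for a, b in points])
                return np.exp(np.clip(z, None, 700.0)).sum()   # guard against overflow
            p = np.array([q([(xd, yy) for yy in (yu, y[i], yd)]),    # q_L
                          q([(xu, yy) for yy in (yu, y[i], yd)]),    # q_R
                          q([(xx, yd) for xx in (xu, x[i], xd)]),    # q_D
                          q([(xx, yu) for xx in (xu, x[i], xd)])])   # q_U
            # Do not stick at walls: on the boundary, always jump away from it.
            if x[i] == xmin:   p = np.array([0.0, 1.0, 0.0, 0.0])
            elif x[i] == xmax: p = np.array([1.0, 0.0, 0.0, 0.0])
            elif y[i] == ymin: p = np.array([0.0, 0.0, 0.0, 1.0])
            elif y[i] == ymax: p = np.array([0.0, 0.0, 1.0, 0.0])
            k = rng.choice(4, p=p / p.sum())
            x[i], y[i] = steps[k]
            f[i] = F(x[i], y[i])
            if f[i] < f[ib]:                        # keep the best position up to date
                ib = i
    return x[ib], y[ib], f[ib]
```

As a quick check of this sketch, `qso_minimize(lambda x, y: (x + 2*y - 7)**2 + (2*x + y - 5)**2, (-10, 10), (-10, 10))` should usually land close to the Booth minimum at (1, 3), in line with the table below.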


Table. Success rate of the algorithm (in percent) for various test functions and various numbers of iterations; swarm size n_p = 20. Columns give the number of iterations.

Function name        |   50 |  100 |  200 |  300 |  400 |  500 |  600 |  700
---------------------|------|------|------|------|------|------|------|------
Chichinadze          | 35.5 |   97 |  100 |  100 |  100 |  100 |  100 |  100
Schwefel             | 99.4 | 99.5 | 99.8 | 99.3 | 99.2 | 99.8 |  100 | 99.6
Ackley               |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Matyas               | 88.9 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Booth                |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Easom                | 93.6 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Levy5                | 98.4 | 99.5 | 99.4 | 99.3 |   99 |   99 | 99.1 | 99.5
Goldstein–Price      |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Griewank             | 76.3 | 99.7 |  100 |  100 |  100 |  100 |  100 |  100
Rastrigin            |  100 |  100 | 99.8 | 99.9 |  100 | 99.9 | 99.9 |  100
Rosenbrock           | 43.6 | 90.4 | 99.8 |  100 |  100 |  100 |  100 |  100
Leon                 | 13.8 | 52.1 |   82 | 91.6 | 97.6 | 99.1 | 99.6 | 99.8
Giunta               |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Beale                | 99.7 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Bukin2               | 61.8 | 84.4 | 93.8 | 97.8 | 98.6 | 99.3 | 99.7 | 99.8
Bukin4               | 99.6 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Bukin6               |  0.2 |  0.1 |    0 |  0.2 |    0 |  0.1 |  0.2 |  0.1
Styblinski–Tang      |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Zettl                |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Three Hump Camel     |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100
Schaffer             |  8.2 | 34.7 | 60.7 | 71.2 | 77.8 | 78.9 | 80.4 | 83.9
Levy13               |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100
McCormic             |  100 |  100 |  100 |  100 |  100 |  100 |  100 |  100

The results are given in the table. The test functions themselves are defined in the Appendix. Here we give only some comments on the algorithm performance.

For some test problems, such as Chichinadze, Ackley, Matyas, Booth, Easom, Goldstein–Price, Griewank, Giunta, Beale, Bukin4, Styblinski–Tang, Zettl, Levy13, McCormic, and Three Hump Camel Back, the algorithm is triumphant. The Matyas problem seems simple, because the function is only quadratic; however, it is very flat near the line x = y, and this leads to problems for many global optimization algorithms. The Easom function is a unimodal test function which is expected to be hard for any stochastic algorithm, because the vicinity of its global minimum has a small area compared to the search space. Surprisingly, our algorithm performs quite well for this function, and one needs only about 100 iterations to find the needle of the global minimum in the haystack of the search space.

The Schwefel function is deceptive enough to cause search algorithms to converge in the wrong direction. This happens because the global minimum is geometrically distant from the next-best local minima. In a small fraction of events, our algorithm is also prone to converge in the wrong direction, and in these cases the performance does not seem to be improved by further increasing the number of iterations. But the success rate is quite high. Therefore, in this case it is more sensible to have two or more independent tries of the algorithm with a rather small number of iterations each.

The Rastrigin function is a multimodal test function which has plenty of hills and valleys. Our algorithm performs even better for this function, but the success is not universal either. The Rosenbrock function is, on the contrary, unimodal. Its minimum is situated in a banana-shaped valley with a flat bottom and is not easy to find. The algorithm needs more than 200 iterations to be successful in this case. The Leon function is of a similar nature, with an even flatter bottom, and the convergence in this case is correspondingly slower.

Griewank, Levy5, and Levy13 are multimodal test functions. They are considered difficult for local optimizers because of their very rugged landscapes and a very large number of local optima. For example, Levy5 has 760 local minima in the search domain but only one global minimum, and Levy13 has 900 local minima. Test results reveal a small probability that our algorithm becomes stuck in one of the local minima of the Levy5 function.

The Giunta function simulates the effects of numerical noise by means of a high-frequency, low-amplitude sine wave added to the main part of the function. The algorithm is successful for this function.

Convergence of the algorithm is rather slow for the Bukin2 function, and especially for the Schaffer function. This latter problem is hard because the highly variable data surface features many circular local optima, and, unfortunately, our algorithm often becomes stuck in the optimum nearest to the global one.

At last, the algorithm fails completely for the Bukin6 function. This function has a long narrow valley which is readily identified by the algorithm, but the function values differ very little along the valley. Besides, the surface is nonsmooth in the valley, with numerous pitfalls. This problem seems hopeless for any stochastic algorithm based heavily on random walks, because one has to chance upon the very vicinity of the global optimum to be successful. The nonstochastic component of our algorithm (the calculation of jump probabilities to mimic quantum tunneling) turns out to be of little use for this particular problem.

CONCLUDING REMARKS

The Quantum Swarm Optimization algorithm presented above emerged while trying to generalize to the two-dimensional case a "quantum mechanical" algorithm for automatic location of photopeaks in one-dimensional histograms [1]. "Everything has been said before, but since nobody listens we have to keep going back and beginning all over again" [9]. After this investigation had been almost finished, we discovered the paper [10] by Xie, Zhang, and Yang with the similar idea of using a simulation of particle-wave duality in optimization problems. However, their realization of the idea is quite different. Even earlier, Levy and Montalvo used the tunneling method for global optimization [11], but without referring to quantum behavior. Their method consists in a transformation of the objective function, once a local minimum has been reached, which destroys this local minimum and allows one to tunnel classically to another valley. We also found that a similar ideology of mimicking nature's quantum behavior in optimization problems emerged in quantum chemistry and led to such algorithms as quantum annealing [12] and Quantum Path Minimization [13]. Nevertheless, Quantum Swarm Optimization is conceptually rather different from these developments. We hope it is simple and effective enough to find an ecological niche in a variety of global optimization algorithms.

APPENDIX

Here we collect the test function definitions, the locations of their optimums, and the boundaries of the search space. The majority of them were taken from [14–16], but we also provide the original reference when known.

Chichinadze function [16, 17]:
$$F(x, y) = x^2 - 12x + 11 + 10\cos\frac{\pi x}{2} + 8\sin(5\pi x) - \frac{1}{\sqrt{5}}\exp\left(-\frac{(y - 0.5)^2}{2}\right),$$
$$-30 \le x, y \le 30, \qquad F_{min}(x, y) = F(5.90133,\, 0.5) = -43.3159.$$

Schwefel function [18]:
$$F(x, y) = -x\sin\sqrt{|x|} - y\sin\sqrt{|y|},$$
$$-500 \le x, y \le 500, \qquad F_{min}(x, y) = F(420.9687,\, 420.9687) = -837.9658.$$

Ackley function [19]:
$$F(x, y) = 20\left[1 - \exp\left(-0.2\sqrt{0.5(x^2 + y^2)}\right)\right] - \exp\bigl(0.5[\cos(2\pi x) + \cos(2\pi y)]\bigr) + e,$$
$$-35 \le x, y \le 35, \qquad F_{min}(x, y) = F(0, 0) = 0.$$

Matyas function [15]:
$$F(x, y) = 0.26(x^2 + y^2) - 0.48xy,$$
$$-10 \le x, y \le 10, \qquad F_{min}(x, y) = F(0, 0) = 0.$$

Booth function [16]:
$$F(x, y) = (x + 2y - 7)^2 + (2x + y - 5)^2,$$
$$-10 \le x, y \le 10, \qquad F_{min}(x, y) = F(1, 3) = 0.$$

Easom function [20]:
$$F(x, y) = -\cos x \cos y \exp\left[-(x - \pi)^2 - (y - \pi)^2\right],$$
$$-100 \le x, y \le 100, \qquad F_{min}(x, y) = F(\pi, \pi) = -1.$$

Levy5 function [15]:
$$F(x, y) = \sum_{i=1}^{5} i\cos[(i - 1)x + i]\;\sum_{j=1}^{5} j\cos[(j + 1)y + j] + (x + 1.42513)^2 + (y + 0.80032)^2,$$
$$-100 \le x, y \le 100, \qquad F_{min}(x, y) = F(-1.30685,\, -1.424845) = -176.1375.$$

Goldstein–Price function [15]:
$$F(x, y) = \bigl[1 + (x + y + 1)^2(19 - 14x + 3x^2 - 14y + 6xy + 3y^2)\bigr]\bigl[30 + (2x - 3y)^2(18 - 32x + 12x^2 + 48y - 36xy + 27y^2)\bigr],$$
$$-2 \le x, y \le 2, \qquad F_{min}(x, y) = F(0, -1) = 3.$$

Griewank function [5, 15]:
$$F(x, y) = \frac{x^2 + y^2}{200} - \cos x \cos\frac{y}{\sqrt{2}} + 1,$$
$$-100 \le x, y \le 100, \qquad F_{min}(x, y) = F(0, 0) = 0.$$

Rastrigin function [21]:
$$F(x, y) = x^2 + y^2 - 10\cos(2\pi x) - 10\cos(2\pi y) + 20,$$
$$-5.12 \le x, y \le 5.12, \qquad F_{min}(x, y) = F(0, 0) = 0.$$

Rosenbrock function [15]:
$$F(x, y) = 100(y - x^2)^2 + (1 - x)^2,$$
$$-1.2 \le x, y \le 1.2, \qquad F_{min}(x, y) = F(1, 1) = 0.$$

Leon function [22]:
$$F(x, y) = 100(y - x^3)^2 + (1 - x)^2,$$
$$-1.2 \le x, y \le 1.2, \qquad F_{min}(x, y) = F(1, 1) = 0.$$

Giunta function [23]:
$$F(x, y) = \sin\left(\tfrac{16}{15}x - 1\right) + \sin^2\left(\tfrac{16}{15}x - 1\right) + \tfrac{1}{50}\sin\left(4\left(\tfrac{16}{15}x - 1\right)\right) + \sin\left(\tfrac{16}{15}y - 1\right) + \sin^2\left(\tfrac{16}{15}y - 1\right) + \tfrac{1}{50}\sin\left(4\left(\tfrac{16}{15}y - 1\right)\right) + 0.6,$$
$$-1 \le x, y \le 1, \qquad F_{min}(x, y) = F(0.45834282,\, 0.45834282) = 0.0602472184.$$

Beale function [15]:
$$F(x, y) = (1.5 - x + xy)^2 + (2.25 - x + xy^2)^2 + (2.625 - x + xy^3)^2,$$
$$-4.5 \le x, y \le 4.5, \qquad F_{min}(x, y) = F(3,\, 0.5) = 0.$$

Bukin2 function [24]:
$$F(x, y) = 100(y - 0.01x^2 + 1)^2 + 0.01(x + 10)^2,$$
$$-15 \le x \le -5, \quad -3 \le y \le 3, \qquad F_{min}(x, y) = F(-10, 0) = 0.$$

Bukin4 function [24]:
$$F(x, y) = 100y^2 + 0.01|x + 10|,$$
$$-15 \le x \le -5, \quad -3 \le y \le 3, \qquad F_{min}(x, y) = F(-10, 0) = 0.$$

Bukin6 function [24]:
$$F(x, y) = 100\sqrt{\bigl|y - 0.01x^2\bigr|} + 0.01|x + 10|,$$
$$-15 \le x \le -5, \quad -3 \le y \le 3, \qquad F_{min}(x, y) = F(-10, 1) = 0.$$

Styblinski–Tang function [25]:
$$F(x, y) = \frac{1}{2}\bigl[x^4 - 16x^2 + 5x + y^4 - 16y^2 + 5y\bigr],$$
$$-5 \le x, y \le 15, \qquad F_{min}(x, y) = F(-2.903534,\, -2.903534) = -78.332.$$

Zettl function [22]:
$$F(x, y) = (x^2 + y^2 - 2x)^2 + 0.25x,$$
$$-5 \le x, y \le 5, \qquad F_{min}(x, y) = F(-0.0299, 0) = -0.003791.$$

Three Hump Camel Back function [14]:
$$F(x, y) = 2x^2 - 1.05x^4 + \frac{x^6}{6} + xy + y^2,$$
$$-5 \le x, y \le 5, \qquad F_{min}(x, y) = F(0, 0) = 0.$$

Schaffer function [26]:
$$F(x, y) = 0.5 + \frac{\sin^2\sqrt{x^2 + y^2} - 0.5}{\bigl[1 + 0.001(x^2 + y^2)\bigr]^2},$$
$$-100 \le x, y \le 100, \qquad F_{min}(x, y) = F(0, 0) = 0.$$

Levy13 function [14]:
$$F(x, y) = \sin^2(3\pi x) + (x - 1)^2\bigl[1 + \sin^2(3\pi y)\bigr] + (y - 1)^2\bigl[1 + \sin^2(2\pi y)\bigr],$$
$$-10 \le x, y \le 10, \qquad F_{min}(x, y) = F(1, 1) = 0.$$

McCormic function [27]:
$$F(x, y) = \sin(x + y) + (x - y)^2 - 1.5x + 2.5y + 1,$$
$$-1.5 \le x \le 4, \quad -3 \le y \le 4, \qquad F_{min}(x, y) = F(-0.54719,\, -1.54719) = -1.9133.$$
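To make the benchmark setup concrete, a few of the Appendix definitions can be transcribed directly into code, together with the success criterion of Eq. (10). The selection, the table layout, and the helper names below are ours; only the formulas, search boxes, and minima follow the Appendix, and the use of absolute values in the tolerance test is our reading of Eq. (10).

```python
import numpy as np

# A few of the Appendix test functions as (callable, search box, known minimum).
TEST_FUNCTIONS = {
    "Matyas": (lambda x, y: 0.26 * (x**2 + y**2) - 0.48 * x * y,
               (-10, 10, -10, 10), (0.0, 0.0)),
    "Booth": (lambda x, y: (x + 2*y - 7)**2 + (2*x + y - 5)**2,
              (-10, 10, -10, 10), (1.0, 3.0)),
    "Easom": (lambda x, y: -np.cos(x) * np.cos(y)
              * np.exp(-(x - np.pi)**2 - (y - np.pi)**2),
              (-100, 100, -100, 100), (np.pi, np.pi)),
    "Rastrigin": (lambda x, y: x**2 + y**2 - 10*np.cos(2*np.pi*x)
                  - 10*np.cos(2*np.pi*y) + 20,
                  (-5.12, 5.12, -5.12, 5.12), (0.0, 0.0)),
}

def success(found, true, tol=1e-3):
    """Success criterion in the spirit of Eq. (10): each coordinate must agree
    within a relative tolerance of 10^-3 (absolute for near-zero coordinates)."""
    return all(abs(f - t) <= (tol * abs(t) if abs(t) > tol else tol)
               for f, t in zip(found, true))
```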


ACKNOWLEDGMENTS

Support from the INTAS grant no. 00-00679 is acknowledged.

REFERENCES

1. Z. K. Silagadze, "A New Algorithm for Automatic Photopeak Searches," Nucl. Instrum. Methods Phys. Res., Sect. A 376, 451–454 (1996).
2. W. Feller, An Introduction to Probability Theory and Its Applications (Wiley, New York, 1966), Vol. 1.
3. M. N. Achasov et al., "Medium Energy Calorimetry at SND: Techniques and Performances on Physics," in Proc. of "Calorimetry in High Energy Physics," Lisbon, 1999 (World Sci., Singapore, 2000), pp. 105–120.
4. A. V. Bannikov, G. A. Chelkov, and Z. K. Silagadze, "B_s^0 → D_s^- a_1^+ (D_s^- → φπ^-, K*^0 K^-) Decay Channel in the ATLAS B_s Mixing Studies," JINR Preprint No. E1-98-29 (Dubna, 1998).
5. A. O. Griewank, "Generalized Descent for Global Optimization," J. Optim. Theory Appl. 34, 11–39 (1981).
6. J. Kennedy and R. Eberhart, "Particle Swarm Optimization," in Proc. of IEEE Conf. on Neural Networks (Perth, 1995), pp. 1942–1948.
7. J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence (Morgan Kaufmann, San Francisco, 2001).
8. K. E. Parsopoulos and M. N. Vrahatis, "Recent Approaches to Global Optimization Problems through Particle Swarm Optimization," Natural Computing 1, 235–306 (2002).
9. A. Gide, The Tractate of Narcissus (Theory of Symbol), Transl. by A. Mangravite, in The Book of Masks; Atlas Arkhive Two: French Symbolist and Decadent Writing of the 1890s (Atlas Press, London, 1994).
10. X. P. Xie, W. J. Zhang, and Z. L. Yang, "Solving Numerical Optimization Problems by Simulating Particle-Wave Duality and Social Information Sharing," in Intern. Conf. on Artificial Intelligence (Las Vegas, USA, 2002), pp. 1163–1169.
11. A. V. Levy and A. Montalvo, "The Tunneling Method for Global Optimization," SIAM J. Sci. Stat. Comp. 6, 15–29 (1985).
12. For a review and references see, for example, T. Kadowaki, "Study of Optimization Problems by Quantum Annealing," Ph.D. Thesis (Department of Physics, Tokyo Institute of Technology, Tokyo, 1998).
13. P. Liu and B. J. Berne, "Quantum Path Minimization: An Efficient Method for Global Optimization," J. Chem. Phys. 118, 2999–3005 (2003).
14. C. Jansson and O. Knuppel, "A Global Minimization Method: The Multi-Dimensional Case," Technical Report 92-1 (TU Harburg, Hamburg, 1992).
15. C. Jansson and O. Knuppel, "Numerical Results for a Self-Validating Global Optimization Method," Technical Report 94-1 (TU Harburg, Hamburg, 1994).
16. R. J. Van Iwaarden, "An Improved Unconstrained Global Optimization Algorithm," PhD Thesis (Univ. of Colorado at Denver, Denver, 1996).
17. V. Chichinadze, "The ψ-Transform for Solving Linear and Nonlinear Programming Problems," Automata 5, 347–355 (1969).
18. H.-P. Schwefel, Numerical Optimization of Computer Models (Wiley, Chichester, 1981).
19. D. H. Ackley, A Connectionist Machine for Genetic Hillclimbing (Kluwer, Boston, 1987).
20. E. E. Easom, "A Survey of Global Optimization Techniques," M. Eng. Thesis (Univ. of Louisville, Louisville, KY, 1990).
21. H.-M. Voigt, J. Born, and I. Santibanez-Koref, "A Multivalued Evolutionary Algorithm," Technical Report TR-92-038 (Intern. Comp. Sci. Inst., Berkeley, CA, 1992).
22. S. Nagendra, "Catalogue of Test Problems for Optimization Algorithm Verification," Technical Report 97-CRD-110 (General Electric Company, 1997).
23. A. A. Giunta, "Aircraft Multidisciplinary Design Optimization Using Design of Experiments Theory and Response Surface Modeling Methods," MAD Center Report 97-05-01 (Virginia Polytechnic Inst. and State Univ., Blacksburg, VA, 1997).
24. A. D. Bukin, "New Minimization Strategy for Non-Smooth Functions," Budker Inst. of Nucl. Phys. Preprint No. BUDKER-INP-1997-79 (Novosibirsk, 1997).
25. M. Styblinski and T. Tang, "Experiments in Nonconvex Optimization: Stochastic Approximation with Function Smoothing and Simulated Annealing," Neural Networks 3, 467–483 (1990).
26. D. Whitley et al., "Evaluating Evolutionary Algorithms," Artif. Intell. 85, 245–276 (1996).
27. K. Madsen and J. Zilinskas, "Testing Branch-and-Bound Methods for Global Optimization," IMM Technical Report 05 (Techn. Univ. of Denmark, 2000).

