IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS, VOL. 36, NO. 4, JULY 2006
Efficient Chromosome Encoding and Problem-Specific Mutation Methods for the Flexible Bay Facility Layout Problem

Neil H. W. Eklund, Member, IEEE, Mark J. Embrechts, Member, IEEE, and Marc Goetschalckx
Abstract—Two chromosome encoding methods are compared for finding solutions to the nondeterministic polynomial-time hard flexible bay facilities layout problem via genetic algorithm (GA). Both methods capitalize on the random key GA approach to produce chromosomes that are viable for any combination of allele values. In addition, the effect of four problem-specific mutation methods is assessed for one of the encoding methods. The novel mutation methods are shown to have a substantial effect on performance. Optimal GA parameter settings for the problem-specific mutation methods are found empirically.

Index Terms—Facility layout, flexible bay, genetic algorithms (GAs), problem-specific mutations.

Manuscript received March 5, 2003; revised December 19, 2005. This paper was recommended by Guest Editor S. W. Kercel. N. H. W. Eklund is with the Industrial Artificial Intelligence Laboratory, GE Global Research Center, Niskayuna, NY 12309 USA (e-mail: [email protected]). M. J. Embrechts is with Decision Sciences and Engineering Systems, Rensselaer Polytechnic Institute, Troy, NY 12180 USA (e-mail: [email protected]). M. Goetschalckx is with the School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0205 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TSMCC.2006.875412
I. INTRODUCTION
Fig. 1. Example of the flexible bay layout. Departments within a row share the same height but can be of different widths.
A. General Background

THE optimal layout of a bounded physical area for the accomplishment of a particular activity is known as the unequal area facility layout problem (FLP). In the jargon of the FLP, the location and shape of a number of departments of unequal area must be specified within the fixed bounds of a facility such that the departments meet a minimum area requirement, do not overlap, and minimize material transportation costs both between the departments [1] and between the departments and the exterior of the facility [19]. Moreover, a constraint on the maximum aspect ratio (or a penalty for violating a threshold aspect ratio) of each department can be added to make the problem more realistic by eliminating solutions with very narrow departments (e.g., consider the relative utility of a 1200-m² storage area laid out 3 × 400 versus 30 × 40). In addition to manufacturing facility layout, many other real-life applications of considerable economic consequence can be formulated as FLPs, e.g., very large scale integration (VLSI) layout or hospital design.

Because the FLP is nondeterministic polynomial-time hard (NP-hard), exact solutions to problems beyond a relatively trivial size are impractical. For this reason, many heuristic methods (e.g., ant colony optimization [2], simulated annealing [3], particle swarm optimization [13]) have been applied to find near-optimal solutions to the FLP.
This paper uses genetic algorithms (GAs) to find near-optimal solutions to a special case of the FLP. Many factors are known to influence the performance of GAs [4]–[6]; the method of encoding the problem (as a chromosome) is arguably one of the most important [6]. A good encoding method can make finding good solutions relatively efficient. Conversely, a poor chromosome encoding method can make finding good solutions nearly impossible [6]. Moreover, problem-specific mutation and crossover methods can leverage the power of efficient encoding, increasing the overall efficiency of the GA and resulting in better solutions for a given amount of computing power.

This paper presents a comparison of two different chromosome encoding methods for the FLP under a relatively generic GA implementation. In addition, the effect of four different problem-specific mutation methods, which are found to result in significantly improved performance, is considered for one encoding. Finally, optimal GA parameter settings for the problem-specific mutation methods are found via experimentation.

B. Flexible Bay FLP

The FLP employed here uses the widely used flexible bay approach independently developed by Goetschalckx [20] and Tong [12]. The facility is divided into a number of horizontal layers (bays), with each layer containing one or more departments. The height of each layer depends on the area of the departments in the layer (and is the same for all departments within a layer). The width of each department depends on the area of the departments in the same layer (see Fig. 1). Three examples of the FLP are studied: case 1, with 7 departments [7]; case 2, with 14 departments [8]; and case 3, with 37 departments [20].
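To make the bay geometry explicit (our notation, implied by the construction above rather than stated in the paper): a bay spanning the full facility width $W$ and containing departments with areas $a_1, \dots, a_m$ has

$$h_{\text{bay}} = \frac{1}{W}\sum_{i=1}^{m} a_i, \qquad w_i = \frac{a_i}{h_{\text{bay}}} = \frac{a_i W}{\sum_{j=1}^{m} a_j},$$

so every department in a bay shares the bay height $h_{\text{bay}}$ and occupies a width proportional to its own area.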
A layout is evaluated using a fitness function consisting of three components: interdepartmental flow costs, external flow costs, and shape penalties (imposed if the threshold aspect ratio for a department is violated). Formally, following [21], we have

$$\text{fitness} = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} d^{R}_{ij} r_{ij} + \sum_{i=1}^{n} d^{R}_{io} r_{io} + \sum_{i=1}^{n} \max\{0,\, s_i - S_i\}\, p_i \tag{1}$$

where $n$ is the number of departments. If the origin of a department is $(x, y)$, its size is $(w, l)$, and the building dimensions are $(W, L)$, then the rectilinear interdepartmental flow distance is

$$d^{R}_{ij} = |x_i - x_j| + |y_i - y_j| \tag{2}$$

and the rectilinear external flow distance is

$$d^{R}_{io} = \min\left\{\, x + w/2,\; y + l/2,\; W - x - w/2,\; L - y - l/2 \,\right\} \tag{3}$$

where $r_{ij}$ is the interdepartmental flow; $r_{io}$ is the external flow (flow to the outside of the facility); $s_i$ is the aspect ratio of the department ($\max\{w/l,\, l/w\}$); $S_i$ is the threshold aspect ratio (above which a penalty is applied); and $p_i$ is the shape penalty. This method for evaluating the fitness of an FLP is common in industrial applications.

In this paper, we used penalty threshold aspect ratios of 2.5, 2, and 3 and shape penalties of 500, 1000, and 1000 for cases 1, 2, and 3, respectively.

II. EXPERIMENT I: ENCODING METHOD

In this section, an experiment comparing two different encoding methods for the FLP is described.

A. Problem Representation

The flexible bay FLP can be considered as two questions: first, in which layer is a department, and second, in what order do the departments appear within the layer? This experiment examines two of the (many) ways this problem could be represented in a GA chromosome, referred to here as Norman-Smith (NS) encoding [10] and concatenated chromosome (CC) encoding. Both of these encodings employ the random key genetic algorithm (RKGA) encoding [10], [15]–[17], where permutation problems are represented by a set of genes with random values; the order of the sorted genes determines the permutation. The advantage of the RKGA approach is that any chromosome produced (in the initial randomly generated population or through mutation or crossover) is a feasible layout; therefore, no clumsy chromosome repair mechanism, a common feature of other GA approaches to permutation problems, is required.

The CC chromosome encoding is of length n + p, using n genes as random keys (where n is the number of departments) and p genes to determine the location of at most p potential splits (transitions from one layer to another). The value of p was set to 5, 10, and 20 for cases 1, 2, and 3, respectively. While for any problem it is better to have p close to (but at least equal to) the true optimal number of splits, this encoding is not particularly sensitive to p being too large, and it is better in practice (particularly for early exploratory runs) to set p somewhat high. Each gene could range from 0 to 1 inclusive. The genes representing the order of the departments were decoded by sorting the values; the genes representing the splits were decoded using the transformation

$$l = \begin{cases} \lfloor 2gk \rfloor + 1, & g < 0.5 \\ \lfloor 2(n-k)(g-0.5) \rfloor + k + 1, & g \ge 0.5 \end{cases} \tag{4}$$

where $k = \lceil \sqrt{n+10}\, \rceil$, $g$ is the gene value, and $l$ is the split length (the number of departments in the layer represented by the gene). Given a uniform distribution of split gene values, this transformation makes relatively small split lengths much more likely in large problems, but does not significantly change the distribution for small problems. This is advantageous because split length is not uniformly distributed: for a large problem, it is more likely that a layer will have, for example, three departments rather than 30.

Consider an example FLP with five departments (A–E) and three potential split locations (S1, S2, and S3), for a total of eight genes on a chromosome. Each gene represents one of the five departments or three split locations:

[A B C D E S1 S2 S3].

A realization of this chromosome might be

[.17 .39 .54 .22 .19 .11 .23 .08].

The order of the departments would therefore be [A E D B C],
corresponding to the sorted order of the gene values. The split lengths are [1 2 1]. Thus, there would be one department (A) in the bottom layer, two departments in the second layer (E and D), one department in the third layer (B), and the remainder of the departments (C) in the fourth layer.

In the case where all of the departments are assigned before all of the splits have occurred, the split is simply ignored. For example, for the above problem, if the split lengths were [4 3 2], there would be four departments in the first layer (A, E, D, and B) and one department in the second layer (C). There would be no third layer, and the width of department C would be equal to the width of the facility.

The NS chromosome encoding [10] uses n real-number genes per chromosome, with valid values ranging from 1 to n + 1. The integer portion of each gene represents the layer the department is in, and the sorted order of the genes within a layer represents the order of the departments. Thus, to encode the example mentioned above, the genes in the NS chromosome would be arranged

[A    B    C    D    E]
[1.46 3.72 5.38 2.36 2.19].

This NS chromosome results in the same phenotype as the above-mentioned CC example.
Note that the gene corresponding to department C has a value of 5.38, implying it would be in layer five. However, because there are no departments in layer four, layer five effectively becomes layer four.
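Putting the pieces together, the CC decoding and the evaluation of (1)–(3) can be summarized in a short sketch. The paper's implementation used a modified MATLAB GA toolbox [11]; the Python below is only an illustrative reconstruction from the description above (the function names, the dictionary-based flow inputs, and the full-width bay geometry are our assumptions, not the authors' code).

```python
import math

def split_length(g, n):
    """Decode one split gene g in [0, 1] into a split length l via (4)."""
    k = math.ceil(math.sqrt(n + 10))
    if g < 0.5:
        return math.floor(2 * g * k) + 1
    return math.floor(2 * (n - k) * (g - 0.5)) + k + 1

def decode_cc(chromosome, n, areas, W):
    """Decode a CC chromosome (n order genes + p split genes) into
    department rectangles (x, y, w, h) for a facility of width W.
    `areas` is assumed to be indexable by department index 0..n-1."""
    order_genes, split_genes = chromosome[:n], chromosome[n:]
    # Department order: sort departments by their random-key values.
    order = sorted(range(n), key=lambda d: order_genes[d])
    # Partition the ordered departments into bays using the split lengths;
    # splits that fall after the last department are simply ignored.
    bays, start = [], 0
    for g in split_genes:
        if start >= n:
            break
        length = split_length(g, n)
        bays.append(order[start:start + length])
        start += length
    if start < n:                       # remaining departments form a final bay
        bays.append(order[start:])
    # Bay geometry: each bay spans the full facility width W.
    rects, y = {}, 0.0
    for bay in bays:
        h = sum(areas[d] for d in bay) / W   # bay height from total bay area
        x = 0.0
        for d in bay:
            w = areas[d] / h                 # department width from its own area
            rects[d] = (x, y, w, h)
            x += w
        y += h
    return rects

def fitness(rects, flows, ext_flows, W, L, S, p):
    """Evaluate (1): interdepartmental flow + external flow + shape penalties.
    flows maps (i, j) -> r_ij; ext_flows, S, p map department -> value."""
    cent = {d: (x + w / 2, y + h / 2) for d, (x, y, w, h) in rects.items()}
    total = 0.0
    for (i, j), r in flows.items():          # (2): rectilinear centroid distance
        total += r * (abs(cent[i][0] - cent[j][0]) + abs(cent[i][1] - cent[j][1]))
    for d, (x, y, w, h) in rects.items():
        cx, cy = cent[d]
        total += ext_flows.get(d, 0.0) * min(cx, cy, W - cx, L - cy)   # (3)
        aspect = max(w / h, h / w)
        total += max(0.0, aspect - S[d]) * p[d]                        # shape penalty
    return total
```

For the five-department example above, a call such as decode_cc([.17, .39, .54, .22, .19, .11, .23, .08], 5, areas, W) reproduces the [A], [E, D], [B], [C] layering described in the text.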
Fig. 2. Fitness of the best chromosome and average population fitness by generation for sample runs employing the CC and NS encoding.

Fig. 3. Distribution of relative fitness over 12 798 runs (each) using the NS and CC encodings for case 1.
B. GA Parameters

The population size for all runs was fixed at 100. Fig. 2 is a plot of GA progress over time for one run each of the NS and CC encodings for case 3, showing the relative fitness (the ratio of chromosome fitness to best known fitness) of the best chromosome and of the population average over time; both runs have converged at about generation 500. Similar data were collected for the other cases (over multiple runs) to empirically determine the number of generations per run, which was set to 100, 300, and 600 generations for cases 1, 2, and 3, respectively.

The initial population for the CC encoding runs was uniformly randomly distributed over the interval [0, 1]. However, a uniform [1, n + 1] distribution for the genes of the NS encoding results in very poor initial populations and substantially diminishes overall performance. To address this, the range of the genes in the initial populations of the NS encoding runs was limited to the interval [1, p], which produced a substantially fitter starting population and consequently much fitter final populations.

Tournament selection was used with three chromosomes per tournament (i.e., three chromosomes are selected at random from the population, with replacement, and the fittest of the three is eligible for crossover and mutation). Elitism was used, with the single best chromosome passed on to the next generation. Both of these measures were employed to create relatively low selection pressure, which allows the GA ample time to explore the search space.

Uniform crossover and two-point crossover were each applied 40 times in each generation. In uniform crossover, approximately half of the genes (chosen randomly) are taken from each of two parents to form two children. In two-point crossover, a contiguous block of genes (of random length) from one parent is swapped into the corresponding genes of another parent, and vice versa, to form two children. The children were used to replace the parents; thus, 20 chromosomes were passed on unmodified each generation.

For all runs in this section, four types of mutation were applied a fixed number of times in each generation. Boundary mutation [11] (a random gene is set to its maximum or minimum value) was applied once per generation. Uniform mutation (a random gene is set to a uniform random value over the valid range of the gene) was applied three times per generation. Nonuniform mutation (a random gene is perturbed by a Gaussian random value, with mean of zero and variance initially high, decreasing as the maximum generation is approached) was applied 13 times per generation [11]. Finally, multi-nonuniform mutation (nonuniform mutation applied to all genes in a chromosome) was applied five times per generation [11]. All GA runs were made using a highly modified version of the GA optimization toolbox for Matlab [11].
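As an illustration of the generic operators just described (a sketch, not the authors' MATLAB implementation; the function names and the simplified nonuniform-mutation schedule are ours):

```python
import random

def tournament_select(population, fit, k=3):
    """Pick k chromosomes at random with replacement; return the fittest.
    Lower fitness is better here, since (1) is a cost to be minimized."""
    contenders = [random.choice(population) for _ in range(k)]
    return min(contenders, key=fit)

def uniform_crossover(a, b):
    """Each gene position is taken from one parent or the other at random."""
    child1, child2 = [], []
    for ga, gb in zip(a, b):
        if random.random() < 0.5:
            child1.append(ga); child2.append(gb)
        else:
            child1.append(gb); child2.append(ga)
    return child1, child2

def two_point_crossover(a, b):
    """Swap a contiguous block of genes (random endpoints) between two parents."""
    i, j = sorted(random.sample(range(len(a) + 1), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def nonuniform_mutation(chrom, gen, max_gen, sigma0=0.25):
    """Perturb one random gene by a Gaussian whose spread shrinks over the run.
    The exact decay schedule in [11] differs; this is a simplification, and the
    clamp to [0, 1] assumes CC genes."""
    c = list(chrom)
    i = random.randrange(len(c))
    sigma = sigma0 * (1.0 - gen / max_gen)
    c[i] = min(1.0, max(0.0, c[i] + random.gauss(0.0, sigma)))
    return c
```

Because every child is again a vector of keys in the valid range, these operators never produce an infeasible layout, which is the point of the RKGA approach.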
Fig. 4. Distribution of relative fitness over 12 798 runs (each) using the NS and CC encodings for case 2.
Fig. 5. Distribution of relative fitness over 12 798 runs (each) using the NS and CC encodings for case 3.
C. Encoding Performance

A total of 12 798 independent runs of each combination of encoding and case were performed. Figs. 3–5 are histograms of the distribution of relative fitness of the fittest chromosome from each run for cases 1, 2, and 3, respectively. For this implementation (which differs somewhat from the application for which the NS encoding was developed [10]), the CC encoding is clearly superior to the NS encoding.
Fig. 6. LSM, (top) before and (bottom) after. Layers 2 and 3 are swapped, resulting in a shorter overall path length (segments AB and DE are shorter).
While the CC encoding works well for cases 1 and 2, frequently identifying solutions that are quite near or at the optimum in a single run, there is substantial room for improvement on more difficult problems (represented by case 3).
Fig. 7. LOI mutation, (top) before and (bottom) after. The order of layer 2 is reversed, resulting in a shorter overall path length (segment CD is shorter).
TABLE I ANOVA RESULTS FOR SECOND SERIES OF RUNS
III. EXPERIMENT II: PSM METHODS

In Section II, the CC encoding was shown to be clearly superior to the NS encoding for the FLP; therefore, the NS encoding has been abandoned in the following work. This section describes the results of an experiment examining the effect of four different problem-specific mutation methods on the CC encoding.

A. Problem-Specific Mutation Methods

Examination of early results of the initial runs of the GA using the CC encoding suggested two problem-specific mutation (PSM) methods that might be effective. Frequently, the departments are grouped in layers appropriately, but the layers are out of order (this is likely the result of finding a near-optimal aspect ratio for a block of departments while minimizing flow cost within a layer). This suggests that a layer swap mutation (LSM), in which all the departments in two layers are exchanged (preserving order), might be an effective mutation strategy (Fig. 6).

It also appears that sometimes the departments are grouped into layers appropriately and are in the right order relative to each other within a layer; nevertheless, the direction of that within-layer order is wrong relative to the other layers. For example, layer 1 would be [E D A] and layer 2 would be [B C], but the ordering [A D E] of layer 1 is fitter. This suggests a layer order invert (LOI) mutation, in which the order of the departments in a randomly chosen layer is reversed (Fig. 7).

Two additional mutation methods were separately considered to be potentially useful. Although a swap of two departments might easily occur as a result of the generic mutation operators, a specific mutation method, department swap mutation (DSM), was added. It exchanges the positions of exactly two departments (e.g., Fig. 7 could also be considered a case of DSM, exchanging the positions of departments D and F).

Finally, a swap within layer (SWL) mutation method was developed. It exchanges exactly two departments within a layer. The SWL mutation is meant to address the case in which departments are grouped correctly in layers (i.e., so the aspect ratio is good), but the order is incorrect. Fig. 7 could again be considered a case of SWL, exchanging the positions of departments D and F.
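A minimal sketch of the four PSMs, operating on the decoded layer structure (a list of bays, each a list of department indices). How a mutated layout is mapped back onto the random-key genes is an implementation detail the paper does not spell out, and the function names here are ours:

```python
import random

def lsm(bays):
    """Layer swap mutation: exchange all departments of two bays, preserving order.
    Assumes at least two bays."""
    b = [list(bay) for bay in bays]
    i, j = random.sample(range(len(b)), 2)
    b[i], b[j] = b[j], b[i]
    return b

def loi(bays):
    """Layer order invert: reverse the department order within one random bay."""
    b = [list(bay) for bay in bays]
    b[random.randrange(len(b))].reverse()
    return b

def dsm(bays):
    """Department swap mutation: exchange the positions of any two departments."""
    b = [list(bay) for bay in bays]
    slots = [(bi, di) for bi, bay in enumerate(b) for di in range(len(bay))]
    (b1, d1), (b2, d2) = random.sample(slots, 2)
    b[b1][d1], b[b2][d2] = b[b2][d2], b[b1][d1]
    return b

def swl(bays):
    """Swap within layer: exchange two departments inside the same bay."""
    b = [list(bay) for bay in bays]
    candidates = [i for i, bay in enumerate(b) if len(bay) >= 2]
    if not candidates:
        return b
    i = random.choice(candidates)
    d1, d2 = random.sample(range(len(b[i])), 2)
    b[i][d1], b[i][d2] = b[i][d2], b[i][d1]
    return b
```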
B. PSM Performance

A series of GA runs was performed for case 3, similar to those described above, with the following exceptions. A fixed level of "standard" mutation methods was employed each generation for all runs: one boundary, two multi-nonuniform, three nonuniform, and three uniform. Additionally, during each run, one of the 81 combinations of the four PSMs at three different levels of application (0, 3, or 10 times per generation) was active. Rarely in practice is just one run of a GA relied on to arrive at an optimal result, so the results of 10 separate runs were combined in a batch. A total of 20 250 runs (25 batches × 10 runs × 81 levels) were performed. Results from analysis of variance (ANOVA) are presented in Table I.
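The design is a full 3^4 factorial over the PSM application levels; purely as an illustration of the bookkeeping (the names and the best-of-batch summary are our assumptions, not a detail given in the paper):

```python
from itertools import product

LEVELS = (0, 3, 10)            # PSM applications per generation
PSMS = ("LSM", "LOI", "DSM", "SWL")

# 3^4 = 81 experimental conditions, each a mapping PSM -> level.
conditions = [dict(zip(PSMS, combo)) for combo in product(LEVELS, repeat=4)]
assert len(conditions) == 81

RUNS_PER_BATCH, BATCHES = 10, 25
print(len(conditions) * RUNS_PER_BATCH * BATCHES)   # 20250 individual runs

# For each condition, one batch result could then be summarized as, e.g.,
# batch_fitness = min(run_ga(condition) for _ in range(RUNS_PER_BATCH))
# (the paper combines 10 runs into a batch; best-of-10 is one natural choice).
```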
Fig. 8. (Top) Boxplots and (bottom) mean effect of SWL main effect. Note that the effect, while statistically significant, is quite small (∼2%).
Fig. 10. GA performance as a function of number of LSM and DSM mutations per generation. Contours are of overall fit of bootstrap samples; gray dots are individual bootstrap sample minima, and the black square is the overall minimum.
Table I indicates that there are three statistically significant main effects (LSM, DSM, and SWL) and one significant interaction between LSM and DSM. Thus, at these levels of application, the LOI mutation had, surprisingly, no effect. The effect of SWL, while statistically significant, is practically very small (Fig. 8), approximately 2%. The effect of the LSM × DSM interaction is illustrated in Fig. 9. The magnitude of the effect here is much more pronounced: an average of about 35% difference from the worst to the best conditions.

Fig. 9. Boxplots (top three axes) and mean effect (bottom axes) of the DSM × LSM interaction. The value of LSM is 0, 3, and 10, respectively, for the first through third set of axes.

The PSMs yield a dramatic increase in performance over the NS or CC encodings alone. The median performance for case 3 was 3.7 for the NS encoding and 2.7 for the CC encoding. However, on a single run (i.e., not a batch) of the CC encoding using PSM (where #LSM = #DSM = 10), median performance is 1.6. Although there is a statistically significant effect of SWL, the experiment has many degrees of freedom, and thus is very sensitive to even tiny effects. The magnitude of the SWL effect is very small, approximately 2%. There is no effect of LOI. Therefore, SWL and LOI were dropped from further consideration.

IV. EXPERIMENT III: OPTIMAL GA PARAMETERS
In Section III, the PSMs were shown to substantially improve the GA performance for the CC encoding. In this section, results from an experiment to determine the optimal number of PSMs to perform each generation are described.

Runs at each of the 49 combinations of the levels {0, 5, 10, 20, 30, 40, 50} of applications per generation of LSM and DSM were performed, in 189 replications of batches of 10 (92 610 individual runs). To get an estimate of sensitivity, 100 bootstrap samples [18] were taken at the individual-run level, i.e., the members of each batch were resampled with replacement from all runs at a particular level. A full quadratic model (fitting a constant, both linear terms, both squared terms, and the product) was fit to each sample. Additionally, the same model was fit to all bootstrap samples simultaneously (plotted in Fig. 10); this model accounted for 95% of the variance in the data.

Fig. 10 is a contour plot of GA performance as a function of the number of LSM and DSM applications per generation. These data suggest that the optimum numbers of LSM and DSM applications per generation for this particular case are 10 and 35, respectively. The distribution of bootstrap minima in Fig. 10 suggests that the GA for this problem is more sensitive to the number of LSMs than to the number of DSMs per generation.
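A sketch of the bootstrap and response-surface step under the same reading of the description (NumPy only; the stratified resampling, grid search, and function names are our assumptions, not the authors' code):

```python
import numpy as np

def fit_quadratic(lsm, dsm, fitness):
    """Least-squares fit of a full quadratic surface
    f ~ b0 + b1*lsm + b2*dsm + b3*lsm^2 + b4*dsm^2 + b5*lsm*dsm."""
    X = np.column_stack([np.ones_like(lsm, dtype=float), lsm, dsm,
                         lsm**2, dsm**2, lsm * dsm])
    beta, *_ = np.linalg.lstsq(X, fitness, rcond=None)
    return beta

def surface_minimum(beta, grid=np.arange(0, 51)):
    """Locate the minimum of the fitted surface on a grid of LSM/DSM levels."""
    L, D = np.meshgrid(grid, grid, indexing="ij")
    f = beta[0] + beta[1]*L + beta[2]*D + beta[3]*L**2 + beta[4]*D**2 + beta[5]*L*D
    i, j = np.unravel_index(np.argmin(f), f.shape)
    return grid[i], grid[j]

def bootstrap_minima(lsm, dsm, fitness, n_boot=100, rng=np.random.default_rng(0)):
    """Refit the surface on runs resampled with replacement within each
    (LSM, DSM) level, to gauge sensitivity (cf. the gray dots in Fig. 10)."""
    strata = {}
    for i, key in enumerate(zip(lsm.tolist(), dsm.tolist())):
        strata.setdefault(key, []).append(i)
    minima = []
    for _ in range(n_boot):
        idx = np.concatenate([
            rng.choice(np.asarray(members), size=len(members))
            for members in strata.values()
        ])
        minima.append(surface_minimum(fit_quadratic(lsm[idx], dsm[idx], fitness[idx])))
    return minima
```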
V. DISCUSSION

This paper describes a novel approach to the flexible bay FLP. In particular, a novel chromosome encoding method for GAs is described. The concatenated chromosome encoding was shown in Section II to be superior to the published "state-of-the-art" NS encoding [10] over a wide range of problem difficulty; however, the difference is most pronounced for very difficult problems. The GA implementation (aside from the encoding method) was left quite "generic" in an attempt to make the comparison of the two encoding methods as fair as possible. However, GA performance is strongly influenced by the interaction of encoding and mutation/crossover methods and, to a lesser degree, by mutation/crossover parameters. This interaction is explored in later sections of the paper.

Four novel PSM methods were introduced. One PSM, LOI, was shown to have no detectable effect (and certainly no effect of practical significance at the levels of application examined). Another PSM, SWL, was shown to have a small effect on performance (∼2% for the most difficult problem examined). The LSM and DSM were shown to have a dramatic effect on GA performance. An additional series of runs was performed at the calculated optimal settings. The mean batched relative fitness of the GA with no PSM was 1.61; with optimal LSM and DSM settings, the mean batched relative fitness was 1.18, a 43% improvement.

The effect of encoding and PSMs is illustrated in Fig. 11. Each set of axes shows data for case 3 from 200 independent runs under different conditions. The top set of axes is for the NS encoding under a generic GA implementation. Performance here is abysmal: the result of a typical individual run is 4.0 times less fit than the optimum. The middle set of axes is for the CC encoding under a generic GA implementation. Results here are better (a typical single run is 2.6 times less fit than the optimum), but still unsuitable for industrial applications. The bottom set of axes highlights what can be accomplished when the interaction between the encoding method and mutation methods tailored to a particular problem is leveraged. It shows the distribution for the CC encoding using the optimal amount of LSM and DSM each generation, where a typical run for this difficult problem is about 1.6 times less fit than the optimum. However, resampling 383 independent single runs shows that for case 3, a practitioner is 99.9% likely to find a solution within 10% of the optimum in only 43 independent runs (about 30 min of computing time); to be 99.9% confident that a solution is within 5% of the optimum, 192 independent runs are required (about 145 min of computing time).
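The run counts above presumably follow from the per-run success rate estimated from the 383 resampled runs. For independent runs with per-run probability $\hat{p}$ of landing within the target of the optimum, the number of runs $k$ needed for confidence $1-\alpha$ is given by a standard calculation (not spelled out in the paper; $\hat{p}$, $\alpha$, and $k$ are our symbols):

$$1 - (1-\hat{p})^{k} \ge 1-\alpha \quad\Longleftrightarrow\quad k \ge \frac{\ln \alpha}{\ln(1-\hat{p})}.$$

With $\alpha = 0.001$, $k = 43$ corresponds to an implied $\hat{p} \approx 0.15$ for the 10% target, and $k = 192$ to $\hat{p} \approx 0.035$ for the 5% target.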
Fig. 11. Distribution of relative fitness over 200 runs (each) for case 3. (Top) NS encoding under a generic GA implementation. (Middle) CC encoding under a generic GA implementation. (Bottom) CC encoding using optimal amount of LSM and DSM each generation.
One might ask if the optimal settings for case 3 do well for other problems. To partially address this question, the CC encoding and the optimal LSM and DSM parameters for case 3 were applied to case 1 and case 2 for 200 runs each. For case 1, the relatively simple seven-department case, all 200 runs were at the global optimum. For case 2, the more complex 14-department case, 61% were within 2% of the global optimum (versus 29% for CC alone). Thus, while the optimal settings for case 3 may not be optimal for other problems, at least in these two cases they are acceptable.

The approach described here is a computationally efficient way to solve a variety of industrial problems, such as facility layout or VLSI design. The approach outlined here is able to quickly arrive at good solutions to problems as complex as, or more complex than, the problems addressed in previous literature [1], [8], [10], [12], [13], [19]–[21].

An opportunity for future work on this encoding and PSM approach lies with the way layers are encoded. The current CC implementation intentionally provides excess layer-encoding genes (the intention being not to place any a priori constraints on potential layouts). However, the "building block hypothesis" [4] would suggest that these excess genes hinder GA performance. An effective compromise might be to start with the excess layer-encoding genes early in the run and then remove "vestigial" genes as the layering structure of the problem emerges.
APPENDIX This appendix provides details of case 3 sufficient for other researchers to compare results with this paper. Fig. 12 provides the layout of the best known solution (fitness = 7288). Tables II–IV provide case 3 details, department areas and external cost flows, and interdepartmental flows, respectively.
TABLE IV INTERDEPARTMENTAL FLOWS
Fig. 12. Layout of the best known solution for case 3.
TABLE II CASE 3 DETAILS
TABLE III DEPARTMENT AREAS AND EXTERNAL FLOW COSTS
REFERENCES

[1] G. Armour and E. Buffa, "A heuristic algorithm and simulation approach to relative location of facilities," Manage. Sci., vol. 9, pp. 294–309, 1963.
[2] J. A. Bland, "Layout of facilities using an ant system approach," Eng. Optim., vol. 32, pp. 101–115, 1999.
[3] J. A. Chandy and P. Banerjee, "Parallel simulated annealing strategies for VLSI cell placement," in Proc. Int. Conf. VLSI Design, Bangalore, India, 1996, pp. 37–42.
[4] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[5] M. Mitchell, An Introduction to Genetic Algorithms. Cambridge, MA: MIT Press, 1996.
[6] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. Berlin, Germany: Springer-Verlag, 1996.
[7] J. A. Tompkins and J. M. Moore, Computer Aided Layout: A User's Guide, Facilities Planning and Design Monograph. Norcross, GA: IIE, 1978.
[8] M. Bazaraa, "Computerized layout design: A branch and bound approach," AIIE Trans., vol. 7, pp. 432–438, 1975.
[9] J. Bean, "Genetics and random keys for sequencing and optimization," ORSA J. Comput., vol. 6, pp. 154–160, 1994.
[10] B. A. Norman and A. E. Smith, "Random keys genetic algorithm with adaptive penalty function for optimization of constrained facility layout problems," in Proc. IEEE Conf. Evol. Comput., Indianapolis, IN, Apr. 13–16, 1997, pp. 407–411.
[11] C. Houck, J. Joines, and M. Kay, "A genetic algorithm for function optimization: A MATLAB implementation," North Carolina State Univ. Ind. Eng., Tech. Rep. 95-09, 1995.
[12] X. Tong, "SECOT: A sequential construction technique for facility design," doctoral dissertation, Dept. Ind. Eng., Univ. Pittsburgh, Pittsburgh, PA, 1991.
[13] C. T. Hardin and J. S. Usher, "Facility layout using swarm intelligence," in Proc. Swarm Intelligence Symp., 2005, pp. 424–427.
[14] C. G. Shaefer, "The ARGOT strategy: Adaptive representation genetic optimizer technique," in Genetic Algorithms and Their Applications: Proc. 2nd Int. Conf. Genetic Algorithms, 1987, pp. 50–58.
[15] G. Syswerda, "Uniform crossover in genetic algorithms," in Proc. 3rd Int. Conf. Genetic Algorithms, 1989, pp. 2–9.
[16] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, "A study of control parameters affecting online performance of genetic algorithms for function optimization," in Proc. 3rd Int. Conf. Genetic Algorithms, 1989, pp. 51–60.
[17] R. Parsons, S. Forrest, and C. Burks, "Genetic algorithms, operators, and DNA fragment assembly," Mach. Learn., vol. 21, no. 1–2, pp. 11–33, 1995.
[18] B. Efron and R. Tibshirani, An Introduction to the Bootstrap. New York: Chapman & Hall, 1993.
[19] M. Goetschalckx, "Computer-aided layout with the expanded spiral technique," Comput. Ind. Eng., vol. 9, suppl. 1, pp. 159–163, 1984.
[20] M. Goetschalckx, "An interactive layout heuristic based on hexagonal adjacency graphs," Eur. J. Oper. Res., vol. 63, pp. 304–321, 1992.
[21] M. Goetschalckx, "Decomposition methods for the facilities layout problem with shape constraints," presented at the 7th Industrial Engineering Research Conf. (IERC98), Banff, AB, Canada, May 9–10, 1998.
Neil H. W. Eklund (S'91–M'98) received the B.S. degree in psychology, the M.S. degree in human factors/experimental psychology, the M.S. degree in operations research and statistics, and the Ph.D. degree in engineering science from the Rensselaer Polytechnic Institute, Troy, NY, in 1991, 1998, and 2002, respectively. From 1993 to 1999, he was a Research Scientist at the Lighting Research Center. From 1999 to 2002, he was with the Network Planning Department, PSINet. Since 2002, he has been with the General Electric Global Research Center, Niskayuna, NY. Since 2005, he has also been an Adjunct Professor with the Engineering/CS Department, Graduate College of Union University, Schenectady, NY. He has worked on a wide variety of research projects, including early detection of cataract using intraocular photoluminescence, multiobjective bond portfolio optimization, and on-wing fault detection and accommodation in gas turbine aircraft engines. His current research interests include developing hybrid soft/hard computing approaches for real-world problems, particularly real-time monitoring, diagnostics, and prognostics.
Mark J. Embrechts (S’77–A’79–M’81) received the M.S. degree in electrical engineering from the University of Leuven, Leuven, Belgium, and the M.S. and Ph.D. degrees in nuclear engineering from Virginia Polytechnic Institute and State University, Blacksburg, in 1977, 1978, and 1981, respectively. From 1980 to 1983, he was with Los Alamos National Laboratory, Los Alamos, NM. In 1983, he joined the Department of Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY. Currently, he is an Associate Professor with the Rensselaer Polytechnic Institute. He has published more than 150 conference and journal papers and coauthored Exchange Rate Theory (Blackwell, 1993). His current areas of interest include neural networks, evolutionary computing, fuzzy logic, data mining, data fusion for intrusion detection, and applications of soft computing to data mining, biotechnology, and molecular drug design.
Marc Goetschalckx received the M.S. degree in mechanical engineering from the University of Leuven, Leuven, Belgium, the M.E. degree in operations research from the University of Florida, Gainesville, and the Ph.D. degree in industrial engineering from the Georgia Institute of Technology, Atlanta, in 1977, 1979, and 1983, respectively. He is currently an Associate Professor with the School of Industrial and Systems Engineering, Georgia Institute of Technology. He has published more than 100 conference and journal papers and book chapters on the design and management of industrial logistics systems. He has authored three software packages for design of facilities and vehicle routing. His current areas of interest include facilities design, material handling, warehousing design and operation, distribution systems design, and robust global supply chain design.