2D Packing Problem with Irregular Shapes

Juan Miguel Lavista Ferres
Inter-American Development Bank
1300 New York Ave. NW, Washington, DC 20577
(202) 623 3416
[email protected]

Gurpreet Singh
Johns Hopkins University
6810 Deerpath Road, Suite 200, Elkridge, MD 21075
(410) 540 2960
[email protected]
ABSTRACT
Data representation and the methodology used to solve a problem play a major role in the results of an evolutionary algorithm, or of any algorithm for that matter. Our first contribution in this paper is to illustrate how much data representation and methodology matter, and how they can impact the performance of an algorithm, using the two-dimensional pattern cutting problem as an example. To do this, we compare two genetic algorithms that use different data representations and methodologies. The results indicate that the choice of data representation and methodology does in fact impact performance in a significant way. Our second contribution is a fitness function which, working together with a diversity control mechanism, enables better results for the problem being studied.
Categories and Subject Descriptors
I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search
General Terms Algorithms
Keywords Evolutionary Computation, Genetic Algorithms, 2D Packing Problem, Irregular Shapes, Diversity Control
1. INTRODUCTION
We believe that data representation and the methodology used to solve a problem play a major role in the results of an evolutionary algorithm, or of any algorithm for that matter. A primary objective of this paper is therefore to remind the evolutionary algorithms research community that, when comparing algorithms, it is extremely important to make the competition as fair as possible when reporting experimental results, in order to minimize bias. To illustrate the point, we compare the results of two different data representations and methodologies, both using evolutionary techniques, on the two-dimensional pattern cutting problem.

We hypothesize that our two approaches will produce significantly different results in terms of the number of evaluations required to solve the problem efficiently. Furthermore, we compare our algorithms with the results of the related work.

2. GENERAL PROBLEM DESCRIPTION
Our version of this problem is modeled after the textile industry. Textile companies take large spools of fabric and cut them into the patterns needed to produce garments. Many fabrics must be cut to the client's requirements, and these patterns are often irregular in shape. To solve this problem we need to take the designer's blueprints and turn them into a representation our algorithms can work with. Following related work in the field, we discretize the designer's blueprint of the pattern into a simpler representation, as illustrated in Figure 1 below. If more precision is needed, we can simply reduce the size of each square unit; the tradeoff is more processing time to obtain finer patterns for the manufacturing process.
Figure 1. Discretizing irregular patterns

The above describes the general problem we tackle. In real-world problems the spools of fabric are very large, the number of patterns is very large, the patterns are complex and have different areas, and there are many restrictions on pattern distribution. In our particular case we simplify the problem by using smaller spools of fabric, a smaller set of patterns, simple patterns of equal area, and limited restrictions on pattern distribution. The objective is to minimize the amount of wasted space on a spool of fabric by efficiently placing as many simple patterns as possible.
In our research we came across two different methodologies for solving this problem. The first, and more obvious, starts with an empty piece of fabric and adds patterns to it until no more can be added because of the inevitable conflicts. The second starts with a fabric that already has many patterns placed on it (overlapping allowed) and then mutates patterns, changing their orientation or their type, to reduce conflicts. In the first approach (the GS solution) a solution is built up from scratch; in the second approach (the JM solution) an overfilled solution is reduced to a workable one. The two approaches and the problem are described in more detail below.
3. RELATED WORK
The problem we are trying to solve is a packing problem [1]. The general idea is to determine how to put the most objects into the smallest number of fixed-size bins. There are many variants: 3D, 2D, packing by volume, packing by weight, fixed orientation, rotation allowed, and so on. The problem appears in Very Large Scale Integration (VLSI) systems, manufacturing, glass cutting, box packing, and elsewhere. The most common variant is the 2D packing problem where the objects are rectangular. The problem has been solved with greedy algorithms, heuristics such as tabu search [6], and genetic programming, and it is so common and so widely used that dedicated software exists to solve it (e.g., www.astrokettle.com). There is also the survey by Edward G. Coffman, Jr., János Csirik, David S. Johnson, and Gerhard J. Woeginger [4], an introduction to bin packing whose inventory of research in this field includes 50 pages of references. Despite the amount of research in the field, we did not find many papers that deal with irregular shapes; [3] and [9] are among the few that solve the problem with irregular shapes instead of rectangles.
4. PROBLEM BASICS
4.1 Patterns
The shapes we use are similar to the basic Tetris shapes, but each has an added block so that they do not all fit together as easily. These are not complex irregular figures, but they work well for this first research effort. The exact patterns are depicted in Figure 2. Keep in mind that each shape also has horizontal and vertical orientations: the first pattern in Figure 2 is shown in its eight horizontal and vertical orientations. A sketch of how these orientations can be generated follows Figure 2.
Figure 2. Patterns and Orientations
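One way to obtain the eight orientations is to take the four 90-degree rotations of a pattern together with their mirror images. The sketch below assumes each pattern is stored as a set of (row, col) unit squares; the coordinates used in the example are hypothetical and not taken from Figure 2.

def normalize(cells):
    """Shift cells so the smallest row and column become zero."""
    min_r = min(r for r, _ in cells)
    min_c = min(c for _, c in cells)
    return frozenset((r - min_r, c - min_c) for r, c in cells)

def orientations(cells):
    """Return the distinct orientations (up to eight) of a pattern:
    the four 90-degree rotations and their mirror images."""
    variants = set()
    current = set(cells)
    for _ in range(4):
        current = {(c, -r) for r, c in current}   # rotate by 90 degrees
        mirrored = {(r, -c) for r, c in current}  # mirror horizontally
        variants.add(normalize(current))
        variants.add(normalize(mirrored))
    return list(variants)

# Example: a five-block L-shaped pattern (hypothetical coordinates).
l_pattern = {(0, 0), (1, 0), (2, 0), (3, 0), (3, 1)}
print(len(orientations(l_pattern)))               # prints 8 for this shape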
4.2 Pattern Distribution
To make our problem more like the real world, we set a required pattern distribution for one of the test runs. We did this by taking the grid size and defining the optimal number of pieces that can be placed on it. For example, the 10x10 grid has 100 blocks and our patterns have size 5, so a total of 20 patterns can be placed on the grid; we therefore required that each of the 4 patterns be used at most 5 times, for a total of 20 patterns. This requirement specifies the number of pieces of each pattern that would be optimal for a given spool of fabric, and it makes our problem similar to the real world, where spools are used to make clothing and the number of pieces required of each pattern is specified exactly.
4.3 Genetic Algorithm Basics
We use a steady-state model [5]: in each generation we replace a percentage x of the old individuals with new offspring, where the generational gap is x = λ / µ. In other words, each generation we create x percent new children and remove x percent of the existing population to make room for them in the fixed-size population. This percentage is the selection rate and is set separately by the two approaches to achieve the best performance of each; it was chosen through empirical testing and preliminary results, and the individual selection rates are discussed in the respective approach descriptions.

To select two parents to mate, we combine a ranking selection [5] with a random selection: the first parent is chosen by ranking selection, and the second parent is chosen uniformly at random from the entire population. If the two parents happen to be the same individual, we replace the second parent with the next individual in the fitness-ordered set.

The population size must also be set. We first tested our genetic algorithms with populations of size 100; because the results were poor we increased the population size and found that sizes above 1000 yielded little improvement, so we settled on a population of 1000 for both approaches. For replacement we use the same ranking selection [5] used to pick the first parent, but now individuals with worse fitness are given a higher probability of being selected. At some point in the future we would like to create parameter-less versions of these algorithms, as suggested by Lobo [7].

Note: as Brizuela observed in his dissertation [2], one of the most time-consuming tasks is finding appropriate values for the genetic algorithm parameters, including the population size, selection rate, and mutation rate. We spent a lot of time testing and fine-tuning these parameters as well; although our parameter values are not perfect, they gave good preliminary results compared with the other values we tested.
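The following is a minimal sketch of the steady-state loop described above, assuming the population is sorted so that lower fitness is better, linear ranking selection for the first parent, a uniformly random second parent, and inverse ranking for replacement. The helper make_child stands in for the approach-specific recombination and mutation operators described later; these identifiers are ours, not the paper's.

import random

def rank_select(population, worst_first=False):
    """Linear ranking selection over a population sorted best to worst.
    With worst_first=True the worst individuals are the most likely picks,
    which is how replacement victims are chosen."""
    n = len(population)
    weights = list(range(n, 0, -1))                # best gets weight n, worst gets 1
    if worst_first:
        weights.reverse()
    return random.choices(range(n), weights=weights, k=1)[0]

def steady_state_generation(population, fitness, make_child, selection_rate):
    """One generation: create selection_rate * len(population) offspring,
    each replacing an individual chosen by inverse ranking selection."""
    population.sort(key=fitness)                   # lower fitness is better
    for _ in range(int(selection_rate * len(population))):
        first = rank_select(population)
        second = random.randrange(len(population))
        if second == first:                        # same parent picked twice:
            second = (first + 1) % len(population)  # take the next one in rank order
        child = make_child(population[first], population[second])
        population[rank_select(population, worst_first=True)] = child
    return population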
4.4 Fitness Function
Initially, we started with a basic fitness function that simply counted the number of wasted blocks [8]. After using this fitness function in empirical testing, it became clear that it would take the algorithms a very long time to reach a good solution. Analyzing the results, including the best individuals of the final generation, we realized the problem was that the empty spaces were dispersed across the grid, so no additional pieces could be placed. This suggested modifying the fitness function so that the remaining empty spaces, or equivalently the placed patterns, are kept together, and we therefore added a penalty for individuals that do not keep the empty spaces together.

Our final fitness function, the one actually used during our testing, adds what we call a free-point distribution. The count of empty spaces is still the most important feature and is given more weight by multiplying it by 1000. The free-point distribution measures the distance between the empty (free) spaces on the grid: it is calculated by adding up, for each free space, its distance to every other free space using the x and y differentials. Figure 3 illustrates how the new and old fitness functions work and how they differ, and Figure 4 shows the performance difference when using the new fitness versus the old one; the new fitness clearly makes a difference. A sketch of the calculation is given after Figure 4.

Figure 3. Free-Point Distribution

Figure 4. FPD fitness vs. simple waste fitness
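A minimal sketch of this calculation (lower fitness is better), assuming the grid is a list of rows with -1 marking an empty cell and that the pairwise distance is the sum of the x and y differentials (Manhattan distance); the exact metric is not pinned down above, so that choice is illustrative.

def fitness(grid, empty=-1, empty_weight=1000):
    """Free-point-distribution fitness sketch: weighted count of empty cells
    plus the summed pairwise distance between empty cells (lower is better)."""
    free = [(r, c) for r, row in enumerate(grid)
                   for c, cell in enumerate(row) if cell == empty]
    waste = len(free) * empty_weight               # empty-cell count, weighted by 1000
    spread = sum(abs(r1 - r2) + abs(c1 - c2)       # distance between every pair of free cells
                 for i, (r1, c1) in enumerate(free)
                 for (r2, c2) in free[i + 1:])
    return waste + spread

# A 3x3 grid with two scattered empty cells scores worse (higher) than one
# whose two empty cells are adjacent.
scattered = [[1, 1, -1], [1, 1, 1], [-1, 1, 1]]
adjacent = [[1, 1, -1], [1, 1, -1], [1, 1, 1]]
print(fitness(scattered) > fitness(adjacent))      # True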
4.5 Diversity Control
Diversity control is used to ensure that the population stays diverse throughout the course of evolution. It was needed in our experiment because, during preliminary testing, the algorithms kept prematurely converging on non-optimal solutions. We created our own form of diversity control that examines the individuals and kills off those that have the same fitness.
Because the fitness includes the free-point-distribution calculation, individuals with identical fitness are very likely to have exactly the same grid, i.e., the same solution. This is not always the case, so we allow up to ten individuals with the same fitness score to remain in the population; the eleventh and any further duplicates are automatically killed off and replaced with newly initialized individuals. How individuals are initialized is described later in the respective approach descriptions. A sketch of this mechanism is given after Figure 5.

Figures 5 and 6 show how diversity control impacts our experiment. In the first chart (Figure 5), fitness is plotted on the y-axis against the number of evaluations on the x-axis. At approximately 45,000 evaluations the genetic algorithm without diversity control prematurely converges on a non-optimal solution, whereas the genetic algorithm with diversity control keeps improving and ends up at a better solution. To further illustrate the effect, the second chart (Figure 6) shows that the number of unique individuals falls off sharply right when that convergence takes place. The run with diversity control also shows a sharp fall-off, but it is cut short by the diversity control mechanism we added.

Figure 5. Fitness with and without diversity control (10x10 grid, crossover rate 20%, mutation rate 1%)
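A minimal sketch of this culling step, assuming helpers fitness and new_individual (the approach-specific initialization routine); the cap of ten copies per fitness value follows the rule described above, and all names are ours.

from collections import Counter

def enforce_diversity(population, fitness, new_individual, max_copies=10):
    """Replace individuals whose fitness value is already held by max_copies
    others with freshly initialized individuals (the replacements are not
    re-checked within this pass in this sketch)."""
    seen = Counter()
    for i, individual in enumerate(population):
        f = fitness(individual)
        seen[f] += 1
        if seen[f] > max_copies:
            population[i] = new_individual()
    return population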
Figure 6. Number of individuals with different fitness, with and without diversity control

5. APPROACH #1 – BUILDING UP – GS SOLUTION
This approach starts with a clean slate, an empty fabric, and adds patterns to it in a way that does not create conflicts. The process is repeated until it is not possible to add any more patterns.

5.1 Data Representation
The fabric is represented as a matrix initialized to -1s. As patterns are added to the fabric, each is assigned an auto-incrementing number, which is used to fill in the blocks of the matrix corresponding to the blocks of fabric taken up by that pattern. In addition, a vector keeps track of the patterns and their details.

5.2 Initialization
The population is initialized with a few random patterns so that it does not have to start from scratch and take longer to fill in the fabric. The actual number of patterns for each test was picked to provide optimal performance for this particular genetic algorithm: two patterns for the first test and five for the second and third tests. By random patterns we mean random pattern types with random horizontal and vertical orientations. A spot on the matrix is chosen at random until an available one is found, the pattern type is then selected at random, and finally the horizontal and vertical orientations are selected at random. If the pattern cannot be placed, the process is simply repeated until the desired number of patterns is on the initial fabric.

5.3 Selection Rate
This parameter, like all the others, was set based on empirical testing and the results it produced. We believe the best value for the selection rate for this approach is fifty percent: with a population size of 1000, we generate 500 new offspring each generation.

5.4 Recombination
The recombination operator this approach uses is uniform crossover.
Figure 7. Recombination – Approach #1

The new child is created by randomly picking one pattern from parent A and attempting to place it on the new child's grid; once a pattern that fits is found, we move on to parent B, and the process continues, alternating between the parents, until no more patterns can be placed on the grid. This lets each parent contribute to the child, prevents conflicts from arising, and gives each parent a random chance of passing on good or bad traits. Because always starting with parent A would introduce a slight bias, we randomly select which parent goes first each time a new child is created. Figure 7 illustrates how two parents recombine to create a new child: the left parent was randomly selected to go first, and while both parents contributed, the one that went first contributed more. A sketch of this procedure is given below.
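The procedure can be sketched as follows, assuming helper functions can_place, place, and empty_grid over the matrix representation; these names, and the detail of removing a pattern from a parent's pool once it has been placed, are our own illustrative choices.

import random

def recombine(parent_a, parent_b, can_place, place, empty_grid):
    """Build-up recombination sketch: alternate between the parents, each
    time trying that parent's remaining patterns in random order until one
    fits on the child's grid; stop once neither parent can contribute."""
    child = empty_grid()
    pools = [list(parent_a), list(parent_b)]       # each parent as a list of its patterns
    random.shuffle(pools)                          # avoid always starting with parent A
    turn, failures = 0, 0
    while failures < 2:                            # stop after both parents fail in a row
        pool = pools[turn % 2]
        random.shuffle(pool)
        for pattern in pool:
            if can_place(child, pattern):
                place(child, pattern)
                pool.remove(pattern)
                failures = 0
                break
        else:                                      # no pattern from this parent fits
            failures += 1
        turn += 1
    return child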
5.5 Mutation
The mutation step attempts to add one new pattern to the new child. A spot on the matrix is chosen at random until an available one is found, the pattern type is then selected at random, and finally the horizontal and vertical orientations are selected at random. This process is repeated until one new pattern is successfully added or the attempt fails five times. The five-failure limit is in place in case the matrix is so full that no additional pattern can be placed, and the value of five was chosen through empirical testing: we first tested with five and had decent results, but when we tested with a larger limit the algorithm suffered a major performance hit, so we kept the smaller limit. In addition, we set the mutation rate to twenty percent based on a similar empirical test and the results it produced.

Figure 8. Mutation – Approach #1
6. APPROACH #2 – BUILDING DOWN – JM SOLUTION
This approach starts with a fabric that is overflowing with patterns and then rotates or changes existing patterns to reduce the number of conflicts. The process is repeated until a perfect solution is achieved.
6.1 Data Representation
The fabric is represented as a matrix initialized to -1s. As patterns are added to the fabric, they are recorded on the matrix using their pattern type and their horizontal and vertical orientation. The pattern type is recorded with one alpha character (A, B, C, or D) and the orientation is stored as a second numeric character (1 through 8). This representation stores only one square (the red block in Figure 9) per pattern.

Figure 9. Representation – Approach #2

6.1.1 Conflicts between patterns
This data representation was specifically designed to allow conflicts between patterns in every square of a pattern except its storage block. The final solution does not allow any conflicts; they are resolved during the rendering process. A sketch of the conflict count is given after Figure 10.

Figure 10. Conflicts
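A minimal sketch of a conflict count over this representation, assuming the anchors are kept in a dictionary from grid position to a (type, orientation) code and that a lookup table gives the cells each code covers; the tiny two-cell pattern in the example is hypothetical and only stands in for the real shapes of Figure 9.

from collections import Counter

def count_conflicts(anchors, pattern_cells, height, width):
    """Return how many fabric squares are claimed more than once, or fall
    outside the fabric, when every pattern is rendered from its anchor cell."""
    covered = Counter()
    conflicts = 0
    for (r, c), code in anchors.items():
        for dr, dc in pattern_cells[code]:
            rr, cc = r + dr, c + dc
            if not (0 <= rr < height and 0 <= cc < width):
                conflicts += 1                     # square falls off the fabric
            else:
                covered[(rr, cc)] += 1
    conflicts += sum(n - 1 for n in covered.values() if n > 1)
    return conflicts

# Hypothetical two-cell "pattern" keeps the offset table small for the example.
pattern_cells = {("A", 1): [(0, 0), (0, 1)]}
anchors = {(0, 0): ("A", 1), (0, 1): ("A", 1)}     # the two placements overlap on one square
print(count_conflicts(anchors, pattern_cells, 5, 5))   # prints 1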
6.2 Initialization
The population is initialized with many random patterns so that more than 100% of the initial fabric is covered, overlapping included. The actual number of patterns for each test was picked to provide optimal performance for this particular genetic algorithm; for all three tests it was set to one hundred patterns. By random patterns we mean random pattern types with random horizontal and vertical orientations. A spot on the matrix is chosen at random, the pattern type and the horizontal and vertical orientations are selected at random, the type and orientation are recorded in that space, and the process is repeated until the desired number of patterns is on the initial fabric.
6.3 Selection Rate
This parameter, like all the others, was set based on empirical testing and the results it produced. We believe the best value for the selection rate for this particular approach is twenty percent. This means that if we have a population size of 1000, then we will have 200 new offspring each generation.
6.4 Recombination
For recombination we use uniform crossover [5]: each pattern is treated independently, and a random choice is made as to which patterns are passed on to the child and which are not.
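A minimal sketch, assuming the same dictionary-of-anchors representation as in the earlier sketches; the paper does not spell out how the two parents' pattern sets are aligned, so treating each anchor position independently is our own illustrative choice.

import random

def uniform_crossover(parent_a, parent_b):
    """Uniform crossover sketch on the anchor-cell representation: for every
    anchor position present in either parent, flip a fair coin to decide which
    parent's entry (possibly no pattern at all) the child inherits."""
    child = {}
    for pos in set(parent_a) | set(parent_b):
        source = parent_a if random.random() < 0.5 else parent_b
        if pos in source:                          # the chosen parent may hold no pattern here
            child[pos] = source[pos]
    return child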
6.5 Mutation
For mutation we assign a probability n to each of the patterns on the grid. We use two types of mutation: mutation in type and mutation in orientation. A type mutation changes the first alpha character to another valid one, which changes the pattern type; an orientation mutation changes the second numeric character, which changes the orientation of the pattern. After extensive empirical testing we settled on a mutation rate of one percent; higher mutation rates turn out to be very destructive, and the offspring they create have very bad fitness. A sketch of both mutations is given after Figure 11.

Figure 11. Mutation – Approach #2
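A minimal sketch of the two mutations on the anchor-cell representation; the even split between type and orientation mutations, and all identifiers, are our own assumptions.

import random

TYPES = ["A", "B", "C", "D"]
ORIENTATIONS = list(range(1, 9))

def mutate(anchors, rate=0.01):
    """Mutate each placed pattern with probability rate, either swapping its
    type letter or its orientation digit for another valid value. anchors maps
    a grid position to a (type, orientation) pair, as in the earlier sketches."""
    for pos, (ptype, orient) in anchors.items():
        if random.random() >= rate:
            continue
        if random.random() < 0.5:                  # mutation in type
            ptype = random.choice([t for t in TYPES if t != ptype])
        else:                                      # mutation in orientation
            orient = random.choice([o for o in ORIENTATIONS if o != orient])
        anchors[pos] = (ptype, orient)
    return anchors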
7. EXPERIMENTAL DESIGN
For our experiment we used the simple patterns described above and attempted to pack them onto large rectangular pieces of fabric of the two sizes described above. Furthermore, we tested with and without the pattern distribution requirement discussed above. This resulted in a total of three tests: a 5x5 grid with free distribution, a 10x10 grid with free distribution, and a 10x10 grid with fixed distribution. Each of the three tests was run thirty times on both algorithms (the JM solution and the GS solution) in order to get meaningful results [10]. After running the experiments we used the average, the minimum, the maximum, the standard deviation, and other statistics [10] of the thirty runs for each case to compare the algorithms. The primary goal was to show that the two genetic algorithm approaches yield statistically different [10] results even though both are evolutionary techniques. In addition, we looked at whether the increased fabric size made any difference in the performance of the algorithms.

Note: before we began our experiment we tried to tune the GA parameters described above to create optimal performance for both algorithms. When we found that tweaking a particular parameter one way or the other gave one algorithm an unfair advantage over the other, we let each approach set that parameter to the value that produced its own best performance.
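A minimal sketch of that comparison, assuming SciPy is available; the run values below are placeholders for illustration, not our measured results.

from statistics import mean, stdev
from scipy.stats import ttest_ind

jm_runs = [1.1, 1.3, 1.2, 1.4, 1.3] * 6            # placeholder for 30 JM final-fitness values
gs_runs = [2.3, 2.6, 2.4, 2.5, 2.6] * 6            # placeholder for 30 GS final-fitness values

print(mean(jm_runs), stdev(jm_runs), min(jm_runs), max(jm_runs))
print(mean(gs_runs), stdev(gs_runs), min(gs_runs), max(gs_runs))

t_stat, p_value = ttest_ind(jm_runs, gs_runs, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")       # a small p-value: the approaches differ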
Table 1. Test results

Final fitness (additional patterns needed)   JM Solution   GS Solution
  5x5, free distribution                         0.0           0.23
  10x10, free distribution                       1.28          2.48
  10x10, fixed distribution                      2.28          3.48

Success rate                                 JM Solution   GS Solution
  5x5, free distribution                         100%          80%
  10x10, free distribution                       0%            0%
  10x10, fixed distribution                      0%            0%
8. RESULTS AND ANALYSIS
This section reviews the results of our experiment. Before analyzing them we define the terms used as the basis for comparison: success rate, evaluations, and fitness.

The success rate is the percentage of runs that found a perfect solution. A perfect solution is one in which the entire grid is covered with patterns, or in which it is not possible to place any more patterns (the remaining area is smaller than every available pattern). For example, if fifteen of the thirty runs produce perfect solutions, the success rate is 50%.

Instead of the number of generations (the traditional measure for genetic algorithms) we use the number of evaluations, defined as the number of times the conflict-check function is called. We use evaluations rather than generations because the two approaches have different selection rates, mutation rates, and methodologies, which would give one algorithm an advantage over the other if we measured generations alone. Because the conflict check is at the very heart of each algorithm, evaluations are an unbiased measure of the time and computational power used by both algorithms, and they let us compare how quickly the algorithms converge on the perfect solution. Finally, we use fitness to compare the algorithms when a perfect solution is not reached, so we can compare the progress made within the 100,000 evaluations. Table 1 translates the raw fitness score into a more manageable number: the number of additional patterns needed to achieve a perfect solution. This figure is low for a good fitness and high for a bad one; a translated fitness of zero means a perfect solution, with zero additional patterns required.

8.1 5x5 Grid with Free Distribution and Diversity Control
For the first test we used a 5x5 grid with free distribution and diversity control. The results for this test were good for both genetic algorithm approaches.

Figure 12. 5x5 solution

Figure 13 shows how the fitness improves as the number of evaluations goes up for both approaches. The algorithms run head to head until about 7,500 evaluations. At that point the GS solution begins to converge on a non-optimal solution, whereas the JM solution keeps improving and converges on a perfect solution. Keep in mind that the chart shows the average fitness over the thirty runs at each evaluation step: the GS solution did in fact find the perfect solution many times, but its average cannot hit zero because it only had an 80% success rate, whereas the JM solution had a 100% success rate, so its average touches zero once the last run is solved.

Figure 13. 5x5, free distribution: JM vs. GS approaches (fitness vs. evaluations)
8.2 10x10 Grid with Free Distribution and Diversity Control
For the second test we used a 10x10 grid with free distribution and diversity control. As expected, the results were not as good as for the smaller grid. Figure 15 shows how the fitness improves as the number of evaluations goes up for both approaches. The GS solution does better in the beginning but then quickly converges on a sub-optimal solution. The JM solution does not work as fast initially, but it is slow and steady and wins the race; it also converges on a sub-optimal solution, but a better one than the GS solution. Again, both solutions are better than a greedy algorithm's solution, but they take orders of magnitude more evaluations to achieve.

Figure 14. 10x10 solution

Figure 15. 10x10, free distribution: JM vs. GS approaches (fitness vs. evaluations)

8.3 10x10 Grid with Fixed Distribution and Diversity Control
For the third test we used a 10x10 grid with fixed distribution and diversity control. As expected, the results were not as good as for the test without the fixed-distribution requirement. Figure 16 shows how the fitness improves as the number of evaluations goes up for both approaches. Once again the GS solution does better in the beginning but quickly converges on a sub-optimal solution, while the JM solution starts more slowly and ends at a better sub-optimal solution. It is worth noting that the performance here is one pattern worse than it was without the fixed distribution.

Figure 16. 10x10, fixed distribution: JM vs. GS approaches (fitness vs. evaluations)

9. COMPARISONS TO RELATED WORK
The main related work is Two-Dimensional Packing For Irregular Shaped Objects by Ping Chen, Zhaohui Fu, Andrew Lim, and Brian Rodrigues [3]. That paper addresses the 2D packing problem with irregular shapes. The authors ran several experiments; the only ones we could replicate are the following. They use a single pattern, a T-shape of four blocks, in three different tests: the first on a 10 by 8 grid, the second on a 12 by 8 grid, and the third on an 11 by 9 grid. For the first and second experiments [3] uses genetic algorithms, and for the third they use tabu search [6].
Figure 17. Left: the solution from [3]; right: our solution

The 10 by 8 problem was originally proposed by Sakait and Hae [9]. The authors of [3] took the same problem and obtained somewhat better results; their best solution is depicted on the left of Figure 17. We ran a similar experiment for comparison purposes and were happy to see that we reached the same solution, shown on the right of Figure 17. The authors' measure is based on utilization, in this case 76 blocks / 80 blocks = 95% utilization, and we obtained the same result. Their second test is similar but uses a 12 by 8 grid instead of 10 by 8; in this experiment the authors [3] achieve 83% utilization. Unfortunately they did not provide an image of their result. Again, we ran the very same experiment for comparison purposes and were delightfully surprised to achieve 100% utilization. Our solution is shown in Figure 18.

Figure 18. 12x8, 100% utilization

The third problem is also similar, but uses an 11 by 9 grid, and instead of genetic algorithms the authors use tabu search [6], achieving 77% utilization. Our result, shown in Figure 19, wastes only 7 blocks, which translates into 93% utilization, much better than their tabu search [6] implementation.

Figure 19. 11x9, 93% utilization

The authors of [3] define their fitness function in terms of the number of edges and the wasted area. They argue that this measurement is more accurate than the one used by Sakait and Hae [9], the sum of the area of the minimum bounding rectangle and the moment of inertia, because it also takes the number of edges into consideration. From the empirical point of view, our approach obtains the same result for the 10 by 8 grid and much better results for the 12 by 8 and 11 by 9 grids. They mention in their paper [3] that "The inner holes will be ignored. The rationale is that the lesser the edge number is, the better the packing is." Our approach addresses the inner holes and does not ignore them. From our point of view, taking care of the inner and outer holes is key to good results: our fitness function tries to keep the inner holes together so that another pattern can be placed later.

10. CONCLUSIONS
We started off by hypothesizing that data representations and methodologies play a major role in the performance of evolutionary techniques, and we claimed that using two different sets of data representations and methodologies to solve the same problem would produce significantly different results. During the course of this work we outlined two different approaches to solving the two-dimensional pattern cutting problem using evolutionary techniques, tested the two approaches on three different tests, and used a t-test to show statistically that the two algorithms do in fact produce different results. Along the way we believe we have made at least one significant and noteworthy contribution to the field of two-dimensional pattern cutting: a new fitness function based on a concept we call free-point distribution. The use of this fitness function has allowed our results to improve on those of our colleagues.

Finally, we would like to conclude by saying that data representations and methodologies do play a major role in the results of an evolutionary algorithm. Again, we would like to remind the research community that when one compares algorithms it is extremely important to make the competition as fair as possible when providing the world with experimental results.

11. ACKNOWLEDGMENTS
Our thanks to Professor John Shepard and to Luciana Cavestany for their help.
12. REFERENCES
[1] T. Bäck, D. B. Fogel, and Z. Michalewicz. Evolutionary Computation.
[2] C. A. Brizuela Rodríguez. Genetic Algorithms for Shop-Scheduling Problems: Partial Enumeration and Stochastic Heuristics, 2000.
[3] P. Chen, Z. Fu, A. Lim, and B. Rodrigues. Two-Dimensional Packing for Irregular Shaped Objects, 2002.
[4] E. G. Coffman, Jr., J. Csirik, D. S. Johnson, and G. J. Woeginger. An Introduction to Bin Packing.
[5] A. E. Eiben and J. E. Smith. Introduction to Evolutionary Computing. Springer, 2003.
[6] A. Hertz and D. de Werra. The tabu search metaheuristic: how we used it. Annals of Mathematics and Artificial Intelligence.
[7] F. G. Lobo. The Parameter-less Genetic Algorithm: Rational and Automated Parameter Selection for Simplified Genetic Algorithm Operation. Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign.
[8] J. Puchinger, G. R. Raidl, and G. Koller. Solving a Real-World Glass Cutting Problem. Institute of Computer Graphics and Algorithms, Vienna University of Technology, Vienna, Austria.
[9] J. Sakait and C. Hae. Two-dimensional packing problems using genetic algorithms, 1998.
[10] R. L. Scheaffer and J. T. McClave. Probability and Statistics for Engineers.