Title: Optimization Techniques for Engineering Design
Abstract: This paper discusses the popular evolutionary optimization techniques Genetic Algorithm (GA) and Teaching-Learning-Based Optimization (TLBO). It also covers the definitions of the various parameters used by these algorithms.

I. INTRODUCTION

Most engineering design problems are competing multi-objective problems in which the optimal values of the design variables are sought to optimize several objectives for a given set of constraints. The methods available to formulate a multi-objective problem as a single-objective problem include the weighted global criterion method, the weighted sum method, the lexicographic method, the weighted min-max method, the exponential weighted criterion, the weighted product method, goal programming methods, the bounded objective function method, and physical programming (Marler and Arora, 2004). The weighted sum approach is the most widely used: a normalized objective function is formulated by assigning appropriate weighting factors to all the objectives. By selecting different values of the weighting factors for the objectives, a set of optimum solutions is obtained, and each solution in this set is a trade-off between the different objectives (Marler and Arora, 2010).

A constrained optimization problem is considered more complex than an unconstrained one. Solving it means finding a feasible solution that optimizes one or more mathematical functions in a constrained search space. The constrained optimization problem is transformed into an unconstrained one by modifying the objective function on the basis of the constraint violations. The constraint violations are used to penalize infeasible solutions so as to favor feasible ones. The constraints are normally handled as penalty terms, such as static, dynamic, or adaptive penalties added to the objective function. Various constraint-handling techniques have been suggested, such as the superiority of feasible solutions (SF) (Deb, 2000), the stochastic ranking technique (SR) (Runarsson and Yao, 2005), the ε-constraint technique (EC) (Takahama and Sakai, 2006), the self-adaptive penalty approach (SP) (Tessema and Yen, 2006), and ensembles of constraint-handling techniques (Montes and Coello, 2005; Mallipeddi and Suganthan, 2010).

After the optimization problem has been formulated, it can be solved using either traditional or evolutionary optimization algorithms. The traditional (classical) optimization algorithms are based on a deterministic approach, i.e., they use gradient information of the objective function with respect to the design variables and move from one solution to another following specific rules. Depending on the starting solution, these algorithms may end up at a local optimum. Therefore, one has to explore all the local optimum solutions, one of which is the global optimum. To improve the chances of finding the global optimum, these algorithms require a large set of randomly generated initial solutions; the global optimum is then taken as the best of all the local optima found by the different runs of the algorithm. The popular methods in this category are quadratic programming, the steepest descent method, linear programming, nonlinear programming, dynamic programming, geometric programming, etc. For complex optimization problems with a large number of design variables and multiple local minima, these methods converge to the optimum nearest the supplied initial solution and thus produce a local optimum (Marler and Arora, 2004; Mariappan and Krishnamurty, 1996). These techniques are generally not suitable for optimization problems with (1) a large number of constraints, (2) a large number of design variables, (3) multiple objectives, (4) multi-modality, and (5) non-differentiability. A function is multimodal if it has two or more local optimum solutions in the design space. A function is regular if it is differentiable at every point of its domain. The traditional optimization methods require gradient information and are therefore not useful for non-differentiable functions.

Evolutionary (advanced) optimization techniques are stochastic in nature, and the optimum solution is searched for following probabilistic transition rules. These algorithms mimic natural evolutionary principles and start with a set of solutions, known as the population, to search for the optimum through parallel computation. It is therefore advantageous to use these techniques to find the global optimum with less computational effort for large and difficult optimization problems. The popular techniques in this category are: Genetic Algorithm (GA), Simulated Annealing (SA), Particle Swarm Optimization (PSO), Biogeography-Based Optimization (BBO), Ant Colony Optimization (ACO), Differential Evolution (DE), Grey Wolf Optimizer (GWO), Fireworks
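The formulation steps discussed above can be sketched concretely. The following Python sketch is a minimal illustration, not from the paper: the two objective functions, the constraint, the equal weights, and the penalty factor are all invented for the example. It scalarizes a constrained two-objective problem with the weighted sum method, adds a static penalty for the constraint violation, and then runs a crude multi-start random local search that keeps the best of the local optima reached from several random initial solutions.

```python
import random

# Hypothetical two-objective constrained design problem (illustrative only):
#   minimize f1(x) = x1^2 + x2^2
#   minimize f2(x) = (x1 - 2)^2 + (x2 - 2)^2
#   subject to g(x) = x1 + x2 - 3 <= 0
def f1(x): return x[0] ** 2 + x[1] ** 2
def f2(x): return (x[0] - 2) ** 2 + (x[1] - 2) ** 2
def g(x):  return x[0] + x[1] - 3.0          # feasible when g(x) <= 0

def penalized_weighted_sum(x, w1=0.5, w2=0.5, r=1000.0):
    """Weighted-sum scalarization plus a static penalty: the constrained
    two-objective problem becomes a single unconstrained objective."""
    violation = max(0.0, g(x))               # amount by which g is violated
    return w1 * f1(x) + w2 * f2(x) + r * violation ** 2

def multi_start_search(n_starts=20, n_steps=500, seed=1):
    """Crude multi-start local search: many random initial solutions, a local
    improvement loop from each, and the best local optimum found is kept."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_starts):
        x = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
        val = penalized_weighted_sum(x)
        for _ in range(n_steps):
            cand = [xi + rng.gauss(0, 0.1) for xi in x]
            cval = penalized_weighted_sum(cand)
            if cval < val:                   # accept only improving moves
                x, val = cand, cval
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

x_star, v_star = multi_start_search()
# With equal weights the penalized objective equals
# (x1 - 1)^2 + (x2 - 1)^2 + 2 inside the feasible region, so the best
# solution found should lie near x1 = x2 = 1 (which satisfies x1 + x2 <= 3).
```

With equal weights this particular scalarization reduces, inside the feasible region, to (x1 − 1)² + (x2 − 1)² + 2, so the search should end near x1 = x2 = 1 with an objective value close to 2; sweeping w1 and w2 over different values traces out different trade-off solutions, as described above.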