Jayaraman Valadi and Patrick Siarry (Editors)

Applications of Metaheuristics in Process Engineering



  • Editors

    Jayaraman Valadi
    Evolutionary Computing and Image Processing Group, Center for Development of Advanced Computing, Pune, India
    and
    Center for Informatics, Shiv Nadar University, Gautam Buddha Nagar, India

    Patrick Siarry
    Lab. LiSSi (EA 3956), Université Paris-Est Créteil, Créteil, France

    ISBN 978-3-319-06507-6        ISBN 978-3-319-06508-3 (eBook)
    DOI 10.1007/978-3-319-06508-3
    Springer Cham Heidelberg New York Dordrecht London

    Library of Congress Control Number: 2014945265

    © Springer International Publishing Switzerland 2014
    This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
    The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
    While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

    Printed on acid-free paper

    Springer is part of Springer Science+Business Media (www.springer.com)


  • Dedicated to my parents, sisters, wife, and children

  • Preface

    Heuristics as a problem-solving method has been known to humans for ages. They form an important branch of learning in many academic courses, such as social science, psychology, economics, and engineering. Heuristics play a major role in situations where there is insufficient knowledge (and sometimes incomplete information) about the problem to be solved. Such problems are common in science and engineering, particularly in the context of optimization. Due to this, heuristics-based algorithms became quite popular when they were first introduced during the early decades of the twentieth century.

    Metaheuristics, as the word suggests, are upper-level heuristics. They are expected to perform better than simple heuristics because the problem-solving process is guided by some kind of information (or knowledge) embedded within that process. These algorithms slowly became quite popular amongst researchers, and now they are increasingly employed in different fields to solve simple to complex optimization problems. In process engineering, many optimization problems of practical importance, e.g., heat and mass exchanger network synthesis, static and dynamic optimization of chemical and bioreactors (batch/semi-batch/continuous), supply chain optimization, etc., need meaningful solutions in a reasonable time with good accuracy. Metaheuristics are playing a key role in solving such difficult problems. This book is an attempt to expose readers to the cutting-edge research and applications related to the domains of process engineering where novel metaheuristic techniques can be and have been successfully employed. In the book, we follow Glover (who coined the term metaheuristics in 1986) and call all nature-inspired algorithms metaheuristics, e.g., genetic algorithms, simulated annealing, ant colony optimization, differential evolution, particle swarm optimization, firefly algorithm, etc. The 18 chapters in the book are organized as follows.

    The first chapter is an extensive review of metaheuristics and their historical development in process engineering. It covers many important applications in process engineering where metaheuristics were applied successfully. Chapters 2 and 3 provide a genetic algorithm methodology to solve multiple-objective optimization problems and its applications in the optimization of polymerization reactors, catalytic reactors, etc. Chapter 4 gives different strategies for evolutionary data-driven modeling and its application to chemical and metallurgical systems. Chapter 5 presents a swarm intelligence-based algorithm, namely, the honey-bee algorithm, to solve optimization problems in the paper and pulp industry. Chapter 6 gives an idea about particle swarm optimization and its application to the optimal design of a plate-type distillation column, an important problem in process engineering. In Chap. 7, readers will learn about ant colony optimization and bootstrap aggregated neural networks that are applied to solve optimal control problems in a fed-batch fermentation process.

    Chapter 8 employs biogeography-based optimization, a novel heuristic, for tackling the difficult problem of dynamic optimization of chemical reactors. Chapter 9 extends the use of biogeography-based optimization to the optimal design of heat exchangers. Chapter 10 presents two interesting optimization heuristics that mimic chemical processes, namely, the construction of an artificial chemical kinetic model and the LARES algorithm. Chapter 11 gives a genetic algorithm-based methodology to process biopolymer sequences and improve their functions. Chapter 12 presents a review of some metaheuristic optimization algorithms for theoretical modeling of conducting polymers.

    Chapter 13 presents the genetic algorithm and its application in the area of quantitative structure-activity relationship (QSAR) and quantitative structure-property relationship (QSPR) studies, an important area in drug design. Chapter 14 discusses various applications of genetic algorithms in drug design, including designing a combinatorial library, a QSAR/QSPR study, and designing the lead candidacy in drug discovery. Chapters 15 and 16 present several applications that use genetic algorithms to solve multi-objective optimization problems, namely, a natural gas transportation network, new product development in the pharmaceutical industry, and operations management of a fresh fruit supply chain. Chapters 17 and 18 present two modified algorithms, namely, the jumping gene adaptation of NSGA-II with an altruism approach and an improved multi-objective differential evolution algorithm. These algorithms are applied to solve the Williams-Otto process problem, an alkylation process, a three-stage fermentation process integrated with extraction, and a three-stage fermentation process integrated with pervaporation.

    These 18 chapters, as readers can see, cover a wide spectrum of algorithms and their application to solve many interesting optimization problems (single and multiple objective) arising in process engineering. We expect that this book will serve as a unified destination where an interested reader will get detailed descriptions of many of these metaheuristic techniques and will also obtain a fairly good exposure to the directions in which modern process engineering is moving.

    Pune, India and Gautam Buddha Nagar, India        Jayaraman Valadi
    Paris, France                                    Patrick Siarry
    January 2014

  • Contents

    1  Metaheuristics in Process Engineering: A Historical Perspective
       Prakash Shelokar, Abhijit Kulkarni, Valadi K. Jayaraman, and Patrick Siarry

    2  Applications of Genetic Algorithms in Chemical Engineering I: Methodology
       Santosh K. Gupta and Manojkumar Ramteke

    3  Applications of Genetic Algorithms in Chemical Engineering II: Case Studies
       Santosh K. Gupta and Manojkumar Ramteke

    4  Strategies for Evolutionary Data Driven Modeling in Chemical and Metallurgical Systems
       Nirupam Chakraborti

    5  Swarm Intelligence in Pulp and Paper Process Optimization
       Tarun Kumar Sharma and Millie Pant

    6  Particle Swarm Optimization Technique for the Optimal Design of Plate-Type Distillation Column
       Sandip Kumar Lahiri

    7  Reliable Optimal Control of a Fed-Batch Fermentation Process Using Ant Colony Optimization and Bootstrap Aggregated Neural Network Models
       Jie Zhang, Yiting Feng, and Mahmood Hilal Al-Mahrouqi

    8  Biogeography-Based Optimization for Dynamic Optimization of Chemical Reactors
       Sarvesh Nikumbh, Shameek Ghosh, and Valadi K. Jayaraman

    9  Biogeography-Based Optimization Algorithm for Optimization of Heat Exchangers
       Amin Hadidi

    10 Optimization Heuristics Mimicking Chemical Processes
       Roberto Irizarry

    11 In silico Maturation: Processing Sequences to Improve Biopolymer Functions Based on Genetic Algorithms
       Nasa Savory, Koichi Abe, Wataru Yoshida, and Kazunori Ikebukuro

    12 Molecular Engineering of Electrically Conducting Polymers Using Artificial Intelligence Methods
       A.K. Bakhshi, Vinita Kapoor, and Priyanka Thakral

    13 Applications of Genetic Algorithms in QSAR/QSPR Modeling
       N. Sukumar, Ganesh Prabhu, and Pinaki Saha

    14 Genetic Algorithms in Drug Design: A Not-So-Old Story in a Newer Bottle
       Subhabrata Sen and Sudeepto Bhattacharya

    15 Multi-Objective Genetic Algorithms for Chemical Engineering Applications
       Guillermo Hernandez-Rodriguez, Fernando Morales-Mendoza, Luc Pibouleau, Catherine Azzaro-Pantel, Serge Domenech, and Adama Ouattara

    16 A Multi-Objective Modelling and Optimization Framework for Operations Management of a Fresh Fruit Supply Chain: A Case Study on a Mexican Lime Company
       Marco A. Miranda-Ackerman, Gregorio Fernández-Lambert, Catherine Azzaro-Pantel, and Alberto A. Aguilar-Lasserre

    17 Jumping Gene Adaptations of NSGA-II with Altruism Approach: Performance Comparison and Application to Williams-Otto Process
       Shivom Sharma, S.R. Nabavi, and G.P. Rangaiah

    18 Hybrid Approach for Multiobjective Optimization and Its Application to Process Engineering Problems
       S. Sharma and G.P. Rangaiah

  • List of Contributors

    Koichi Abe  Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan

    Alberto A. Aguilar-Lasserre  Research and Postgraduate Studies Division, Instituto Tecnológico de Orizaba, Col. Emiliano Zapata, Orizaba, Veracruz, México

    Catherine Azzaro-Pantel  Laboratoire de Génie Chimique, UMR 5503 CNRS/INPT/UPS, Université de Toulouse, ENSIACET, BP, Toulouse, France

    A.K. Bakhshi  Department of Chemistry, University of Delhi, Delhi, India; Executive Director, Tertiary Education Commission (TEC), Réduit, Mauritius

    Sudeepto Bhattacharya  Department of Mathematics, School of Natural Sciences, Shiv Nadar University, Tehsil Dadri, District Gautam Buddha Nagar, UP, India

    Nirupam Chakraborti  Department of Metallurgical and Materials Engineering, Indian Institute of Technology, Kharagpur, India

    Serge Domenech  Laboratoire de Génie Chimique, UMR 5503 CNRS/INPT/UPS, Université de Toulouse, BP, Toulouse, France

    Gregorio Fernández-Lambert  Research and Postgraduate Studies Division, Instituto Tecnológico Superior de Misantla, Misantla, Veracruz, México

    Shameek Ghosh  Advanced Analytics Institute (AAI), University of Technology, Sydney, Australia

    Santosh K. Gupta  Department of Chemical Engineering, University of Petroleum and Energy Studies (UPES), Bidholi, Dehradun, Uttarakhand, India

    Guillermo Hernandez-Rodriguez  Laboratoire de Génie Chimique, UMR 5503 CNRS/INPT/UPS, Université de Toulouse, ENSIACET, BP, Toulouse, France

    Kazunori Ikebukuro  Tokyo University of Agriculture and Technology, 2-24-16 Naka-Cho, Koganei, Tokyo 184-8588, Japan

    Roberto Irizarry  Applied Mathematics and Modeling, Informatics IT, Merck and Co., West Point, PA, USA

    Valadi K. Jayaraman  Evolutionary Computing and Image Processing (ECIP), Center for Development of Advanced Computing (C-DAC), Pune, India; Shiv Nadar University, Tehsil Dadri, District Gautam Buddha Nagar, UP, India

    Vinita Kapoor  Department of Chemistry, University of Delhi, Delhi, India

    Abhijit Kulkarni  SAS R&D (I) Pvt Ltd., Magarpatta City, Pune, India

    Sandip Kumar Lahiri  Scientific Design Company Inc., Little Ferry, NJ, USA

    Marco A. Miranda-Ackerman  LGC, UMR 5503 CNRS/INPT/UPS, ENSIACET, INPT, Université de Toulouse, BP, Toulouse, France

    Fernando Morales-Mendoza  Laboratoire de Génie Chimique, UMR 5503 CNRS/INPT/UPS, Université de Toulouse, ENSIACET, BP, Toulouse, France

    S.R. Nabavi  Faculty of Chemistry, University of Mazandaran, Babolsar, Iran

    Sarvesh Nikumbh  Computational Biology and Applied Algorithmics, Max-Planck-Institut für Informatik, Saarbrücken, Germany

    Adama Ouattara  Département de Génie Chimique et Agro-Alimentaire, Institut National Polytechnique Houphouët-Boigny, BP, Yamoussoukro, Côte d'Ivoire

    Millie Pant  Department of Applied Science and Engineering, Indian Institute of Technology Roorkee, Roorkee, India

    Luc Pibouleau  Laboratoire de Génie Chimique, UMR 5503 CNRS/INPT/UPS, Université de Toulouse, ENSIACET, BP, Toulouse, France

    Ganesh Prabhu  Department of Chemistry and Center for Informatics, Shiv Nadar University, Tehsil Dadri, District Gautam Buddha Nagar, UP, India

    Manojkumar Ramteke  Department of Polymer and Process Engineering, Indian Institute of Technology Roorkee, Saharanpur Campus, Saharanpur, UP, India

    G.P. Rangaiah  Department of Chemical & Biomolecular Engineering, National University of Singapore, Singapore

    Pinaki Saha  Department of Chemistry and Center for Informatics, Shiv Nadar University, Tehsil Dadri, District Gautam Buddha Nagar, UP, India

    Nasa Savory  Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan

    Subhabrata Sen  Department of Chemistry, School of Natural Sciences, Shiv Nadar University, Tehsil Dadri, District Gautam Buddha Nagar, UP, India

    Shivom Sharma  Department of Chemical & Biomolecular Engineering, National University of Singapore, Singapore

    Tarun Kumar Sharma  Department of Applied Science and Engineering, Indian Institute of Technology Roorkee, Roorkee, India

    Prakash Shelokar  European Center for Soft Computing, Mieres, Spain

    Patrick Siarry  Université Paris-Est Créteil Val-de-Marne, Paris, France

    N. Sukumar  Department of Chemistry and Center for Informatics, Shiv Nadar University, Tehsil Dadri, District Gautam Buddha Nagar, UP, India

    Priyanka Thakral  Department of Chemistry, University of Delhi, Delhi, India

    Wataru Yoshida  Tokyo University of Technology, Hachioji, Tokyo, Japan

  • Chapter 1
    Metaheuristics in Process Engineering: A Historical Perspective

    Prakash Shelokar, Abhijit Kulkarni, Valadi K. Jayaraman, and Patrick Siarry

    Whenever you're called on to make up your mind,
    and you're hampered by not having any,
    the best way to solve the dilemma, you'll find,
    is simply by spinning a penny.
    No, not so that chance shall decide the affair
    while you're passively standing there moping;
    but the moment the penny is up in the air,
    you suddenly know what you're hoping.

    "A Psychological Tip," in Grooks by Piet Hein (1982)

    1.1 Introduction

    Heuristics as a problem-solving technique has been known to the human race for ages. We often use heuristics in our day-to-day life (knowingly or unknowingly) to solve many practical problems. The word heuristics (derived from the verb heuriskein, which originates in the Greek language) means to find or to discover. In more precise terms, heuristics are techniques that use readily accessible, though loosely


    applicable, information to control problem solving in human beings and machines. The most fundamental (and simplest) heuristic is trial and error, which can be used in most problems (easy or complex), e.g., matching nuts with bolts, solving a mathematical puzzle, finding the values of variables in algebra, solving a highly nonlinear multimodal optimization problem, etc. Apart from trial and error, the other commonly used heuristics are cited below:

    • If you are having difficulty in understanding a problem, try drawing a picture.
    • If you cannot find a solution, try assuming that you have a solution and then derive from that (i.e., working backward).
    • Try to solve a more general problem first.

    Heuristics are very well studied in the psychology literature also. Herbert Simon [220] (a Nobel Laureate) introduced this concept in psychology. His original, primary object of research was problem solving, and he showed that we operate within what he calls bounded rationality. He coined the term satisficing, which denotes the situation where people seek solutions or accept choices or judgments that are good enough for their purposes, but could be optimized further (if additional resources are available). Precisely, in psychology, heuristics are simple, efficient rules, learnt by evolutionary processes, that have been proposed to explain how people make decisions and solve problems when they face complex problems or incomplete information. These rules work well under most circumstances, but in certain cases they may lead to systematic errors. There are many theorized heuristics in psychology, e.g., the availability heuristic, naive diversification, the affect heuristic, and the familiarity heuristic.

    Though one can trace the origins of heuristics in different branches of science, philosophy, or even the social sciences, one can clearly see that heuristics play a major role as an important method of problem solving for many complex problems. The common approach across all these disciplines is to explore a fairly good number of possibilities to find the solution and then select (and accept) those solutions that are decent. Thus, a heuristic can be considered as a kind of algorithm, but one that will not explore all possible states of the problem, or will begin by exploring the most likely ones. As pointed out earlier, purely heuristics-based solutions may be inconsistent and often subjective. To overcome this problem, metaheuristics were introduced [85].

    The Greek word meta means beyond or upper level. Thus, one can think of metaheuristics as upper-level heuristics that generally perform better than simple heuristics. The term metaheuristic was coined by Fred Glover in his pivotal paper [85]. He proposed metaheuristic as a master strategy that guides and modifies other heuristics to produce solutions beyond those that are normally generated in a quest for local optimality [86]. Earlier, algorithms with stochastic components were often referred to as heuristics, but recent literature cites them as metaheuristics. There might be some difference of opinion on what one should call a heuristic versus a metaheuristic algorithm. In this chapter, and in the rest of the book, we follow Glover's convention and call all modern nature-inspired algorithms metaheuristics [85-87], e.g., genetic algorithm, scatter search, simulated annealing,


    tabu search, ant colony optimization (ACO), particle swarm optimization (PSO), differential evolution, firefly algorithm, and bee algorithm [85].

    1.1.1 Characteristics of Metaheuristics

    All metaheuristic algorithms use a certain trade-off between randomization and local search. In the context of solving optimization problems, they find decent solutions to difficult problems in a reasonable amount of time, but there is no guarantee that optimal solutions will always be reached. Almost all metaheuristic algorithms tend to be suitable for global optimization [85].

    All metaheuristic algorithms consist of two important components, namely, diversification and intensification (or exploration and exploitation) [85, 87]. In diversification, the algorithm tries to generate diverse solutions to explore the search space globally, whereas in intensification the algorithm focuses the search on a local region, knowing that a good current solution has been found in this region. A good balance between diversification and intensification should be struck when selecting the best solutions, to improve the rate of algorithm convergence. A good combination of these two components usually ensures that the global optimum can be reached.

    Metaheuristic algorithms are primarily nature inspired and employ either a population or a single solution to explore the search space. Methods using a single solution encompass local search-based metaheuristics, like tabu search and simulated annealing, which share the property of describing a state in the search space during the search process. On the contrary, population-based metaheuristics explore the search space through the evolution of a set of solutions, e.g., the genetic algorithm. Further, these methods can be categorized as memory-based or memoryless methods. We will elaborate all these methods with their technical details as we progress, but we will first trace the historical developments of these algorithms.

    1.1.2 Brief Historical Perspective

    Heuristic methods have been in use in various scientific domains since the 1940s. However, as a field, the first landmark came with a technique called evolution strategy, developed in the 1960s in Germany by Ingo Rechenberg [196], Hans-Paul Schwefel [209], and co-workers. Evolution strategy was introduced as a method to solve optimization problems with computers. Parallel work in the 1960s in the USA by Lawrence Fogel and co-workers [76] led to the development of evolutionary programming, in order to use simulated evolution as a learning process. In the 1970s, Holland [102] invented the genetic algorithm, now a flourishing field of research and application that goes much wider than the original proposal published in his pathbreaking book Adaptation in Natural and Artificial Systems. In the same decade, Fred Glover [84] proposed the fundamental concepts and principles of the scatter search method, which is founded on the premise that systematic designs and methods for creating


    new solutions afford significant benefits beyond those derived from recourse to randomization. These developments are nowadays collectively called evolutionary algorithms [15] or evolutionary computation [16].

    During the 1980s and 1990s, the development of metaheuristic algorithms reached its peak. The first big step was the development of simulated annealing in 1983, pioneered by S. Kirkpatrick et al. [121]. This technique was inspired by the annealing process of metals. Another important algorithm, the artificial immune system, was developed in 1986 by Farmer et al. [73]. Fred Glover, in 1986, initiated the use of memory in a metaheuristic algorithm called tabu search, where the search moves are recorded in a tabu list so that future moves will try to avoid revisiting previous solutions. He later published a book on tabu search [86]. In 1992, Marco Dorigo [62, 65] described his innovative work on the ACO algorithm in his Ph.D. thesis. This technique was inspired by the swarm intelligence of ants using pheromone as a chemical messenger. During this time, John Koza [124] also published a book on genetic programming that laid the foundation of a whole new area of machine learning. In 1995, James Kennedy and Russell Eberhart [120] developed the PSO algorithm. Finally, in 1996, R. Storn [223] proposed the application of differential evolution to optimization, and later Storn and Price [224] developed a vector-based evolutionary algorithm called differential evolution, which has proved highly successful for continuous function optimization when compared to genetic algorithms in many applications.

    During the twenty-first century, developments became more interesting. In 2000, Kevin Passino [179, 180] proposed a bacterial-foraging algorithm for distributed optimization and control applications. In 2001, Zong Geem et al. [83] developed harmony search, a music-inspired algorithm. In 2004, Sunil Nakrani and Craig Tovey [170] developed a honey-bee algorithm and applied it to optimizing Internet hosting centers, while Roberto Irizarry [110] described the LARES algorithm, based on an artificial chemical process. In 2008, Dan Simon [219] proposed a biogeography-based optimization algorithm inspired by biogeography, which is the study of the distribution of biological species through time and space. Also in 2008, Henry Wu and colleagues [97, 143] described a group search optimizer, an optimization technique that imitates animal searching behavior. Meanwhile, in 2008, Xin-She Yang [257, 258] proposed a firefly algorithm, and later, in 2009, Xin-She Yang and Suash Deb [260, 261] developed an efficient cuckoo search algorithm, whose search process has proved quite effective compared with other metaheuristic algorithms across many applications. In addition, in 2010, Xin-She Yang [259] proposed a bat algorithm based on the echolocation behavior of bats. Many interesting things are still happening in metaheuristic algorithm development.

    1.2 Metaheuristic Algorithms

    Before we trace the historical developments in the context of process engineering, we first introduce a few metaheuristic algorithms, mainly based on the classification described above, i.e., single-solution versus population-based approaches.


    Fig. 1.1 The pseudocode of SA

    1.2.1 Single Solution-Based Methods

    In this section, we outline the two most popular single-solution metaheuristics, i.e., simulated annealing (SA) and tabu search (TS). The search process these methods follow can be characterized by a state in the search space that evolves in discrete time steps [255]. However, the solution generated (the new state) by these methods may not be from the neighborhood of the current solution (state).

    1.2.1.1 Simulated Annealing

    SA is the oldest metaheuristic that implemented an explicit strategy to avoid local minima. The method has its roots in statistical mechanics (the Metropolis algorithm) and was first presented as a search algorithm for combinatorial optimization problems [121]. The basic idea is to probabilistically accept candidate solutions of worse quality than the current solution in order to escape from local minima. The probability of accepting such solutions is decreased during the search. The pseudocode of SA is given in Fig. 1.1.

    The algorithm initializes the current solution S, either randomly or heuristically constructed, and also initializes the temperature T = T0, where T0 is the initial temperature value. At each iteration, a neighborhood solution is randomly sampled, S' = N(S), and its probability of acceptance is usually computed following the Boltzmann distribution P = exp(-(f(S') - f(S))/T), where f(.) is the solution quality or fitness. The temperature T is varied at the end of the iteration


    Fig. 1.2 The pseudocode of TS

    using some cooling rule. Thus, the acceptance rate of worse solutions is initially high and gradually decreases as the algorithm progresses. The search process is therefore a combination of two strategies: initially it performs a random walk, and it converts to a simple iterative improvement method as the search continues. This process is analogous to the annealing process in metals and glass, where a low-energy state is assumed when the material is cooled with an appropriate cooling schedule. The use of an appropriate cooling schedule, such as a logarithmic rule, can achieve convergence in probability to the optimum [1]. However, such cooling rules are extremely slow and are not feasible in practical applications. Thus, a faster cooling schedule, such as the exponential rule T = aT, is applied, where the parameter a in (0, 1) causes an exponential decay of the temperature. These two algorithm parameters, the cooling schedule and the initial temperature, should be adapted to the underlying problem, considering the nature of the search landscape. A quick and simple approach is to sample the search space for different values of the initial temperature and roughly evaluate the mean and variance of the objective function, but more sophisticated schemes can also be implemented [109].
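    The acceptance rule and exponential cooling schedule described above can be sketched in a few lines of Python. This is an illustrative minimal implementation, not code from the book: the objective function, neighborhood move, and parameter values (T0 = 1, cooling factor 0.95, 100 moves per temperature level) are arbitrary choices for demonstration.

```python
import math
import random

def simulated_annealing(f, neighbor, s0, t0=1.0, alpha=0.95,
                        iters_per_temp=100, t_min=1e-4):
    """Minimise f starting from s0 using the SA scheme described above.

    f        -- objective (fitness) function to be minimised
    neighbor -- function returning a random neighbour N(S) of a solution
    t0       -- initial temperature T0
    alpha    -- exponential cooling factor, in (0, 1)
    """
    s, best = s0, s0
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            s_new = neighbor(s)
            delta = f(s_new) - f(s)
            # Always accept improvements; accept worse moves with
            # Boltzmann probability exp(-(f(S') - f(S)) / T)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                s = s_new
                if f(s) < f(best):
                    best = s
        t *= alpha  # exponential cooling rule: T <- alpha * T
    return best

# Toy usage: minimise a one-dimensional multimodal function
random.seed(0)
obj = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x: x + random.uniform(-0.5, 0.5)
x_best = simulated_annealing(obj, step, s0=5.0)
```

    Note how the early high-temperature phase behaves like the random walk mentioned above, while the late low-temperature phase reduces to iterative improvement, since exp(-delta/T) vanishes for worse moves as T shrinks.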

    1.2.1.2 Tabu Search

    TS was the first metaheuristic to explicitly utilize the search history in order to both intensify and diversify the search process [86]. The pseudocode of TS is given in Fig. 1.2.

    The basic components of the algorithm are the best-improvement local search, the tabu list, and the aspiration criteria. At each iteration, the algorithm uses a tabu list, a short-term memory, in order to escape from local minima and to avoid cycles.


    Thus, the neighborhood of the current solution is restricted to the solutions that do not belong to the tabu list. In practice, the tabu list usually consists of solution attributes, like components of solutions, moves, or differences between two solutions, because maintaining a tabu list of complete solutions is highly inefficient and impractical. The length of the tabu list controls the memory of the search process. A small tabu list corresponds to exploiting small areas of the search space, while a large tabu list corresponds to exploring large regions. Although the use of solution attributes in the tabu list is feasible and efficient, it introduces a loss of information: a solution attribute in the tabu list means assigning tabu status to probably more than one solution, and thus good-quality solutions can be excluded. To overcome this problem, an aspiration criterion is introduced. The most commonly used aspiration criterion is to select solutions that are better than the current best solution.

    The application of the tabu list as a short-term memory is one of the possible ways of using the search history. Information gathered during the search process can be incorporated into TS through four different principles: frequency, recency, quality, and influence. Frequency-based memory records the number of visits to each of the solution attributes. Recency-based memory records how recently each solution attribute was involved in the search. Such information can be used to detect confinement of the search process and the need for diversification. Quality-based memory can be used to identify good building blocks in order to guide the search process. The influence property identifies which solution attributes might be critical and can therefore be given preference during the search. Some TS algorithms that utilize these ideas are robust TS [227] and reactive TS [20]. In general, TS is a rich source of ideas and strategies that have since been adopted by many metaheuristics.
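The attribute-based tabu list and the best-improvement aspiration criterion described above can be sketched as follows; the bit-flip neighborhood and the Hamming-distance objective are illustrative assumptions for the example:

```python
from collections import deque
import random

def tabu_search(f, x0, n_iters=200, tenure=7, seed=0):
    """Minimize f over binary vectors; moves are single-bit flips.
    The tabu list stores move attributes (flipped positions), not full solutions."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    best, fbest = list(x), fx
    tabu = deque(maxlen=tenure)  # short-term memory of recent flip positions
    for _ in range(n_iters):
        candidates = []
        for i in range(len(x)):
            y = list(x)
            y[i] ^= 1
            fy = f(y)
            # Aspiration criterion: a tabu move is allowed if it beats the best
            if i not in tabu or fy < fbest:
                candidates.append((fy, i, y))
        fy, i, y = min(candidates)  # best-improvement step (may be worsening)
        x, fx = y, fy
        tabu.append(i)
        if fx < fbest:
            best, fbest = list(x), fx
    return best, fbest

# Toy usage: recover a hidden bit vector (objective = Hamming distance)
target = [1, 0, 1, 1, 0, 0, 1, 0]
f = lambda x: sum(a != b for a, b in zip(x, target))
x_best, f_best = tabu_search(f, [0] * 8)
```

Note that the search always moves to the best admissible neighbor even when it is worse than the current solution; the tabu tenure is what prevents it from immediately cycling back.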

    1.2.2 Population-Based Methods

    In this section, we outline the two most widely known population-based metaheuristics, i.e., the genetic algorithm (GA) and ACO. These methods deal with a set of solutions instead of a single solution from iteration to iteration. Since they work on a population of solutions, they are naturally able to explore different regions of the search space simultaneously. In GA, a population of individuals is modified by recombination and mutation operators, while in ACO, artificial ants construct solutions guided by pheromone trails and heuristic information. Apart from these popular methods, we briefly discuss other population-based methods such as PSO, differential evolution (DE), and scatter search (SS), which have also recently been applied to process engineering applications. We also discuss genetic programming (GP), a special type of GA that can be applied to evolve mathematical equations or computer programs based on input data.

  • 8 P. Shelokar et al.

    Fig. 1.3 The pseudocode of GA

    1.2.2.1 Genetic Algorithm

    GA is inspired by the process of natural evolution and applies techniques likeselection, recombination, and mutation. The pseudo code of GA is given in Fig. 1.3.

    GA starts with an initial population of individuals P, a set of randomly generated solutions of some fixed size. These individuals are termed genotypes, and the solutions they represent are called phenotypes, in order to distinguish solution representations from the solutions themselves. Classically, the genotypic space consists of binary encodings (the values at the encoded positions are called alleles), but other encoding types are also possible. An iteration of the evolution process in GA is termed a generation. In each generation, the fitness of an individual, typically the value of an objective function, is computed for the individuals in the current population. A fitness-based stochastic selection is applied to the current population in order to construct a parent population of equal size. This selection process ensures that fitter individuals tend to have more copies in the parent population. The most popular selection mechanisms are roulette-wheel selection, rank selection, tournament selection, and stochastic universal sampling [197]. Each parent individual is then modified by recombination and mutation to create new individuals, also called offspring. The most common recombination process uses two parents to create offspring, but other variants, such as many-parent recombination, can also be used. In the next generation, the selection process utilizes offspring solutions and elitism. Elitism is the process of allowing some of the fittest individuals to be part of the parent population in the next generation. Commonly, the algorithm uses a maximum number of generations as the termination criterion to end the search process. GA has been applied to a variety of optimization problems in engineering [48] and science [103], and more recently it has been extended to data mining and machine learning applications [79] and the rapidly growing bioinformatics area [176].
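A minimal sketch of this generational loop, assuming a binary encoding, binary tournament selection, one-point crossover, bitwise mutation, and elitism (the onemax fitness used at the end is only a toy):

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      p_cx=0.9, p_mut=0.02, elite=2, seed=0):
    """Maximize `fitness` over bit strings with tournament selection,
    one-point crossover, bitwise mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        new_pop = [list(g) for g in ranked[:elite]]  # elitism: carry over the best
        while len(new_pop) < pop_size:
            # Binary tournament selection of two parents
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            c1, c2 = list(p1), list(p2)
            if rng.random() < p_cx:  # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1 = p1[:cut] + p2[cut:]
                c2 = p2[:cut] + p1[cut:]
            for c in (c1, c2):  # bitwise mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        c[i] ^= 1
                new_pop.append(c)
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

# Toy usage: onemax (fitness = number of ones in the string)
best = genetic_algorithm(fitness=sum)
```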


    Fig. 1.4 The main components of the ACO paradigm

    1.2.2.2 Ant Colony Optimization

    The ACO metaheuristic [64] has emerged recently as a new field for discrete optimization problems [63, 66]. ACO is basically a multi-agent system in which agents cooperate through low-level interactions. The paradigm is inspired by nature, namely by the foraging behavior of ant colonies. As they move, ants deposit pheromone on the ground, and subsequent ants probabilistically follow the pheromone deposited by previous ants. In this way, real ants are capable of finding the shortest path from a food source to their nest. Artificial ants in an ACO algorithm behave in a similar way. A general structure of the ACO metaheuristic is outlined in Fig. 1.4.

    ACO algorithms simulate ants' walks through a construction graph to evolve solutions. Thus, it is imperative to define a problem representation with solution components and the relations between those components, i.e., a construction graph G(V, E), where V is a set of elements and E is a set of connections between the elements. There are four main phases in the outlined structure. The way these phases are implemented defines the possible algorithms that can be obtained, e.g., ant system (AS) [65], elitist-AS [65], AS-rank [29], MAX-MIN-AS [225], and ant colony system [64]. The application of these phases can vary from one algorithm to another; however, they can be described in a general way as follows.

    Initialization: Initialize the pheromone information structure, the heuristic information (if available), and any other structure necessary to complete the problem representation.

    BuildingSolution: Artificial ants incrementally construct solutions. At each step of the solution construction, the ant makes a stochastic choice of the solution component to be added to the current partial solution Sq based on the current pheromone information. This stochastic action is called the state transition rule and can be given as follows:

    P(r | Sq) = [τ(r)]^α [η(r)]^β / Σ_{u ∈ N(Sq)} [τ(u)]^α [η(u)]^β  for r ∈ N(Sq); 0 otherwise

    where τ(r) and η(r) are the pheromone and heuristic values of component r, N(Sq) is the set of feasible components given the partial solution Sq, and the parameters α and β weight the relative influence of the pheromone and heuristic information.

    […] large-scale combinatorial optimization, and interdisciplinary fields such as quantum and biological computing.
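A minimal ant-system-style sketch for a toy TSP, using the random-proportional state transition rule together with pheromone evaporation and deposit (the parameter values and the four-city instance are illustrative assumptions, not taken from any of the cited algorithms):

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ant-system sketch for the TSP: ants build tours with the
    random-proportional rule; pheromone evaporates and is then reinforced."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on each edge
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)]
           for i in range(n)]            # heuristic value = inverse distance
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                r = tour[-1]
                feas = [u for u in range(n) if u not in tour]
                # Random-proportional rule: P ~ tau^alpha * eta^beta over N(Sq)
                w = [tau[r][u] ** alpha * eta[r][u] ** beta for u in feas]
                tour.append(rng.choices(feas, weights=w)[0])
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # Evaporation, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1 - rho
        for length, tour in tours:
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len

# Toy usage: 4 cities on a unit square (the optimal tour is the perimeter, length 4)
dist = [[0, 1, 2 ** 0.5, 1], [1, 0, 1, 2 ** 0.5],
        [2 ** 0.5, 1, 0, 1], [1, 2 ** 0.5, 1, 0]]
tour, length = aco_tsp(dist)
```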


  • Chapter 3
    Applications of Genetic Algorithms in Chemical Engineering II: Case Studies

    Santosh K. Gupta and Manojkumar Ramteke

    3.1 Introduction

    Chemical engineering systems are often associated with complex phenomena. The general formulation [described in Part I (Chap. 2)] of a model can be represented as z = f_model(d, x, p), in which z is the state of the system for specific values of the decision variables, x, the specified variables, d, and the parameters, p. The state of the system is represented through a set of output variables, y(x, p). These complex systems, often modeled using a large number of coupled differential and algebraic equations, need to be optimized using multiple objectives. The genetic algorithm (GA) clearly outperforms conventional algorithms in handling multi-objective, non-linear formulations in a derivative-free environment. However, the very stochastic nature that provides derivative-free operation leads to a requirement of large computational times. Thus, the application of GA [16, 17, 19, 28, 34] to these complex chemical systems is seldom straightforward. Very often, optimization using GA requires a large computational effort because of the time it takes to solve the model equations. Thus, one needs to use faster and more efficient algorithms in order to obtain reasonably good optimal solutions.

    Several interesting and faster variants of GA have been developed over the last four decades to improve the applicability and the convergence speed. A few notable examples are SGA, VEGA, HLGA, NPGA, NSGA, NSGA-II, SPEA, PESA, NSGA-II-JG, Altruistic-NSGA-II, and real-coded NSGA-II. These are described in

    S.K. Gupta (✉)
    Department of Chemical Engineering, University of Petroleum and Energy Studies (UPES), Bidholi, via Prem Nagar, Dehradun, Uttarakhand 248007, India
    e-mail: [email protected]

    M. Ramteke
    Department of Chemical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016, India
    e-mail: [email protected]; [email protected]

    J. Valadi and P. Siarry (eds.), Applications of Metaheuristics in Process Engineering, DOI 10.1007/978-3-319-06508-3_3, © Springer International Publishing Switzerland 2014




    detail in Chap. 2. The application of these to real-life chemical engineering systems such as the optimization of polymerization reactors, catalytic reactors, separation equipment, planning and scheduling, combinatorial optimization, and data-driven applications is described in this chapter.

    3.2 Optimization of Polymerization Reactors

    In recent years, multi-objective genetic algorithms have been used to optimize several interesting and complex systems in the field of polymer science and engineering [1, 5, 39, 45, 68]. These include the optimization of (existing) industrial reactors as well as the optimization of these reactors at the design stage (where more decision variables are available), online optimizing control studies, etc. An example of the MOO of an industrial semi-batch nylon 6 reactor is first discussed in detail. This is followed by a short discussion of several other systems, e.g., poly(methyl methacrylate) (PMMA), poly(ethylene terephthalate) (PET), polystyrene (PS), and low-density polyethylene (LDPE).

    The MOO of the hydrolytic step-growth polymerization of nylon 6 in an industrial semi-batch reactor [48] is one of the early applications of multi-objective GA in polymerization reaction engineering. Fiber-grade nylon 6 is commercially produced using this reactor. The reaction scheme comprises five reactions, given in Table 3.1 [75]. The polymerization is carried out in a semi-batch reactor (see Fig. 3.1; [57]) at a temperature of about 250 °C. Vaporization of the volatile components in the liquid phase, namely the monomer, ε-caprolactam, and the condensation by-product, water, takes place as the temperature of the liquid reaction mass increases from the feed temperature of about 90 °C to 250 °C. This leads to a gradual build-up of the pressure, p(t), in the vapor space above (since the control valve at the exit is closed initially). All the reactions are reversible. Clearly, a relatively high concentration of W is required at the beginning so as to drive the first reaction (ring opening) forward. However, lower amounts of W are needed later so that the second reaction, polycondensation, is driven in the forward direction to produce longer polymer molecules. This is achieved in the industrial reactor by adding higher amounts of water at the beginning, and then letting its concentration in the liquid phase (where the reactions take place) decrease with time by vaporization (opening the control valve at a prescribed rate). In other words, the concentration, W(t), of water (or equivalent) in the liquid phase is a decision variable, a function of time.

    A mathematical model of this industrial reactor was first developed by Gupta et al. [29] and improved by Wajge et al. [75]. The model comprises 15 ordinary differential equations of the initial value type (ODE-IVPs) describing the state of the system. Kinetic and thermodynamic data are compiled from the literature (some are tuned), and empirical correlations are used for the heat and mass transfer phenomena. Complete details of the model are available in Wajge et al. [75]. The model has been tested against plant data.

    This model was used by Mitra et al. [48] for two-objective optimization using NSGA. The objectives were the minimization of the reaction (or batch) time, tf,


    Table 3.1 Reaction scheme [75] of nylon 6 polymerization

    Ring opening: C1 + W ⇌ S1
    Polycondensation: Sn + Sm ⇌ Sn+m + W
    Polyaddition: Sn + C1 ⇌ Sn+1
    Ring opening of cyclic dimer: C2 + W ⇌ S2
    Polyaddition of cyclic dimer: Sn + C2 ⇌ Sn+2
    C1 = ε-caprolactam, W = water, C2 = cyclic dimer, Sn = polymer chain with chain length n

    Fig. 3.1 Nylon 6 polymerization in an industrial semi-batch reactor (adapted from Ramteke and Gupta [57])

    since this increases the annual production of the polymer, and the minimization of the undesired side-product concentration of the cyclic dimer, C2. The presence of cyclic compounds (of which C2 is a representative) in the product leads to processing problems as well as an unacceptable finished fabric. The following two-objective optimization problem was solved (with penalty functions used for the constraints):

    min I1[p(t), Tj] = tf + w1 [1 − xm(tf)/xm,ref]^2 + w2 [1 − μn(tf)/μn,ref]^2   (3.1)

    min I2[p(t), Tj] = C2,f + w1 [1 − xm(tf)/xm,ref]^2 + w2 [1 − μn(tf)/μn,ref]^2   (3.2)

  • 64 S.K. Gupta and M. Ramteke

    Fig. 3.2 MOO results (adapted from Ramteke and Gupta [57]) of an industrial nylon 6 semi-batch reactor in nondimensional form for four different cases (with and without vacuum and with or without a history of the jacket fluid temperature). W0 = 3.45 weight percent

    w1 = large positive number for xm(tf) < xm,ref; otherwise w1 = 0   (3.3)

    w2 = large positive number for μn(tf) ≠ μn,ref; otherwise w2 = 0   (3.4)

    End-point (at tf) constraints were imposed on the monomer conversion, xm [xm(tf) ≥ xm,ref], in the product stream as well as on the number-average chain length, μn [μn(tf) = μn,ref], of the product. The decision variables used were the pressure history (a function of time, t), p(t), of the vapor in the semi-batch reactor and the jacket fluid temperature, Tj (a constant). This was probably the first trajectory optimization problem solved using multi-objective GA. Both minimization functions were converted to equivalent maximization functions using max F = 1/(1 + min I), since the MO-GA code used required all objectives to be maximized. Pareto-optimal solutions were obtained for a specified value of the feed water concentration, W0. These studies were extended recently [57] by incorporating vacuum operation (as done in another industrial reactor) and the use of aminocaproic acid, S1, in the feed (S1 was obtained from the de-polymerization of scrap nylon 6). NSGA-II-aJG was used for solving four MOO problems: with and without a vacuum pump, and with or without Tj being a function of time. Figure 3.2 shows the Pareto sets (in dimensionless form) for the four cases for W0 = 3.45 weight percent.
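The bracketed-penalty construction used in these formulations, together with the F = 1/(1 + I) maximization transform, can be illustrated with a small sketch; the numerical values below are made up for the example:

```python
def penalized_objective(raw_obj, x_m, mu_n, x_m_ref, mu_n_ref, big=1e4):
    """Bracketed penalties in the spirit of Eqs. (3.1)-(3.4): the weights are
    switched on only when the end-point constraints are violated."""
    w1 = big if x_m < x_m_ref else 0.0                        # conversion too low
    w2 = big if abs(mu_n - mu_n_ref) > 1e-6 * mu_n_ref else 0.0  # chain length off-target
    return (raw_obj
            + w1 * (1 - x_m / x_m_ref) ** 2
            + w2 * (1 - mu_n / mu_n_ref) ** 2)

def to_maximization(I):
    """Transform used when the GA code requires maximization: F = 1/(1 + I)."""
    return 1.0 / (1.0 + I)

# Feasible point: no penalty added; infeasible conversion: heavily penalized
I_ok = penalized_objective(raw_obj=2.0, x_m=0.92, mu_n=180.0,
                           x_m_ref=0.90, mu_n_ref=180.0)
I_bad = penalized_objective(raw_obj=2.0, x_m=0.80, mu_n=180.0,
                            x_m_ref=0.90, mu_n_ref=180.0)
```

Because the penalty terms vanish for feasible points, the GA can rank feasible solutions purely on the raw objective while infeasible ones are pushed far down the ranking.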

    One of the early applications of GA to chain-growth (free-radical addition) polymerization was the SOO of a PMMA batch reactor [13]. As for nylon 6, a trustworthy model developed earlier [22, 67, 70] was used for optimization using SGA. The model comprises a reaction scheme, mass and energy balance equations using kinetic information, and several empirical equations for the heat and mass transfer rates. Details of these are available in Chakravarthy et al. [13]. The decision variable used is the temperature history, T(t), while the objective function selected was the minimization of the reaction time, tf. End-point constraints included the attainment of design values of the final monomer conversion, xm,f ≡ xm(tf) = xm,d, and of the number-average chain length, μn,f ≡ μn(tf) = μn,d. Although


    the SGA algorithm served well for this SOO problem, it hardly touched upon the power of GA, which lies in its ability to solve MOO problems effectively. Moreover, multiple objectives were actually present in this system. For example, the properties of the polymer are known to depend on the average molecular weight as well as on the breadth of the molecular weight distribution, as reflected through the polydispersity index, Q. The narrower the distribution, the better the quality of the polymer. This leads to an MOO problem solved by Garg and Gupta [24]:

    min I1[T(t)] = tf + w1 [1 − xm(tf)/xm,d]^2 + w2 [1 − μn(tf)/μn,d]^2   (3.5)

    min I2[T(t)] = Qf + w1 [1 − xm(tf)/xm,d]^2 + w2 [1 − μn(tf)/μn,d]^2   (3.6)

    w1 = large positive number for xm(tf) < xm,d; otherwise w1 = 0   (3.7)

    w2 = large positive number for μn(tf) < μn,d; otherwise w2 = 0   (3.8)

    End-point constraints were used on μn,f and xm,f, as for the nylon 6 problem. Also, the minimization functions were converted to maximizations as described above. Interestingly, a unique optimal solution (instead of a Pareto set) is obtained for this problem, a conclusion that was not apparent at the outset.

    Similar to industrial batch reactors, GA has also been applied effectively to optimize several other polymerizations. One application is the MOO of the continuous casting of PMMA films in a furnace. Methyl methacrylate (MMA), the monomer, is first pre-polymerized in a plug-flow tubular reactor (PFTR) at a constant temperature, TPFR, until a desired value of the monomer conversion, xm,PFR, is attained. A concentration, cI,0, of the initiator is used in the feed. Thereafter, the reaction mass passes in the form of a thin film of thickness tfilm through a furnace that has an axial temperature program, Tw(z). This study [79, 80] comprises two objectives: maximization of the cross-section-average value of the monomer conversion at the end of the furnace, xm,av,f, and minimization of the length, zf, of the furnace. Additionally, the cross-section-average value, μn,av,f, of the number-average chain length is constrained to equal a desired value, μn,d, of commercial importance. Also, the temperature at any point in the film is constrained to lie below an upper safe value, to prevent degradation/discoloration of the polymer film (referred to as a local constraint). Optimal values of several decision variables, TPFR, cI,0, xm,PFR, tfilm, and the history Tw(z), are obtained. The constraints were taken care of by using the bracketed penalties described above. This formulation is solved using NSGA to obtain the Pareto-optimal solutions [79, 80].

    Recently, Agrawal et al. [3, 4] carried out the optimization of an industrial LDPE tubular reactor under steady-state conditions, using multiple objective functions, both at the operation stage and at the design stage. Grade (of polymer)-change policies were also studied using the dynamic model for polymerization. NSGA-II and its JG adaptations were used. Usually, low-density polyethylene


    is produced by the high-pressure polymerization of ethylene in the presence of chemical initiators (e.g., peroxides, oxygen, azo compounds) in long tubular reactors. Very severe processing conditions are used, with pressures of 150 to 300 MPa and temperatures of 325 to 625 K. Very flexible and branched polymer molecules are obtained. The typical conversion of ethylene per pass is reported to be 30 to 35 %, and the unreacted ethylene is separated and recycled. The severe operating conditions deteriorate the quality of the polymer due to the formation of undesired side products (short-chain branching, unsaturated groups, etc.). The problem formulation comprises the minimization of these side products and the simultaneous maximization of the monomer conversion (for a given feed flow rate). The desired properties of the LDPE produced, e.g., the number-average molecular weight, are constrained to lie at desired values. Several MOO problems were formulated and solved to obtain the Pareto-optimal solutions [3, 4].

    Poly(ethylene terephthalate) (PET, the most common polyester) is an important commodity polymer. Bhaskar et al. [57] optimized the industrial production of PET in a third-stage, wiped-film finishing reactor using terephthalic acid (TPA) as one of the two monomers. The problem formulation comprises two objective functions, namely, the minimization of the concentrations of the acid end groups (which lead to breakage of filaments during the high-humidity spinning operation) and of the vinyl end groups (which lead to coloration of the fiber) in the product. In order to maintain the strength of the fiber, the degree of polymerization of the product is restricted to a desired value [μn,f = μn,d (= 82)] using an equality constraint. Also, inequality constraints are imposed on the acid end-group concentration in the product and on the concentration of the diethylene glycol end group. The former is restricted below a specific value (one constraint is used), whereas the latter is maintained within some range (two constraints are used). These constraints are imposed to maintain the quality of the finished product in terms of the crystallinity and dyeability of the fiber. The three inequality constraints are taken care of by using penalty functions. Temperature was used as one of the decision variables. The MOO using NSGA was found to give just a single optimal solution in each run. However, different solutions were obtained for multiple applications of the algorithm using different values of the random seed number [see Table 2.1 in Part I (Chap. 2)]. These solutions were found to be superior to the current operating conditions in the industrial reactor, albeit by only a few percent. The non-dominated collection of all such solutions constituted a Pareto-optimal front. This illustrates the inability of NSGA to converge to the Pareto set. This MOO problem is, thus, an unusual one and can be used as a test problem for developing improved optimization algorithms.

    Apart from these, several interesting applications of GA to polymer reaction engineering [8, 18, 69] and polymer design [72] have been reported. Silva and Biscaia [69] optimized the batch free-radical polymerization of styrene. They maximized the monomer conversion rate and minimized the concentration of the initiator residue in the product. Deb et al. [18] optimized the initial stages of the epoxy polymerization process using NSGA-II. The objectives used in this study were the maximization of the number-average molecular weight, the minimization of the polydispersity index (which, strictly speaking, does not mean much physically), and the minimization of


    the reaction time. Pareto-optimal results were obtained. These showed a 300 % improvement in productivity over the benchmark values. Bhat et al. [8] reported the multi-objective optimization of the continuous tower process for styrene polymerization. The two objectives used were the maximization of the final monomer conversion and the minimization of the polydispersity index of the product. All these MOO studies show considerable improvement in the productivity as well as in the quality of the product. This also gives the design engineer several choices in selecting the operating conditions. Unfortunately, sometimes the best operating conditions are not the most robust operating points. This is important in real-life situations, where unavoidable fluctuations always exist in the process variables. Ramteke and Gupta [60] investigated such fluctuations in the process variables and obtained robust Pareto solutions for the industrial polymerization of nylon 6.

The application of GA to the experimental online optimizing control of polymerization reactors is a challenging problem. It comprises two parts: re-tuning of the model parameters using measured variables (to negate the effects of model inaccuracies) and computation of re-optimized decision variables (to negate the effects of disturbances such as heater failure). Usually, a single objective function, such as the batch time, is minimized while meeting requirements on quality, e.g., x_m,f = x_m,d and μ_n,f = μ_n,d. An optimal history of the decision variable [usually, the temperature, T_opt(t)], computed offline, is implemented on the reaction mass using controllers (with a slave computer running LabVIEW). In between, a simulated disturbance, for example, switching off the electrical heater for a short period, is implemented on the reactor. During this time (as the temperature decreases), the two levels of the computer (the master and the slave) re-tune the model parameters using the experimental data collected until then and compute the re-optimized temperature history, T_re-op(t). This is implemented as soon as the electrical power comes back (the disturbance passes). One of the first studies along this direction was the online optimizing control of MMA polymerization in a specially made viscometer-cum-reactor assembly [44] using a guided version of SGA [to speed up the computation of T_re-op(t) to about 6 min of real time]. The JG operator in NSGA-II is particularly useful for online optimizing control because of its faster convergence compared to the usual NSGA-II. This was further illustrated by Bhat et al. [9] on a 1 L LabVIEW-interfaced stainless steel batch reactor using NSGA-II-aJG.
In this study, the power input to the stirrer motor and the temperature history were used as a soft sensor to estimate, experimentally, the average molecular weight and the monomer conversion in the reaction mass at any time, thus continuously identifying the state of the system. Sangwai et al. [66] extended the online optimizing control studies on PMMA polymerization in a viscometer-cum-reactor assembly to the more complex case of non-isothermal conditions using NSGA-II-aJG. In addition to the JG operation, an adaptation of the biogenetic law of embryology [59] was found to be quite effective in handling the complex problems of online optimizing control. In this adaptation, the offline results are used as a seed population, akin to the embryo, while solving the actual online optimizing control problem. This reduces the computational effort considerably, as illustrated by Ramteke and Gupta [59] for the nylon 6 system.
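The seeding idea can be sketched as follows: previously computed offline optima form part of the initial GA population, and the remainder is generated near them. The names, jitter level, and fill strategy are assumptions, not the exact scheme of [59]:

```python
import random

def seeded_population(offline_best, pop_size, bounds, jitter=0.05, seed=0):
    """Build an initial GA population from previously computed offline
    optima (the 'embryo' seeds), filling the remainder with bounded
    random perturbations of those seeds."""
    rng = random.Random(seed)
    pop = [list(x) for x in offline_best[:pop_size]]   # seeds go in verbatim
    while len(pop) < pop_size:
        base = rng.choice(offline_best)
        cand = []
        for v, (lo, hi) in zip(base, bounds):
            v = v + jitter * (hi - lo) * rng.uniform(-1.0, 1.0)
            cand.append(min(hi, max(lo, v)))           # clip to bounds
        pop.append(cand)
    return pop
```

Because the search starts in the basin of the offline optimum, far fewer generations are needed online, which is the effect exploited in the re-optimization step described above.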

  • 68 S.K. Gupta and M. Ramteke

    3.3 Optimization of Catalytic Reactors

Catalytic reactors are an integral part of the chemical industry. They are commonly encountered in several petrochemical units and are associated with a turnover of billions of dollars. Multiple objectives arise quite naturally in these systems; thus, the MOO of these operations has been studied extensively [5, 45, 68]. A few interesting cases are discussed here, e.g., the production of phthalic anhydride (PA) and maleic anhydride (MA), steam reforming, and the fluidized-bed catalytic cracking (FCC) of heavier crude-oil fractions, like gas oil, to value-added lighter products, like gasoline or naphtha.

Phthalic anhydride is a common raw material for polyester production. Commercially, it is produced by the gas-phase catalytic oxidation of o-xylene in multi-tubular reactors. A single reactor tube involves several zones of catalyst, with the alternate regions in between either being hollow or having an inert packing, as shown in Fig. 3.3a; the reaction scheme is shown in Fig. 3.3b. The reactions are highly exothermic, and the hollow regions or inert packings between the catalyst zones help keep the temperature of the gas within limits. The gaseous reaction mixture leaving the reactor is processed in switch condensers operating alternately to separate the PA. At any time, PA is condensed (and solidified) on the metal surface of one of the condensers, while in the other, the solidified PA is melted and the condenser made ready for use in the next cycle. The treated gas from the condensers is then scrubbed with water, or incinerated catalytically or thermally. This system has been modeled and multi-objectively optimized by Bhat and Gupta [10], where the mass and energy balance equations for the reactor are also available. The gas phase is described by ODE-IVPs, while nonlinear algebraic equations describe the impervious catalyst particles. The industrial reactor to be optimized comprises nine catalyst zones with eight intermediate inert cooling zones. The state of the system is defined using 20 decision variables: the lengths, L₁–L₈, of the first eight catalyst beds (the length of the ninth bed is calculated from the specified total reactor length), the lengths, S₁–S₈, of the eight intermediate inert beds, the concentration, c_in, of o-xylene (OX) in the feed per m³ of air at NTP, the temperature, T_F,in, of the feed, the mass flow rate, ṁ, of the co-currently flowing coolant, and the feed temperature, T_c,in, of the coolant.
The two objective functions (with penalty terms) to be optimized for this reactor are the maximization of the yield, X_PA, of PA and the minimization of the total length, L_cat, of the (actual) catalyst bed [10]:

Objective functions:

    Max I₁(u) = X_PA + w₁[1 − X_PA/X_PA,ref]² + w₂[1 − L_cat/L_cat,ref]² + w₃   (3.9)

    Max I₂(u) = 1/(1 + L_cat) + w₁[1 − X_PA/X_PA,ref]² + w₂[1 − L_cat/L_cat,ref]² + w₃   (3.10)

Here the penalty weights w₁–w₃ [Eqs. (3.24)–(3.27)] are zero when the corresponding constraints are satisfied and negative otherwise.


Fig. 3.3 (a) Reactor setup with nine catalyst zones for PA production and (b) the reaction scheme [10]

where

    u = [c_in, T_F,in, T_c,in, ṁ, S₁, S₂, ..., S₈, L₁, L₂, ..., L₈]ᵀ  and  L_cat = Σ_{i=1}^{9} L_i   (3.11)

Subject to:

Constraints:

    T_max ≤ 510 °C   (3.12)

    Total length of the reactor tube, L_tube = 3.5 m   (3.13)

    L₉ = 3.50 − Σ_{i=1}^{8} L_i − Σ_{i=1}^{8} S_i   (3.14)

    Model equations (Bhat and Gupta [10])   (3.15)

Bounds:

    65 ≤ c_in ≤ 85 g OX/(m³ air at NTP)   (3.16)

    147 °C ≤ T_F,in ≤ 287 °C   (3.17)

    337 °C ≤ T_c,in ≤ 447 °C   (3.18)

    0.001 ≤ ṁ ≤ 0.005 (kg coolant)/s   (3.19)

    0.2 ≤ S_i ≤ 0.45 m,  i = 1, 2, ..., 7   (3.20)

    0.1 ≤ S₈ ≤ 0.45 m   (3.21)

    0.05 ≤ L₁ ≤ 0.9 m   (3.22)

    0.01 ≤ L_i ≤ 0.2 m,  i = 2, 3, ..., 8   (3.23)

The values X_PA,ref = 1.2 and L_cat,ref = 3.6 m are used (somewhat arbitrarily) as reference values in the penalty functions. The values of L₁–L₈ and S₁–S₈ are selected by the optimization algorithm; L₉ is then computed using Eq. (3.14). The weights in the constraint-violation penalties are selected as:

    w₁ = −500 if X_PA ≤ 1.1; else w₁ = 0   (3.24)

    w₂ = −3000 if L₉ ≤ 0 m; else w₂ = 0   (3.25)

    w₃ = 0 if T_max ≤ 510 °C in bed i, i = 1, 2, ..., 9; else w₃ = −[3000 + 250(i − 1)]   (3.26)

    w₃ = 0 if L_i ≥ 0.01 m, i = 1, 2, ..., 9; else w₃ = −300   (3.27)

The system parameters are: diameter of each reactor tube = 25 mm, mass flux G = 19,455 kg m⁻² h⁻¹, and diameter of the V₂O₅/TiO₂ catalyst particles = 3 mm. This optimization problem has been solved by Bhat and Gupta [10] using a slightly adapted version of NSGA-II-aJG. Also, the same problem has


Fig. 3.4 Pareto-optimal front for the two-objective optimization problem (Eqs. 3.9–3.27) of PA production in a nine-bed tubular catalytic reactor (25 generations) (adapted from Ramteke and Gupta [59])

been used as a test problem for the Alt-NSGA-II-aJG [58]. The results obtained by the latter algorithm were better than those from NSGA-II-aJG. Further, the study was extended by Ramteke and Gupta [59] using the Biogenetic-NSGA-II-aJG adaptation, in which seed solutions from the seven-catalyst-bed MOO problem (with 16 decision variables) are used. This gave optimal solutions in around 25 generations (see Fig. 3.4), whereas NSGA-II-aJG (without seeds) took around 71 generations.
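The penalized objectives of Eqs. (3.9)–(3.27) are cheap to evaluate once the reactor model has supplied X_PA. A minimal sketch, with the signs of the weights as interpreted here, and with the bed-wise temperature check of Eq. (3.26) collapsed to a single flag since the model of [10] is not reproduced:

```python
def pa_objectives(x_pa, lengths, s_lengths, t_max_ok=True,
                  x_pa_ref=1.2, l_cat_ref=3.6, l_tube=3.5):
    """Evaluate the two penalized objectives I1, I2 (Eqs. 3.9-3.10) for
    the nine-bed PA reactor. x_pa and t_max_ok stand in for outputs of
    the reactor model in [10]."""
    l9 = l_tube - sum(lengths) - sum(s_lengths)            # Eq. (3.14)
    beds = list(lengths) + [l9]
    l_cat = sum(beds)                                      # Eq. (3.11)
    w1 = -500.0 if x_pa <= 1.1 else 0.0                    # Eq. (3.24)
    w2 = -3000.0 if l9 <= 0.0 else 0.0                     # Eq. (3.25)
    w3 = 0.0 if all(l >= 0.01 for l in beds) else -300.0   # Eq. (3.27)
    if not t_max_ok:                                       # Eq. (3.26), simplified
        w3 += -3000.0
    pen = (w1 * (1.0 - x_pa / x_pa_ref) ** 2
           + w2 * (1.0 - l_cat / l_cat_ref) ** 2 + w3)
    i1 = x_pa + pen                                        # Eq. (3.9)
    i2 = 1.0 / (1.0 + l_cat) + pen                         # Eq. (3.10)
    return i1, i2
```

A feasible design leaves both objectives unpenalized, while any violation pulls both maximized objectives down, so the GA discards the point without explicit constraint handling.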

Maleic anhydride (MA) is used for the production of unsaturated polyester resins. It is commercially produced in fixed-bed catalytic reactors with a VPO catalyst, using n-butane as the raw material. An improved model has been developed recently by Chaudhari and Gupta [14]. The model incorporates Langmuir–Hinshelwood kinetics. This model is similar to that of the PA reactors and comprises ODE-IVPs for the gas phase and ODE-BVPs for the porous catalyst phase. The ODE-BVPs of the catalyst phase are converted into nonlinear algebraic equations using the orthogonal collocation (OC) technique. The ODE-IVPs are solved using Gear's algorithm (D02EJF subroutine from the NAG library), whereas the nonlinear algebraic equations are solved using the modified Powell hybrid method (C05NBF from the NAG library). The MOO problem comprises combinations of several objective functions chosen from among the maximization of the productivity, the minimization of the operating cost, and the minimization of the pollution. The MOO problems are solved using NSGA-II-aJG as well as the Alt-NSGA-II-aJG. Interestingly, the latter algorithm was found to be superior to NSGA-II-aJG for the two-objective problem but inferior for the three-objective problems. Details are available in Chaudhari and Gupta [14].
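The collocation step can be illustrated on a toy catalyst-phase problem. The sketch below solves the linear slab-pellet equation y'' = φ²y (y'(0) = 0, y(1) = 1) by collocating even monomials at interior points, which reduces the BVP to a small set of algebraic equations; the actual OC treatment in [14] uses orthogonal polynomials and nonlinear kinetics, so this is only a structural analogy:

```python
import math

def solve_linear(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def pellet_collocation(phi, n=4):
    """Reduce y'' = phi^2 * y, y'(0) = 0, y(1) = 1 to algebraic
    equations by collocating even monomials x^(2j) (symmetry handles
    the y'(0) = 0 condition) at n interior points."""
    pts = [math.cos(math.pi * (i - 0.5) / (2 * n)) for i in range(1, n + 1)]
    A, b = [], []
    for x in pts:  # residual: y''(x) - phi^2 * y(x) = 0
        A.append([(2 * j) * (2 * j - 1) * x ** max(2 * j - 2, 0)
                  - phi ** 2 * x ** (2 * j) for j in range(n + 1)])
        b.append(0.0)
    A.append([1.0] * (n + 1))  # boundary condition y(1) = sum(a_j) = 1
    b.append(1.0)
    a = solve_linear(A, b)
    return lambda x: sum(a[j] * x ** (2 * j) for j in range(n + 1))
```

For this linear test case the exact solution is y(x) = cosh(φx)/cosh(φ), so the collocation answer can be checked directly; with nonlinear kinetics the same construction yields nonlinear algebraic equations, which is where a Powell-type solver comes in.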

Steam reforming is used for the production of synthesis gas and hydrogen from natural gas. A typical steam reforming unit consists of the reforming reactor, a shift converter, and a pressure swing adsorption (PSA) unit. The reaction scheme [51, 56] for a feed of methane is given by:


    Reforming: CH₄ + H₂O ⇌ CO + 3H₂;  ΔH_r = 8.623 × 10⁵ kJ/kmol   (3.28)

    Shift: CO + H₂O ⇌ CO₂ + H₂;  ΔH_r = −1.7196 × 10⁵ kJ/kmol   (3.29)

    Reforming: CH₄ + 2H₂O ⇌ CO₂ + 4H₂;  ΔH_r = 6.906 × 10⁵ kJ/kmol   (3.30)
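Since reaction (3.30) is the sum of reactions (3.28) and (3.29), Hess's law requires its heat of reaction to be the sum of theirs. Taking the reforming steps as endothermic (positive ΔH_r) and the shift as exothermic (negative), the printed values are consistent to within rounding, which can be checked in a line:

```python
# Hess's-law consistency check on the three heats of reaction
# (signs assumed: reforming endothermic, shift exothermic).
dh_reform1 = 8.623e5     # Eq. (3.28), kJ/kmol
dh_shift = -1.7196e5     # Eq. (3.29), kJ/kmol
dh_reform2 = 6.906e5     # Eq. (3.30), kJ/kmol

# (3.28) + (3.29) should reproduce (3.30) up to rounding of the data
residual = dh_reform1 + dh_shift - dh_reform2
```

The residual is a few hundred kJ/kmol against values of order 10⁵–10⁶, i.e., the three tabulated heats agree to better than 0.1%.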

Methane is mixed with steam and recycle hydrogen in the reforming reactor, where it is converted to CO and H₂. The processed gas mixture is cooled by exchanging heat with the boiler feed water. The cooled gas is further processed in the two-stage adiabatic shift converter, where CO is converted to CO₂ and more H₂ is produced; the exothermic heat of reaction increases the temperature of the gas mixture. The heated gas leaving the shift converter again releases heat to the boiler feed water to produce very high pressure (VHP) steam. This cooled gas mixture is then treated in a pressure swing adsorption (PSA) unit to separate the hydrogen from the off-gases. These off-gases, with additional fuel, are burned in the furnace associated with the reforming reactor to supply the required endothermic heats of reaction. This operation has been optimized using multiple objectives by Rajesh et al. [56]. The objectives of the study were the minimization of the methane feed rate and the maximization of the flow rate of carbon monoxide in the synthesis gas, for a fixed rate of production of hydrogen. A Pareto-optimal front was obtained; the details of the model and the results can be found in Rajesh et al. [56]. This study was extended to the dynamic operation of steam reformers by Nandasana et al. [51]. The problem comprises the minimization of the cumulative disturbances in the H₂ and CO production for a given (simulated) disturbance in the input feed flow rate of methane. The details of the formulation and the results are available in Nandasana et al. [51].

Fluidized-bed catalytic cracking (FCC) is another important conversion operation in most integrated refineries. The FCC unit (FCCU) comprises two key pieces of equipment: the riser reactor, which catalytically cracks heavy crude fractions such as gas oil to gasoline or naphtha, and the regenerator, which burns off the coke deposited inside the porous catalyst particles. An industrial FCCU has been modeled and optimized by Kasat et al. [38] using NSGA-II. The model comprises a five-lump kinetic scheme. Several MOO problems were formulated. The objective functions used are the maximization of the gasoline yield, the minimization of the air flow rate, and the minimization of the percent CO in the flue gas. The study was further extended using the jumping gene adaptation, NSGA-II-JG; indeed, this was the first application of this adaptation in chemical engineering. The results obtained using NSGA-II-JG were superior, and the new algorithm faster, compared to the original NSGA-II. The details of the model and the results can be obtained from the respective references. The MOO of hydrocracking reactors using NSGA-II was studied by Bhutani et al. [12]. Several such studies have been listed extensively by Masuduzzaman and Rangaiah [45] and Sharma and Rangaiah [68].
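The five-lump idea can be sketched as a set of ODE right-hand sides for the riser. The lump names, reaction orders, and rate-constant keys below are illustrative placeholders in the spirit of a five-lump scheme, not the fitted model of [38]:

```python
def five_lump_rhs(y, k):
    """Right-hand side of a toy five-lump FCC cracking scheme:
    gas oil -> {gasoline, LPG, dry gas, coke} (taken ~2nd order) and
    gasoline -> {LPG, dry gas, coke} (taken ~1st order).
    y = mass fractions [gas oil, gasoline, LPG, dry gas, coke]."""
    go, gl, lpg, dg, ck = y
    r_go = (k['go_gl'] + k['go_lpg'] + k['go_dg'] + k['go_ck']) * go ** 2
    r_gl = (k['gl_lpg'] + k['gl_dg'] + k['gl_ck']) * gl
    return [
        -r_go,                                      # gas oil consumed
        k['go_gl'] * go ** 2 - r_gl,                # gasoline formed, then overcracked
        k['go_lpg'] * go ** 2 + k['gl_lpg'] * gl,   # LPG
        k['go_dg'] * go ** 2 + k['gl_dg'] * gl,     # dry gas
        k['go_ck'] * go ** 2 + k['gl_ck'] * gl,     # coke
    ]
```

By construction, mass is conserved (the five derivatives sum to zero), and the gasoline lump is both a product and a reactant, which is what creates the yield maximum the MOO exploits.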


3.4 Optimization of Separation Equipment and Network Problems

Separation equipment and networks play an important role in chemical engineering. The overall cost effectiveness of chemical plants depends significantly on the effective application of separation units. This leads to several interesting optimization studies, including the optimization of scrubbers, cyclones, adsorbers, froth flotation units, etc. Among these, the optimization of froth flotation is described in detail, while the MOO of other units, including heat exchanger networks (HENs), is described briefly.

Froth flotation is used for separating valuable minerals, associated minerals, and gangue from their finely ground ores. The process utilizes differences in the surface properties of the minerals involved. Finely ground ore comprising a mixture of minerals is suspended in an aerated liquid in which a froth is generated using frothing agents (surfactants). Under such conditions, one of the constituents of the mixture, which is more difficult to wet by the liquid, tends to adhere preferentially to the gas bubbles. The bubble-particle aggregates, having a lower effective density, rise to the surface and float, leading to a froth rich in a given constituent. The froth is continuously discharged over an overflow weir as a concentrated stream. Efficient separation depends on the percent loading of solids, the type of frothing agent, the rate of aeration, the pH of the aerated liquid, etc. A typical flotation cell comprises a single feed stream and two exit streams: a mineral-rich concentrate stream and gangue-rich tailings. The performance of the flotation cell is measured using two parameters, the recovery, R_c, and the grade, G. The former is the ratio of the flow rate of solids in the concentrate stream to that in the feed stream, whereas the latter is the fraction of valuable mineral in the concentrate stream. Usually, several flotation cells are used in combination (a circuit) to increase the overall separation efficiency. A common strategy to improve the grade is to introduce the feed to a rougher cell, refloat its concentrate stream in successive cleaner cells, and refloat the gangue-rich tailings in scavenger cells. Mineral beneficiation is a billion-dollar business, processing several thousands of tons of ore per year. Clearly, even a marginal improvement in the overall efficiency can have a significant economic impact.
Moreover, mineral reserves are gradually depleting, and it is becoming increasingly important to extract lower-quality ores. This gives significant impetus to the optimal design of froth flotation circuits.
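The two performance measures defined above are simple ratios of stream flow rates; the numbers in the usage example below are invented for illustration:

```python
def recovery(conc_solids, feed_solids):
    """R_c: solids flow rate in the concentrate stream divided by the
    solids flow rate in the feed stream."""
    return conc_solids / feed_solids

def grade(conc_valuable, conc_solids):
    """G: mass fraction of valuable mineral in the concentrate stream."""
    return conc_valuable / conc_solids

# e.g., 100 t/h of feed solids, 20 t/h of concentrate solids of which
# 18 t/h is valuable mineral (hypothetical numbers)
rc = recovery(20.0, 100.0)
g = grade(18.0, 20.0)
```

The example makes the trade-off concrete: sending more material to the concentrate raises R_c but dilutes G, which is why the two appear as competing objectives in the MOO below.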

Guria et al. [30] modeled and optimized (for SOO and MOO) a general circuit [46] having two flotation units. The schematic of such a simple circuit is shown in Fig. 3.5. A feed of just two species, valuable mineral and gangue, is fed to the circuit. The objective of the study is to maximize the overall recovery (R_c) of the concentrate while maintaining the desired grade (G_d) of the concentrate and a desired value of the total volume (V_d) of the cells. The decision variables are the two residence times, (τ₁, τ₂)ᵀ, and the cell-linkage (stream-splitting) fractions, (α₁₀, α₁₁, α₁₂, α₂₀, α₂₁, α₂₂)ᵀ and (β_F1, β_F2, β₁₀, β₁₁, β₁₂, β₂₀, β₂₁, β₂₂)ᵀ. The SOO formulation is given as:


    Fig. 3.5 Generalized froth flotation circuit (adapted from Guria et al. [30]) with two cells

Objective function:

    Max I₁(τ, α, β) = R_c   (3.31)

Subject to:

Constraints:

    G = G_d;  V = V_d   (3.32)

    Σ_{i=1}^{m} β_F,i = 1.0   (3.33)

    Σ_{i=1}^{m} α_k,i = 1.0;  k = 1, 2, ..., m   (3.34)

    Σ_{i=1}^{m} β_k,i = 1.0;  k = 1, 2, ..., m   (3.35)


Fig. 3.6 Pareto-optimal front (adapted from Guria et al. [30]) for the MOO of a froth flotation circuit

    Model equations [30]   (3.36)

Bounds:

    0 ≤ α ≤ 1;  0 ≤ β ≤ 1   (3.37)

    τ_i,L ≤ τ_i ≤ τ_i,U;  i = 1, 2   (3.38)

Here, m represents the number of cells (= 2). Equations (3.33)–(3.35) ensure that the fractions of each set of split streams sum to 1. The problem comprises a total of 16 decision variables and five constraints; the other details can be obtained from Guria et al. [30]. These workers solved the problem using NSGA-II-mJG. The results obtained were superior to those of Mehrotra and Kapoor [46] using the conventional Luus–Jaakola [43] technique (clearly, the latter algorithm converged to local optima instead of the global optimum). The study was extended to multiple objectives, with the maximization of the overall grade (G) as the second objective function. The Pareto-optimal front for this MOO study is shown in Fig. 3.6. The study was further extended to the MOO of an industrial fluorspar beneficiation plant [32].

Cyclone separators (or just cyclones) are frequently used for vapor-solid separations in the chemical industry. Most of the early designs were based on practical experience; these crude designs, however, required further refinement, and in today's competitive industrial environment the optimization of these units is desired. One such study was carried out by Ravi et al. [63], who optimized a train of several industrial cyclone units using NSGA. The MOO problem comprises two objectives: the maximization of the overall collection efficiency and the minimization of the pressure drop. The decision variables used were the number of cyclones and several geometric parameters of the cyclones, e.g., the diameter of the cyclone, the diameter of the exit pipe, the diameter of the base of the cyclone, the total height of the cyclone, the depth of the exit pipe, the height of the cylindrical portion of the cyclone,


Fig. 3.7 A typical simple heat exchanger network (adapted from Agarwal and Gupta [2])

the height of the cyclone inlet, the width of the cyclone inlet, and the inlet vapor velocity. Pareto-optimal fronts were obtained using NSGA. The study gave the important insight that the diameters of the cyclone and of the vortex finder, and the number of cyclones used in parallel, are the critical parameters deciding the optimal performance. More recently, the MOO of cyclone separators using GA has been investigated by several researchers [55, 65]. Ravi et al. [64] further extended their study to the MOO of venturi scrubbers, which are used for the separation of gaseous pollutants. The objectives used were again the maximization of the overall collection efficiency and the minimization of the pressure drop, and Pareto-optimal fronts for this system were obtained. Details can be found in the respective references.
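The Pareto fronts reported in these studies are the non-dominated subsets of the evaluated designs. A minimal filter for (efficiency, pressure drop) pairs, maximizing the former and minimizing the latter (the design points below are invented):

```python
def pareto_front(points):
    """Extract the non-dominated set from (efficiency, pressure_drop)
    pairs, where efficiency is maximized and pressure drop minimized.
    This is the filtering step behind every Pareto plot in this chapter."""
    front = []
    for eff, dp in points:
        dominated = any(e2 >= eff and p2 <= dp and (e2 > eff or p2 < dp)
                        for e2, p2 in points)
        if not dominated:
            front.append((eff, dp))
    return sorted(front)

# hypothetical cyclone designs: (collection efficiency, pressure drop in Pa)
designs = [(0.90, 1200.0), (0.95, 2000.0), (0.85, 900.0),
           (0.90, 1500.0), (0.99, 3500.0)]
```

Here (0.90, 1500.0) is dominated by (0.90, 1200.0), which achieves the same efficiency at a lower pressure drop, so it is the only design filtered out.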

The principle of adsorption is used in a variety of separation processes. Adsorption in a chromatographic process, such as a simulated moving bed (SMB) system or the Varicol process, is one such interesting operation. These are used in the pharmaceutical sector and for large-scale fractionations such as those of xylene isomers or sugar isomers. Among the earliest MOO studies of this operation were those reported by Zang et al. [77, 78] and Subramani et al. [71]. The objectives used were the simultaneous maximization of the productivity and of the purity. Pareto-optimal fronts were obtained using NSGA. The results reported by Subramani et al. [71] show significant improvement for both the SMB and Varicol processes. These studies were extended by several other researchers from the same group.

    GA has also been applied to several other separation processes such as membraneseparations [76], desalination [31], and crude distillation [37]. Details may beobtained from the respective references.

Network systems similar to the froth flotation circuits discussed above are quite commonly encountered in chemical engineering. Important examples are heat exchanger networks (HENs) and networks incorporating several heat exchangers and distillation units. The optimal design of heat exchanger networks is an important chemical engineering problem. Recently, Agarwal and Gupta [2] optimized heat exchanger networks using NSGA-II-sJG and NSGA-II-saJG. A typical simple heat exchanger network studied by these workers is shown in Fig. 3.7. It comprises three cold streams (the upper three horizontal lines, with arrows pointing to the left) and three hot process streams (the lower three horizontal lines, with arrows pointing to the right). The heat exchange between these streams is shown by vertical


Fig. 3.8 Pareto-optimal front for the MOO of a heat exchanger network (adapted from Agarwal and Gupta [2])

connecting lines. The objectives of the study are the minimization of the annualized cost and the minimization of the total hot and cold utility (steam and water) required, the latter fast becoming a scarce natural resource. The decision variables used are the number of intermediate heat exchangers (which decrease the utility requirements but increase the capital cost) and the intermediate temperatures of the streams. An important point in such problems is that, since the number of heat exchangers is not fixed, the number of decision variables differs from chromosome to chromosome; that is, the length of the chromosome changes dynamically. The sJG and saJG adaptations handle such unequal chromosome lengths. One of the formulations solved by Agarwal and Gupta [2] is:

Objective functions:

    Min I₁ = Annualized cost   (3.39)

    Min I₂ = Total requirement of the hot + cold utility (kW)   (3.40)

Subject to:

Constraints:

    Model equations (Agarwal and Gupta [2])   (3.41)
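The unequal-length situation can be sketched with a one-point crossover in which each parent is cut at its own point, so children can inherit a different number of exchanger genes than either parent. The gene encoding is purely illustrative, and this is not the sJG/saJG operator itself:

```python
import random

def crossover_variable_length(mom, dad, rng):
    """One-point crossover for chromosomes of unequal length: each
    parent is cut at its own randomly chosen point, and the tails are
    swapped, so child lengths need not match either parent's."""
    cut_m = rng.randrange(1, len(mom))
    cut_d = rng.randrange(1, len(dad))
    child1 = mom[:cut_m] + dad[cut_d:]
    child2 = dad[:cut_d] + mom[cut_m:]
    return child1, child2
```

Note that the multiset of genes is preserved across the pair of children, while their lengths (i.e., the number of exchangers each design carries) can change, which is exactly what a fixed-length GA representation cannot express.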

Complete details of the model can be obtained from Agarwal and Gupta [2]. The MOO results are shown in Fig. 3.8, where they are also compared with the SOO results obtained by the heuristic approach of Linnhoff and Ahmed [42]. The MOO results give an important insight for selecting the operating point, which in this case is to operate the system at a slightly higher annual cost, since this reduces the total utility requirement from about 58,000 kW for the single-objective (minimum-cost) solution to about 54,000 kW (see Fig. 3.8). Several studies on separation units have been listed extensively by Masuduzzaman and Rangaiah [45] and Sharma and Rangaiah [68].
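Selecting such an operating point from a Pareto front can be automated with a simple marginal trade-off heuristic; this is illustrative only, not the selection procedure of [2], and the front values below echo the utility figures quoted above with invented costs:

```python
def pick_operating_point(front):
    """Choose the point on a (cost, utility) Pareto front with the best
    marginal trade-off: the largest utility reduction per unit of extra
    cost relative to the cheapest design."""
    front = sorted(front)            # ascending annualized cost
    c0, u0 = front[0]                # cheapest (min-cost) design
    best, best_ratio = front[0], 0.0
    for c, u in front[1:]:
        ratio = (u0 - u) / (c - c0)  # kW saved per unit of extra cost
        if ratio > best_ratio:
            best, best_ratio = (c, u), ratio
    return best

# hypothetical (annualized cost, total utility in kW) Pareto points
front = [(1.00e6, 58000.0), (1.05e6, 54000.0), (1.30e6, 53500.0)]
```

On this front the heuristic picks the middle point: a small cost increase buys a 4,000 kW utility reduction, whereas the further step to the most expensive design saves little additional utility.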


    Fig. 3.9 A simple crude oil scheduling problem (adapted from Ramteke and Srinivasan [62])

3.5 Planning and Scheduling Optimization of Chemical Processes

Scheduling and planning problems are conventionally optimized [47] using linear programming (LP) solvers. Recently, however, the use of evolutionary algorithms such as GA has become popular for solving such problems because of the distinct advantages they offer for problems involving multiple objectives. Several studies using hybrid LP-GA approaches have also been reported. The crude oil scheduling optimization of a marine-access refinery is explained in detail below.

Crude oil scheduling in marine-access refineries involves the unloading of crude from ships into storage tanks and, thereafter, the charging of the crude to crude distillation units (CDUs). Usually, crudes arrive in very large crude carriers (VLCCs) with compartments for multiple crude parcels, or in smaller, single-parcel ships. VLCCs are docked off-shore at a single buoy mooring (SBM) station, whereas small ships can be unloaded directly at on-shore jetties. The operation involves the simultaneous unloading of several ships. The continuous fluctuation in oil prices makes the refinery business highly agile in nature. Also, l