Practical Optimization Methods: With Mathematica Applications

List Price: $89.95
Your Price: $64.59

Rating: 5 stars
Summary: My Optimisation Companion
Review: Practical Optimization Methods, by M. Asghar Bhatti

This is my favorite optimisation book. I recommend it to anyone interested in the application of optimisation techniques, particularly those in industry. This book has been a constant companion in my optimisation adventure and, unlike other books, it has helped me firmly establish a solid foundation and understanding of the various optimisation techniques and the theories behind them. Believe me, I can now even read the books I had shelved in the past because they were cluttered with cryptic mathematical statements. They don't scare me anymore.

Bhatti wisely uses Mathematica as the teaching platform, and the accompanying OptimizationToolbox software allows one to brush aside the cryptic mathematical statements. The reader can concentrate on the concepts, relegating the mathematical manipulations to Mathematica and the functions of the OptimizationToolbox. What I like about this book is that it also shows how the Taylor series, the quadratic form and convexity requirements are put into practice to create an iterative scheme for solving a system of non-linear equations. The OptimizationToolbox and the built-in Mathematica functions seamlessly pace the reader through the mathematical preliminaries. By the end of Chapter 3, the reader should be in good shape to move on to the more serious material.
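
To give a flavour of the kind of iterative scheme involved, here is a minimal sketch in a current Mathematica session using only built-in functions; it is not code from the book or its OptimizationToolbox, and the little system of equations is invented for illustration:

  (* Newton iteration for a nonlinear system, built from a first-order
     Taylor expansion: solve J(v).d = -F(v), then update v -> v + d *)
  sys[{x_, y_}] := {x^2 + y^2 - 4, x*y - 1};           (* example system *)
  jac[v_] := D[sys[{x, y}], {{x, y}}] /. Thread[{x, y} -> v];
  newtonStep[v_] := v + LinearSolve[jac[v], -sys[v]];
  NestList[newtonStep, {2.0, 0.5}, 5]                   (* successive iterates *)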

Chapter 4 deals with optimality conditions, starting with the optimality conditions for unconstrained optimisation problems. These conditions, albeit slightly more involved in computation, are essentially the same as the optimality conditions for single-variable functions from high-school days. The "slightly involved" computations are those of the gradient (the first-order necessary condition) and the Hessian (the second-order condition). Mathematica graphics are put to great effect to help visualize the meaning of these conditions.
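
As a rough illustration of those first- and second-order checks in plain Mathematica (again my own sketch on an arbitrary example objective, not the book's code):

  (* First- and second-order optimality checks for an unconstrained problem *)
  obj = (x - 1)^2 + 2 (y + 3)^2;                (* example objective *)
  grad = Grad[obj, {x, y}];                     (* first-order condition: grad == 0 *)
  hess = D[obj, {{x, y}, 2}];                   (* Hessian matrix *)
  stationary = Solve[Thread[grad == 0], {x, y}]
  Eigenvalues[hess /. First[stationary]]        (* all positive => local minimum *)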

The additive property of constraints, which is dealt with in graphic detail, extends the earlier ideas behind the optimality conditions for unconstrained optimisation to constrained optimisation problems.
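
A minimal sketch of the same idea for an equality-constrained problem, using only built-in Mathematica functions on a made-up example, might look like this:

  (* Lagrange conditions for min f subject to g == 0: the constraint
     gradient is added to the objective gradient via a multiplier lam *)
  fc = x^2 + y^2;  gc = x + y - 2;              (* example problem *)
  lag = fc + lam*gc;
  Solve[Join[Thread[Grad[lag, {x, y}] == 0], {gc == 0}], {x, y, lam}]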

The introduction to Chapter 5 gives an excellent overview of the issues in solving unconstrained problems. Basically, all solution schemes covered in this chapter involve two steps. The first is a simple iterative scheme, which requires a direction and a step length. The second is a termination condition: the iteration stops when the gradient of the objective function, which should be zero at the optimal point, is within a specified tolerance of zero.
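
A bare-bones version of that two-step structure, written against built-in Mathematica only (the fixed step length, tolerance and test function here are my own choices, not the book's):

  (* Generic descent iteration: choose a direction and a step length,
     stop when the gradient norm is within tolerance of zero *)
  fObj[{x_, y_}] := (x - 3)^2 + 10 (y + 1)^2;
  gradObj[v_] := Grad[fObj[{x, y}], {x, y}] /. Thread[{x, y} -> v];
  descend[v0_, step_, tol_] :=
    NestWhile[# - step*gradObj[#] &, v0, Norm[gradObj[#]] > tol &, 1, 500];
  descend[{0., 0.}, 0.04, 10^-6]                (* fixed-step steepest descent *)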

The process of computing the step length for a particular search direction is known as the line search. The line search methods covered (with Mathematica algorithms) include the analytical line search, equal interval search, section search, the Golden Section search, the Quadratic Interpolation Method and the Approximate Line Search based on Armijo's rule.
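
As an example of one of those line searches, here is a generic golden-section sketch; it is my own code against built-in Mathematica, not the OptimizationToolbox version, and goldenSection and the test function are just illustrative names:

  (* Golden-section search for the minimum of a one-variable function on [a, b] *)
  goldenSection[phi_, {a0_, b0_}, tol_] :=
    Module[{r = (Sqrt[5.] - 1)/2, a = a0, b = b0, c, d},
      While[b - a > tol,
        c = b - r (b - a); d = a + r (b - a);
        If[phi[c] < phi[d], b = d, a = c]];
      (a + b)/2];
  goldenSection[Function[t, (t - 0.7)^2 + 1], {0, 2}, 10^-6]   (* returns ~0.7 *)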

As for the search direction, one obvious choice is the direction of greatest negative change - the Steepest Descent Method. The performance of this method can suffer badly, as its zigzag search pattern slows to a crawl as it approaches the optimal point. One improvement is to retain some portion of the previous search direction, so that successive search directions are not perpendicular to each other but somewhere in between. This approach of adding some portion of the previous direction is known as the Conjugate Gradient Method. The two such schemes covered and included as Mathematica functions are the Fletcher-Reeves and Polak-Ribière schemes. Other numerical methods covered include the Modified Newton and Quasi-Newton Methods. One drawback of the Newton approach is the computation of the Hessian matrix at each iteration step. The Quasi-Newton Methods do not require the computation of the Hessian matrix; instead they use inverse Hessian update formulas. Two such methods covered are the DFP (Davidon, Fletcher and Powell) update and the BFGS (Broyden, Fletcher, Goldfarb and Shanno) update. Don't be intimidated by all this jargon; Mathematica functions, including graphics functions, are provided to give step-by-step explanations and presentations of the various concepts.
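
The same families of methods are also reachable through Mathematica's built-in FindMinimum; the Method names below are standard Mathematica options rather than the book's functions, and the banana-valley test function is simply a common illustration:

  (* Built-in unconstrained solvers on a classic banana-valley test function *)
  rosen = (1 - x)^2 + 100 (y - x^2)^2;
  FindMinimum[rosen, {{x, -1.2}, {y, 1.}}, Method -> "ConjugateGradient"]
  FindMinimum[rosen, {{x, -1.2}, {y, 1.}}, Method -> "QuasiNewton"]   (* BFGS-style update *)
  FindMinimum[rosen, {{x, -1.2}, {y, 1.}}, Method -> "Newton"]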

The section on Linear Programming is extensive in comparison to the other chapters. I was tempted to skim over this LP section because the technique is well known and there are many industry-standard LP solvers on the market, so why spend too much time on it? However, my curiosity got the better of me, and I must confess that the combination of the accompanying OptimizationToolbox and Mathematica graphics makes revising Linear Programming entertaining and interesting. The section starts with an overview of the issues involved in solving an underdetermined system of linear equations, going over Gauss-Jordan elimination, LU decomposition and the introduction of slack variables to convert the LP problem into its standard form. The simplex algorithm is introduced in three styles: Simplex Tableau, Basic Simplex and Revised Simplex. The first two styles, provided as Mathematica functions, are intended to show the sequence of steps of the simplex algorithm. For large problems, however, these LP methods may take a long time, and researchers have developed better search methods such as the interior point method. The interior point method, as its name implies, starts from an interior feasible point and takes appropriate steps along descent directions towards the optimal point.
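
For comparison, Mathematica's built-in LinearProgramming exposes both simplex-style and interior-point solvers; the tiny problem below is made up, and the Method option is a standard Mathematica setting rather than one of the book's functions:

  (* Minimize cost.x subject to constr.x >= rhs and x >= 0 *)
  cost = {2, 3};
  constr = {{1, 1}, {1, 2}};
  rhs = {4, 6};
  LinearProgramming[cost, constr, rhs]                             (* automatically chosen method *)
  LinearProgramming[cost, constr, rhs, Method -> "InteriorPoint"]  (* interior-point solver *)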

Chapters 8 and 9 adequately cover quadratic programming and constrained nonlinear problems. However, they concentrate only on local optimisation techniques. Inclusion of global optimisation methods such as Simulated Annealing (SA), Genetic Algorithms (GA), Discrete Gradient Methods (DGM), and the Hooke-Jeeves, Nelder-Mead and Powell methods would have made the book a complete guide to practical optimisation.
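
Readers who want to experiment with some of those methods can at least try Mathematica's built-in NMinimize, which offers simulated-annealing and Nelder-Mead searches; the constrained test problem below is my own invention and none of this comes from the book:

  (* Direct-search and stochastic methods available through built-in NMinimize *)
  himm = (x^2 + y - 11)^2 + (x + y^2 - 7)^2;     (* Himmelblau-style test surface *)
  NMinimize[{himm, x^2 + y^2 <= 16}, {x, y}, Method -> "SimulatedAnnealing"]
  NMinimize[{himm, x^2 + y^2 <= 16}, {x, y}, Method -> "NelderMead"]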

My favorite Optimisation Book - Clear and Useful


© 2004, ReviewFocus or its affiliates