A Globally Convergent LCL Method for Nonlinear Optimization
Mathematics – Optimization and Control
Scientific paper
2003-01-10
34 pages
For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods sequentially minimize a Lagrangian function subject to linearized constraints. These methods converge rapidly near a solution but may not be reliable from arbitrary starting points. The well-known example MINOS has proven effective on many large problems. Its success motivates us to propose a globally convergent variant. Our stabilized LCL method possesses two important properties: the subproblems are always feasible, and they may be solved inexactly. These features are present in MINOS only as heuristics. The new algorithm has been implemented in MATLAB, with the option to use either the MINOS or SNOPT Fortran codes to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a nonlinear subset of the COPS, CUTE, and HS test-problem sets, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
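To make the approach concrete, the following is a sketch of a single LCL subproblem in standard notation (an illustration only, not taken from the paper): $f$ is the objective, $c$ the vector of nonlinear constraints with Jacobian $J$, $(x_k, y_k)$ the current primal-dual estimate, and $\rho \ge 0$ an optional augmented-Lagrangian penalty.
\[
\begin{array}{ll}
\displaystyle\min_{x} & f(x) - y_k^{T} d_k(x) + \tfrac{\rho}{2}\,\|d_k(x)\|_2^2 \\[4pt]
\mbox{subject to} & c(x_k) + J(x_k)(x - x_k) = 0, \qquad \ell \le x \le u,
\end{array}
\qquad
d_k(x) = c(x) - c(x_k) - J(x_k)(x - x_k),
\]
where $d_k(x)$ measures the departure of $c$ from its linearization at $x_k$. With $\rho = 0$ this is Robinson's original LCL subproblem, while MINOS works with the augmented form; the stabilized variant described in the abstract further modifies the subproblem (for example, by relaxing the linearized constraints) so that it is always feasible and may be solved inexactly.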
Michael P. Friedlander
Michael A. Saunders