Active-set method
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem.
An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints
- [math]\displaystyle{ g_1(x) \ge 0, \dots, g_k(x) \ge 0 }[/math]
that define the feasible region, that is, the set of all x over which to search for the optimal solution. Given a point [math]\displaystyle{ x_0 }[/math] in the feasible region, a constraint
- [math]\displaystyle{ g_i(x) \ge 0 }[/math]
is called active at [math]\displaystyle{ x_0 }[/math] if [math]\displaystyle{ g_i(x_0) = 0 }[/math], and inactive at [math]\displaystyle{ x_0 }[/math] if [math]\displaystyle{ g_i(x_0) \gt 0. }[/math] Equality constraints are always active. The active set at [math]\displaystyle{ x_0 }[/math] is made up of those constraints [math]\displaystyle{ g_i }[/math] that are active at the current point (Nocedal & Wright 2006).
The active set is particularly important in optimization theory, as it determines which constraints influence the final result of the optimization. For example, in solving a linear programming problem, the active set gives the hyperplanes that intersect at the solution point. In quadratic programming, as the solution is not necessarily on one of the edges of the bounding polygon, an estimate of the active set gives us a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
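As a concrete illustration of the definition, the following minimal sketch (in Python, with a hypothetical feasible region chosen purely for illustration) reports which constraints hold with equality, i.e. are active, at a given point:

```python
import numpy as np

# Hypothetical feasible region: g1(x) = x1 >= 0, g2(x) = x2 >= 0,
# g3(x) = 1 - x1 - x2 >= 0 (a triangle in the plane).
constraints = [
    lambda x: x[0],
    lambda x: x[1],
    lambda x: 1.0 - x[0] - x[1],
]

def active_set(x, tol=1e-9):
    """Indices i with g_i(x) = 0 (up to a tolerance), i.e. the active constraints at x."""
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= tol]

print(active_set(np.array([0.0, 0.3])))  # [0]     only g1 is active
print(active_set(np.array([0.0, 1.0])))  # [0, 2]  g1 and g3 are active (a vertex)
print(active_set(np.array([0.2, 0.3])))  # []      interior point, no constraint is active
```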
Active-set methods
In general an active-set algorithm has the following structure (a code sketch of this loop for the quadratic programming case is given after the outline):
- Find a feasible starting point
- repeat until "optimal enough"
- solve the equality problem defined by the active set (approximately)
- compute the Lagrange multipliers of the active set
- remove a subset of the constraints with negative Lagrange multipliers
- search for infeasible constraints
- end repeat
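The following is a minimal sketch of this loop for the linearly constrained convex quadratic program: minimize ½xᵀGx + cᵀx subject to Ax ≥ b. It assumes that G is symmetric positive definite, that the rows of A indexed by the working set remain linearly independent, and that a feasible starting point is supplied; the function names and the handling of tolerances are invented for illustration, not taken from any particular library:

```python
import numpy as np

def solve_eqp(G, g, A_w):
    """Solve min 0.5 p'Gp + g'p  s.t.  A_w p = 0 via its KKT system.
    Returns the step p and the multipliers of the working-set constraints."""
    n, m = G.shape[0], A_w.shape[0]
    if m == 0:
        return np.linalg.solve(G, -g), np.empty(0)
    K = np.block([[G, -A_w.T],
                  [A_w, np.zeros((m, m))]])
    rhs = np.concatenate([-g, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

def active_set_qp(G, c, A, b, x0, tol=1e-9, max_iter=100):
    """Primal active-set sketch for: min 0.5 x'Gx + c'x  s.t.  A x >= b.
    Assumes G is symmetric positive definite and x0 is feasible."""
    x = x0.astype(float)
    # start with the constraints active at x0 as the working set
    W = [i for i in range(len(b)) if abs(A[i] @ x - b[i]) < tol]
    for _ in range(max_iter):
        g = G @ x + c
        A_w = A[W] if W else np.zeros((0, x.size))
        # solve the equality-constrained subproblem on the working set
        p, lam = solve_eqp(G, g, A_w)
        if np.linalg.norm(p) < tol:
            # stationary on the working set: inspect the Lagrange multipliers
            if len(W) == 0 or np.all(lam >= -tol):
                return x, np.array(W)          # KKT point: optimal
            W.pop(int(np.argmin(lam)))         # drop the most negative multiplier
        else:
            # longest step along p that keeps all other constraints feasible
            alpha, blocking = 1.0, None
            for i in range(len(b)):
                if i not in W and A[i] @ p < -tol:
                    ratio = (A[i] @ x - b[i]) / (-(A[i] @ p))
                    if ratio < alpha:
                        alpha, blocking = ratio, i
            x = x + alpha * p
            if blocking is not None:
                W.append(blocking)             # add the blocking constraint
    return x, np.array(W)
```

Dropping the constraint with the most negative Lagrange multiplier and adding the first blocking constraint encountered are common but not obligatory choices; practical implementations refine both rules and update matrix factorizations incrementally rather than re-solving the KKT system from scratch at every iteration.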
Methods that can be described as active-set methods include:[1]
- Successive linear programming (SLP)
- Sequential quadratic programming (SQP)
- Sequential linear-quadratic programming (SLQP)
- Reduced gradient method (RG)
- Generalized reduced gradient method (GRG)
Performance
Consider the problem of linearly constrained convex quadratic programming. Under reasonable assumptions (the problem is feasible, the system of constraints is regular at every point, and the quadratic objective is strongly convex), the active-set method terminates after finitely many steps and yields a global solution to the problem. In theory the active-set method may perform a number of iterations exponential in the number of constraints m, like the simplex method, but its practical behaviour is typically much better.[2]
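As a small illustration of this finite-termination behaviour, the hypothetical active_set_qp sketch above can be run on a two-variable strongly convex quadratic program (the problem data below are invented for illustration):

```python
import numpy as np

# Minimize (x1 - 1)^2 + (x2 - 2.5)^2 subject to x1 >= 0, x2 >= 0, x1 + x2 <= 2,
# written as 0.5 x'Gx + c'x with G = 2I, c = (-2, -5), and the constraints as A x >= b.
G = 2.0 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
b = np.array([0.0, 0.0, -2.0])

# active_set_qp is the sketch from the previous section.
x_star, active = active_set_qp(G, c, A, b, x0=np.array([0.0, 0.0]))
print(x_star, active)  # approximately [0.25, 1.75]; only x1 + x2 <= 2 (index 2) is active
```

Starting from the vertex (0, 0), the method changes its working set a few times and stops at the unique global minimizer, as the finite-termination result predicts.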
References
- ↑ Nocedal & Wright 2006, pp. 467–480
- ↑ Nemirovsky and Ben-Tal (2023). "Optimization III: Convex Optimization". http://www2.isye.gatech.edu/~nemirovs/OPTIIILN2023Spring.pdf.
Bibliography
- Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming. Sigma Series in Applied Mathematics 3. Berlin: Heldermann Verlag. pp. xlviii+629. ISBN 3-88538-403-5. http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/. Retrieved 2010-04-03.
- Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-30303-1.
Original source: https://en.wikipedia.org/wiki/Active-set_method.