# Revised simplex method

In mathematical optimization, the **revised simplex method** is a variant of George Dantzig's simplex method for linear programming.
The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation. Instead of maintaining a tableau which explicitly represents the constraints adjusted to a set of basic variables, it maintains a representation of a basis of the matrix representing the constraints. The matrix-oriented approach allows for greater computational efficiency by enabling sparse matrix operations.^{[1]}

## Problem formulation

For the rest of the discussion, it is assumed that a linear programming problem has been converted into the following standard form:

- [math]\displaystyle{ \begin{array}{rl} \text{minimize} & \boldsymbol{c}^{\mathrm{T}} \boldsymbol{x} \\ \text{subject to} & \boldsymbol{Ax} = \boldsymbol{b}, \boldsymbol{x} \ge \boldsymbol{0} \end{array} }[/math]

where **A** ∈ ℝ^{m×n}. Without loss of generality, it is assumed that the constraint matrix **A** has full row rank and that the problem is feasible, i.e., there is at least one **x** ≥ **0** such that **Ax** = **b**. If **A** is rank-deficient, either there are redundant constraints, or the problem is infeasible. Both situations can be handled by a presolve step.

## Algorithmic description

### Optimality conditions

For linear programming, the Karush–Kuhn–Tucker conditions are both necessary and sufficient for optimality. The KKT conditions of a linear programming problem in standard form are

- [math]\displaystyle{ \begin{align} \boldsymbol{Ax} & = \boldsymbol{b}, \\ \boldsymbol{A}^{\mathrm{T}} \boldsymbol{\lambda} + \boldsymbol{s} & = \boldsymbol{c}, \\ \boldsymbol{x} & \ge \boldsymbol{0}, \\ \boldsymbol{s} & \ge \boldsymbol{0}, \\ \boldsymbol{s}^{\mathrm{T}} \boldsymbol{x} & = 0 \end{align} }[/math]

where **λ** and **s** are the Lagrange multipliers associated with the constraints **Ax** = **b** and **x** ≥ **0**, respectively.^{[2]} The last condition, which is equivalent to *s*_{i}*x*_{i} = 0 for all 1 ≤ *i* ≤ *n*, is called the *complementary slackness condition*.
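As a concrete check, these conditions can be verified numerically. The following sketch (plain Python with exact rational arithmetic; the candidate point is the optimum derived in the numerical example later in this article, and all names are illustrative) tests each condition in turn:

```python
from fractions import Fraction as F

# Problem data taken from the numerical example in this article.
c = [F(-2), F(-3), F(-4), F(0), F(0)]
A = [[F(3), F(2), F(1), F(1), F(0)],
     [F(2), F(5), F(3), F(0), F(1)]]
b = [F(10), F(15)]

# Candidate primal/dual point (the optimum found later in the article).
x = [F(0), F(0), F(5), F(5), F(0)]
lam = [F(0), F(-4, 3)]
s = [F(2, 3), F(11, 3), F(0), F(0), F(4, 3)]

# Check each KKT condition in turn.
primal_feasible = all(sum(A[i][j] * x[j] for j in range(5)) == b[i]
                      for i in range(2)) and all(xj >= 0 for xj in x)
dual_feasible = all(sum(A[i][j] * lam[i] for i in range(2)) + s[j] == c[j]
                    for j in range(5)) and all(sj >= 0 for sj in s)
complementary = all(s[j] * x[j] == 0 for j in range(5))

print(primal_feasible, dual_feasible, complementary)  # True True True
```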

By what is sometimes known as the *fundamental theorem of linear programming*, a vertex **x** of the feasible polytope can be identified by a basis **B** of **A** chosen from the latter's columns.^{[lower-alpha 1]} Since **A** has full rank, **B** is nonsingular. Without loss of generality, assume that **A** = [**B** **N**]. Then **x** is given by

**x**- [math]\displaystyle{ \boldsymbol{x} = \begin{bmatrix} \boldsymbol{x_B} \\ \boldsymbol{x_N} \end{bmatrix} = \begin{bmatrix} \boldsymbol{B}^{-1} \boldsymbol{b} \\ \boldsymbol{0} \end{bmatrix} }[/math]

where **x**_{B} ≥ **0**. Partition **c** and **s** accordingly into

**s**- [math]\displaystyle{ \begin{align} \boldsymbol{c} & = \begin{bmatrix} \boldsymbol{c_B} \\ \boldsymbol{c_N} \end{bmatrix}, \\ \boldsymbol{s} & = \begin{bmatrix} \boldsymbol{s_B} \\ \boldsymbol{s_N} \end{bmatrix}. \end{align} }[/math]

To satisfy the complementary slackness condition, let **s**_{B} = **0**. It follows that

- [math]\displaystyle{ \begin{align} \boldsymbol{B}^{\mathrm{T}} \boldsymbol{\lambda} & = \boldsymbol{c_B}, \\ \boldsymbol{N}^{\mathrm{T}} \boldsymbol{\lambda} + \boldsymbol{s_N} & = \boldsymbol{c_N}, \end{align} }[/math]

which implies that

- [math]\displaystyle{ \begin{align} \boldsymbol{\lambda} & = (\boldsymbol{B}^{\mathrm{T}})^{-1} \boldsymbol{c_B}, \\ \boldsymbol{s_N} & = \boldsymbol{c_N} - \boldsymbol{N}^{\mathrm{T}} \boldsymbol{\lambda}. \end{align} }[/math]
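These two computations can be sketched in plain Python using the data of the numerical example later in this article (a hand-rolled 2×2 Cramer's-rule solve stands in for a proper factorization of **B**; all names are illustrative):

```python
from fractions import Fraction as F

# Data from the numerical example in this article; initial basis {4, 5}.
c = [F(-2), F(-3), F(-4), F(0), F(0)]
A = [[F(3), F(2), F(1), F(1), F(0)],
     [F(2), F(5), F(3), F(0), F(1)]]
basis, nonbasis = [3, 4], [0, 1, 2]   # 0-based column indices

def solve2(M, y):
    """Solve a 2x2 system M z = y by Cramer's rule (illustration only)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(y[0] * M[1][1] - M[0][1] * y[1]) / det,
            (M[0][0] * y[1] - y[0] * M[1][0]) / det]

# Solve B^T lam = c_B.  Rows of B^T are the basic columns of A.
Bt = [[A[i][j] for i in range(2)] for j in basis]
lam = solve2(Bt, [c[j] for j in basis])

# Reduced costs of the nonbasic columns: s_N = c_N - N^T lam.
s_N = [c[j] - sum(A[i][j] * lam[i] for i in range(2)) for j in nonbasis]
print(lam == [0, 0], s_N == [-2, -3, -4])  # True True, matching the article
```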

If **s**_{N} ≥ **0** at this point, the KKT conditions are satisfied, and thus **x** is optimal.

### Pivot operation

If the KKT conditions are violated, a *pivot operation*, consisting of introducing a column of **N** into the basis at the expense of an existing column in **B**, is performed. In the absence of degeneracy, a pivot operation always results in a strict decrease in **c**^{T}**x**. Therefore, if the problem is bounded, the revised simplex method must terminate at an optimal vertex after repeated pivot operations because there are only a finite number of vertices.^{[4]}

Select an index *m* < *q* ≤ *n* such that *s*_{q} < 0 as the *entering index*. The corresponding column of **A**, **A**_{q}, will be moved into the basis, and *x*_{q} will be allowed to increase from zero. It can be shown that

- [math]\displaystyle{ \frac{\partial (\boldsymbol{c}^{\mathrm{T}} \boldsymbol{x})}{\partial x_q} = s_q, }[/math]

i.e., every unit increase in *x*_{q} results in a decrease of −*s*_{q} in **c**^{T}**x**.^{[5]} Since

- [math]\displaystyle{ \boldsymbol{B x_B} + \boldsymbol{A}_q x_q = \boldsymbol{b}, }[/math]

**x**_{B} must be correspondingly decreased by Δ**x**_{B} = **B**^{−1}**A**_{q}*x*_{q} subject to **x**_{B} − Δ**x**_{B} ≥ **0**. Let **d** = **B**^{−1}**A**_{q}. If **d** ≤ **0**, no matter how much *x*_{q} is increased, **x**_{B} − Δ**x**_{B} will stay nonnegative. Hence, **c**^{T}**x** can be arbitrarily decreased, and thus the problem is unbounded. Otherwise, select an index *p* = argmin_{1≤i≤m} {*x*_{i}/*d*_{i} | *d*_{i} > 0} as the *leaving index*. This choice effectively increases *x*_{q} from zero until *x*_{p} is reduced to zero while maintaining feasibility. The pivot operation concludes with replacing **A**_{p} with **A**_{q} in the basis.
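The ratio test above can be sketched as follows, using the first pivot of the numerical example later in this article, where **x**_{B} = [10 15]^{T} and **d** = [1 3]^{T} (names are illustrative):

```python
from fractions import Fraction as F

# First pivot of the numerical example: basis {4, 5}, entering column A_3.
x_B = [F(10), F(15)]
d = [F(1), F(3)]          # d = B^{-1} A_q; B is the identity here

# Ratio test: the smallest x_i / d_i over d_i > 0 gives the step length
# for the entering variable x_q and the position of the leaving variable.
candidates = [(x_B[i] / d[i], i) for i in range(len(d)) if d[i] > 0]
if not candidates:
    raise ValueError("d <= 0: the problem is unbounded")
step, p = min(candidates)
print(step, p)  # 5 1: x_q rises to 5 and the second basic variable leaves
```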

## Numerical example

Consider a linear program where

- [math]\displaystyle{ \begin{align} \boldsymbol{c} & = \begin{bmatrix} -2 & -3 & -4 & 0 & 0 \end{bmatrix}^{\mathrm{T}}, \\ \boldsymbol{A} & = \begin{bmatrix} 3 & 2 & 1 & 1 & 0 \\ 2 & 5 & 3 & 0 & 1 \end{bmatrix}, \\ \boldsymbol{b} & = \begin{bmatrix} 10 \\ 15 \end{bmatrix}. \end{align} }[/math]

Let

- [math]\displaystyle{ \begin{align} \boldsymbol{B} & = \begin{bmatrix} \boldsymbol{A}_4 & \boldsymbol{A}_5 \end{bmatrix}, \\ \boldsymbol{N} & = \begin{bmatrix} \boldsymbol{A}_1 & \boldsymbol{A}_2 & \boldsymbol{A}_3 \end{bmatrix} \end{align} }[/math]

initially, which corresponds to a feasible vertex **x** = [0 0 0 10 15]^{T}. At this moment,

- [math]\displaystyle{ \begin{align} \boldsymbol{\lambda} & = \begin{bmatrix} 0 & 0 \end{bmatrix}^{\mathrm{T}}, \\ \boldsymbol{s_N} & = \begin{bmatrix} -2 & -3 & -4 \end{bmatrix}^{\mathrm{T}}. \end{align} }[/math]

Choose *q* = 3 as the entering index. Then **d** = [1 3]^{T}, which means a unit increase in *x*_{3} results in *x*_{4} and *x*_{5} being decreased by 1 and 3, respectively. Therefore, *x*_{3} is increased to 5, at which point *x*_{5} is reduced to zero, and *p* = 5 becomes the leaving index.

After the pivot operation,

- [math]\displaystyle{ \begin{align} \boldsymbol{B} & = \begin{bmatrix} \boldsymbol{A}_3 & \boldsymbol{A}_4 \end{bmatrix}, \\ \boldsymbol{N} & = \begin{bmatrix} \boldsymbol{A}_1 & \boldsymbol{A}_2 & \boldsymbol{A}_5 \end{bmatrix}. \end{align} }[/math]

Correspondingly,

- [math]\displaystyle{ \begin{align} \boldsymbol{x} & = \begin{bmatrix} 0 & 0 & 5 & 5 & 0 \end{bmatrix}^{\mathrm{T}}, \\ \boldsymbol{\lambda} & = \begin{bmatrix} 0 & -4/3 \end{bmatrix}^{\mathrm{T}}, \\ \boldsymbol{s_N} & = \begin{bmatrix} 2/3 & 11/3 & 4/3 \end{bmatrix}^{\mathrm{T}}. \end{align} }[/math]

A positive **s**_{N} indicates that **x** is now optimal.
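The whole example can be reproduced end to end. The sketch below (plain Python with exact rational arithmetic; a Cramer's-rule 2×2 solve stands in for a real factorization, the most-negative-reduced-cost rule picks the entering index, and no anti-cycling safeguard is included) recovers the optimal vertex found above:

```python
from fractions import Fraction as F

def solve2(M, y):
    """Solve a 2x2 system M z = y by Cramer's rule (illustration only)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(y[0] * M[1][1] - M[0][1] * y[1]) / det,
            (M[0][0] * y[1] - y[0] * M[1][0]) / det]

def revised_simplex(c, A, b, basis):
    """Revised simplex iteration for a problem with m = 2 constraints."""
    m, n = len(A), len(c)
    while True:
        B = [[A[i][j] for j in basis] for i in range(m)]   # basis matrix
        Bt = [[A[i][j] for i in range(m)] for j in basis]  # its transpose
        x_B = solve2(B, b)                                 # x_B = B^{-1} b
        lam = solve2(Bt, [c[j] for j in basis])            # B^T lam = c_B
        nonbasis = [j for j in range(n) if j not in basis]
        s = {j: c[j] - sum(A[i][j] * lam[i] for i in range(m))
             for j in nonbasis}                            # reduced costs
        entering = [j for j in nonbasis if s[j] < 0]
        if not entering:                                   # s_N >= 0: optimal
            x = [F(0)] * n
            for i, j in enumerate(basis):
                x[j] = x_B[i]
            return x
        q = min(entering, key=lambda j: s[j])              # most negative s_q
        d = solve2(B, [A[i][q] for i in range(m)])         # d = B^{-1} A_q
        ratios = [(x_B[i] / d[i], i) for i in range(m) if d[i] > 0]
        if not ratios:
            raise ValueError("unbounded")
        _, p = min(ratios)                                 # ratio test
        basis[p] = q                                       # pivot

c = [F(-2), F(-3), F(-4), F(0), F(0)]
A = [[F(3), F(2), F(1), F(1), F(0)],
     [F(2), F(5), F(3), F(0), F(1)]]
b = [F(10), F(15)]
x = revised_simplex(c, A, b, [3, 4])
print(x == [0, 0, 5, 5, 0])  # True
```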

## Practical issues

### Degeneracy

Because the revised simplex method is mathematically equivalent to the simplex method, it also suffers from degeneracy, where a pivot operation does not result in a decrease in **c**^{T}**x**, and a chain of pivot operations causes the basis to cycle. A perturbation or lexicographic strategy can be used to prevent cycling and guarantee termination.^{[6]}

### Basis representation

Two types of linear systems involving **B** are present in the revised simplex method:

- [math]\displaystyle{ \begin{align} \boldsymbol{B z} & = \boldsymbol{y}, \\ \boldsymbol{B}^{\mathrm{T}} \boldsymbol{z} & = \boldsymbol{y}. \end{align} }[/math]

Instead of refactorizing **B**, usually an LU factorization is directly updated after each pivot operation, for which purpose there exist several strategies such as the Forrest–Tomlin and Bartels–Golub methods. However, the amount of data representing the updates, as well as numerical errors, builds up over time and makes periodic refactorization necessary.^{[1]}^{[7]}

## Notes and references

### Notes

- ↑ The same theorem also states that the feasible polytope has at least one vertex and that there is at least one vertex which is optimal.^{[3]}

### References

- ↑ ^{1.0} ^{1.1} Morgan 1997, §2.
- ↑ Nocedal & Wright 2006, p. 358, Eq. 13.4.
- ↑ Nocedal & Wright 2006, p. 363, Theorem 13.2.
- ↑ Nocedal & Wright 2006, p. 370, Theorem 13.4.
- ↑ Nocedal & Wright 2006, p. 369, Eq. 13.24.
- ↑ Nocedal & Wright 2006, p. 381, §13.5.
- ↑ Nocedal & Wright 2006, p. 372, §13.4.

### Bibliography

- Morgan, S. S. (1997). *A Comparison of Simplex Method Algorithms* (MSc thesis). University of Florida. Archived from the original on 7 August 2011.
- Nocedal, J.; Wright, S. J. (2006). Mikosch, T. V.; Resnick, S. I.; Robinson, S. M., eds. *Numerical Optimization*. Springer Series in Operations Research and Financial Engineering (2nd ed.). New York, NY, USA: Springer. ISBN 978-0-387-30303-1. https://www.springer.com/mathematics/book/978-0-387-30303-1.

Original source: https://en.wikipedia.org/wiki/Revised simplex method.