Big M method


In operations research, the Big M method is a method of solving linear programming problems using the simplex algorithm. The Big M method extends the simplex algorithm to problems that contain "greater-than" constraints. It does so by introducing artificial variables for those constraints and penalizing them in the objective with a large negative constant, which ensures that no artificial variable is part of an optimal solution, if one exists.

Algorithm

The simplex algorithm is the original and still one of the most widely used methods for solving linear maximization problems. However, to apply it, the origin (all decision variables equal to 0) must be a feasible point. This is the case only when all the constraints (except non-negativity) are less-than-or-equal constraints with a nonnegative constant on the right-hand side. The Big M method introduces surplus and artificial variables so that every constraint becomes an equality and an initial basic feasible solution is immediately available. The "Big M" refers to a large number associated with the artificial variables, represented by the letter M.
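
Schematically, for a problem of the form "maximize cᵀx subject to Ax ≥ b, x ≥ 0" with b ≥ 0, the augmented problem handed to the simplex algorithm is

[math]\displaystyle{ \text{maximize } c^\mathsf{T}x - M\textstyle\sum_i a_i \quad\text{subject to}\quad Ax - s + a = b, \quad x, s, a \geq 0, }[/math]

where s is the vector of surplus variables and a the vector of artificial variables; the starting basic feasible solution is x = 0, s = 0, a = b.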

The steps in the algorithm are as follows:

  1. Multiply each constraint by −1 where necessary so that the right-hand side is nonnegative.
  2. If the problem is a minimization, transform it to a maximization by multiplying the objective by −1.
  3. For any greater-than constraints, introduce surplus variables si and artificial variables ai (as shown below).
  4. Choose a large positive value M and add to the objective a term of the form −M multiplying each artificial variable.
  5. For less-than-or-equal constraints, introduce slack variables si so that all constraints become equalities.
  6. Solve the problem using the usual simplex method.

For example, x + y ≤ 100 becomes x + y + s1 = 100, whilst x + y ≥ 100 becomes x + y − s1 + a1 = 100. For the solution to be valid, the artificial variables must be driven to 0. The function to be maximised is therefore rewritten to include minus M times the sum of all the artificial variables. Simplex pivots (row reductions) are then applied to obtain the final solution.
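
A complete small example (constructed here purely for illustration) makes the transformation concrete. The problem

[math]\displaystyle{ \text{maximize } 2x + 3y \quad\text{subject to}\quad x + y \leq 4,\; x + 2y \geq 3,\; x, y \geq 0 }[/math]

becomes

[math]\displaystyle{ \text{maximize } 2x + 3y - Ma_1 \quad\text{subject to}\quad x + y + s_1 = 4,\; x + 2y - s_2 + a_1 = 3,\; x, y, s_1, s_2, a_1 \geq 0, }[/math]

and the simplex method can start from the basic feasible solution s1 = 4, a1 = 3 (all other variables 0).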

The value of M must be chosen sufficiently large that no artificial variable can remain in an optimal solution of the augmented problem whenever the original problem is feasible.

For a sufficiently large M, the optimal solution of the augmented problem contains an artificial variable at a positive value (in the basis) if and only if the original problem is infeasible.
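
The following Python sketch illustrates this check on the same toy problem as above, assuming SciPy is available; it uses scipy.optimize.linprog as an off-the-shelf LP solver rather than a hand-written simplex, since the point here is only the Big M construction and the inspection of the artificial variable.

  from scipy.optimize import linprog

  # Toy problem (illustrative, not from the article):
  #   maximize 2x + 3y   subject to  x + y <= 4,  x + 2y >= 3,  x, y >= 0
  # Big M form with variable order [x, y, s1, s2, a1]:
  #   maximize 2x + 3y - M*a1
  #   subject to  x + y + s1 = 4   and   x + 2y - s2 + a1 = 3
  # linprog minimizes, so the objective coefficients are negated.
  M = 1e6   # "large enough"; an excessively large M can cause numerical trouble
  c = [-2.0, -3.0, 0.0, 0.0, M]
  A_eq = [[1.0, 1.0, 1.0, 0.0, 0.0],
          [1.0, 2.0, 0.0, -1.0, 1.0]]
  b_eq = [4.0, 3.0]

  res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5, method="highs")
  x, y, s1, s2, a1 = res.x
  print("optimal value:", 2 * x + 3 * y)   # about 12, at (x, y) = (0, 4)
  print("artificial a1:", a1)              # about 0, so the original problem is feasible

If the constraints were made contradictory, a1 would stay positive in the reported optimum, signalling that the original problem is infeasible.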

Other usage

When used in the objective function, "Big M" sometimes refers to formulations of linear optimization problems in which violations of a constraint (or set of constraints) are penalized with a large positive constant, M.
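
For instance, a constraint [math]\displaystyle{ a^\mathsf{T}x \leq b }[/math] that may be violated at a price can be modelled, schematically, with a nonnegative violation variable v that is penalized in the objective:

[math]\displaystyle{ \text{maximize } c^\mathsf{T}x - Mv \quad\text{subject to}\quad a^\mathsf{T}x - v \leq b,\; v \geq 0. }[/math]

For sufficiently large M, v is driven to 0 whenever the constraint can be satisfied together with the rest of the model, and otherwise measures the size of the violation.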

When used in the constraints themselves, a common use of Big M is to enforce equality of two variables only when a certain binary variable takes one value, while leaving the variables "open" when the binary variable takes the opposite value. One instance of this is as follows: for a sufficiently large M and a binary variable z (0 or 1), the constraints

[math]\displaystyle{ x - y \leq M z }[/math]
[math]\displaystyle{ x - y \geq -M z }[/math]

ensure that when [math]\displaystyle{ z = 0 }[/math] then [math]\displaystyle{ x = y }[/math]. Otherwise, when [math]\displaystyle{ z = 1 }[/math], then [math]\displaystyle{ -M \leq x - y \leq M }[/math], indicating that the variables x and y can take any values so long as the absolute value of their difference is bounded by [math]\displaystyle{ M }[/math] (hence the need for M to be "large enough").
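
A small Python sketch of this construction, assuming SciPy's mixed-integer solver scipy.optimize.milp is available (the concrete numbers, such as fixing y at 5 and charging 10 for setting z = 1, are invented for illustration):

  import numpy as np
  from scipy.optimize import Bounds, LinearConstraint, milp

  # Toy model: y is fixed at 5, x may range over [0, 8], and the binary z
  # "unlocks" x from y at a cost of 10 in the objective.  The big-M pair
  # encodes  x - y <= M*z  and  x - y >= -M*z.
  M = 1000.0
  y_fixed = 5.0

  c = np.array([-2.0, 10.0])                  # minimize -2x + 10z, i.e. maximize 2x - 10z
  A = np.array([[1.0, -M],                    # x - M*z <= y_fixed
                [1.0,  M]])                   # x + M*z >= y_fixed
  con = LinearConstraint(A, lb=[-np.inf, y_fixed], ub=[y_fixed, np.inf])

  res = milp(c, constraints=con,
             integrality=[0, 1],              # x continuous, z integer (binary via the bounds below)
             bounds=Bounds([0.0, 0.0], [8.0, 1.0]))
  x, z = res.x
  print(f"z = {z:.0f}, x = {x}")              # here z = 0 is optimal, which forces x = y = 5

With z = 1 the two constraints become vacuous for any x in [0, 8], which is exactly the "open" behaviour described above; in this toy model the solver simply finds that paying 10 for that freedom is not worth it.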

See also

  • Two phase method (linear programming), another approach to solving problems with ≥ constraints
  • Karush–Kuhn–Tucker conditions, which apply to nonlinear optimization problems with inequality constraints

