Hardness of approximation

In computer science, hardness of approximation is a field that studies the algorithmic complexity of finding near-optimal solutions to optimization problems.

Scope

Hardness of approximation complements the study of approximation algorithms by proving, for certain problems, a limit on the factor to within which their solutions can be efficiently approximated. Typically such limits identify an approximation factor beyond which the problem becomes NP-hard, implying that a polynomial-time approximation algorithm achieving that factor is impossible unless P = NP. Some hardness of approximation results, however, are based on other hypotheses, most notably the unique games conjecture.
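
To make the notion of an approximation threshold precise for a minimization problem, the following standard formulation can be used (the symbols A, c, and OPT here are generic placeholders rather than notation from any particular result). A polynomial-time algorithm A is a c-approximation if

\[ A(x) \le c \cdot \mathrm{OPT}(x) \qquad \text{for every instance } x, \]

where OPT(x) denotes the value of an optimal solution. A hardness-of-approximation result with threshold c_0 then asserts that, unless P = NP, no polynomial-time c-approximation exists for any c < c_0.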

History

Since the early 1970s it has been known that many optimization problems cannot be solved in polynomial time unless P = NP, but for many of these problems the optimal solution can nevertheless be efficiently approximated to a certain degree. In the 1970s, Teofilo F. Gonzalez and Sartaj Sahni began the study of hardness of approximation by showing that certain optimization problems are NP-hard even to approximate to within a given approximation ratio. That is, for these problems there is a threshold such that any polynomial-time approximation algorithm achieving a ratio better than this threshold could be used to solve NP-complete problems in polynomial time.[1] In the early 1990s, with the development of PCP theory, it became clear that many more optimization problems are hard to approximate and that (unless P = NP) many known approximation algorithms achieve the best possible approximation ratio.

Hardness of approximation theory studies the approximation thresholds of such problems.

Examples

For an example of an NP-hard optimization problem that is hard to approximate, see set cover.
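
As an illustrative sketch (the function name and example data below are hypothetical, not taken from a specific source), the classical greedy algorithm for set cover repeatedly picks the set covering the most still-uncovered elements. It achieves an approximation ratio of roughly ln n, and hardness-of-approximation results show that no polynomial-time algorithm can do substantially better unless P = NP.

# Greedy approximation for set cover: repeatedly choose the set that
# covers the largest number of still-uncovered elements. The resulting
# cover is at most about ln(n) times larger than the optimum, which is
# essentially the best ratio achievable in polynomial time unless P = NP.
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Index of the set covering the most uncovered elements.
        best = max(range(len(sets)), key=lambda i: len(uncovered & set(sets[i])))
        if not uncovered & set(sets[best]):
            raise ValueError("the given sets do not cover the universe")
        chosen.append(best)
        uncovered -= set(sets[best])
    return chosen

# Example: universe {1,...,5}; greedy picks the sets at indices 0 and 2.
print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 4, 5}]))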

References

  1. "P-complete approximation problems", Journal of the ACM 23 (3): 555–565, 1976, doi:10.1145/321958.321975 .
