♯P

From HandWiki
Revision as of 22:44, 6 February 2024 by Dennis Ross (talk | contribs) (link)
Short description: Complexity class


In computational complexity theory, the complexity class #P (pronounced "sharp P" or, sometimes, "number P" or "hash P") is the set of counting problems associated with the decision problems in the class NP. More formally, #P is the class of function problems of the form "compute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, #P is not a class of decision problems but a class of function problems. The most difficult, representative problems of this class are the #P-complete problems.

Relation to decision problems

An NP decision problem is often of the form "Are there any solutions that satisfy certain constraints?" For example:
  • Are there any subsets of a list of integers that add up to zero?
  • Are there any Hamiltonian cycles in a given graph with cost less than 100?
  • Are there any variable assignments that satisfy a given CNF formula?
  • Are there any roots of a univariate real polynomial that are positive?

The corresponding #P function problems ask "how many" rather than "are there any". For example:

  • How many subsets of a list of integers add up to zero?
  • How many Hamiltonian cycles in a given graph have cost less than 100?
  • How many variable assignments satisfy a given CNF formula?
  • How many roots of a univariate real polynomial are positive?
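As an illustration of the counting/decision contrast, here is a brute-force sketch of the first example (exponential time, purely illustrative; an actual #P or NP algorithm is not being claimed):

```python
from itertools import chain, combinations

def count_zero_subsets(nums):
    """#P-style question: HOW MANY non-empty subsets sum to zero?
    Brute force over all 2^n - 1 non-empty subsets."""
    subsets = chain.from_iterable(
        combinations(nums, r) for r in range(1, len(nums) + 1))
    return sum(1 for s in subsets if sum(s) == 0)

def has_zero_subset(nums):
    """NP-style question: ARE THERE ANY such subsets?
    The decision answer falls out of the count for free."""
    return count_zero_subsets(nums) > 0

print(count_zero_subsets([1, -1, 2, -2]))  # → 3: {1,-1}, {2,-2}, {1,-1,2,-2}
```

Note how the decision version reduces to the counting version, which is the point made in the next section.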

Related complexity classes

A #P problem must be at least as hard as the corresponding NP problem: if it is easy to count answers, it is easy to tell whether any exist, since one can simply count them and check whether the count is greater than zero. Some of these problems, such as root finding, are easy enough to be in FP, while others are #P-complete.

One consequence of Toda's theorem is that a polynomial-time machine with a #P oracle (P#P) can solve all problems in PH, the entire polynomial hierarchy. In fact, the polynomial-time machine only needs to make one #P query to solve any problem in PH. This is an indication of the extreme difficulty of solving #P-complete problems exactly.

Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For more information on this, see #P-complete.

The closest decision problem class to #P is PP, which asks whether a majority (more than half) of the computation paths accept. This finds the most significant bit in the #P problem answer. The decision problem class ⊕P (pronounced "Parity-P") instead asks for the least significant bit of the #P answer.
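The "bits of the count" view can be made concrete with a tiny brute-force model counter (illustrative only; the formula and variable encoding below are made up for the example):

```python
from itertools import product

def count_satisfying(clauses, n):
    """#P-style count: number of assignments of n Boolean variables that
    satisfy a CNF.  A literal k > 0 means x_k; k < 0 means NOT x_k."""
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n))

# Example formula: x1 OR x2, satisfied by 3 of the 4 assignments.
clauses, n = [[1, 2]], 2
count = count_satisfying(clauses, n)
pp_bit = count > 2 ** n // 2   # PP: do more than half of the paths accept?
parity_bit = count % 2         # ⊕P: least significant bit of the count
print(count, pp_bit, parity_bit)  # → 3 True 1
```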

Formal definitions

#P is formally defined as follows:

#P is the set of all functions [math]\displaystyle{ f:\{0,1\}^* \to \mathbb{N} }[/math] such that there is a polynomial time nondeterministic Turing machine [math]\displaystyle{ M }[/math] such that for all [math]\displaystyle{ x \in \{0,1\}^* }[/math], [math]\displaystyle{ f(x) }[/math] equals the number of accepting branches in [math]\displaystyle{ M }[/math]'s computation graph on [math]\displaystyle{ x }[/math].[1]
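To see what "number of accepting branches" means operationally, here is a toy nondeterministic machine written as an explicit branch tree (a sketch, not a formal Turing machine; the particular acceptance rule is invented for illustration):

```python
def count_accepting(x, y=""):
    """Explicit branch tree of a toy polynomial-time NTM: on input x, it
    nondeterministically guesses a bitstring y with |y| = |x| (two branches
    per step) and accepts iff y <= x as a binary number.  The #P function
    it defines is therefore f(x) = int(x, 2) + 1."""
    if len(y) == len(x):
        return 1 if int(y, 2) <= int(x, 2) else 0  # leaf: accept or reject
    return (count_accepting(x, y + "0")   # branch: guess next bit is 0
          + count_accepting(x, y + "1"))  # branch: guess next bit is 1

print(count_accepting("101"))  # → 6 accepting branches: y in 000..101
```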

#P can also be defined equivalently in terms of a verifier. A decision problem is in NP if there exists a polynomial-time checkable certificate for a given problem instance; that is, NP asks whether there exists a proof of membership for the input that can be checked for correctness in polynomial time. The class #P asks how many certificates exist for a problem instance that can be checked for correctness in polynomial time.[1] In this context, #P is defined as follows:

#P is the set of functions [math]\displaystyle{ f: \{0,1\}^* \to \mathbb{N} }[/math] such that there exists a polynomial [math]\displaystyle{ p: \mathbb{N} \to \mathbb{N} }[/math] and a polynomial-time deterministic Turing machine [math]\displaystyle{ V }[/math], called the verifier, such that for every [math]\displaystyle{ x \in \{0,1\}^* }[/math], [math]\displaystyle{ f(x)=\Big| \big \{y \in \{0,1\}^{p(|x|)} : V(x,y)=1 \big \} \Big| }[/math].[2] (In other words, [math]\displaystyle{ f(x) }[/math] equals the number of length-[math]\displaystyle{ p(|x|) }[/math] certificates that the verifier accepts.)
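The verifier definition translates almost literally into code. The sketch below brute-forces the set in the definition; the particular verifier is a hypothetical toy chosen only to exercise the definition:

```python
from itertools import product

def sharp_p_count(x, verifier, p):
    """f(x) = |{ y in {0,1}^p(|x|) : V(x, y) = 1 }|: count by brute force
    the certificates that the polynomial-time verifier V accepts."""
    return sum(1 for bits in product("01", repeat=p(len(x)))
               if verifier(x, "".join(bits)))

# Hypothetical toy verifier with p(n) = n: accept a certificate y iff it
# has exactly as many 1s as the input x.
toy_verifier = lambda x, y: y.count("1") == x.count("1")
print(sharp_p_count("101", toy_verifier, lambda n: n))  # → C(3, 2) = 3
```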

History

The complexity class #P was first defined by Leslie Valiant in a 1979 article on the computation of the permanent of a square matrix, in which he proved that permanent is #P-complete.[3]

Larry Stockmeyer proved that for every #P problem [math]\displaystyle{ P }[/math] there exists a randomized algorithm using an oracle for SAT which, given an instance [math]\displaystyle{ a }[/math] of [math]\displaystyle{ P }[/math] and [math]\displaystyle{ \epsilon \gt 0 }[/math], returns with high probability a number [math]\displaystyle{ x }[/math] such that [math]\displaystyle{ (1-\epsilon) P(a) \leq x \leq (1+\epsilon) P(a) }[/math].[4] The runtime of the algorithm is polynomial in the size of [math]\displaystyle{ a }[/math] and in [math]\displaystyle{ 1/ \epsilon }[/math]. The algorithm is based on the leftover hash lemma.
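The hashing idea behind this result can be caricatured in a few lines. The sketch below is illustrative only: it replaces the SAT oracle with brute force, uses simple random XOR constraints rather than the pairwise-independent hash families of the actual proof, and makes no (1 ± ε) guarantee; each XOR constraint cuts the solution set roughly in half, so the number of constraints survivable before unsatisfiability estimates log2 of the count:

```python
import random
from itertools import product

def count_exact(clauses, n):
    """Brute-force model count of a CNF over n variables (ground truth)."""
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n))

def sat_oracle(clauses, xors, n):
    """Stand-in for the SAT oracle: is some assignment consistent with the
    CNF and every XOR (parity) constraint?  Brute force here; the real
    construction assumes a genuine NP oracle."""
    for bits in product([False, True], repeat=n):
        if (all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
                and all(sum(bits[i] for i in s) % 2 == b for s, b in xors)):
            return True
    return False

def approx_count(clauses, n, trials=15, rng=random.Random(0)):
    """Keep adding random XOR constraints until the oracle reports
    'unsatisfiable'; if that happens after m survivable constraints,
    2^m estimates the count.  Return the median over several trials."""
    estimates = []
    for _ in range(trials):
        xors, m = [], 0
        while m <= n:
            subset = [i for i in range(n) if rng.random() < 0.5]
            xors.append((subset, rng.randint(0, 1)))
            if not sat_oracle(clauses, xors, n):
                break
            m += 1
        estimates.append(2 ** m)
    estimates.sort()
    return estimates[len(estimates) // 2]
```

For the CNF consisting of the single clause x1 over four variables (true count 8), `approx_count` typically lands within a small constant factor of 8.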

References

  1. 1.0 1.1 Barak, Boaz (Spring 2006). "Complexity of counting". Princeton University. https://www.cs.princeton.edu/courses/archive/spring06/cos522/count.pdf. 
  2. Arora, Sanjeev; Barak, Boaz (2009). Computational Complexity: A Modern Approach. Cambridge University Press. p. 344. ISBN 978-0-521-42426-4. 
  3. Leslie G. Valiant (1979). "The Complexity of Computing the Permanent". Theoretical Computer Science (Elsevier) 8 (2): 189–201. doi:10.1016/0304-3975(79)90044-6. 
  4. Stockmeyer, Larry (November 1985). "On Approximation Algorithms for #P". SIAM Journal on Computing 14 (4): 849. doi:10.1137/0214060. http://www.geocities.com/stockmeyer@sbcglobal.net/approx_sp.pdf. 
