Circuit complexity

Example Boolean circuit. The [math]\displaystyle{ \wedge }[/math] nodes are AND gates, the [math]\displaystyle{ \vee }[/math] nodes are OR gates, and the [math]\displaystyle{ \neg }[/math] nodes are NOT gates

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits [math]\displaystyle{ C_{1},C_{2},\ldots }[/math] (see below).

Proving lower bounds on the size of Boolean circuits computing explicit Boolean functions is a popular approach to separating complexity classes. For example, a prominent circuit class P/poly consists of Boolean functions computable by circuits of polynomial size. Proving that [math]\displaystyle{ \mathsf{NP}\not\subseteq \mathsf{P/poly} }[/math] would separate P and NP (see below).

Complexity classes defined in terms of Boolean circuits include AC0, AC, TC0, NC1, NC, and P/poly.

Size and depth

A Boolean circuit with [math]\displaystyle{ n }[/math] input bits is a directed acyclic graph in which every node (usually called a gate in this context) is either an input node of in-degree 0 labelled by one of the [math]\displaystyle{ n }[/math] input bits, an AND gate, an OR gate, or a NOT gate. One of these gates is designated as the output gate. Such a circuit naturally computes a function of its [math]\displaystyle{ n }[/math] inputs. The size of a circuit is the number of gates it contains and its depth is the maximal length of a path from an input node to the output gate.
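
The following sketch is not taken from the literature; the dictionary encoding, the gate names, and the decision to count input nodes in the size are illustrative assumptions. It shows one concrete way to represent such a circuit as a labelled directed acyclic graph and to compute the three quantities just defined: the value on an input, the size, and the depth.

```python
from functools import lru_cache

# Each gate is (kind, operands); an input gate stores the index of its bit.
# Example circuit over two input bits computing XOR(x0, x1).
circuit = {
    "x0":  ("INPUT", 0),
    "x1":  ("INPUT", 1),
    "n0":  ("NOT", ["x0"]),
    "n1":  ("NOT", ["x1"]),
    "a0":  ("AND", ["x0", "n1"]),
    "a1":  ("AND", ["n0", "x1"]),
    "out": ("OR",  ["a0", "a1"]),   # designated output gate
}

def evaluate(circuit, output, bits):
    """Value the circuit computes on the given tuple of input bits."""
    @lru_cache(maxsize=None)
    def val(gate):
        kind, args = circuit[gate]
        if kind == "INPUT":
            return bits[args]
        values = [val(a) for a in args]
        if kind == "AND":
            return int(all(values))
        if kind == "OR":
            return int(any(values))
        return 1 - values[0]          # NOT
    return val(output)

def size(circuit):
    """Number of gates; input nodes are counted here too (conventions vary)."""
    return len(circuit)

def depth(circuit, output):
    """Length of a longest path from an input node to the output gate."""
    @lru_cache(maxsize=None)
    def d(gate):
        kind, args = circuit[gate]
        return 0 if kind == "INPUT" else 1 + max(d(a) for a in args)
    return d(output)

assert [evaluate(circuit, "out", (a, b))
        for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]
print(size(circuit), depth(circuit, "out"))   # 7 gates, depth 3
```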

There are two major notions of circuit complexity.[1] The circuit-size complexity of a Boolean function [math]\displaystyle{ f }[/math] is the minimal size of any circuit computing [math]\displaystyle{ f }[/math]. The circuit-depth complexity of a Boolean function [math]\displaystyle{ f }[/math] is the minimal depth of any circuit computing [math]\displaystyle{ f }[/math].

These notions generalize when one considers the circuit complexity of any language that contains strings with different bit lengths, especially infinite formal languages. Boolean circuits, however, only allow a fixed number of input bits. Thus, no single Boolean circuit is capable of deciding such a language. To account for this, one considers families of circuits [math]\displaystyle{ C_{1},C_{2},\ldots }[/math] where each [math]\displaystyle{ C_{n} }[/math] accepts inputs of size [math]\displaystyle{ n }[/math]. Each circuit family naturally generates a language: circuit [math]\displaystyle{ C_{n} }[/math] outputs [math]\displaystyle{ 1 }[/math] when a string of length [math]\displaystyle{ n }[/math] is a member of the language, and [math]\displaystyle{ 0 }[/math] otherwise. A family of circuits is size minimal if no other family deciding the same language uses, for any input length [math]\displaystyle{ n }[/math], a circuit of smaller size than [math]\displaystyle{ C_n }[/math] (depth-minimal families are defined analogously). Thus, circuit complexity is meaningful even for non-recursive languages. The notion of a uniform family enables variants of circuit complexity to be related to algorithm-based complexity measures of recursive languages. However, the non-uniform variant is helpful for finding lower bounds on how complex any circuit family must be in order to decide a given language.

Hence, the circuit-size complexity of a formal language [math]\displaystyle{ A }[/math] is defined as the function [math]\displaystyle{ t:\mathbb{N}\to\mathbb{N} }[/math] that relates the bit length [math]\displaystyle{ n }[/math] of an input to the circuit-size complexity of a minimal circuit [math]\displaystyle{ C_{n} }[/math] that decides whether inputs of that length are in [math]\displaystyle{ A }[/math]. The circuit-depth complexity is defined similarly.
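
As a concrete illustration of this definition (a minimal sketch under assumptions of our own choosing: the language PARITY of bit strings with an odd number of 1s, a hypothetical constructor build_parity_circuit, and a gate-list encoding), the family below chains two-input XOR gadgets built from AND, OR, and NOT gates. Each [math]\displaystyle{ C_n }[/math] then has [math]\displaystyle{ n + 5(n-1) }[/math] gates, which shows only that the circuit-size complexity [math]\displaystyle{ t(n) }[/math] of PARITY is at most [math]\displaystyle{ O(n) }[/math]; it says nothing about lower bounds.

```python
from itertools import product

def build_parity_circuit(n):
    """Return C_n as a topologically ordered list of gates plus its output name."""
    gates = [(f"x{i}", "INPUT", i) for i in range(n)]
    prev = "x0"                                    # parity of the bits seen so far
    for i in range(1, n):
        gates += [
            (f"na{i}", "NOT", [prev]),
            (f"nb{i}", "NOT", [f"x{i}"]),
            (f"c{i}",  "AND", [prev, f"nb{i}"]),
            (f"d{i}",  "AND", [f"na{i}", f"x{i}"]),
            (f"p{i}",  "OR",  [f"c{i}", f"d{i}"]),  # p_i = prev XOR x_i
        ]
        prev = f"p{i}"
    return gates, prev

def evaluate(gates, output, bits):
    vals = {}
    for name, kind, args in gates:                 # already in topological order
        if kind == "INPUT":
            vals[name] = bits[args]
        elif kind == "NOT":
            vals[name] = 1 - vals[args[0]]
        elif kind == "AND":
            vals[name] = vals[args[0]] & vals[args[1]]
        else:                                      # OR
            vals[name] = vals[args[0]] | vals[args[1]]
    return vals[output]

for n in (2, 4, 8):                                # exhaustive correctness check
    gates, out = build_parity_circuit(n)
    assert all(evaluate(gates, out, bits) == sum(bits) % 2
               for bits in product((0, 1), repeat=n))

for n in (10, 100, 1000):                          # size grows linearly in n
    gates, _ = build_parity_circuit(n)
    print(n, len(gates))                           # n + 5*(n-1) gates
```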

Uniformity

Boolean circuits are one of the prime examples of so-called non-uniform models of computation in the sense that inputs of different lengths are processed by different circuits, in contrast with uniform models such as Turing machines where the same computational device is used for all possible input lengths. An individual computational problem is thus associated with a particular family of Boolean circuits [math]\displaystyle{ C_1, C_2, \dots }[/math] where each [math]\displaystyle{ C_n }[/math] is the circuit handling inputs of n bits. A uniformity condition is often imposed on these families, requiring the existence of some possibly resource-bounded Turing machine that, on input n, produces a description of the individual circuit [math]\displaystyle{ C_n }[/math]. When this Turing machine has a running time polynomial in n, the circuit family is said to be P-uniform. The stricter requirement of DLOGTIME-uniformity is of particular interest in the study of shallow-depth circuit classes such as AC0 or TC0. When no resource bounds are specified, a language is recursive (i.e., decidable by a Turing machine) if and only if the language is decided by a uniform family of Boolean circuits.

Polynomial-time uniform

A family of Boolean circuits [math]\displaystyle{ \{C_n:n \in \mathbb{N}\} }[/math] is polynomial-time uniform if there exists a deterministic Turing machine M, such that

  • M runs in polynomial time
  • For all [math]\displaystyle{ n \in \mathbb{N} }[/math], M outputs a description of [math]\displaystyle{ C_n }[/math] on input [math]\displaystyle{ 1^n }[/math]

Logspace uniform

A family of Boolean circuits [math]\displaystyle{ \{C_n:n \in \mathbb{N}\} }[/math] is logspace uniform if there exists a deterministic Turing machine M, such that

  • M runs in logarithmic space
  • For all [math]\displaystyle{ n \in \mathbb{N} }[/math], M outputs a description of [math]\displaystyle{ C_n }[/math] on input [math]\displaystyle{ 1^n }[/math]
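
As a hedged sketch of what these definitions ask for, the Python function below stands in for the Turing machine M: given the unary string [math]\displaystyle{ 1^n }[/math] it prints a description of the n-th parity circuit from the example above, one gate per line (the description format is an illustrative choice, not a standard one). The generator clearly runs in time polynomial in n; because the family is so regular, it only needs a counter and the name of the previous gate, i.e. O(log n) bits of working memory, which is the intuition behind the stricter logspace condition.

```python
def describe_parity_circuit(unary_input: str):
    """Stand-in for M: given 1^n, print a description of C_n, one gate per line."""
    assert set(unary_input) <= {"1"}          # the input is the unary string 1^n
    n = len(unary_input)
    for i in range(n):
        print(f"x{i} INPUT {i}")
    prev = "x0"
    for i in range(1, n):                     # only i and prev are remembered
        print(f"na{i} NOT {prev}")
        print(f"nb{i} NOT x{i}")
        print(f"c{i} AND {prev} nb{i}")
        print(f"d{i} AND na{i} x{i}")
        print(f"p{i} OR c{i} d{i}")
        prev = f"p{i}"
    print(f"OUTPUT {prev}")

describe_parity_circuit("1" * 4)              # prints a description of C_4
```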

History

Circuit complexity goes back to Shannon in 1949,[2] who proved that almost all Boolean functions on n variables require circuits of size [math]\displaystyle{ \Theta(2^n/n) }[/math]. Despite this fact, complexity theorists have so far been unable to prove a superlinear lower bound for any explicit function.

Superpolynomial lower bounds have been proved under certain restrictions on the family of circuits used. The first function for which superpolynomial circuit lower bounds were shown was the parity function, which computes the sum of its input bits modulo 2. The fact that parity is not contained in AC0 was first established independently by Ajtai in 1983[3][4] and by Furst, Saxe and Sipser in 1984.[5] Later improvements by Håstad in 1987[6] established that any family of constant-depth circuits computing the parity function requires exponential size. Extending a result of Razborov,[7] Smolensky in 1987[8] proved that this is true even if the circuit is augmented with gates computing the sum of its input bits modulo some odd prime p.

The k-clique problem is to decide whether a given graph on n vertices has a clique of size k. For any particular choice of the constants n and k, the graph can be encoded in binary using [math]\displaystyle{ {n \choose 2} }[/math] bits, which indicate for each possible edge whether it is present. Then the k-clique problem is formalized as a function [math]\displaystyle{ f_k:\{0,1\}^{{n \choose 2}}\to\{0,1\} }[/math] such that [math]\displaystyle{ f_k }[/math] outputs 1 if and only if the graph encoded by the string contains a clique of size k. This family of functions is monotone and can be computed by a family of circuits, but it has been shown that it cannot be computed by a polynomial-size family of monotone circuits (that is, circuits with AND and OR gates but without negation). The original result of Razborov in 1985[7] was later improved to an exponential-size lower bound by Alon and Boppana in 1987.[9] In 2008, Rossman[10] showed that constant-depth circuits with AND, OR, and NOT gates require size [math]\displaystyle{ \Omega(n^{k/4}) }[/math] to solve the k-clique problem even in the average case. Moreover, there is a circuit of size [math]\displaystyle{ n^{k/4+O(1)} }[/math] that computes [math]\displaystyle{ f_k }[/math].
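
The brute-force sketch below is an illustrative reference implementation of the encoding just described, with hypothetical helper names; it pins down the function [math]\displaystyle{ f_k }[/math] but says nothing about circuit size. A graph is written as a tuple of [math]\displaystyle{ {n \choose 2} }[/math] bits in a fixed ordering of vertex pairs, and [math]\displaystyle{ f_k }[/math] reports whether some k vertices are pairwise adjacent.

```python
from itertools import combinations

def encode_graph(n, edges):
    """Graph -> bit string of length n*(n-1)//2, one bit per vertex pair."""
    edge_set = {frozenset(e) for e in edges}
    return tuple(int(frozenset(p) in edge_set) for p in combinations(range(n), 2))

def f_k(n, k, bits):
    """1 iff the encoded graph contains a clique of size k (exhaustive search).

    n and k are the fixed constants from the text; they are passed explicitly
    here only for convenience."""
    index = {p: i for i, p in enumerate(combinations(range(n), 2))}
    return int(any(all(bits[index[p]] for p in combinations(clique, 2))
                   for clique in combinations(range(n), k)))

# The 4-cycle on vertices 0..3 contains no triangle, so f_3 evaluates to 0.
bits = encode_graph(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(f_k(4, 3, bits))          # 0
# Adding the chord (0, 2) creates the triangle {0, 1, 2}.
bits = encode_graph(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
print(f_k(4, 3, bits))          # 1
```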

In 1999, Raz and McKenzie showed that the monotone NC hierarchy is infinite.[11]

The Integer Division Problem lies in uniform TC0.[12]

Circuit lower bounds

Circuit lower bounds are generally difficult to prove. Known results include:

  • Parity is not in nonuniform AC0, proved by Ajtai in 1983[3][4] as well as by Furst, Saxe and Sipser in 1984.[5]
  • Uniform TC0 is strictly contained in PP, proved by Allender.[13]
  • The classes [math]\displaystyle{ \mathsf{S}_2^\mathsf{P} }[/math], PP[nb 1] and MA/1[14] (MA with one bit of advice) are not in [math]\displaystyle{ \mathsf{SIZE}(n^k) }[/math] for any constant k.
  • While it is suspected that the nonuniform class ACC0 does not contain the majority function, it was only in 2010 that Williams proved that [math]\displaystyle{ \mathsf{NEXP} \not \subseteq \mathsf{ACC}^0 }[/math].[15]

It is open whether NEXPTIME has nonuniform TC0 circuits.

Proofs of circuit lower bounds are strongly connected to derandomization. A proof that [math]\displaystyle{ \mathsf{P} = \mathsf{BPP} }[/math] would imply that either [math]\displaystyle{ \mathsf{NEXP} \not \subseteq \mathsf{P/poly} }[/math] or that the permanent cannot be computed by nonuniform arithmetic circuits (polynomials) of polynomial size and polynomial degree.[16]

In 1997, Razborov and Rudich showed that many known circuit lower bounds for explicit Boolean functions imply the existence of so-called natural properties useful against the respective circuit class.[17] On the other hand, natural properties useful against P/poly would break strong pseudorandom generators. This is often interpreted as a "natural proofs" barrier for proving strong circuit lower bounds. In 2016, Carmosino, Impagliazzo, Kabanets and Kolokolova proved that natural properties can also be used to construct efficient learning algorithms.[18]

Complexity classes

Many circuit complexity classes are defined in terms of class hierarchies. For each non-negative integer i, there is a class [math]\displaystyle{ \mathsf{NC}^i }[/math], consisting of polynomial-size circuits of depth [math]\displaystyle{ O(\log^i(n)) }[/math], using bounded fan-in AND, OR, and NOT gates. The union of all of these classes is the class NC. By considering unbounded fan-in gates, the classes [math]\displaystyle{ \mathsf{AC}^i }[/math] and AC (which is equal to NC) can be constructed. Many other circuit complexity classes with the same size and depth restrictions can be constructed by allowing different sets of gates.
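
The sketch below illustrates the fan-in distinction behind these hierarchies, under the assumption of a simple balanced-tree construction: with unbounded fan-in, a single OR gate decides whether any input bit is 1 in depth 1 (the AC setting), while with fan-in-2 gates the natural circuit is a balanced tree of OR gates of depth [math]\displaystyle{ \lceil \log_2 n \rceil }[/math] (the NC setting). The helper function is an illustrative assumption, not a standard routine.

```python
from math import ceil, log2

def or_tree_depth(n):
    """Depth of a balanced fan-in-2 OR tree on n input bits."""
    depth, width = 0, n
    while width > 1:
        width = (width + 1) // 2       # pair up the remaining wires
        depth += 1
    return depth

for n in (2, 8, 100, 1024):
    assert or_tree_depth(n) == ceil(log2(n))
    print(n, or_tree_depth(n))         # O(log n) depth with bounded fan-in
# With unbounded fan-in (the AC setting) the same function needs only depth 1.
```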

Relation to time complexity

If a certain language, [math]\displaystyle{ A }[/math], belongs to the time-complexity class [math]\displaystyle{ \text{TIME}(t(n)) }[/math] for some function [math]\displaystyle{ t:\mathbb{N}\to\mathbb{N} }[/math], then [math]\displaystyle{ A }[/math] has circuit complexity [math]\displaystyle{ \mathcal{O}(t(n) \log t(n)) }[/math]. If the Turing machine that accepts the language is oblivious (meaning that its head movements depend only on the length of the input, so that it reads and writes the same cells on every input of a given length), then [math]\displaystyle{ A }[/math] has circuit complexity [math]\displaystyle{ \mathcal{O}(t(n)) }[/math].[19]

Monotone circuits

A monotone Boolean circuit is one that has only AND and OR gates, but no NOT gates. A monotone circuit can only compute a monotone Boolean function, which is a function [math]\displaystyle{ f:\{0,1\}^n \to \{0,1\} }[/math] where for every [math]\displaystyle{ x,y \in \{0,1\}^n }[/math], [math]\displaystyle{ x \leq y \implies f(x) \leq f(y) }[/math], where [math]\displaystyle{ x\leq y }[/math] means that [math]\displaystyle{ x_i \leq y_i }[/math] for all [math]\displaystyle{ i \in \{1,\ldots,n\} }[/math].
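
A brute-force check of this condition, as a minimal sketch with hypothetical helper names: majority satisfies it and can therefore be computed by a monotone circuit, while parity violates it and hence cannot be computed without NOT gates.

```python
from itertools import product

def is_monotone(f, n):
    """Exhaustively test that x <= y (bitwise) implies f(x) <= f(y) over {0,1}^n."""
    points = list(product((0, 1), repeat=n))
    return all(f(x) <= f(y)
               for x in points for y in points
               if all(xi <= yi for xi, yi in zip(x, y)))

def majority(x):
    return int(sum(x) > len(x) // 2)

def parity(x):
    return sum(x) % 2

print(is_monotone(majority, 3))   # True  -> computable by a monotone circuit
print(is_monotone(parity, 3))     # False -> requires NOT gates
```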

See also

  • Circuit minimization

Notes

  1. See proof.

References

  1. Introduction to the theory of computation (1 ed.). Boston, USA: PWS Publishing Company. 1997. p. 324. 
  2. "The synthesis of two-terminal switching circuits". Bell System Technical Journal 28 (1): 59–98. 1949. doi:10.1002/j.1538-7305.1949.tb03624.x. 
  3. 3.0 3.1 "[math]\displaystyle{ \Sigma^1_1 }[/math]-formulae on finite structures". Annals of Pure and Applied Logic 24: 1–24. 1983. doi:10.1016/0168-0072(83)90038-6. 
  4. 4.0 4.1 An O(n log n) sorting network. 1983. pp. 1–9. ISBN 978-0-89791-099-6.
  5. 5.0 5.1 "Parity, circuits, and the polynomial-time hierarchy". Mathematical Systems Theory 17 (1): 13–27. 1984. doi:10.1007/BF01744431. 
  6. Computational limitations of small depth circuits (Ph.D. thesis). Massachusetts Institute of Technology. 1987. http://www.nada.kth.se/~johanh/thesis.pdf. 
  7. 7.0 7.1 "Lower bounds on the monotone complexity of some Boolean functions". Soviet Mathematics - Doklady 31: 354–357. 1985. ISSN 0197-6788. 
  8. "Algebraic methods in the theory of lower bounds for Boolean circuit complexity". Association for Computing Machinery. 1987. pp. 77–82. doi:10.1145/28395.28404. 
  9. "The monotone circuit complexity of Boolean functions". Combinatorica 7 (1): 1–22. 1987. doi:10.1007/bf02579196. 
  10. "On the constant-depth complexity of k-clique". Association for Computing Machinery. 2008. pp. 721–730. doi:10.1145/1374376.1374480. 
  11. "Separation of the monotone NC hierarchy". Combinatorica 19 (3): 403–435. 1999. doi:10.1007/s004930050062. 
  12. "Division is in uniform TC0". Springer Verlag. 2001. pp. 104–114. 
  13. Allender, Eric Warren, ed (1997). "Circuit Complexity before the Dawn of the New Millennium". http://ftp.cs.rutgers.edu/pub/allender/fsttcs.pdf. (NB. A 1997 survey of the field by Eric Allender.)
  14. "Circuit lower bounds for Merlin-Arthur classes". 2007. pp. 275–283. doi:10.1145/1250790.1250832. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.111.1811. 
  15. "Non-Uniform ACC Circuit Lower Bounds". 2011. pp. 115–125. doi:10.1109/CCC.2011.36. http://www.stanford.edu/~rrwill/acc-lbs.pdf. 
  16. "Derandomizing polynomial identity tests means proving circuit lower bounds". Computational Complexity 13 (1): 1–46. 2004. doi:10.1007/s00037-004-0182-6. 
  17. "Natural proofs". Journal of Computer and System Sciences 55: pp. 24–35. 1997. 
  18. "Learning algorithms from natural proofs". Computational Complexity Conference. 2016. 
  19. Pippenger, Nicholas; Fischer, Michael J. (1979). "Relations Among Complexity Measures". Journal of the ACM 26 (3): 361–381. doi:10.1145/322123.322138. 

Further reading