Computational hardness assumption
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where efficiently typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.
Computational hardness assumptions are of particular importance in cryptography. A major goal in cryptography is to create cryptographic primitives with provable security. In some cases, cryptographic protocols are found to have information theoretic security; the one-time pad is a common example. However, information theoretic security cannot always be achieved; in such cases, cryptographers fall back to computational security. Roughly speaking, this means that these systems are secure assuming that any adversaries are computationally limited, as all adversaries are in practice.
Computational hardness assumptions are also useful for guiding algorithm designers: a simple algorithm is unlikely to refute a well-studied computational hardness assumption such as P ≠ NP.
Comparing hardness assumptions
Computer scientists have different ways of assessing which hardness assumptions are more reliable.
Strength of hardness assumptions
We say that assumption [math]\displaystyle{ A }[/math] is stronger than assumption [math]\displaystyle{ B }[/math] when [math]\displaystyle{ A }[/math] implies [math]\displaystyle{ B }[/math] (and the converse is false or not known). In other words, even if assumption [math]\displaystyle{ A }[/math] were false, assumption [math]\displaystyle{ B }[/math] may still be true, and cryptographic protocols based on assumption [math]\displaystyle{ B }[/math] may still be safe to use. Thus when devising cryptographic protocols, one hopes to be able to prove security using the weakest possible assumptions.
Average-case vs. worst-case assumptions
An average-case assumption says that a specific problem is hard on most instances from some explicit distribution, whereas a worst-case assumption only says that the problem is hard on some instances. For a given problem, average-case hardness implies worst-case hardness, so an average-case hardness assumption is stronger than a worst-case hardness assumption for the same problem. Furthermore, even for incomparable problems, a worst-case assumption like the Exponential Time Hypothesis is often considered preferable to an average-case assumption like the planted clique conjecture.[1] However, for cryptographic applications, knowing that a problem has some hard instance (the problem is hard in the worst-case) is useless because it does not provide us with a way of generating hard instances.[2] Fortunately, many average-case assumptions used in cryptography (including RSA, discrete log, and some lattice problems) can be based on worst-case assumptions via worst-case-to-average-case reductions.[3]
Falsifiability
A desired characteristic of a computational hardness assumption is falsifiability, i.e. that if the assumption were false, then it would be possible to prove it. In particular, (Naor 2003) introduced a formal notion of cryptographic falsifiability.[4] Roughly, a computational hardness assumption is said to be falsifiable if it can be formulated in terms of a challenge: an interactive protocol between an adversary and an efficient verifier, where an efficient adversary can convince the verifier to accept if and only if the assumption is false.
Common cryptographic hardness assumptions
There are many cryptographic hardness assumptions in use. This is a list of some of the most common ones, and some cryptographic protocols that use them.
Integer factorization
Given a composite integer [math]\displaystyle{ n }[/math], and in particular one which is the product of two large primes [math]\displaystyle{ n = p\cdot q }[/math], the integer factorization problem is to find [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] (more generally, find primes [math]\displaystyle{ p_1,\dots,p_k }[/math] such that [math]\displaystyle{ n = \prod_i p_i }[/math]). It is a major open problem to find an algorithm for integer factorization that runs in time polynomial in the size of the representation ([math]\displaystyle{ \log n }[/math]). The security of many cryptographic protocols relies on the assumption that integer factorization is hard (i.e. cannot be solved in polynomial time). Cryptosystems whose security is equivalent to this assumption include the Rabin cryptosystem and the Okamoto–Uchiyama cryptosystem. Many more cryptosystems rely on stronger assumptions such as RSA, Residuosity problems, and Phi-hiding.
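As a minimal sketch of the gap between [math]\displaystyle{ n }[/math] and the size of its representation, the following Python function factors [math]\displaystyle{ n }[/math] by trial division in roughly [math]\displaystyle{ \sqrt n }[/math] steps, which is exponential in the bit-length [math]\displaystyle{ \log_2 n }[/math]; the inputs are arbitrary toy values and the code is illustrative only, not a description of any practical factoring algorithm.

```python
def trial_division(n: int) -> list[int]:
    """Return the prime factors of n (with multiplicity) by trial division."""
    factors = []
    d = 2
    while d * d <= n:          # about sqrt(n) iterations, i.e. 2^(log2(n)/2)
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)      # whatever remains is prime
    return factors

# Toy examples; real RSA moduli have hundreds of digits, far beyond this method.
print(trial_division(3 * 5 * 7 * 11))   # [3, 5, 7, 11]
print(trial_division(1009 * 2003))      # [1009, 2003]
```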
RSA problem
Given a composite number [math]\displaystyle{ n }[/math], exponent [math]\displaystyle{ e }[/math] and number [math]\displaystyle{ c := m^e (\mathrm{mod}\; n) }[/math], the RSA problem is to find [math]\displaystyle{ m }[/math]. The problem is conjectured to be hard, but becomes easy given the factorization of [math]\displaystyle{ n }[/math]. In the RSA cryptosystem, [math]\displaystyle{ (n,e) }[/math] is the public key, [math]\displaystyle{ c }[/math] is the encryption of message [math]\displaystyle{ m }[/math], and the factorization of [math]\displaystyle{ n }[/math] is the secret key used for decryption.
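The following toy sketch illustrates this relationship; the primes, exponent, and message are hypothetical values chosen only for readability and are far too small to provide any security.

```python
# Toy RSA example (illustrative only; these parameters are insecure).
p, q = 61, 53
n = p * q                # public modulus
e = 17                   # public exponent, coprime to phi(n)
m = 42                   # message
c = pow(m, e, n)         # the RSA problem: recover m from (n, e, c) alone

# Given the factorization of n, the problem becomes easy:
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)
assert pow(c, d, n) == m
```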
Residuosity problems
Given a composite number [math]\displaystyle{ n }[/math] and integers [math]\displaystyle{ y,d }[/math], the residuosity problem is to determine whether there exists an [math]\displaystyle{ x }[/math] (alternatively, to find such an [math]\displaystyle{ x }[/math]) such that
- [math]\displaystyle{ x^d \equiv y \pmod{n}. }[/math]
Important special cases include the Quadratic residuosity problem and the Decisional composite residuosity problem. As in the case of RSA, this problem (and its special cases) is conjectured to be hard but becomes easy given the factorization of [math]\displaystyle{ n }[/math]. Some cryptosystems that rely on the hardness of residuosity problems include the following (an illustrative sketch follows the list):
- Goldwasser–Micali cryptosystem (quadratic residuosity problem)
- Blum Blum Shub generator (quadratic residuosity problem)
- Paillier cryptosystem (decisional composite residuosity problem)
- Benaloh cryptosystem (higher residuosity problem)
- Naccache–Stern cryptosystem (higher residuosity problem)
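As a rough illustration of why these problems become easy given the factorization of [math]\displaystyle{ n }[/math], the sketch below decides quadratic residuosity modulo [math]\displaystyle{ n = p\cdot q }[/math] by applying Euler's criterion modulo each prime factor; the primes and the tested value are hypothetical toy choices.

```python
def is_qr_mod_prime(y: int, p: int) -> bool:
    """Euler's criterion: for y coprime to the odd prime p,
    y is a square mod p iff y^((p-1)/2) == 1 (mod p)."""
    return pow(y % p, (p - 1) // 2, p) == 1

def is_qr_mod_pq(y: int, p: int, q: int) -> bool:
    """For y coprime to n = p*q, y is a square mod n iff it is a square
    mod p and mod q (by the Chinese remainder theorem)."""
    return is_qr_mod_prime(y, p) and is_qr_mod_prime(y, q)

p, q = 1009, 2003        # toy primes; knowing them is what makes this easy
n = p * q
y = pow(123456, 2, n)    # a quadratic residue by construction
assert is_qr_mod_pq(y, p, q)
```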
Phi-hiding assumption
For a composite number [math]\displaystyle{ m }[/math], it is not known how to efficiently compute Euler's totient function [math]\displaystyle{ \phi(m) }[/math]. The Phi-hiding assumption postulates that it is hard to compute [math]\displaystyle{ \phi(m) }[/math], and furthermore that even computing any prime factors of [math]\displaystyle{ \phi(m) }[/math] is hard. This assumption is used in the Cachin–Micali–Stadler PIR protocol.[5]
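For the special case [math]\displaystyle{ m = p\cdot q }[/math], computing [math]\displaystyle{ \phi(m) }[/math] is equivalent to factoring [math]\displaystyle{ m }[/math], since [math]\displaystyle{ p + q = m - \phi(m) + 1 }[/math] and [math]\displaystyle{ p\cdot q = m }[/math]. The following sketch, with toy primes chosen only for illustration, recovers the factors from [math]\displaystyle{ \phi(m) }[/math].

```python
from math import isqrt

def factor_from_phi(m: int, phi: int) -> tuple[int, int]:
    """For a semiprime m = p*q, recover (p, q) from phi(m) by solving
    x^2 - (p+q)x + pq = 0."""
    s = m - phi + 1              # p + q
    r = isqrt(s * s - 4 * m)     # p - q (up to sign)
    return (s - r) // 2, (s + r) // 2

p, q = 1009, 2003                # toy primes
m = p * q
phi = (p - 1) * (q - 1)
assert factor_from_phi(m, phi) == (p, q)
```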
Discrete log problem (DLP)
Given elements [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] from a group [math]\displaystyle{ G }[/math], the discrete log problem asks for an integer [math]\displaystyle{ k }[/math] such that [math]\displaystyle{ a = b^k }[/math]. The discrete log problem is not known to be comparable to integer factorization, but their computational complexities are closely related.
Most cryptographic protocols related to the discrete log problem actually rely on the stronger Diffie–Hellman assumption: given group elements [math]\displaystyle{ g, g^a, g^b }[/math], where [math]\displaystyle{ g }[/math] is a generator and [math]\displaystyle{ a,b }[/math] are random integers, it is hard to find [math]\displaystyle{ g^{a\cdot b} }[/math]. Examples of protocols that use this assumption include the original Diffie–Hellman key exchange, as well as the ElGamal encryption (which relies on the yet stronger Decisional Diffie–Hellman (DDH) variant).
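A minimal sketch of the Diffie–Hellman key exchange follows. The prime modulus and generator below are toy choices made only for illustration; real deployments use standardized groups with much larger parameters.

```python
import secrets

p = 0xFFFFFFFB            # a small prime (2^32 - 5); far too small for real use
g = 5                     # assumed generator for this sketch

a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent
A = pow(g, a, p)                      # Alice publishes g^a
B = pow(g, b, p)                      # Bob publishes g^b

# Both parties derive g^(a*b); an eavesdropper who sees only (g, A, B) must
# solve the computational Diffie-Hellman problem to obtain it.
assert pow(B, a, p) == pow(A, b, p)
```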
Multilinear maps
A multilinear map is a function [math]\displaystyle{ e: G_1 \times \dots \times G_n \rightarrow G_T }[/math] (where [math]\displaystyle{ G_1 ,\dots,G_n,G_T }[/math] are groups) such that for any [math]\displaystyle{ g_1 \in G_1, \dots, g_n \in G_n }[/math] and any integers [math]\displaystyle{ a_1, \dots, a_n }[/math],
- [math]\displaystyle{ e(g_1^{a_1},\dots,g_n^{a_n}) = e(g_1,\dots,g_n)^{a_1\cdots a_n} }[/math].
For cryptographic applications, one would like to construct groups [math]\displaystyle{ G_1,\dots,G_n,G_T }[/math] and a map [math]\displaystyle{ e }[/math] such that the map and the group operations on [math]\displaystyle{ G_1,\dots,G_n,G_T }[/math] can be computed efficiently, but the discrete log problem on [math]\displaystyle{ G_1,\dots,G_n }[/math] is still hard.[6] Some applications require stronger assumptions, e.g. multilinear analogs of Diffie-Hellman assumptions.
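The multilinearity identity itself can be checked with a toy, cryptographically useless instantiation: take every [math]\displaystyle{ G_i }[/math] and [math]\displaystyle{ G_T }[/math] to be the additive group [math]\displaystyle{ \mathbb{Z}_p }[/math], so that "exponentiation" [math]\displaystyle{ g^a }[/math] means the scalar multiple [math]\displaystyle{ a\cdot g \bmod p }[/math], and let [math]\displaystyle{ e }[/math] be the product of its inputs modulo [math]\displaystyle{ p }[/math]. Discrete log is trivial in these groups, which is precisely why such easy constructions do not satisfy the hardness requirement.

```python
from functools import reduce

p = 101                              # toy modulus

def exp(g: int, a: int) -> int:
    """'g^a' in the additive group Z_p, i.e. the scalar multiple a*g mod p."""
    return (a * g) % p

def e(*xs: int) -> int:
    """Toy multilinear map: the product of all inputs modulo p."""
    return reduce(lambda u, v: (u * v) % p, xs, 1)

g = [3, 7, 10]                       # g_1, g_2, g_3
a = [5, 8, 2]                        # a_1, a_2, a_3
lhs = e(*[exp(gi, ai) for gi, ai in zip(g, a)])      # e(g_1^{a_1}, ..., g_n^{a_n})
rhs = exp(e(*g), a[0] * a[1] * a[2])                 # e(g_1, ..., g_n)^{a_1...a_n}
assert lhs == rhs
```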
For the special case of [math]\displaystyle{ n=2 }[/math], bilinear maps with believable security have been constructed using the Weil pairing and the Tate pairing.[7] For [math]\displaystyle{ n\gt 2 }[/math], many constructions have been proposed in recent years, but many of them have also been broken, and there is currently no consensus about a safe candidate.[8]
Some cryptosystems that rely on multilinear hardness assumptions include:
- Boneh–Franklin scheme (bilinear Diffie–Hellman)
- Boneh–Lynn–Shacham (bilinear Diffie–Hellman)
- Garg–Gentry–Halevi–Raykova–Sahai–Waters candidate for indistinguishability obfuscation and functional encryption (multilinear jigsaw puzzles)[9]
Lattice problems
The most fundamental computational problem on lattices is the shortest vector problem (SVP): given a lattice [math]\displaystyle{ L }[/math], find the shortest non-zero vector [math]\displaystyle{ v \in L }[/math]. Most cryptosystems require stronger assumptions on variants of SVP, such as shortest independent vectors problem (SIVP), GapSVP,[10] or Unique-SVP.[11]
The most useful lattice hardness assumption in cryptography is for the learning with errors (LWE) problem: given samples of pairs [math]\displaystyle{ (x,y) }[/math], where [math]\displaystyle{ y = f(x) }[/math] for some linear function [math]\displaystyle{ f(\cdot) }[/math], it is easy to learn [math]\displaystyle{ f(\cdot) }[/math] using linear algebra. In the LWE problem, the input to the algorithm contains errors, i.e. for each pair, [math]\displaystyle{ y\neq f(x) }[/math] with some small probability. The errors are believed to make the problem intractable (for appropriate parameters); in particular, there are known worst-case-to-average-case reductions from variants of SVP.[12]
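A minimal sketch of the LWE distribution is given below, with toy parameters that are far too small for security; without the error vector, the secret could be recovered from about [math]\displaystyle{ n }[/math] samples by Gaussian elimination over [math]\displaystyle{ \mathbb{Z}_q }[/math], and it is the small errors that (conjecturally) destroy this attack.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 20, 97                  # dimension, number of samples, modulus (toy sizes)

s = rng.integers(0, q, size=n)       # secret vector
A = rng.integers(0, q, size=(m, n))  # public uniformly random matrix
e = rng.integers(-2, 3, size=m)      # small errors in {-2, ..., 2}
b = (A @ s + e) % q                  # LWE samples are the rows of (A, b)

# Sanity check: the residual b - A*s is exactly the small error vector mod q.
assert np.array_equal((b - A @ s) % q, e % q)
```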
For quantum computers, the factoring and discrete log problems are easy, but lattice problems are conjectured to be hard.[13] This makes some lattice-based cryptosystems candidates for post-quantum cryptography.
Some cryptosystems that rely on hardness of lattice problems include:
- NTRU (both NTRUEncrypt and NTRUSign)
- Most candidates for fully homomorphic encryption
Non-cryptographic hardness assumptions
As well as their cryptographic applications, hardness assumptions are used in computational complexity theory to provide evidence for mathematical statements that are difficult to prove unconditionally. In these applications, one proves that the hardness assumption implies some desired complexity-theoretic statement, instead of proving that the statement is itself true. The best-known assumption of this type is the assumption that P ≠ NP,[14] but others include the exponential time hypothesis,[15] the planted clique conjecture, and the unique games conjecture.[16]
C-hard problems
Many worst-case computational problems are known to be hard or even complete for some complexity class [math]\displaystyle{ C }[/math], in particular NP-hard (but often also PSPACE-hard, PPAD-hard, etc.). This means that they are at least as hard as any problem in the class [math]\displaystyle{ C }[/math]. If a problem is [math]\displaystyle{ C }[/math]-hard (with respect to polynomial time reductions), then it cannot be solved by a polynomial-time algorithm unless the computational hardness assumption [math]\displaystyle{ P \neq C }[/math] is false.
Exponential Time Hypothesis (ETH) and variants
The Exponential Time Hypothesis (ETH) is a strengthening of the [math]\displaystyle{ P \neq NP }[/math] hardness assumption, which conjectures that not only does the Boolean satisfiability problem (SAT) not have a polynomial-time algorithm, but it furthermore requires exponential time ([math]\displaystyle{ 2^{\Omega(n)} }[/math]).[17] An even stronger assumption, known as the Strong Exponential Time Hypothesis (SETH), conjectures that [math]\displaystyle{ k }[/math]-SAT requires [math]\displaystyle{ 2^{(1-\varepsilon_k)n} }[/math] time, where [math]\displaystyle{ \lim_{k \rightarrow \infty} \varepsilon_k = 0 }[/math]. ETH, SETH, and related computational hardness assumptions allow for deducing fine-grained complexity results, e.g. results that distinguish polynomial time and quasi-polynomial time,[1] or even [math]\displaystyle{ n^{1.99} }[/math] versus [math]\displaystyle{ n^2 }[/math].[18] Such assumptions are also useful in parameterized complexity.[19]
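For concreteness, the exhaustive-search procedure below decides satisfiability by trying all [math]\displaystyle{ 2^n }[/math] assignments. Faster exponential-time algorithms for [math]\displaystyle{ k }[/math]-SAT are known, but ETH asserts that the exponential dependence on [math]\displaystyle{ n }[/math] cannot be eliminated; the clause encoding and example formula below are arbitrary illustrative choices.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Try all 2^n assignments. A clause is a list of non-zero integers:
    literal v means x_v, and -v means NOT x_v."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))
```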
Average-case hardness assumptions
Some computational problems are assumed to be hard on average over a particular distribution of instances. For example, in the planted clique problem, the input is sampled by drawing an Erdős–Rényi random graph and then "planting" a random [math]\displaystyle{ k }[/math]-clique, i.e. connecting [math]\displaystyle{ k }[/math] uniformly random nodes (where [math]\displaystyle{ 2\log_2 n \ll k \ll \sqrt n }[/math]), and the goal is to find the planted [math]\displaystyle{ k }[/math]-clique (which is unique w.h.p.).[20] Another important example is Feige's Hypothesis, which is a computational hardness assumption about random instances of 3-SAT (sampled to maintain a specific ratio of clauses to variables).[21] Average-case computational hardness assumptions are useful for proving average-case hardness in applications like statistics, where there is a natural distribution over inputs.[22] Additionally, the planted clique hardness assumption has also been used to distinguish between polynomial and quasi-polynomial worst-case time complexity of other problems,[23] similarly to the Exponential Time Hypothesis.
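The sketch below samples from a planted clique distribution; the graph size, clique size, and random seed are arbitrary toy choices, and the algorithmic challenge is to recover the planted set from the edge set alone.

```python
import random
from itertools import combinations

def planted_clique(n: int, k: int, seed: int = 0):
    """Sample an Erdos-Renyi graph G(n, 1/2), then connect k random vertices."""
    rng = random.Random(seed)
    edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < 0.5}
    clique = rng.sample(range(n), k)                      # the planted vertices
    edges |= {frozenset(e) for e in combinations(clique, 2)}
    return edges, set(clique)

edges, clique = planted_clique(n=1000, k=25)
# Every pair inside the planted set is indeed an edge.
assert all(frozenset(pair) in edges for pair in combinations(clique, 2))
```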
Unique Games
The Unique Label Cover problem is a constraint satisfaction problem, where each constraint [math]\displaystyle{ C }[/math] involves two variables [math]\displaystyle{ x,y }[/math], and for each value of [math]\displaystyle{ x }[/math] there is a unique value of [math]\displaystyle{ y }[/math] that satisfies [math]\displaystyle{ C }[/math]. Determining whether all the constraints can be satisfied is easy, but the Unique Games Conjecture (UGC) postulates that determining whether almost all the constraints ([math]\displaystyle{ (1-\varepsilon) }[/math]-fraction, for any constant [math]\displaystyle{ \varepsilon\gt 0 }[/math]) can be satisfied or almost none of them ([math]\displaystyle{ \varepsilon }[/math]-fraction) can be satisfied is NP-hard.[16] Approximation problems are often known to be NP-hard assuming UGC; such problems are referred to as UG-hard. In particular, assuming UGC there is a semidefinite programming algorithm that achieves optimal approximation guarantees for many important problems.[24]
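The "easy" direction can be made concrete: once one variable in a connected component of the constraint graph is assigned a value, each bijection forces the values of its neighbours, so perfect satisfiability can be decided by simple propagation. The sketch below assumes constraints are encoded as dictionaries mapping the value of the first variable to the required value of the second; this encoding is a hypothetical convenience, not a standard interface.

```python
from collections import deque

def perfectly_satisfiable(num_vars, alphabet, constraints):
    """constraints: dict mapping (u, v) -> pi, where pi[value of u] = value of v."""
    adj = {u: [] for u in range(num_vars)}
    for (u, v), pi in constraints.items():
        adj[u].append((v, pi))                               # fixing u forces v
        adj[v].append((u, {b: a for a, b in pi.items()}))    # and vice versa
    assigned = {}
    for root in range(num_vars):
        if root in assigned:
            continue
        for guess in range(alphabet):          # try each value for the component's root
            trial, queue, ok = {root: guess}, deque([root]), True
            while queue and ok:
                u = queue.popleft()
                for v, pi in adj[u]:
                    forced = pi[trial[u]]
                    if v not in trial:
                        trial[v] = forced
                        queue.append(v)
                    elif trial[v] != forced:   # conflict: this guess fails
                        ok = False
                        break
            if ok:
                assigned.update(trial)
                break
        else:
            return False                       # no value for the root works
    return True

# Two variables over alphabet {0, 1} with the single constraint "x1 = 1 - x0".
print(perfectly_satisfiable(2, 2, {(0, 1): {0: 1, 1: 0}}))   # True
```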
Small Set Expansion
Closely related to the Unique Label Cover problem is the Small Set Expansion (SSE) problem: Given a graph [math]\displaystyle{ G = (V,E) }[/math], find a small set of vertices (of size [math]\displaystyle{ n/\log(n) }[/math]) whose edge expansion is minimal. It is known that if SSE is hard to approximate, then so is Unique Label Cover. Hence, the Small Set Expansion Hypothesis, which postulates that SSE is hard to approximate, is a stronger (but closely related) assumption than the Unique Games Conjecture.[25] Some approximation problems are known to be SSE-hard[26] (i.e. at least as hard as approximating SSE).
The 3SUM Conjecture
Given a set of [math]\displaystyle{ n }[/math] numbers, the 3SUM problem asks whether there is a triplet of numbers whose sum is zero. There is a quadratic-time algorithm for 3SUM, and it has been conjectured that no algorithm can solve 3SUM in "truly sub-quadratic time": the 3SUM Conjecture is the computational hardness assumption that there are no [math]\displaystyle{ O(n^{2-\varepsilon}) }[/math]-time algorithms for 3SUM (for any constant [math]\displaystyle{ \varepsilon \gt 0 }[/math]). This conjecture is useful for proving near-quadratic lower bounds for several problems, mostly from computational geometry.[27]
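A sketch of the standard quadratic-time algorithm follows: sort the input, then for each element scan the remaining elements with two pointers; the example inputs are arbitrary.

```python
def has_3sum(nums: list[int]) -> bool:
    """Return True iff some triple of elements sums to zero (O(n^2) time)."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(has_3sum([-5, 1, 4, 10, -3]))   # True: -5 + 1 + 4 = 0
print(has_3sum([1, 2, 3]))            # False
```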
References
- ↑ 1.0 1.1 Braverman, Mark; Ko, Young Kun; Weinstein, Omri (2015). "Symposium on Discrete Algorithms (SODA)". Society for Industrial and Applied Mathematics. pp. 970–982. doi:10.1137/1.9781611973730.66. ISBN 978-1-61197-374-7.
- ↑ J. Katz and Y. Lindell, Introduction to Modern Cryptography (Chapman and Hall/CRC Cryptography and Network Security Series), Chapman and Hall/CRC, 2007.
- ↑ Goldwasser, Shafi; Kalai, Yael Tauman (2016). "Theory of Cryptography Conference (TCC) 2016". Springer. pp. 505–522. doi:10.1007/978-3-662-49096-9_21.
- ↑ Naor, Moni (2003). "Advances in Cryptology – CRYPTO 2003: 23rd Annual International Cryptology Conference, Santa Barbara, California, USA, August 17-21, 2003, Proceedings". in Boneh, Dan. 2729. Berlin: Springer. pp. 96–109. doi:10.1007/978-3-540-45146-4_6.
- ↑ Cachin, Christian; Micali, Silvio; Stadler, Markus (1999). "Computationally Private Information Retrieval with Polylogarithmic Communication". in Stern, Jacques. Advances in Cryptology — EUROCRYPT '99. Lecture Notes in Computer Science. 1592. Springer. pp. 402–414. doi:10.1007/3-540-48910-X_28. ISBN 978-3-540-65889-4.
- ↑ Boneh, Dan; Silverberg, Alice (2002). "Applications of Multilinear Forms to Cryptography". Cryptology ePrint Archive. https://eprint.iacr.org/2002/080.
- ↑ Dutta, Ratna; Barua, Rana; Sarkar, Palash (2004). "Pairing-Based Cryptographic Protocols : A Survey". Cryptology ePrint Archive. https://eprint.iacr.org/2004/064.
- ↑ Albrecht, Martin R.. "Are Graded Encoding Scheme broken yet?". http://malb.io/are-graded-encoding-schemes-broken-yet.html.
- ↑ Garg, Sanjam; Gentry, Craig; Halevi, Shai; Raykova, Mariana; Sahai, Amit; Waters, Brent (2016). "Candidate Indistinguishability Obfuscation and Functional Encryption for All Circuits". SIAM Journal on Computing (SIAM) 45 (3): 882–929. doi:10.1137/14095772X. https://eprint.iacr.org/2013/451.pdf.
- ↑ Peikert, Chris (2009). "Proceedings on 41st Annual ACM Symposium on Theory of Computing (STOC)". pp. 333–342. doi:10.1145/1536414.1536461.
- ↑ "Proceedings on 29th Annual ACM Symposium on Theory of Computing (STOC)". 1997. pp. 284–293. doi:10.1145/258533.258604.
- ↑ Regev, Oded (2010). "Conference on Computational Complexity (CCC) 2010". pp. 191–204. doi:10.1109/CCC.2010.26.
- ↑ Peikert, Chris (2016). "A Decade of Lattice Cryptography". Foundations and Trends in Theoretical Computer Science 10 (4): 283–424. doi:10.1561/0400000074. https://eprint.iacr.org/2015/939.
- ↑ Fortnow, Lance (2009). "The status of the P versus NP problem". Communications of the ACM 52 (9): 78–86. doi:10.1145/1562164.1562186. http://www.cs.uchicago.edu/~fortnow/papers/pnp-cacm.pdf.
- ↑ "Exact algorithms for NP-hard problems: A survey". Combinatorial Optimization — Eureka, You Shrink!. 2570. Springer-Verlag. 2003. pp. 185–207. doi:10.1007/3-540-36478-1_17.
- ↑ 16.0 16.1 Khot, Subhash (2010). "Proc. 25th IEEE Conference on Computational Complexity". pp. 99–121. doi:10.1109/CCC.2010.19. http://cs.nyu.edu/~khot/papers/UGCSurvey.pdf.
- ↑ "Proc. 14th IEEE Conf. on Computational Complexity". 1999. pp. 237–240. doi:10.1109/CCC.1999.766282.
- ↑ Abboud, Amir (2014). "Automata, Languages, and Programming - 41st International Colloquium, ICALP 2014". pp. 39–51. doi:10.1007/978-3-662-43948-7_4.
- ↑ Lokshtanov, Daniel; Marx, Daniel; Saurabh, Saket (2011). "Lower bounds based on the Exponential Time Hypothesis". Bulletin of the EATCS 105: 41–72. http://albcom.lsi.upc.edu/ojs/index.php/beatcs/article/view/96.
- ↑ Arora, Sanjeev; Barak, Boaz (2009). Computational Complexity: A Modern Approach. Cambridge University Press. pp. 362–363. ISBN 9780521424264. https://books.google.com/books?id=8Wjqvsoo48MC&pg=PA362.
- ↑ "Proceedings on 34th Annual ACM Symposium on Theory of Computing (STOC)". 2002. pp. 534–543. doi:10.1145/509907.509985.
- ↑ Berthet, Quentin; Rigollet, Philippe (2013). "COLT 2013". pp. 1046–1066. http://jmlr.org/proceedings/papers/v30/Berthet13.html.
- ↑ Hazan, Elad; Krauthgamer, Robert (2011). "How Hard Is It to Approximate the Best Nash Equilibrium?". SIAM Journal on Computing 40 (1): 79–91. doi:10.1137/090766991.
- ↑ Raghavendra, Prasad (2008). "40th Annual ACM Symposium on theory of Computing (STOC) 2008". pp. 245–254. doi:10.1145/1374376.1374414.
- ↑ Raghavendra, Prasad; Steurer, David (2010). "42nd Annual ACM Symposium on theory of Computing (STOC) 2010". pp. 755–764. doi:10.1145/1806689.1806792.
- ↑ Wu, Yu; Austrin, Per; Pitassi, Toniann; Liu, David (2014). "Inapproximability of Treewidth and Related Problems". Journal of Artificial Intelligence Research 49: 569–600. doi:10.1613/jair.4030.
- ↑ "ICM 2018". 2018. http://people.csail.mit.edu/virgi/eccentri.pdf.