DPLL algorithm

DPLL
[Figure: After 5 fruitless attempts (red), choosing the variable assignment a=1, b=1 leads, after unit propagation (bottom), to success (green): the top left CNF formula is satisfiable.]
Class: Boolean satisfiability problem
Data structure: Binary tree
Worst-case performance: O(2^n)
Best-case performance: O(1) (constant)
Worst-case space complexity: O(n) (basic algorithm)

In logic and computer science, the Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a complete, backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e. for solving the CNF-SAT problem.

It was introduced in 1961 by Martin Davis, George Logemann and Donald W. Loveland and is a refinement of the earlier Davis–Putnam algorithm, which is a resolution-based procedure developed by Davis and Hilary Putnam in 1960. Especially in older publications, the Davis–Logemann–Loveland algorithm is often referred to as the "Davis–Putnam method" or the "DP algorithm". Other common names that maintain the distinction are DLL and DPLL.

Implementations and applications

The SAT problem is important both from theoretical and practical points of view. In complexity theory it was the first problem proved to be NP-complete, and can appear in a broad variety of applications such as model checking, automated planning and scheduling, and diagnosis in artificial intelligence.

As such, writing efficient SAT solvers has been a research topic for many years. GRASP (1996-1999) was an early implementation using DPLL.[1] In the international SAT competitions, implementations based around DPLL such as zChaff[2] and MiniSat[3] took first place in 2004 and 2005.[4]

Another application that often involves DPLL is automated theorem proving or satisfiability modulo theories (SMT), which is a SAT problem in which propositional variables are replaced with formulas of another mathematical theory.

The algorithm

The basic backtracking algorithm runs by choosing a literal, assigning a truth value to it, simplifying the formula and then recursively checking if the simplified formula is satisfiable; if this is the case, the original formula is satisfiable; otherwise, the same recursive check is done assuming the opposite truth value. This is known as the splitting rule, as it splits the problem into two simpler sub-problems. The simplification step essentially removes all clauses that become true under the assignment from the formula, and all literals that become false from the remaining clauses.
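As an illustration of this simplification step, the following is a minimal Python sketch (not part of the original article); it assumes a common representation in which each clause is a set of nonzero integers, a positive integer standing for a variable and its negative for the complemented literal, and the helper name simplify is purely illustrative.

def simplify(clauses, literal):
    """Make `literal` true: drop clauses satisfied by it and
    delete its complement from the remaining clauses."""
    result = []
    for clause in clauses:
        if literal in clause:              # clause becomes true: remove it
            continue
        result.append({l for l in clause if l != -literal})  # drop the now-false complement
    return result

# Example: (x1 or not x2) and (x2 or x3), then assign x2 := true
print(simplify([{1, -2}, {2, 3}], 2))      # [{1}]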

The DPLL algorithm improves over the basic backtracking algorithm by the eager use of the following rules at each step:

Unit propagation
If a clause is a unit clause, i.e. it contains only a single unassigned literal, this clause can only be satisfied by assigning the necessary value to make this literal true. Thus, no choice is necessary. Unit propagation consists in removing every clause containing a unit clause's literal and in discarding the complement of a unit clause's literal from every clause containing that complement. In practice, this often leads to deterministic cascades of units, thus avoiding a large part of the naive search space.
Pure literal elimination
If a propositional variable occurs with only one polarity in the formula, it is called pure. A pure literal can always be assigned in a way that makes all clauses containing it true. Thus, when it is assigned in such a way, these clauses do not constrain the search anymore, and can be deleted.

Unsatisfiability of a given partial assignment is detected if one clause becomes empty, i.e. if all its variables have been assigned in a way that makes the corresponding literals false. Satisfiability of the formula is detected either when all variables are assigned without generating the empty clause, or, in modern implementations, if all clauses are satisfied. Unsatisfiability of the complete formula can only be detected after exhaustive search.
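Using the same illustrative clause representation as above, the following is a minimal sketch of how the two rules and the conflict check can be detected; the helper names find_unit and find_pure are assumptions, not part of the article's pseudocode.

def find_unit(clauses):
    """Return the literal of some unit clause, or None if there is none."""
    for clause in clauses:
        if len(clause) == 1:
            return next(iter(clause))
    return None

def find_pure(clauses):
    """Return a literal whose complement occurs nowhere, or None."""
    literals = {l for clause in clauses for l in clause}
    for l in literals:
        if -l not in literals:
            return l
    return None

clauses = [{1}, {-1, 2}, {2, 3}]
print(find_unit(clauses))                  # 1 ({x1} is a unit clause)
print(find_pure(clauses))                  # 2 or 3 (both occur only positively)
print(any(len(c) == 0 for c in clauses))   # False: no empty clause, so no conflict yet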

The DPLL algorithm can be summarized in the following pseudocode, where Φ is the CNF formula:

Algorithm DPLL
    Input: A set of clauses Φ.
    Output: A truth value indicating whether Φ is satisfiable.
function DPLL(Φ)
    // unit propagation:
    while there is a unit clause {l} in Φ do
        Φ ← unit-propagate(l, Φ);
    // pure literal elimination:
    while there is a literal l that occurs pure in Φ do
        Φ ← pure-literal-assign(l, Φ);
    // stopping conditions:
    if Φ is empty then
        return true;
    if Φ contains an empty clause then
        return false;
    // DPLL procedure:
    l ← choose-literal(Φ);
    return DPLL(Φ ∧ {l}) or DPLL(Φ ∧ {¬l});
  • "←" denotes assignment. For instance, "largest ← item" means that the value of largest changes to the value of item.
  • "return" terminates the algorithm and outputs the following value.

In this pseudocode, unit-propagate(l, Φ) and pure-literal-assign(l, Φ) are functions that return the result of applying unit propagation and the pure literal rule, respectively, to the literal l and the formula Φ. In other words, they replace every occurrence of l with "true" and every occurrence of not l with "false" in the formula Φ, and simplify the resulting formula. The or in the return statement is a short-circuiting operator. Φ ∧ {l} denotes the simplified result of substituting "true" for l in Φ.

The algorithm terminates in one of two cases. Either the CNF formula Φ is empty, i.e., it contains no clauses; it is then satisfied by any assignment, as all of its clauses are vacuously true. Otherwise, the formula contains an empty clause, which is vacuously false because a disjunction requires at least one true member. In this case, the existence of such a clause implies that the formula (evaluated as a conjunction of all clauses) cannot evaluate to true and must be unsatisfiable.

The pseudocode DPLL function only returns whether the final assignment satisfies the formula or not. In a real implementation, the partial satisfying assignment typically is also returned on success; this can be derived by keeping track of branching literals and of the literal assignments made during unit propagation and pure literal elimination.
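Putting the pieces together, here is a self-contained Python sketch of the whole procedure that also returns the satisfying assignment discussed above; the clause representation (sets of signed integers) and the naive branching rule (take the first literal of the first clause) are assumptions made for illustration, not the article's canonical implementation.

def dpll(clauses, assignment=None):
    """Return a set of true literals satisfying `clauses`, or None if unsatisfiable."""
    if assignment is None:
        assignment = set()

    def assign(cls, literal):
        # make `literal` true: drop satisfied clauses, delete the complement elsewhere
        return [{l for l in c if l != -literal} for c in cls if literal not in c]

    # unit propagation
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment = assignment | {unit}
        clauses = assign(clauses, unit)

    # pure literal elimination
    while True:
        literals = {l for c in clauses for l in c}
        pure = next((l for l in literals if -l not in literals), None)
        if pure is None:
            break
        assignment = assignment | {pure}
        clauses = assign(clauses, pure)

    # stopping conditions
    if not clauses:                        # formula is empty: satisfied
        return assignment
    if any(len(c) == 0 for c in clauses):  # empty clause: conflict on this branch
        return None

    # splitting rule: branch on some literal of some clause
    literal = next(iter(clauses[0]))
    result = dpll(assign(clauses, literal), assignment | {literal})
    if result is not None:
        return result
    return dpll(assign(clauses, -literal), assignment | {-literal})

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([{1, 2}, {-1, 3}, {-2, -3}]))   # prints a satisfying assignment, e.g. {1, 3, -2}

The returned set of literals is a partial assignment; variables eliminated as pure or never touched by the search can be set arbitrarily.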

The Davis–Logemann–Loveland algorithm depends on the choice of branching literal, which is the literal considered in the backtracking step. As a result, this is not exactly an algorithm, but rather a family of algorithms, one for each possible way of choosing the branching literal. Efficiency is strongly affected by the choice of the branching literal: there exist instances for which the running time is constant or exponential depending on the choice of the branching literals. Such choice functions are also called heuristic functions or branching heuristics.[5]
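For instance, one classic family of branching heuristics scores literals by how often they occur in the remaining clauses (in the spirit of the dynamic largest individual sum, DLIS, heuristic). Below is a minimal sketch using the same illustrative clause representation as above; the exact scoring is an assumption rather than the rule of any particular solver.

from collections import Counter

def choose_literal(clauses):
    """Pick the literal that occurs most often in the remaining clauses."""
    counts = Counter(l for clause in clauses for l in clause)
    literal, _ = counts.most_common(1)[0]
    return literal

clauses = [{1, 2}, {-1, 2}, {2, 3}, {-3, 1}]
print(choose_literal(clauses))             # 2 (x2 occurs three times)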

Formalization

A notation similar to that of the sequent calculus can be used to formalize many rewriting algorithms, including DPLL. The following are the five rules a DPLL solver can apply in order to either find or fail to find a satisfying assignment, written as a sequence of literals A = (l1, ¬l2, l3, …).[6][7]

If a clause in the formula Φ has exactly one unassigned literal l in A, and the negation of every other literal of the clause is already in A, extend A with l. This rule represents the idea that a currently false clause with only one unset variable left forces that variable to be set in such a way as to make the entire clause true; otherwise the formula will not be satisfied.

\[
\frac{\{l_1, \dots, l_n, l\} \in \Phi \qquad \neg l_1, \dots, \neg l_n \in A \qquad l, \neg l \notin A}{A := A \; l} \ \text{(Propagate)}
\]

If a literal l appears in the formula Φ but its negation ¬l does not, and neither l nor ¬l is in A, extend A with l. This rule represents the idea that if a variable appears only positively or only negatively in a formula, all of its occurrences can be set to true (respectively false) to make their clauses true.

\[
\frac{l \text{ literal of } \Phi \qquad \neg l \text{ not literal of } \Phi \qquad l, \neg l \notin A}{A := A \; l} \ \text{(Pure)}
\]

If a literal l is in the set of literals of Φ and neither l nor ¬l is in A, decide on a truth value for l and extend A with the decision literal •l. This rule represents the idea that if no assignment is forced, a variable must be chosen and assigned, and that assignment marked as a choice, so the search can return to it if the choice does not lead to a satisfying assignment.

\[
\frac{l \in \mathrm{Lits}(\Phi) \qquad l, \neg l \notin A}{A := A \;\bullet\; l} \ \text{(Decide)}
\]

If a clause {l1, …, ln} is in Φ, the negations ¬l1, …, ¬ln are all in A, and A can be written as A = A' • l N where N contains no decision marker •, then backtrack by setting A to A' ¬l. This rule represents the idea that on reaching a contradiction while trying to find a valid assignment, the search must return to the most recent decision between two assignments and pick the other one.

\[
\frac{\{l_1, \dots, l_n\} \in \Phi \qquad \neg l_1, \dots, \neg l_n \in A \qquad A = A' \;\bullet\; l \; N \qquad \bullet \notin N}{A := A' \; \neg l} \ \text{(Backtrack)}
\]

If a clause {l1, …, ln} is in Φ, the negations ¬l1, …, ¬ln are all in A, and there is no decision marker • in A, then the DPLL algorithm fails. This rule represents the idea that if a contradiction is reached but nothing could have been done differently along the way, the formula is unsatisfiable.

\[
\frac{\{l_1, \dots, l_n\} \in \Phi \qquad \neg l_1, \dots, \neg l_n \in A \qquad \bullet \notin A}{\mathrm{Fail}} \ \text{(Fail)}
\]
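To make the rules concrete, here is a short worked trace (an illustrative example, not taken from the cited papers) on the unsatisfiable formula Φ = {{a, b}, {¬a, b}, {a, ¬b}, {¬a, ¬b}}, which contains no unit clauses and no pure literals:

\[
\begin{array}{lcl}
A = ()              & \xrightarrow{\ \text{Decide } a\ }                          & A = (\bullet a) \\
A = (\bullet a)     & \xrightarrow{\ \text{Propagate with } \{\neg a,\, b\}\ }    & A = (\bullet a,\ b) \\
A = (\bullet a,\ b) & \xrightarrow{\ \text{Backtrack on } \{\neg a,\, \neg b\}\ } & A = (\neg a) \\
A = (\neg a)        & \xrightarrow{\ \text{Propagate with } \{a,\, b\}\ }         & A = (\neg a,\ b) \\
A = (\neg a,\ b)    & \xrightarrow{\ \text{Fail on } \{a,\, \neg b\}\ }           & \text{Fail}
\end{array}
\]

At the final step the clause {a, ¬b} is falsified while A contains no decision marker •, so the Fail rule applies and Φ is reported unsatisfiable.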

Visualization

Davis, Logemann and Loveland (1961) developed this algorithm. Some properties of the original algorithm are:

  • It is based on search.
  • It is the basis for almost all modern SAT solvers.
  • It does not use learning or non-chronological backtracking (introduced in 1996).

An example with visualization of a DPLL algorithm using chronological backtracking:

[Gallery of step-by-step figures: "All clauses making a CNF formula", "Pick a variable", …]

Related algorithms

Since 1986, (reduced ordered) binary decision diagrams have also been used for SAT solving.

In 1989-1990, Stålmarck's method for formula verification was presented and patented. It has found some use in industrial applications.[8]

DPLL has been extended for automated theorem proving for fragments of first-order logic by way of the DPLL(T) algorithm.[1]

In the 2010-2019 decade, work on improving the algorithm has found better policies for choosing the branching literals and new data structures to make the algorithm faster, especially the unit-propagation step. However, the main improvement has been a more powerful algorithm, Conflict-Driven Clause Learning (CDCL), which is similar to DPLL but, after reaching a conflict, "learns" the root causes (assignments to variables) of the conflict and uses this information to perform non-chronological backtracking (also known as backjumping) in order to avoid reaching the same conflict again. As of 2019, most state-of-the-art SAT solvers are based on the CDCL framework.[9]

Relation to other notions

Runs of DPLL-based algorithms on unsatisfiable instances correspond to tree resolution refutation proofs.[10]

See also

References

General

  • Davis, Martin; Logemann, George; Loveland, Donald (1962). "A Machine Program for Theorem-Proving". Communications of the ACM 5 (7): 394–397. doi:10.1145/368273.368557. https://archive.org/details/machineprogramfo00davi.

Specific

  1. Nieuwenhuis, Robert; Oliveras, Albert; Tinelli, Cesare (2004), "Abstract DPLL and Abstract DPLL Modulo Theories", Proceedings Int. Conf. on Logic for Programming, Artificial Intelligence, and Reasoning, LPAR 2004, pp. 36–50, http://www.lsi.upc.edu/~roberto/papers/lpar04.pdf 
  2. zChaff website, http://www.princeton.edu/~chaff/zchaff.html 
  3. "Minisat website". http://minisat.se/. 
  4. The international SAT Competitions web page, sat! live, http://www.satcompetition.org/ 
  5. Marques-Silva, João P. (1999). "The Impact of Branching Heuristics in Propositional Satisfiability Algorithms". in Barahona, Pedro; Alferes, José J.. Progress in Artificial Intelligence: 9th Portuguese Conference on Artificial Intelligence, EPIA '99 Évora, Portugal, September 21–24, 1999 Proceedings. LNCS. 1695. pp. 62–63. doi:10.1007/3-540-48159-1_5. ISBN 978-3-540-66548-9. 
  6. Nieuwenhuis, Robert; Oliveras, Albert; Tinelli, Cesare (2006-11-01). "Solving SAT and SAT Modulo Theories: From an abstract Davis--Putnam--Logemann--Loveland procedure to DPLL(T)". J. ACM 53 (6): 937–977. doi:10.1145/1217856.1217859. ISSN 0004-5411. https://dl.acm.org/doi/10.1145/1217856.1217859. 
  7. Krstić, Sava; Goel, Amit (2007). Konev, Boris; Wolter, Frank. eds. "Architecting Solvers for SAT Modulo Theories: Nelson-Oppen with DPLL" (in en). Frontiers of Combining Systems (Berlin, Heidelberg: Springer): 1–27. doi:10.1007/978-3-540-74621-8_1. ISBN 978-3-540-74621-8. https://link.springer.com/chapter/10.1007/978-3-540-74621-8_1. 
  8. Stålmarck, G.; Säflund, M. (October 1990). "Modeling and Verifying Systems and Software in Propositional Logic". IFAC Proceedings Volumes 23 (6): 31–36. doi:10.1016/S1474-6670(17)52173-4. 
  9. Möhle, Sibylle; Biere, Armin (2019). "Backing Backtracking". Theory and Applications of Satisfiability Testing – SAT 2019. Lecture Notes in Computer Science. 11628. pp. 250–266. doi:10.1007/978-3-030-24258-9_18. ISBN 978-3-030-24257-2. http://fmv.jku.at/papers/MoehleBiere-SAT19.pdf. 
  10. Van Beek, Peter (2006). "Backtracking search algorithms". in Rossi, Francesca; Van Beek, Peter; Walsh, Toby. Handbook of constraint programming. Elsevier. p. 122. ISBN 978-0-444-52726-4. https://books.google.com/books?id=Kjap9ZWcKOoC&pg=PA122. 

Further reading

  • Malay Ganai; Aarti Gupta (2007). SAT-based scalable formal verification solutions. Springer. pp. 23–32. ISBN 978-0-387-69166-4. 
  • Gomes, Carla P.; Kautz, Henry; Sabharwal, Ashish; Selman, Bart (2008). "Satisfiability Solvers". in Van Harmelen, Frank; Lifschitz, Vladimir; Porter, Bruce. Handbook of knowledge representation. Foundations of Artificial Intelligence. 3. Elsevier. pp. 89–134. doi:10.1016/S1574-6526(07)03002-7. ISBN 978-0-444-52211-5.