Hoare logic
Hoare logic (also known as Floyd–Hoare logic or Hoare rules) is a formal system with a set of logical rules for reasoning rigorously about the correctness of computer programs. It was proposed in 1969 by the British computer scientist and logician Tony Hoare, and subsequently refined by Hoare and other researchers.[1] The original ideas were seeded by the work of Robert W. Floyd, who had published a similar system[2] for flowcharts.
Hoare triple
The central feature of Hoare logic is the Hoare triple. A triple describes how the execution of a piece of code changes the state of the computation. A Hoare triple is of the form
- [math]\displaystyle{ \{P\} C \{Q\} }[/math]
where [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] are assertions and [math]\displaystyle{ C }[/math] is a command.[note 1] [math]\displaystyle{ P }[/math] is named the precondition and [math]\displaystyle{ Q }[/math] the postcondition: when the precondition is met, executing the command establishes the postcondition. Assertions are formulae in predicate logic.
Hoare logic provides axioms and inference rules for all the constructs of a simple imperative programming language. In addition to the rules for the simple language in Hoare's original paper, rules for other language constructs have been developed since then by Hoare and many other researchers. There are rules for concurrency, procedures, jumps, and pointers.
Partial and total correctness
Using standard Hoare logic, only partial correctness can be proven. Total correctness additionally requires termination, which can be proven separately or with an extended version of the While rule.[3] Thus the intuitive reading of a Hoare triple is: Whenever [math]\displaystyle{ P }[/math] holds of the state before the execution of [math]\displaystyle{ C }[/math], then [math]\displaystyle{ Q }[/math] will hold afterwards, or [math]\displaystyle{ C }[/math] does not terminate. In the latter case, there is no "after", so [math]\displaystyle{ Q }[/math] can be any statement at all. Indeed, one can choose [math]\displaystyle{ Q }[/math] to be false to express that [math]\displaystyle{ C }[/math] does not terminate.
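For instance, the triple
- [math]\displaystyle{ \{\texttt{true}\}\ \texttt{while}\ \texttt{true}\ \texttt{do}\ \texttt{skip}\ \texttt{done}\ \{\texttt{false}\} }[/math]
holds under this partial-correctness reading: the loop never terminates, so there is no final state in which the postcondition would have to be checked. With the while rule given below, it can even be derived formally (taking the invariant P to be true).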
"Termination" here and in the rest of this article is meant in the broader sense that computation will eventually be finished, that is it implies the absence of infinite loops; it does not imply the absence of implementation limit violations (e.g. division by zero) stopping the program prematurely. In his 1969 paper, Hoare used a narrower notion of termination which also entailed the absence of implementation limit violations, and expressed his preference for the broader notion of termination as it keeps assertions implementation-independent:[4]
Another deficiency in the axioms and rules quoted above is that they give no basis for a proof that a program successfully terminates. Failure to terminate may be due to an infinite loop; or it may be due to violation of an implementation-defined limit, for example, the range of numeric operands, the size of storage, or an operating system time limit. Thus the notation “[math]\displaystyle{ P\{Q\}R }[/math]” should be interpreted “provided that the program successfully terminates, the properties of its results are described by [math]\displaystyle{ R }[/math].” It is fairly easy to adapt the axioms so that they cannot be used to predict the “results” of nonterminating programs; but the actual use of the axioms would now depend on knowledge of many implementation-dependent features, for example, the size and speed of the computer, the range of numbers, and the choice of overflow technique. Apart from proofs of the avoidance of infinite loops, it is probably better to prove the “conditional” correctness of a program and rely on an implementation to give a warning if it has had to abandon execution of the program as a result of violation of an implementation limit.
Rules
Empty statement axiom schema
The empty statement rule asserts that the skip statement does not change the state of the program, thus whatever holds true before skip also holds true afterwards.[note 2]
- [math]\displaystyle{ \dfrac{}{\{P\}\texttt{skip}\{P\}} }[/math]
Assignment axiom schema
The assignment axiom states that, after the assignment, any predicate that was previously true for the right-hand side of the assignment now holds for the variable. Formally, let P be an assertion in which the variable x is free. Then:
- [math]\displaystyle{ \dfrac{}{\{P[E/x]\} x := E \{P\}} }[/math]
where [math]\displaystyle{ P[E/x] }[/math] denotes the assertion P in which each free occurrence of x has been replaced by the expression E.
The assignment axiom scheme means that the truth of [math]\displaystyle{ P[E/x] }[/math] is equivalent to the after-assignment truth of P. Thus, if [math]\displaystyle{ P[E/x] }[/math] was true prior to the assignment, then by the assignment axiom P will be true after it. Conversely, if [math]\displaystyle{ P[E/x] }[/math] was false (i.e. [math]\displaystyle{ \neg P[E/x] }[/math] true) prior to the assignment, P must be false afterwards.
Examples of valid triples include:
- [math]\displaystyle{ \{ x+1 = 43 \} y := x + 1 \{ y = 43 \} }[/math]
- [math]\displaystyle{ \{ x + 1 \leq N \} x := x + 1 \{ x \leq N \} }[/math]
Any part of the precondition that is not affected by the assignment can be carried over to the postcondition. In the first example, assigning [math]\displaystyle{ y:=x+1 }[/math] does not change the fact that [math]\displaystyle{ x+1=43 }[/math], so both statements may appear in the postcondition. Formally, this result is obtained by applying the axiom schema with P being ([math]\displaystyle{ y=43 }[/math] and [math]\displaystyle{ x+1=43 }[/math]), which yields [math]\displaystyle{ P[(x+1)/y] }[/math] being ([math]\displaystyle{ x+1=43 }[/math] and [math]\displaystyle{ x+1=43 }[/math]), which can in turn be simplified to the given precondition [math]\displaystyle{ x+1=43 }[/math].
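Since the substitution [math]\displaystyle{ P[E/x] }[/math] is purely syntactic, it is easy to mechanize. The following Python sketch is purely illustrative (the function names are not part of any standard tool): it computes the precondition of an assignment by substituting the right-hand side into the postcondition, and spot-checks the resulting triple on a grid of integer states.

```python
import ast
import itertools

def substitute(assertion: str, var: str, expr: str) -> str:
    """Compute P[E/x]: replace every occurrence of the variable `var`
    in `assertion` by the expression `expr`."""
    class Replace(ast.NodeTransformer):
        def visit_Name(self, node):
            return ast.parse(expr, mode="eval").body if node.id == var else node
    tree = ast.fix_missing_locations(Replace().visit(ast.parse(assertion, mode="eval")))
    return ast.unparse(tree)

def check_triple(pre: str, var: str, expr: str, post: str, names, values=range(-10, 60)):
    """Spot-check the triple {pre} var := expr {post} on all integer states over `values`."""
    for combo in itertools.product(values, repeat=len(names)):
        state = dict(zip(names, combo))
        if eval(pre, {}, dict(state)):                 # state satisfies the precondition
            state[var] = eval(expr, {}, dict(state))   # execute the assignment
            if not eval(post, {}, dict(state)):        # the postcondition must now hold
                return False
    return True

# Precondition obtained from the postcondition { y = 43 } for the assignment y := x + 1:
print(substitute("y == 43", "y", "x + 1"))   # prints: x + 1 == 43
# Spot-check the triple { x + 1 = 43 } y := x + 1 { y = 43 }:
print(check_triple("x + 1 == 43", "y", "x + 1", "y == 43", ["x", "y"]))  # True
```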
The assignment axiom scheme is equivalent to saying that to find the precondition, first take the postcondition and replace all occurrences of the left-hand side of the assignment with the right-hand side of the assignment. Be careful not to apply this substitution in the wrong direction; the incorrect rule [math]\displaystyle{ \{P\} x:=E \{P[E/x]\} }[/math] leads to nonsensical examples like:
- [math]\displaystyle{ \{ x = 5 \} x := 3 \{ 3 = 5 \} }[/math]
Another incorrect rule, tempting at first glance, is [math]\displaystyle{ \{P\} x:=E \{P \wedge x=E\} }[/math]; it leads to nonsensical examples like:
- [math]\displaystyle{ \{ x = 5 \} x := x + 1 \{ x = 5 \wedge x = x + 1 \} }[/math]
While a given postcondition P uniquely determines the precondition [math]\displaystyle{ P[E/x] }[/math], the converse is not true. For example:
- [math]\displaystyle{ \{ 0 \leq y\cdot y \wedge y\cdot y \leq 9 \} x := y \cdot y \{ 0 \leq x \wedge x \leq 9 \} }[/math],
- [math]\displaystyle{ \{ 0 \leq y\cdot y \wedge y\cdot y \leq 9 \} x := y \cdot y \{ 0 \leq x \wedge y\cdot y \leq 9 \} }[/math],
- [math]\displaystyle{ \{ 0 \leq y\cdot y \wedge y\cdot y \leq 9 \} x := y \cdot y \{ 0 \leq y\cdot y \wedge x \leq 9 \} }[/math], and
- [math]\displaystyle{ \{ 0 \leq y\cdot y \wedge y\cdot y \leq 9 \} x := y \cdot y \{ 0 \leq y\cdot y \wedge y\cdot y \leq 9 \} }[/math]
are valid instances of the assignment axiom scheme.
The assignment axiom proposed by Hoare does not apply when more than one name may refer to the same stored value. For example,
- [math]\displaystyle{ \{ y = 3 \} x := 2 \{ y = 3 \} }[/math]
is wrong if x and y refer to the same variable (aliasing), although it is a proper instance of the assignment axiom scheme (with both [math]\displaystyle{ \{P\} }[/math] and [math]\displaystyle{ \{P[2/x]\} }[/math] being [math]\displaystyle{ \{y=3\} }[/math]).
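To see concretely how aliasing breaks the axiom, one can model the store explicitly, with an environment mapping names to locations and a store mapping locations to values. The following Python sketch is purely illustrative; it lets x and y denote the same location:

```python
env = {"x": 0, "y": 0}        # aliasing: both names refer to location 0
store = {0: 3}                # the shared location initially holds 3

assert store[env["y"]] == 3   # precondition { y = 3 } holds

store[env["x"]] = 2           # execute the assignment x := 2

# The axiom instance { y = 3 } x := 2 { y = 3 } would predict that y is
# still 3, but the shared location now holds 2, so the triple is invalid:
assert store[env["y"]] == 2
```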
Rule of composition
Hoare's rule of composition applies to sequentially executed programs S and T, where S executes prior to T and is written [math]\displaystyle{ S;T }[/math] (Q is called the midcondition):[5]
- [math]\displaystyle{ \dfrac{\{P\} S \{Q\}\quad,\quad \{Q\} T \{R\}}{\{P\} S;T \{R\}} }[/math]
For example, consider the following two instances of the assignment axiom:
- [math]\displaystyle{ \{ x + 1 = 43 \} y := x + 1 \{ y = 43 \} }[/math]
and
- [math]\displaystyle{ \{ y = 43 \} z := y \{ z = 43 \} }[/math]
By the sequencing rule, one concludes:
- [math]\displaystyle{ \{ x + 1 = 43 \} y := x + 1; z := y \{ z = 43 \} }[/math]
Another example is the verification of code that swaps the values of two variables without using an auxiliary variable, sketched below.
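Assuming exact (unbounded) integer arithmetic, the swap can be programmed as the sequence x := x + y; y := x - y; x := x - y. Applying the assignment axiom to each assignment, working backwards from the postcondition (a and b are specification constants denoting the initial values of x and y), yields the three triples
- [math]\displaystyle{ \{ (x+y)-((x+y)-y) = b \wedge (x+y)-y = a \}\ x := x+y\ \{ x-(x-y) = b \wedge x-y = a \} }[/math]
- [math]\displaystyle{ \{ x-(x-y) = b \wedge x-y = a \}\ y := x-y\ \{ x-y = b \wedge y = a \} }[/math]
- [math]\displaystyle{ \{ x-y = b \wedge y = a \}\ x := x-y\ \{ x = b \wedge y = a \} }[/math]
Two applications of the composition rule chain these into a single triple for the whole sequence, and since the first precondition simplifies to [math]\displaystyle{ y = b \wedge x = a }[/math], the consequence rule gives
- [math]\displaystyle{ \{ x = a \wedge y = b \}\ x := x+y;\ y := x-y;\ x := x-y\ \{ x = b \wedge y = a \} }[/math]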
Conditional rule
- [math]\displaystyle{ \dfrac{\{B \wedge P\} S \{Q\}\quad,\quad \{\neg B \wedge P \} T \{Q\}}{\{P\} \texttt{if}\ B\ \texttt{then}\ S\ \texttt{else}\ T\ \texttt{endif} \{Q\}} }[/math]
The conditional rule states that a postcondition Q common to the then and else parts is also a postcondition of the whole if...endif statement.[6] In the then and else parts, the unnegated and negated condition B, respectively, can be added to the precondition P. The condition B must not have side effects. An example is given in the next section.
This rule was not contained in Hoare's original publication.[1] However, since a statement
- [math]\displaystyle{ \texttt{if}\ B\ \texttt{then}\ S\ \texttt{else}\ T\ \texttt{endif} }[/math]
has the same effect as the sequence of one-time loops
- [math]\displaystyle{ \texttt{bool}\ b:=B;\ \texttt{bool}\ c:=\neg B;\ \texttt{while}\ b\ \texttt{do}\ S;\ b:=\texttt{false}\ \texttt{done};\ \texttt{while}\ c\ \texttt{do}\ T;\ c:=\texttt{false}\ \texttt{done} }[/math]
(where b and c are fresh Boolean variables that record the value of B before either branch is executed),
the conditional rule can be derived from the other Hoare rules. In a similar way, rules for other derived program constructs, like the for loop, do...until loop, switch, break, and continue, can be reduced by program transformation to the rules from Hoare's original paper.
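For instance, a for loop whose body does not modify the loop counter or the bound can be regarded as an abbreviation of an initializing assignment followed by a while loop,
- [math]\displaystyle{ \texttt{for}\ i := a\ \texttt{to}\ b\ \texttt{do}\ S\ \texttt{done} \quad\equiv\quad i := a;\ \texttt{while}\ i \leq b\ \texttt{do}\ S;\ i := i+1\ \texttt{done} }[/math]
so a proof rule for it follows from the assignment, composition, and while rules.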
Consequence rule
- [math]\displaystyle{ \dfrac{P_1 \rightarrow P_2\quad ,\quad \{P_2\} S \{Q_2\}\quad ,\quad Q_2 \rightarrow Q_1}{\{P_1\} S \{Q_1\}} }[/math]
This rule allows one to strengthen the precondition [math]\displaystyle{ P_2 }[/math] and/or to weaken the postcondition [math]\displaystyle{ Q_2 }[/math]. It is used, for example, to achieve literally identical postconditions for the then and the else parts.
For example, a proof of
- [math]\displaystyle{ \{0 \leq x \leq 15 \}\texttt{if}\ x\lt 15\ \texttt{then}\ x:=x+1\ \texttt{else}\ x:=0\ \texttt{endif} \{0 \leq x \leq 15 \} }[/math]
needs to apply the conditional rule, which in turn requires proving
- [math]\displaystyle{ \{0 \leq x \leq 15 \wedge x \lt 15 \} x:=x+1 \{ 0 \leq x \leq 15 \} }[/math], or simplified
- [math]\displaystyle{ \{0 \leq x \lt 15 \} x:=x+1 \{0 \leq x \leq 15 \} }[/math]
for the then part, and
- [math]\displaystyle{ \{0 \leq x \leq 15 \wedge x \geq 15\} x:=0 \{0 \leq x \leq 15\} }[/math], or simplified
- [math]\displaystyle{ \{x=15\} x:=0 \{0 \leq x \leq 15 \} }[/math]
for the else part.
However, the assignment rule for the then part requires choosing P as [math]\displaystyle{ 0\leq x \leq 15 }[/math]; applying the rule hence yields
- [math]\displaystyle{ \{0 \leq x+1 \leq 15\} x:=x+1 \{0 \leq x \leq 15\} }[/math], which is logically equivalent to
- [math]\displaystyle{ \{-1 \leq x \lt 15\} x:=x+1 \{0 \leq x \leq 15\} }[/math].
The consequence rule is needed to strengthen the precondition [math]\displaystyle{ \{-1 \leq x \lt 15\} }[/math] obtained from the assignment rule to [math]\displaystyle{ \{0 \leq x \lt 15\} }[/math] required for the conditional rule.
Similarly, for the else part, the assignment rule yields
- [math]\displaystyle{ \{0 \leq 0 \leq 15\} x:=0 \{0 \leq x \leq 15\} }[/math], or equivalently
- [math]\displaystyle{ \{\texttt{true}\} x:=0 \{0 \leq x \leq 15\} }[/math],
hence the consequence rule has to be applied with [math]\displaystyle{ P_1 }[/math] and [math]\displaystyle{ P_2 }[/math] being [math]\displaystyle{ \{x=15\} }[/math] and [math]\displaystyle{ \{\texttt{true}\} }[/math], respectively, to strengthen the precondition again. Informally, the effect of the consequence rule is to "forget" that [math]\displaystyle{ \{x=15\} }[/math] is known at the entry of the else part, since the assignment rule used for the else part doesn't need that information.
While rule
- [math]\displaystyle{ \dfrac{\{P \wedge B\} S \{P\}}{\{P\} \texttt{while}\ B\ \texttt{do}\ S\ \texttt{done} \{\neg B \wedge P\}} }[/math]
Here P is the loop invariant, which is to be preserved by the loop body S. After the loop is finished, this invariant P still holds, and moreover [math]\displaystyle{ \neg B }[/math] must have caused the loop to end. As in the conditional rule, B must not have side effects.
For example, a proof of
- [math]\displaystyle{ \{x \leq 10\} \texttt{while}\ x\lt 10\ \texttt{do}\ x:=x+1\ \texttt{done} \{\neg x \lt 10 \wedge x \leq 10\} }[/math]
by the while rule requires proving
- [math]\displaystyle{ \{x \leq 10 \wedge x \lt 10\} x := x + 1 \{x \leq 10 \} }[/math], or simplified
- [math]\displaystyle{ \{x \lt 10\} x := x + 1 \{x \leq 10 \} }[/math],
which is easily obtained by the assignment rule. Finally, the postcondition [math]\displaystyle{ \{\neg x \lt 10 \wedge x\leq 10\} }[/math] can be simplified to [math]\displaystyle{ \{x=10\} }[/math].
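The role of the invariant can also be illustrated operationally. The following Python sketch (illustrative only; it monitors one run of the loop rather than giving a proof) checks the invariant P, here x ≤ 10, at exactly the points where the while rule requires it to hold:

```python
x = 3                               # any start state satisfying the invariant x <= 10
assert x <= 10                      # the invariant P holds on entry
while x < 10:                       # loop guard B
    assert x <= 10 and x < 10       # P and B hold before the body
    x = x + 1                       # loop body S
    assert x <= 10                  # the body re-establishes P
assert not (x < 10) and x <= 10     # on exit: not B and P, i.e. x = 10
print(x)                            # prints 10
```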
For another example, the while rule can be used to formally verify the following strange program to compute the exact square root x of an arbitrary number a—even if x is an integer variable and a is not a square number:
- [math]\displaystyle{ \{\texttt{true}\} \texttt{while}\ x\cdot x \neq a\ \texttt{do}\ \texttt{skip}\ \texttt{done} \{x \cdot x = a \wedge \texttt{true}\} }[/math]
After applying the while rule with P being true, it remains to prove
- [math]\displaystyle{ \{\texttt{true} \wedge x\cdot x \neq a\} \texttt{skip} \{\texttt{true}\} }[/math],
which follows from the skip rule and the consequence rule.
In fact, the strange program is partially correct: if it happens to terminate, it is certain that x must contain (by chance) the value of a's square root. In all other cases it will not terminate; therefore it is not totally correct.
While rule for total correctness
If the above ordinary while rule is replaced by the following one, the Hoare calculus can also be used to prove total correctness, i.e. termination as well as partial correctness. Commonly, square brackets are used here instead of curly braces to indicate the different notion of program correctness.
- [math]\displaystyle{ \dfrac{\lt \ \text{is a well-founded ordering on the set}\ D\quad,\quad [P \wedge B \wedge t \in D \wedge t = z] S [P \wedge t \in D \wedge t \lt z ]}{[P \wedge t \in D] \texttt{while}\ B\ \texttt{do}\ S\ \texttt{done} [\neg B \wedge P \wedge t \in D]} }[/math]
In this rule, in addition to maintaining the loop invariant, one also proves termination by way of an expression t, called the loop variant, whose value strictly decreases with respect to a well-founded relation < on some domain set D during each iteration. Since < is well-founded, a strictly decreasing chain of members of D can have only finite length, so t cannot keep decreasing forever. (For example, the usual order < is well-founded on the positive integers [math]\displaystyle{ \mathbb{N} }[/math], but neither on the integers [math]\displaystyle{ \mathbb{Z} }[/math] nor on the positive real numbers [math]\displaystyle{ \mathbb{R}^+ }[/math]; all these sets are meant in the mathematical, not in the computing, sense; in particular, they are all infinite.)
Given the loop invariant P, the condition B must imply that t is not a minimal element of D, for otherwise the body S could not decrease t any further, i.e. the premise of the rule would be false. (This is one of various notations for total correctness.) [note 3]
Resuming the first example of the previous section, for a total-correctness proof of
- [math]\displaystyle{ [x \leq 10]\texttt{while}\ x \lt 10\ \texttt{do}\ x:=x+1\ \texttt{done} [\neg x \lt 10 \wedge x \leq 10] }[/math]
the while rule for total correctness can be applied with e.g. D being the non-negative integers with the usual order, and the expression t being [math]\displaystyle{ 10 - x }[/math], which in turn requires proving
- [math]\displaystyle{ [x \leq 10 \wedge x \lt 10 \wedge 10-x \geq 0 \wedge 10-x = z] x:= x+1 [x \leq 10 \wedge 10-x \geq 0 \wedge 10-x \lt z] }[/math]
Informally speaking, we have to prove that the distance [math]\displaystyle{ 10-x }[/math] decreases in every loop cycle, while it always remains non-negative; this process can go on only for a finite number of cycles.
The previous proof goal can be simplified to
- [math]\displaystyle{ [x \lt 10 \wedge 10-x = z] x:=x+1 [x \leq 10 \wedge 10-x \lt z] }[/math],
which can be proven as follows:
- [math]\displaystyle{ [x+1 \leq 10 \wedge 10-x-1 \lt z] x:=x+1 [x \leq 10 \wedge 10-x \lt z] }[/math] is obtained by the assignment rule, and
- [math]\displaystyle{ [x+1 \leq 10 \wedge 10-x-1 \lt z] }[/math] can be strengthened to [math]\displaystyle{ [x \lt 10 \wedge 10-x = z] }[/math] by the consequence rule.
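The termination argument can likewise be illustrated operationally. The following Python sketch (again illustrative only, not a proof) checks, for one run of the loop, that the variant t = 10 − x lies in D (the non-negative integers) whenever the body is entered and strictly decreases across it, which is why the loop cannot run forever:

```python
x = 3                                # any start state with x <= 10
while x < 10:
    z = 10 - x                       # snapshot of the variant t before the body
    assert z >= 0                    # t lies in D while the guard holds
    x = x + 1                        # loop body S
    assert 0 <= 10 - x < z           # t stayed in D and strictly decreased
print(x)                             # prints 10: the loop has terminated
```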
For the second example of the previous section, of course no expression t can be found that is decreased by the empty loop body, hence termination cannot be proved.
Notes
- ↑ Hoare originally wrote "[math]\displaystyle{ P\{C\}Q }[/math]" rather than "[math]\displaystyle{ \{P\}C\{Q\} }[/math]".
- ↑ This article uses a natural deduction style notation for rules. For example, [math]\displaystyle{ \dfrac{\alpha,\beta}{\phi} }[/math] informally means "If both α and β hold, then also φ holds"; α and β are called antecedents of the rule, φ is called its succedent. A rule without antecedents is called an axiom, and written as [math]\displaystyle{ \dfrac{}{\quad\phi\quad} }[/math].
- ↑ Hoare's 1969 paper didn't provide a total correctness rule; cf. his discussion on p.579 (top left). For example Reynolds' textbook[3] gives the following version of a total correctness rule: [math]\displaystyle{ \dfrac{P \wedge B \rightarrow 0\leq t\quad ,\quad [P \wedge B \wedge t=z] S [P \wedge t\lt z]}{[P] \texttt{while}\ B\ \texttt{do}\ S\ \texttt{done} [P \wedge \neg B]} }[/math] when z is an integer variable that doesn't occur free in P, B, S, or t, and t is an integer expression (Reynolds' variables renamed to fit with this article's settings).
References
- ↑ 1.0 1.1 Hoare, C. A. R. (October 1969). "An axiomatic basis for computer programming". Communications of the ACM 12 (10): 576–580. doi:10.1145/363235.363259. https://dl.acm.org/doi/pdf/10.1145/363235.363259.
- ↑ R. W. Floyd. "Assigning meanings to programs." Proceedings of the American Mathematical Society Symposia on Applied Mathematics. Vol. 19, pp. 19–31. 1967.
- ↑ 3.0 3.1 John C. Reynolds (2009). Theories of Programming Languages. Cambridge University Press. Here: Sect. 3.4, p. 64.
- ↑ Hoare (1969), pp. 578–579.
- ↑ Huth, Michael; Ryan, Mark (2004-08-26). Logic in Computer Science (second ed.). CUP. p. 276. ISBN 978-0521543101. http://www.cs.bham.ac.uk/research/projects/lics/.
- ↑ Apt, Krzysztof R.; Olderog, Ernst-Rüdiger (December 2019). "Fifty years of Hoare's logic". Formal Aspects of Computing 31 (6): 759. doi:10.1007/s00165-019-00501-3. https://ir.cwi.nl/pub/29146.
Further reading
- Robert D. Tennent. Specifying Software (a textbook that includes an introduction to Hoare logic, written in 2002). ISBN 0-521-00401-2
External links
- KeY-Hoare is a semi-automatic verification system built on top of the KeY theorem prover. It features a Hoare calculus for a simple while language.
- j-Algo-modul Hoare calculus — A visualisation of the Hoare calculus in the algorithm visualisation program j-Algo
Original source: https://en.wikipedia.org/wiki/Hoare_logic