SKI combinator calculus
The SKI combinator calculus is a combinatory logic system and a computational system. It can be thought of as a computer programming language, though it is not convenient for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language. It can be likened to a reduced version of the untyped lambda calculus. It was introduced by Moses Schönfinkel[1] and Haskell Curry.[2]
All operations in lambda calculus can be encoded via abstraction elimination into the SKI calculus as binary trees whose leaves are one of the three symbols S, K, and I (called combinators).
Notation
Although the most formal representation of the objects in this system requires binary trees, for simpler typesetting they are often represented as parenthesized expressions, as a shorthand for the tree they represent. Any subtrees may be parenthesized, but often only the right-side subtrees are parenthesized, with left associativity implied for any unparenthesized applications. For example, ISK means ((IS)K). Using this notation, a tree whose left subtree is the tree KS and whose right subtree is the tree SK can be written as KS(SK). If more explicitness is desired, the implied parentheses can be included as well: ((KS)(SK)).
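As an illustration, the binary-tree representation and the parenthesization convention can be written down in a few lines of Haskell. The names in this sketch (Term, App, pretty) are choices made here, not standard definitions:

```haskell
-- SKI terms as binary trees; App is the (left-associative) application node.
data Term = S | K | I | App Term Term
  deriving (Eq, Show)

-- Print a tree using the shorthand of this article: only right-hand
-- subtrees that are themselves applications need parentheses.
pretty :: Term -> String
pretty S         = "S"
pretty K         = "K"
pretty I         = "I"
pretty (App x y) = pretty x ++ wrap y
  where
    wrap t@(App _ _) = "(" ++ pretty t ++ ")"
    wrap t           = pretty t

-- pretty (App (App I S) K)         == "ISK"      -- the tree ((IS)K)
-- pretty (App (App K S) (App S K)) == "KS(SK)"
```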
Informal description
Informally, and using programming language jargon, a tree (xy) can be thought of as a function x applied to an argument y. When evaluated (i.e., when the function is "applied" to the argument), the tree "returns a value", i.e., transforms into another tree. The "function", "argument" and the "value" are either combinators or binary trees. If they are binary trees, they may be thought of as functions too, if needed.
The evaluation operation is defined as follows:
(x, y, and z stand for arbitrary expressions built from the combinators S, K, and I):
I returns its argument:
- Ix = x
K, when applied to any argument x, yields a one-argument constant function Kx, which, when applied to any argument y, returns x:
- Kxy = x
S is a substitution operator. It takes three arguments and then returns the first argument applied to the third, which is then applied to the result of the second argument applied to the third. More clearly:
- Sxyz = xz(yz)
Example computation: SKSK evaluates to KK(SK) by the S-rule. Then if we evaluate KK(SK), we get K by the K-rule. As no further rule can be applied, the computation halts here.
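The three rules above can be read as a small-step evaluator. The following sketch (the Term type is repeated so the fragment stands alone; step and normalize are names chosen here) contracts the leftmost-outermost redex at each step and reproduces the SKSK example:

```haskell
data Term = S | K | I | App Term Term
  deriving (Eq, Show)

-- Perform one reduction step, if any rule applies, preferring the
-- leftmost-outermost redex.
step :: Term -> Maybe Term
step (App I x)                 = Just x                          -- Ix = x
step (App (App K x) _)         = Just x                          -- Kxy = x
step (App (App (App S x) y) z) = Just (App (App x z) (App y z))  -- Sxyz = xz(yz)
step (App x y)                 = case step x of
  Just x' -> Just (App x' y)
  Nothing -> App x <$> step y
step _                         = Nothing

-- Keep stepping until no rule applies.  Not every term has a normal
-- form, so this can loop (e.g. on the term SII(SII) discussed below).
normalize :: Term -> Term
normalize t = maybe t normalize (step t)

-- normalize (App (App (App S K) S) K) == K   -- SKSK ⇒ KK(SK) ⇒ K
```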
For all trees x and all trees y, SKxy will always evaluate to y in two steps: SKxy = Ky(xy) = y, so the ultimate result of evaluating SKxy will always equal the result of evaluating y. We say that SKx and I are "functionally equivalent" because they always yield the same result when applied to any y.
From these definitions it can be shown that SKI calculus is not the minimal system that can fully perform the computations of lambda calculus: all occurrences of I in any expression can be replaced by (SKK) or (SKS) or, in general, (SKα) for any term α, and the resulting expression yields the same result. So the "I" is merely syntactic sugar. Since I is optional, the system is also referred to as the SK calculus or SK combinator calculus.
It is possible to define a complete system using only one (improper) combinator. An example is Chris Barker's iota combinator, which can be expressed in terms of S and K as follows:
- ιx = xSK
It is possible to reconstruct S, K, and I from the iota combinator. Applying ι to itself gives ιι = ιSK = SSKK = SK(KK), which is functionally equivalent to I. K can then be constructed by applying ι twice to ιι (which, as just noted, is equivalent to I): ι(ι(ιι)) = ι(ιιSK) = ι(ISK) = ι(SK) = SKSK = K. Applying ι one more time gives ι(ι(ι(ιι))) = ιK = KSK = S.
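These equations can be checked mechanically by adding ι to the term language as an extra constructor with the single rewrite rule ιx → xSK; the extension and the names below are assumptions of this sketch rather than part of the calculus:

```haskell
-- SKI terms extended with the iota combinator.
data Term = S | K | I | Iota | App Term Term
  deriving (Eq, Show)

step :: Term -> Maybe Term
step (App Iota x)              = Just (App (App x S) K)          -- ιx = xSK
step (App I x)                 = Just x
step (App (App K x) _)         = Just x
step (App (App (App S x) y) z) = Just (App (App x z) (App y z))
step (App x y)                 = case step x of
  Just x' -> Just (App x' y)
  Nothing -> App x <$> step y
step _                         = Nothing

normalize :: Term -> Term
normalize t = maybe t normalize (step t)

-- ιι           normalizes to SK(KK), which behaves like I
-- ι(ι(ιι))     normalizes to K
-- ι(ι(ι(ιι)))  normalizes to S
-- normalize (App Iota Iota)                                  == App (App S K) (App K K)
-- normalize (App Iota (App Iota (App Iota Iota)))            == K
-- normalize (App Iota (App Iota (App Iota (App Iota Iota)))) == S
```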
Formal definition
The terms and derivations in this system can also be more formally defined:
Terms: The set T of terms is defined recursively by the following rules.
- S, K, and I are terms.
- If τ1 and τ2 are terms, then (τ1τ2) is a term.
- Nothing is a term if not required to be so by the first two rules.
Derivations: A derivation is a finite sequence of terms defined recursively by the following rules (where α and ω are possibly empty words over the alphabet {S, K, I, (, )} while β, γ and δ are terms):
- If Δ is a derivation ending in an expression of the form α(Iβ)ω, then Δ followed by the term αβω is a derivation.
- If Δ is a derivation ending in an expression of the form α((Kβ)γ)ω, then Δ followed by the term αβω is a derivation.
- If Δ is a derivation ending in an expression of the form α(((Sβ)γ)δ)ω, then Δ followed by the term α((βδ)(γδ))ω is a derivation.
Assuming a sequence is a valid derivation to begin with, it can be extended using these rules. All derivations of length 1 are valid derivations.
SKI expressions
Self-application and recursion
SII is an expression that takes an argument and applies that argument to itself:
- SIIα = Iα(Iα) = αα
This is also known as the U combinator, Ux = xx. One interesting property of it is that its self-application is irreducible:
- SII(SII) = I(SII)(I(SII)) = SII(I(SII)) = SII(SII)
Or, using the equation as its definition directly, we immediately get U U = U U.
Another property is that it allows one to write a function that applies its first argument to the self-application of its second argument:
- (S(Kα)(SII))β = Kαβ(SIIβ) = α(Iβ(Iβ)) = α(ββ)
or it can be seen as defining yet another combinator directly, Hxy = x(yy).
This function can be used to achieve recursion. If β is the function that applies α to the self application of something else,
- β = Hα = S(Kα)(SII)
then the self-application of this β is the fixed point of that α:
- SIIβ = ββ = α(ββ) = α(α(ββ)) = ...
Or, directly again from the derived definition, Hα(Hα) = α(Hα(Hα)).
Suppose α expresses a "computational step" invoked as αρν for some ρ and ν, where ρν' is expected to stand for "the rest of the computation" (for some ν' that α "computes" from ν). Then the fixed point ββ expresses the whole recursive computation, because using ββ itself for the "rest of the computation" call (with ββν = α(ββ)ν) is exactly the definition of recursion: ρν' = ββν' = α(ββ)ν' = ... . To avoid divergence, α must contain some kind of conditional so that at a "base case" it stops instead of making the recursive call.
This can be formalized, with
- β = Hα = S(Kα)(SII) = S(KS)Kα(SII) = S(S(KS)K)(K(SII)) α
as
- Yα = SIIβ = SII(Hα) = S(K(SII))H α = S(K(SII))(S(S(KS)K)(K(SII))) α
which gives us one possible encoding of the Y combinator.
This becomes much shorter with the use of the B and C combinators, as the equivalent
- Yα = S(KU)(SB(KU))α = U(BαU) = BU(CBU)α
or directly, as
- Hαβ = α(ββ) = BαUβ = CBUαβ
- Yα = U(Hα) = BU(CBU)α
And with a pseudo-Haskell syntax it becomes the exceptionally short Y = U . (. U).
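That pseudo-Haskell line can be turned into compilable Haskell if self-application is hidden behind a newtype, since the type checker rejects a bare x x. The wrapper Rec and the names u and y are choices of this sketch:

```haskell
-- U x = x x, with the argument wrapped so that it type-checks.
newtype Rec a = In { out :: Rec a -> a }

u :: Rec a -> a
u r = out r r

-- Y = U . (. U):  y f = u (In (f . u)),  and indeed  y f = f (y f).
y :: (a -> a) -> a
y f = u (In (f . u))

-- Example: factorial defined without explicit recursion.
factorial :: Integer -> Integer
factorial = y (\rec n -> if n == 0 then 1 else n * rec (n - 1))
-- factorial 10 == 3628800
```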
The reversal expression
S(K(SI))K reverses the following two terms (a typed Haskell check is sketched after the trace):
- S(K(SI))Kαβ →
- K(SI)α(Kα)β →
- SI(Kα)β →
- Iβ(Kαβ) →
- Iβα →
- βα
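As a quick sanity check (the lowercase names are this sketch's own), writing S, K and I as ordinary Haskell functions shows that S(K(SI))K really does swap its two arguments:

```haskell
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)          -- Sxyz = xz(yz)

k :: a -> b -> a
k x _ = x                    -- Kxy = x

i :: a -> a
i x = x                      -- Ix = x

-- S(K(SI))K applied to x and then f yields f x.
rev :: a -> (a -> b) -> b
rev = s (k (s i)) k
-- rev 3 (+ 1) == 4
```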
Boolean logic
SKI combinator calculus can also implement Boolean logic in the form of an if-then-else structure. An if-then-else structure consists of a Boolean expression that is either true (T) or false (F) and two arguments, such that:
- Txy = x
and
- Fxy = y
The key is in defining the two Boolean expressions. The first works just like one of our basic combinators:
- T = K
- Kxy = x
The second is also fairly simple:
- F = SK
- SKxy = Ky(xy) = y
Once true and false are defined, all Boolean logic can be implemented in terms of if-then-else structures.
Boolean NOT (which returns the opposite of a given Boolean) works the same as the if-then-else structure, with F and T as the second and third values, so it can be implemented as a postfix operation:
- NOT = (F)(T) = (SK)(K)
If this is put in an if-then-else structure, it can be shown that this has the expected result
- (T)NOT = T(F)(T) = F
- (F)NOT = F(F)(T) = T
Boolean OR (which returns T if either of the two Boolean values surrounding it is T) works the same as an if-then-else structure with T as the second value, so it can be implemented as an infix operation:
- OR = T = K
If this is put in an if-then-else structure, it can be shown that this has the expected result:
- (T)OR(T) = T(T)(T) = T
- (T)OR(F) = T(T)(F) = T
- (F)OR(T) = F(T)(T) = T
- (F)OR(F) = F(T)(F) = F
Boolean AND (which returns T if both of the two Boolean values surrounding it are T) works the same as an if-then-else structure with F as the third value, so it can be implemented as a postfix operation:
- AND = F = SK
If this is put in an if-then-else structure, it can be shown that this has the expected result:
- (T)(T)AND = T(T)(F) = T
- (T)(F)AND = T(F)(F) = F
- (F)(T)AND = F(T)(F) = F
- (F)(F)AND = F(F)(F) = F
Because this defines T, F, NOT (as a postfix operator), OR (as an infix operator), and AND (as a postfix operator) in terms of SKI notation, this proves that the SKI system can fully express Boolean logic.
As the SKI calculus is complete, it is also possible to express NOT, OR and AND as prefix operators (a verification sketch follows this list):
- NOT = S(SI(KF))(KT) (as S(SI(KF))(KT)x = SI(KF)x(KTx) = Ix(KFx)T = xFT)
- OR = SI(KT) (as SI(KT)xy = Ix(KTx)y = xTy)
- AND = SS(K(KF)) (as SS(K(KF))xy = Sx(K(KF)x)y = xy(KFy) = xyF)
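The truth tables can be verified with the same kind of term evaluator sketched earlier (repeated here so the fragment stands alone; all names are this example's own). Since T and F are already normal forms, the expected results are literally the terms K and SK:

```haskell
data Term = S | K | I | App Term Term
  deriving (Eq, Show)

step :: Term -> Maybe Term
step (App I x)                 = Just x
step (App (App K x) _)         = Just x
step (App (App (App S x) y) z) = Just (App (App x z) (App y z))
step (App x y)                 = case step x of
  Just x' -> Just (App x' y)
  Nothing -> App x <$> step y
step _                         = Nothing

normalize :: Term -> Term
normalize t = maybe t normalize (step t)

-- Left-associated application, e.g. app [S, K, I] means ((SK)I).
app :: [Term] -> Term
app = foldl1 App

true, false, notP, orP, andP :: Term
true  = K                                              -- T
false = App S K                                        -- F
notP  = app [S, app [S, I, App K false], App K true]   -- S(SI(KF))(KT)
orP   = app [S, I, App K true]                         -- SI(KT)
andP  = app [S, S, App K (App K false)]                -- SS(K(KF))

-- normalize (app [notP, true])        == false
-- normalize (app [notP, false])       == true
-- normalize (app [orP, false, true])  == true
-- normalize (app [andP, true, false]) == false
```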
Connection to intuitionistic logic
The combinators K and S correspond to two well-known axioms of sentential logic:
- AK: A → (B → A),
- AS: (A → (B → C)) → ((A → B) → (A → C)).
Function application corresponds to the rule modus ponens:
- MP: from A and A → B, infer B.
The axioms AK and AS, together with the rule MP, are complete for the implicational fragment of intuitionistic logic. For combinatory logic to have as a model:
- the implicational fragment of classical logic would require the combinatory analog of the law of excluded middle, i.e., Peirce's law;
- complete classical logic would require the combinatory analog of the sentential axiom F → A.
This connection between the types of combinators and the corresponding logical axioms is an instance of the Curry–Howard isomorphism.
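The correspondence can be seen directly in Haskell, where the principal types of K and S, written as ordinary functions, are exactly the axioms AK and AS, and function application plays the role of MP (an illustration only; the names are this sketch's own):

```haskell
-- K inhabits AK: A → (B → A)
k :: a -> (b -> a)
k x _ = x

-- S inhabits AS: (A → (B → C)) → ((A → B) → (A → C))
s :: (a -> (b -> c)) -> ((a -> b) -> (a -> c))
s f g x = f x (g x)

-- Applying s to k, and the result to k again (two uses of modus
-- ponens), proves A → A, mirroring the fact that SKK behaves like I.
identity :: a -> a
identity = s k k
```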
Examples of reduction
There may be more than one way to carry out a reduction. All of them yield the same result, provided the implicit grouping of applications is respected when the rules are applied:
- SKI(KIS)
- SKI(KIS) ⇒ K(KIS)(I(KIS)) ⇒ KIS ⇒ I (the argument I(KIS) is discarded by K, so it need not be reduced)
- SKI(KIS) ⇒ SKII ⇒ KI(II) ⇒ KII ⇒ I
- KS(I(SKSI))
- KS(I(SKSI)) ⇒ KS(I(KI(SI))) ⇒ KS(II) ⇒ KSI ⇒ S
- KS(I(SKSI)) ⇒ S (the argument of the outer K is discarded, so it need not be reduced at all)
- SKIK ⇒ KK(IK) ⇒ KKK ⇒ K
See also
- Combinatory logic
- B, C, K, W system
- Fixed point combinator
- Lambda calculus
- Functional programming
- Unlambda programming language
- The Iota and Jot programming languages, designed to be even simpler than SKI.
- To Mock a Mockingbird
References
- ↑ Schönfinkel, M. (1924). "Über die Bausteine der mathematischen Logik". Mathematische Annalen 92 (3–4): 305–316. doi:10.1007/BF01448013. Translated by Stefan Bauer-Mengelberg as van Heijenoort, Jean, ed (2002). "On the building blocks of mathematical logic". A Source Book in Mathematical Logic 1879–1931. Harvard University Press. pp. 355–366. ISBN 9780674324497. https://books.google.com/books?id=v4tBTBlU05sC&pg=PA355.
- ↑ Curry, Haskell Brooks (1930). "Grundlagen der Kombinatorischen Logik" (in German). American Journal of Mathematics (Johns Hopkins University Press) 52 (3): 509–536. doi:10.2307/2370619.
- Smullyan, Raymond (1985). To Mock a Mockingbird. Knopf. ISBN 0-394-53491-3. A gentle introduction to combinatory logic, presented as a series of recreational puzzles using bird watching metaphors.
External links
- O'Donnell, Mike "The SKI Combinator Calculus as a Universal System."
- Keenan, David C. (2001) "To Dissect a Mockingbird."
- Rathman, Chris, "Combinator Birds."
- ""Drag 'n' Drop Combinators (Java Applet)."
- A Calculus of Mobile Processes, Part I (PostScript) (by Milner, Parrow, and Walker) shows a scheme for combinator graph reduction for the SKI calculus in pages 25–28.
- The Nock programming language may be seen as an assembly language based on SK combinator calculus in the same way that traditional assembly language is based on Turing machines. Nock instruction 2 (the "Nock operator") is the S combinator and Nock instruction 1 is the K combinator. The other primitive instructions in Nock (instructions 0,3,4,5, and the pseudo-instruction "implicit cons") are not necessary for universal computation, but make programming more convenient by providing facilities for dealing with binary tree data structures and arithmetic; Nock also provides 5 more instructions (6,7,8,9,10) that could have been built out of these primitives.
Original source: https://en.wikipedia.org/wiki/SKI combinator calculus.