Reduction strategy (lambda calculus)

From HandWiki

In lambda calculus, a branch of mathematical logic concerned with the formal study of functions, a reduction strategy is a rule that determines how a complex expression is reduced to a simpler expression by successive reduction steps. It is similar to, but subtly different from, the notion of an evaluation strategy in computer science.

Overview

Roughly speaking, a reduction strategy is a function that maps a lambda calculus term containing reducible expressions (redexes) to one particular redex, the one to be reduced next. Mathematical logicians have studied the properties of reduction strategies for decades. The superficial similarity between descriptions of evaluation strategies and reduction strategies originally misled programming language researchers into speculating that the two were identical, a belief still visible in popular textbooks from the early 1980s;[1] they are, however, different concepts.[citation needed]
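The idea of a strategy as "a function picking the next redex" can be made concrete in a short sketch. The following Python code (a hypothetical illustration, not drawn from the literature cited here) represents untyped lambda terms and performs one leftmost-outermost (normal-order) reduction step:

```python
from dataclasses import dataclass
from typing import Optional, Union

# Hypothetical term representation: variables, abstractions, and
# applications of the untyped lambda calculus.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: "Term"

@dataclass(frozen=True)
class App:
    fun: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def substitute(term: Term, name: str, value: Term) -> Term:
    """Naive substitution; adequate when bound names are distinct
    (capture-avoiding renaming is omitted for brevity)."""
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:
            return term
        return Lam(term.param, substitute(term.body, name, value))
    return App(substitute(term.fun, name, value),
               substitute(term.arg, name, value))

def step_normal_order(term: Term) -> Optional[Term]:
    """One leftmost-outermost (normal-order) reduction step;
    returns None when the term is already in normal form."""
    if isinstance(term, App):
        if isinstance(term.fun, Lam):            # outermost redex: contract it
            return substitute(term.fun.body, term.fun.param, term.arg)
        reduced = step_normal_order(term.fun)    # otherwise try the function part
        if reduced is not None:
            return App(reduced, term.arg)
        reduced = step_normal_order(term.arg)    # then the argument
        return App(term.fun, reduced) if reduced is not None else None
    if isinstance(term, Lam):
        reduced = step_normal_order(term.body)
        return Lam(term.param, reduced) if reduced is not None else None
    return None                                  # a variable has no redex
```

For example, `step_normal_order(App(Lam("x", Var("x")), Var("y")))` contracts the single redex (λx.x) y and yields `Var("y")`. A different strategy would be a different function with the same shape but a different choice of redex.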

Plotkin[2] showed in 1973, however, that a proper model of an evaluation strategy calls for the formulation of a new axiom for function calls, that is, an entirely new calculus. He validates this idea with two different calculi, one for call-by-name and one for call-by-value, each for purely functional programming languages, and shows that such a calculus satisfies two natural criteria. First, the calculus defines an evaluation function that maps closed terms (representations of programs) to answers (representations of outputs); this function is defined via the traditional Curry–Feys standardization theorem, and the proof relies on a conventional Church–Rosser theorem for the modified calculus. Second, the calculus is a sound equational reasoning system with respect to Morris' notion of observational equivalence.[3]
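The behavioral gap between the two calculi can be seen on the classic term (λx.z) Ω, where Ω = (λx.x x)(λx.x x): call-by-name discards the divergent argument, while call-by-value insists on evaluating it first. The sketch below (a toy illustration with an assumed term representation, not Plotkin's calculi themselves) implements weak-head steppers for both strategies and runs them with a fuel bound:

```python
from dataclasses import dataclass
from typing import Optional, Union

# Toy term representation for the untyped lambda calculus.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: "Term"

@dataclass(frozen=True)
class App:
    fun: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def substitute(term, name, value):
    # Naive substitution; bound names are assumed distinct.
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:
            return term
        return Lam(term.param, substitute(term.body, name, value))
    return App(substitute(term.fun, name, value),
               substitute(term.arg, name, value))

def is_value(term):
    return isinstance(term, (Var, Lam))

def step_cbn(term) -> Optional[Term]:
    # Call-by-name: substitute the argument unevaluated.
    if isinstance(term, App):
        if isinstance(term.fun, Lam):
            return substitute(term.fun.body, term.fun.param, term.arg)
        nxt = step_cbn(term.fun)
        return App(nxt, term.arg) if nxt is not None else None
    return None

def step_cbv(term) -> Optional[Term]:
    # Call-by-value: evaluate the argument to a value before substituting.
    if isinstance(term, App):
        if not is_value(term.fun):
            nxt = step_cbv(term.fun)
            return App(nxt, term.arg) if nxt is not None else None
        if not is_value(term.arg):
            nxt = step_cbv(term.arg)
            return App(term.fun, nxt) if nxt is not None else None
        if isinstance(term.fun, Lam):
            return substitute(term.fun.body, term.fun.param, term.arg)
    return None

def eval_with_fuel(step, term, fuel=50):
    """Apply `step` until no redex remains or fuel runs out;
    returns None when evaluation did not finish in time."""
    for _ in range(fuel):
        nxt = step(term)
        if nxt is None:
            return term
        term = nxt
    return None
```

With `omega = App(delta, delta)` for `delta = Lam("x", App(Var("x"), Var("x")))`, the program `App(Lam("x", Var("z")), omega)` evaluates to `Var("z")` under `step_cbn` in one step, whereas `step_cbv` keeps rewriting Ω to itself and exhausts its fuel, mirroring the divergence.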

Nearly two decades later, Crank and Felleisen showed how to scale Plotkin's work to languages with imperative assignment statements.[4] They define calculi for a language with variables, functions, function application and assignment statements that capture the conventional notions of parameter passing and the evaluation strategies of a wide array of programming languages. They show that each calculus satisfies Plotkin's criteria, including the traditional Church–Rosser and Curry–Feys theorems. In addition, they introduce a calculus that reifies ML's notion of a reference cell.

Ariola and Felleisen[5] and, independently, Maraist, Odersky and Wadler[6] completed this line of work with the design of a lambda calculus that precisely relates the notion of call-by-need, also known as lazy evaluation, to an equational system of calculation. Unlike Plotkin's call-by-value and call-by-name calculi, this call-by-need calculus requires four axioms to characterize function calls. Chang and Felleisen[7] eventually showed how to create a by-need calculus with a single, albeit complex, axiom.
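The operational intuition behind call-by-need, independent of the axioms of the calculus itself, is that an argument is evaluated at most once and its result is shared among all uses. A minimal sketch (a hypothetical memoized `Thunk` class, not part of any of the cited calculi) makes the sharing visible by counting evaluations:

```python
class Thunk:
    """A memoized suspension: the computation runs at most once,
    on the first force, and the result is shared thereafter."""
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:
            self._value = self._compute()
            self._forced = True
            self._compute = None   # drop the closure so it can be collected
        return self._value

calls = 0
def expensive_argument():
    # Stands in for an expensive argument expression.
    global calls
    calls += 1
    return 21

arg = Thunk(expensive_argument)
# The "function body" uses its argument twice; under call-by-name the
# argument would be re-evaluated on each use, under call-by-need only once.
result = arg.force() + arg.force()   # result is 42, calls is 1
```

Under call-by-name the counter would reach 2; the single evaluation here is precisely the sharing that the four-axiom (and later single-axiom) by-need calculi capture equationally.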

See also

  • Call-by-push-value, a semantic paradigm that enables dealing with both call-by-name and call-by-value.
  • The Dynamic Geometry of Interaction abstract machine, an efficient graph-based framework for arbitrary evaluation strategies.

References

  1. Structure and Interpretation of Computer Programs by Abelson and Sussman, MIT Press 1983
  2. Call-by-name, call-by-value, and the lambda calculus by Plotkin, Theoretical Computer Science 1975
  3. Programming languages and lambda calculus by James Morris, MIT 1968
  4. Parameter-Passing and the Lambda Calculus by Crank and Felleisen, Principles of Programming Languages 1991
  5. The call-by-need lambda calculus by Ariola and Felleisen, Journal of Functional Programming 1997
  6. The call-by-need lambda calculus by Maraist Odersky and Wadler, Journal of Functional Programming 1999
  7. The call-by-need lambda calculus, revisited by Chang and Felleisen, European Symposium on Programming, 2012