Meta-circular evaluator

In computing, a meta-circular evaluator (MCE) or meta-circular interpreter (MCI) is an interpreter which defines each feature of the interpreted language using a similar facility of the interpreter's host language. For example, interpreting a lambda application may be implemented using function application.[1] Meta-circular evaluation is most prominent in the context of Lisp.[1][2] A self-interpreter is a meta-circular interpreter where the interpreted language is nearly identical to the host language; the two terms are often used synonymously.[3]

History

The dissertation of Corrado Böhm[4] describes the design of a self-hosting compiler.[5] Due to the difficulty of compiling higher-order functions, many languages were instead defined via interpreters, most prominently Lisp.[1][6] The term "meta-circular" itself was coined by John C. Reynolds,[1] and was popularized through its use in the book Structure and Interpretation of Computer Programs.[3][7]

Self-interpreters

A self-interpreter is a meta-circular interpreter where the host language is also the language being interpreted.[8] A self-interpreter exhibits a universal function for the language in question and can be helpful in learning certain aspects of the language.[2] However, a self-interpreter provides a circular, vacuous definition of most language constructs and thus offers little insight into the interpreted language's semantics, for example its evaluation strategy. Addressing these issues produces the more general notion of a "definitional interpreter".[1]

From self-interpreter to abstract machine

This presentation is based on Section 3.2.4 of Danvy's thesis.[9]

Here is the core of a self-evaluator for the [math]\displaystyle{ \lambda }[/math] calculus. The abstract syntax of the [math]\displaystyle{ \lambda }[/math] calculus is implemented as follows in OCaml, representing variables with their de Bruijn index, i.e., with their lexical offset (starting from 0):

type term = IND of int    (* de Bruijn index *)
          | ABS of term
          | APP of term * term
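
For instance (an illustrative example, not taken from the thesis), the term written with names as fun x -> fun y -> x y is represented by numbering each variable with its distance from its binder:

(* fun x -> fun y -> x y: y is bound most recently (index 0), x one binder further out (index 1) *)
let example_term : term = ABS (ABS (APP (IND 1, IND 0)))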

The evaluator uses an environment:

type value = FUN of (value -> value)

let rec eval (t : term) (e : value list) : value =
  match t with
    IND n ->
     List.nth e n
  | ABS t' ->
     FUN (fun v -> eval t' (v :: e))
  | APP (t0, t1) ->
     apply (eval t0 e) (eval t1 e)
and apply (FUN f : value) (a : value) =
  f a

let main (t : term) : value =
  eval t []
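
For example (an illustrative use, not taken from the thesis), applying the identity function to itself evaluates, in the empty environment, to the identity function as a value:

(* (fun x -> x) (fun y -> y) *)
let result : value = main (APP (ABS (IND 0), ABS (IND 0)))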

Values (of type value) conflate expressible values (the result of evaluating an expression in an environment) and denotable values (the values denoted by variables in the environment), a terminology that is due to Christopher Strachey.[10][11]

Environments are represented as lists of denotable values.

The core evaluator has three clauses:

  • It maps a variable (represented with a de Bruijn index) into the value in the current environment at this index.
  • It maps a syntactic function into a semantic function. (Applying a semantic function to an argument reduces to evaluating the body of the corresponding syntactic function in its lexical environment, extended with the argument.)
  • It maps a syntactic application into a semantic application.

This evaluator is compositional in that each of its recursive calls is made over a proper sub-part of the given term. It is also higher order since the domain of values is a function space.

In "Definitional Interpreters", Reynolds answered the question as to whether such a self-interpreter is well defined. He answered in the negative because the evaluation strategy of the defined language (the source language) is determined by the evaluation strategy of the defining language (the meta-language). If the meta-language follows call by value (as OCaml does), the source language follows call by value. If the meta-language follows call by name (as Algol 60 does), the source language follows call by name. And if the meta-language follows call by need (as Haskell does), the source language follows call by need.

In "Definitional Interpreters", Reynolds made a self-interpreter well defined by making it independent of the evaluation strategy of its defining language. He fixed the evaluation strategy by transforming the self-interpreter into Continuation-Passing Style, which is evaluation-strategy independent, as later captured in Gordon Plotkin's Independence Theorems. [12]

Furthermore, because logical relations had yet to be discovered, Reynolds made the resulting continuation-passing evaluator first order by (1) closure-converting it and (2) defunctionalizing the continuation. He pointed out the "machine-like quality" of the resulting interpreter, which is the origin of the CEK machine, since Reynolds's CPS transformation was for call by value.[13] For call by name, these transformations map the self-interpreter to an early instance of the Krivine machine.[14] The SECD machine and many other abstract machines can be inter-derived this way.[15][16]
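
A sketch of the outcome of these two steps for the call-by-value case is shown below (again illustrative; the names mvalue, cont, run, continue, and main_machine are assumptions, not from the cited texts). Host-language functions are replaced by closures, and the defunctionalized continuations form the machine's stack of evaluation contexts, giving a CEK-style abstract machine:

type mvalue = CLOSURE of term * mvalue list       (* closure-converted function values *)

type cont = HALT                                   (* empty context: return the value *)
          | ARG of term * mvalue list * cont       (* next, evaluate the argument *)
          | CALL of mvalue * cont                  (* next, call the function on the value *)

let rec run (t : term) (e : mvalue list) (k : cont) : mvalue =
  match t with
    IND n ->
     continue k (List.nth e n)
  | ABS t' ->
     continue k (CLOSURE (t', e))
  | APP (t0, t1) ->
     run t0 e (ARG (t1, e, k))
and continue (k : cont) (v : mvalue) : mvalue =
  match k with
    HALT ->
     v
  | ARG (t1, e, k') ->
     run t1 e (CALL (v, k'))
  | CALL (CLOSURE (t', e'), k') ->
     run t' (v :: e') k'

let main_machine (t : term) : mvalue =
  run t [] HALT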

It is remarkable that the three most famous abstract machines for the [math]\displaystyle{ \lambda }[/math] calculus functionally correspond to the same self-interpreter.

Self-interpretation in total programming languages

Total functional programming languages that are strongly normalizing cannot be Turing complete, otherwise one could solve the halting problem by seeing if the program type-checks. That means that there are computable functions that cannot be defined in the total language.[17] In particular, it is impossible to define a self-interpreter in a total programming language, for example in any of the typed lambda calculi such as the simply typed lambda calculus, Jean-Yves Girard's System F, or Thierry Coquand's calculus of constructions.[18][19] Here, by "self-interpreter" we mean a program that takes a source term representation in some plain format (such as a string of characters) and returns a representation of the corresponding normalized term. This impossibility result does not hold for other definitions of "self-interpreter". For example, some authors have referred to functions of type [math]\displaystyle{ \pi\,\tau \to \tau }[/math] as self-interpreters, where [math]\displaystyle{ \pi\,\tau }[/math] is the type of representations of [math]\displaystyle{ \tau }[/math]-typed terms. To avoid confusion, we will refer to these functions as self-recognizers. Brown and Palsberg showed that self-recognizers could be defined in several strongly normalizing languages, including System F and System Fω.[20] This turned out to be possible because the types of encoded terms are reflected in the types of their representations, which rules out the diagonal argument. In their paper, Brown and Palsberg claim to disprove the "conventional wisdom" that self-interpretation is impossible (and they refer to Wikipedia as an example of the conventional wisdom), but what they actually disprove is the impossibility of self-recognizers, a distinct concept. In their follow-up work, they switch to the more specific "self-recognizer" terminology used here, notably distinguishing these from "self-evaluators", of type [math]\displaystyle{ \pi\,\tau \to \pi\,\tau }[/math].[21] They also recognize that implementing self-evaluation seems harder than self-recognition, and leave the implementation of the former in a strongly normalizing language as an open problem.

Uses

In combination with an existing language implementation, meta-circular interpreters provide a baseline system from which to extend a language, either upwards by adding more features or downwards by compiling away features rather than interpreting them.[22] They are also useful for writing tools that are tightly integrated with the programming language, such as sophisticated debuggers.[citation needed] A language designed with a meta-circular implementation in mind is often more suited for building languages in general, even ones completely different from the host language.[citation needed]

Examples

Many languages have one or more meta-circular implementations. A partial list follows.

Some languages with a meta-circular implementation designed from the bottom up, in grouped chronological order:

Some languages with a meta-circular implementation via third parties:

References

  1. Reynolds, John C. (1972). "Definitional Interpreters for Higher-Order Programming Languages". Proceedings of the ACM Annual Conference (ACM '72). 2. pp. 717–740. doi:10.1145/800194.805852. http://www.cs.uml.edu/~giam/91.531/Textbooks/definterp.pdf. Retrieved 14 April 2017.
  2. Reynolds, John C. (1998). "Definitional Interpreters Revisited". Higher-Order and Symbolic Computation 11 (4): 355–361. doi:10.1023/A:1010075320153. http://homepages.inf.ed.ac.uk/wadler/papers/papers-we-love/reynolds-definitional-interpreters-revisited.pdf. Retrieved 21 March 2023.
  3. "The Metacircular Evaluator". Structure and Interpretation of Computer Programs. MIT. http://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1.
  4. Böhm, Corrado (1954). "Calculatrices digitales. Du déchiffrage des formules logico-mathématiques par la machine même dans la conception du programme". Ann. Mat. Pura Appl. 4 (37). 
  5. Knuth, Donald E.; Pardo, Luis Trabb (August 1976). The early development of programming languages. p. 36. https://archive.org/stream/bitsavers_stanfordcs562EarlyDevelPgmgLangAug76_5916830/STAN-CS-76-562_EarlyDevelPgmgLang_Aug76#page/n35/mode/2up. 
  6. McCarthy, John (1961). "A Universal LISP Function". Lisp 1.5 Programmer's Manual. p. 10. http://www.softwarepreservation.org/projects/LISP/book/LISP%201.5%20Programmers%20Manual.pdf. 
  7. Harvey, Brian. "Why Structure and Interpretation of Computer Programs matters". https://people.eecs.berkeley.edu/~bh/sicp.html. Retrieved 14 April 2017. 
  8. Braithwaite, Reginald (2006-11-22). "The significance of the meta-circular interpreter". http://weblog.raganwald.com/2006/11/significance-of-meta-circular_22.html. Retrieved 2011-01-22. 
  9. Danvy, Olivier (2006). An Analytical Approach to Programs as Data Objects (Thesis). doi:10.7146/aul.214.152. ISBN 9788775073948.
  10. Strachey, Christopher (1967). Fundamental Concepts in Programming Languages (Technical report). Lecture notes for the International Summer School in Computer Programming, Copenhagen.
  11. Mosses, Peter D. (2000). "A Foreword to 'Fundamental Concepts in Programming Languages'". Higher-Order and Symbolic Computation 13 (1/2): 7–9. doi:10.1023/A:1010048229036. 
  12. Plotkin, Gordon D. (1975). "Call by name, call by value and the lambda-calculus". Theoretical Computer Science 1 (2): 125–159. 
  13. Felleisen, Matthias; Friedman, Daniel (1986). "Control Operators, the SECD Machine, and the lambda-Calculus". pp. 193–217. 
  14. Schmidt, David A. (1980). "State transition machines for lambda calculus expressions". Lecture Notes in Computer Science 94. pp. 415–440. doi:10.1007/3-540-10250-7_32. ISBN 978-3-540-10250-2.
  15. Danvy, Olivier (2004). "A Rational Deconstruction of Landin's SECD Machine". pp. 52–71. https://www.brics.dk/RS/03/33/BRICS-RS-03-33.pdf. 
  16. Ager, Mads Sig; Biernacki, Dariusz; Danvy, Olivier; Midtgaard, Jan (2003). "A Functional Correspondence between Evaluators and Abstract Machines". Brics Report Series 10 (13): 8–19. doi:10.7146/brics.v10i13.21783. 
  17. Riolo, Rick; Worzel, William P.; Kotanchek, Mark (4 June 2015) (in en). Genetic Programming Theory and Practice XII. Springer. p. 59. ISBN 978-3-319-16030-6. https://books.google.com/books?id=rfDLCQAAQBAJ&pg=PA59. Retrieved 8 September 2021. 
  18. Conor McBride (May 2003), "on termination" (posted to the Haskell-Cafe mailing list).
  19. Andrej Bauer (June 2014), Answer to: A total language that only a Turing complete language can interpret (posted to the Theoretical Computer Science StackExchange site)
  20. Brown, Matt; Palsberg, Jens (11 January 2016). "Breaking through the normalization barrier: A self-interpreter for f-omega". Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. pp. 5–17. doi:10.1145/2837614.2837623. ISBN 9781450335492. http://web.cs.ucla.edu/~palsberg/paper/popl16-full.pdf. 
  21. Brown, Matt; Palsberg, Jens (January 2017). "Typed self-evaluation via intensional type functions". Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages. pp. 415–428. doi:10.1145/3009837.3009853. ISBN 9781450346603. 
  22. Oriol, Manuel; Meyer, Bertrand (2009-06-29) (in en). Objects, Components, Models and Patterns: 47th International Conference, TOOLS EUROPE 2009, Zurich, Switzerland, June 29-July 3, 2009, Proceedings. Springer Science & Business Media. p. 330. ISBN 9783642025716. https://books.google.com/books?id=6RAlcYFn1SAC&dq=meta+circular&pg=PA330. Retrieved 14 April 2017. 
  23. Meta-circular implementation of the Pico programming language
