
Simply Typed Lambda Calculus


Formal System in Mathematical Logic

The simply typed lambda calculus (λ→), a typed interpretation of the lambda calculus, has only one type constructor, the arrow (→), which builds function types. It is the canonical and simplest example of a typed lambda calculus. The simply typed lambda calculus was originally introduced by Alonzo Church in 1940 as an attempt to avoid paradoxical uses of the untyped lambda calculus.[1]

The term "simple type" is also used to refer to extensions of the simply typed lambda calculus with constructs such as products, coproducts, or natural numbers (System T), or even full recursion (as in PCF, the Programming language for Computable Functions). By contrast, systems that introduce polymorphic types (such as System F) or dependent types (such as the Logical Framework) are not usually considered simply typed. The simple types, with the exception of full recursion, can still be interpreted using only the → constructor together with suitable type variables, whereas polymorphism and dependency cannot.

Syntax

In the 1930s, Alonzo Church sought to use what he termed the "logistic method".[a] His lambda calculus, conceived as a formal language of symbolic expressions, had a denumerably infinite series of axioms and variables,[b] but also a finite set of primitive symbols,[c] denoting abstraction and scope, together with four constants: negation, disjunction, universal quantification, and selection,[d][e] and a finite set of rules I through VI. Rule V is modus ponens, and rules IV and VI handle substitution and generalization respectively;[d] rules I to III are known in the lambda calculus as alpha, beta, and eta conversion. Church intended English to serve only as a syntax language (a metamathematical language) for describing these symbolic expressions, without attaching interpretations to them.[f]

By 1940, Church had settled on a subscript notation to meticulously denote the type of each symbolic expression.[b] In his original formulation, he employed only two base types: ⁠o⁠ for "the type of propositions" and ⁠ι⁠ for "the type of individuals." The type ⁠o⁠ was starkly devoid of term constants, while ⁠ι⁠ had just one. Often, the calculus is considered with a single base type, typically ⁠o⁠. Greek letters like ⁠α⁠, ⁠β⁠, and so on, were used as type variables; the parenthesized subscript ⁠(αβ)⁠ denoted the function type ⁠β → α⁠. Church, in his 1940 paper (p. 58), employed the "arrow or →" notation to signify "stands for," or more accurately, as an abbreviation for it.[g] By the 1970s, the standalone arrow notation had become commonplace; in this context, symbols like ⁠σ⁠ and ⁠τ⁠ are permitted to range over types. The infinite array of axioms was then understood as a consequence of applying rules I to VI to these types, echoing the structure of the Peano axioms. Informally, the function type ⁠σ → τ⁠ represents the type of functions that accept an input of type ⁠σ⁠ and produce an output of type ⁠τ⁠.

By convention, the → symbol is right-associative: ⁠σ → τ → ρ⁠ is interpreted as ⁠σ → (τ → ρ)⁠.

To formally define these types, we first establish a set of base types, denoted by ⁠B⁠. These are sometimes referred to as atomic types or type constants. With ⁠B⁠ fixed, the syntax of types follows this recursive definition:

τ ::= τ → τ | T    where T ∈ B.

For instance, if ⁠B = {a, b}⁠, this generates an infinite hierarchy of types, beginning with ⁠a⁠, ⁠b⁠, then ⁠a → a⁠, ⁠a → b⁠, ⁠b → b⁠, ⁠b → a⁠, and progressing to more complex forms like ⁠a → (a → a)⁠, and even ⁠(b → a) → (a → b)⁠, and so forth.
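As an illustration (not from the original article), the type grammar over B = {a, b} can be sketched as a small Python datatype; the names Base, Arrow, and show are made up for this example:

```python
from dataclasses import dataclass

# A sketch of the type grammar  tau ::= T | tau -> tau  over B = {a, b}.

@dataclass(frozen=True)
class Base:
    name: str                # a base type T in B

@dataclass(frozen=True)
class Arrow:
    dom: "Base | Arrow"      # sigma in sigma -> tau
    cod: "Base | Arrow"      # tau in sigma -> tau

def show(t) -> str:
    """Render a type, parenthesizing only left arguments of ->,
    since the arrow is right-associative."""
    if isinstance(t, Base):
        return t.name
    left = show(t.dom)
    if isinstance(t.dom, Arrow):
        left = "(" + left + ")"
    return left + " -> " + show(t.cod)

a, b = Base("a"), Base("b")
print(show(Arrow(a, Arrow(a, a))))             # a -> a -> a
print(show(Arrow(Arrow(b, a), Arrow(a, b))))   # (b -> a) -> a -> b
```

Note how the printer only inserts parentheses on the left of an arrow, matching the right-associativity convention above.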

A set of term constants is also designated for these base types. For example, if one of the base types is designated as nat, its term constants might be the natural numbers themselves.

The syntax of the simply typed lambda calculus is, in essence, the syntax of the lambda calculus itself. The notation ⁠x : τ⁠ signifies that the variable ⁠x⁠ possesses type ⁠τ⁠. The term syntax, expressed in Backus–Naur form, encompasses variable references, abstractions, applications, or constants:

⁠e ::= x | λx : τ. e | e e | c⁠

where ⁠c⁠ represents a term constant. A variable reference ⁠x⁠ is considered "bound" if it falls within the scope of an abstraction that binds ⁠x⁠. A term is deemed "closed" if it contains no unbound variables.
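The term grammar and the bound/closed distinction can be sketched in Python (an illustration, not part of the original text; types are plain strings for brevity, and all names are made up):

```python
from dataclasses import dataclass

# A sketch of the term grammar  e ::= x | lambda x:tau. e | e e | c.

@dataclass(frozen=True)
class Var:
    name: str              # variable reference x

@dataclass(frozen=True)
class Lam:
    var: str               # bound variable x
    ty: str                # its declared type tau
    body: "Var | Lam | App | Const"

@dataclass(frozen=True)
class App:
    fn: "Var | Lam | App | Const"
    arg: "Var | Lam | App | Const"

@dataclass(frozen=True)
class Const:
    name: str              # term constant c

def free_vars(e) -> set[str]:
    """Variables occurring unbound in e."""
    if isinstance(e, Var):
        return {e.name}
    if isinstance(e, Lam):
        return free_vars(e.body) - {e.var}
    if isinstance(e, App):
        return free_vars(e.fn) | free_vars(e.arg)
    return set()           # constants bind and reference nothing

def is_closed(e) -> bool:
    return not free_vars(e)

identity = Lam("x", "a", Var("x"))
print(is_closed(identity))                  # True: x is bound by the lambda
print(is_closed(App(identity, Var("y"))))   # False: y is unbound
```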

In stark contrast, the syntax of the untyped lambda calculus dispenses with such typing and term constants altogether:

⁠e ::= x | λx. e | e e⁠

The crucial difference lies in the typed lambda calculus: every abstraction, that is, every function, is compelled to declare the type of its argument.

Typing Rules

To precisely define the set of well-typed lambda terms for any given type, we establish a typing relation. This relation operates within typing contexts, also known as typing environments, denoted by ⁠Γ, Δ, ...⁠. These contexts are essentially sets of typing assumptions. A typing assumption takes the form ⁠x : σ⁠, asserting that the variable ⁠x⁠ belongs to type ⁠σ⁠.

The typing relation ⁠Γ ⊢ e : σ⁠ signifies that ⁠e⁠ is a term of type ⁠σ⁠ within the context ⁠Γ⁠. When this relation holds, ⁠e⁠ is considered "well-typed" (possessing type ⁠σ⁠). Instances of this typing relation are termed typing judgments. The validity of a typing judgment is demonstrated by a typing derivation, a structured argument built using typing rules, where the premises positioned above a line logically lead to the conclusion below it. The simply typed lambda calculus employs these specific rules:[h]

(1) x : σ ∈ Γ  ⟹  Γ ⊢ x : σ

(2) c is a constant of type T  ⟹  Γ ⊢ c : T

(3) Γ, x : σ ⊢ e : τ  ⟹  Γ ⊢ (λx : σ. e) : σ → τ

(4) Γ ⊢ e1 : σ → τ and Γ ⊢ e2 : σ  ⟹  Γ ⊢ e1 e2 : τ

In plain language, these rules dictate:

  • If ⁠x⁠ is declared to have type ⁠σ⁠ within the context, then ⁠x⁠ indeed has type ⁠σ⁠.

  • Term constants are intrinsically linked to their designated base types.

  • For an abstraction ⁠λx : σ. e⁠ to possess type ⁠σ → τ⁠ within a given context, it must be the case that within that same context, augmented with the assumption that ⁠x⁠ has type ⁠σ⁠, the body ⁠e⁠ itself has type ⁠τ⁠.

  • When ⁠e1⁠ is of type ⁠σ → τ⁠ and ⁠e2⁠ is of type ⁠σ⁠, their application ⁠e1 e2⁠ consequently yields a result of type ⁠τ⁠.

Consider these examples of closed terms—those that can be typed in the absence of any context:

  • For any given type ⁠τ⁠, the term ⁠λx : τ. x⁠ has type ⁠τ → τ⁠. This is the identity function, also known as the I-combinator.

  • For types ⁠σ⁠ and ⁠τ⁠, the term ⁠λx : σ. λy : τ. x⁠ with type ⁠σ → τ → σ⁠. This is the K-combinator.

  • For types ⁠τ, τ', τ''⁠, the term ⁠λx : τ → τ' → τ''. λy : τ → τ'. λz : τ. x z (y z)⁠, which possesses the type ⁠(τ → τ' → τ'') → (τ → τ') → τ → τ''⁠. This is the S-combinator.

These terms represent the foundational combinators of combinatory logic within the framework of the typed lambda calculus.
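A minimal checker for rules (1), (3), and (4) can be sketched in Python and used to re-derive the types of the I- and K-combinators above. Rule (2) for constants is omitted for brevity, and every name here is illustrative:

```python
from dataclasses import dataclass

# Types: base types and arrows; terms: variables, abstractions, applications.

@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: "Base | Arrow"
    cod: "Base | Arrow"

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    ty: "Base | Arrow"
    body: "Var | Lam | App"

@dataclass(frozen=True)
class App:
    fn: "Var | Lam | App"
    arg: "Var | Lam | App"

def typeof(ctx: dict, e):
    if isinstance(e, Var):                            # rule (1)
        return ctx[e.name]
    if isinstance(e, Lam):                            # rule (3)
        body_ty = typeof({**ctx, e.var: e.ty}, e.body)
        return Arrow(e.ty, body_ty)
    if isinstance(e, App):                            # rule (4)
        fn_ty = typeof(ctx, e.fn)
        arg_ty = typeof(ctx, e.arg)
        assert isinstance(fn_ty, Arrow) and fn_ty.dom == arg_ty, "ill-typed application"
        return fn_ty.cod
    raise TypeError("unknown term form")

a, b = Base("a"), Base("b")
i_comb = Lam("x", a, Var("x"))                        # identity, I
k_comb = Lam("x", a, Lam("y", b, Var("x")))           # K-combinator
print(typeof({}, i_comb) == Arrow(a, a))              # True
print(typeof({}, k_comb) == Arrow(a, Arrow(b, a)))    # True
```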

Each type ⁠τ⁠ is assigned an "order," a numerical value ⁠o(τ)⁠. For base types ⁠T⁠, ⁠o(T) = 0⁠. For function types, ⁠o(σ → τ) = max(o(σ) + 1, o(τ))⁠. That is, the order of a type measures the depth of arrow nesting on the left-hand side of arrows. Consequently:

o(ι → ι → ι) = 1
o((ι → ι) → ι) = 2
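The order function can be transcribed directly (an illustrative sketch: types are encoded here as nested ("arrow", dom, cod) tuples or base-type name strings):

```python
# o(T) = 0 for base types; o(sigma -> tau) = max(o(sigma) + 1, o(tau)).

def order(t) -> int:
    if isinstance(t, str):           # base type: o(T) = 0
        return 0
    _, dom, cod = t                  # ("arrow", dom, cod)
    return max(order(dom) + 1, order(cod))

iota = "iota"
arrow = lambda d, c: ("arrow", d, c)
print(order(arrow(iota, arrow(iota, iota))))   # o(i -> i -> i) = 1
print(order(arrow(arrow(iota, iota), iota)))   # o((i -> i) -> i) = 2
```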

Semantics

Intrinsic vs. Extrinsic Interpretations

Broadly speaking, there are two distinct lenses through which we can assign meaning to the simply typed lambda calculus, much like with typed languages in general. These are often termed intrinsic versus extrinsic, ontological versus semantical, or, in a more historical context, Church-style versus Curry-style.[1][7][8]

An intrinsic semantics focuses solely on well-typed terms, or more precisely, it assigns meaning directly to the derivations that establish typing. This approach means that terms that might appear identical save for their type annotations can be imbued with distinct meanings. For example, the identity term ⁠λx : int. x⁠ operating on integers and the identity term ⁠λx : bool. x⁠ operating on booleans might be understood as having different semantic values. (The canonical intended interpretations are, of course, the identity function for integers and the identity function for boolean values, respectively.)

Conversely, an extrinsic semantics assigns meaning to terms without regard for their typing. In this view, ⁠λx : int. x⁠ and ⁠λx : bool. x⁠ are considered to mean the same thing – essentially, the same as the untyped ⁠λx. x⁠.

The distinction between intrinsic and extrinsic semantics is sometimes conflated with the presence or absence of type annotations on lambda abstractions. However, this is an imprecise understanding. It's entirely possible to define an extrinsic semantics on annotated terms simply by ignoring the types—a process known as type erasure. Similarly, an intrinsic semantics can be applied to unannotated terms if their types can be deduced through type inference. The fundamental divergence lies in whether the typing rules are seen as defining the language itself or merely as a verification mechanism for properties of a more primitive, underlying language. Most of the semantic interpretations discussed below can be approached from either an intrinsic or an extrinsic perspective.
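Type erasure itself is a one-line recursion. The following sketch (an illustration; terms are encoded as nested tuples, which is an assumption of this example, not the article's notation) shows that the two annotated identity terms above erase to the same untyped term:

```python
# Erase type annotations: ("lam", x, ty, body) becomes ("lam", x, body).

def erase(e):
    kind = e[0] if isinstance(e, tuple) else None
    if kind == "lam":
        _, x, _ty, body = e
        return ("lam", x, erase(body))
    if kind == "app":
        _, f, a = e
        return ("app", erase(f), erase(a))
    return e                         # variables and constants are unchanged

id_int = ("lam", "x", "int", ("var", "x"))
id_bool = ("lam", "x", "bool", ("var", "x"))
print(erase(id_int) == erase(id_bool))   # True: both erase to lambda x. x
```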

Equational Theory

The simply typed lambda calculus (STLC) has the same equational theory of βη-equivalence as the untyped lambda calculus, but subject to type restrictions. The equation for beta reduction[i]

(λx : σ. t) u =β t[x := u]

holds in context ⁠Γ⁠ whenever ⁠Γ, x : σ ⊢ t : τ⁠ and ⁠Γ ⊢ u : σ⁠. The equation for eta reduction[j]

λx : σ. t x =η t

holds whenever ⁠Γ ⊢ t : σ → τ⁠ and ⁠x⁠ does not appear free in ⁠t⁠.

The advantage of typed lambda calculi, and of STLC in particular, is that potentially non-terminating computations can be ruled out before they are ever run.[9]

Operational Semantics

Similarly, the operational semantics of the simply typed lambda calculus can be defined in the same vein as the untyped lambda calculus, employing strategies such as call by name, call by value, or other evaluation strategies. A fundamental property, inherent to any typed language, is type safety, which holds true for all these evaluation strategies. Furthermore, the strong normalization property, discussed further below, guarantees that every evaluation sequence, regardless of the strategy employed, will inevitably terminate for all simply typed terms.[10]
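A normal-order reducer can be sketched as follows (an illustration: terms are nested tuples, and substitution is deliberately naive, assuming all bound and free variable names are distinct, so it is not capture-avoiding in general). On simply typed terms, every such reduction sequence terminates:

```python
# Terms: ("var", x), ("lam", x, body), ("app", f, a).

def subst(e, x, v):
    """Substitute v for free occurrences of x in e (assumes distinct names)."""
    kind = e[0]
    if kind == "var":
        return v if e[1] == x else e
    if kind == "lam":
        _, y, body = e
        return e if y == x else ("lam", y, subst(body, x, v))
    _, f, a = e
    return ("app", subst(f, x, v), subst(a, x, v))

def normalize(e):
    """Normal-order (leftmost-outermost) beta reduction to normal form."""
    kind = e[0]
    if kind == "lam":
        _, x, body = e
        return ("lam", x, normalize(body))
    if kind == "app":
        f = normalize(e[1])
        if f[0] == "lam":            # beta-redex: (lambda x. b) a
            return normalize(subst(f[2], f[1], e[2]))
        return ("app", f, normalize(e[2]))
    return e

# (lambda x. x) applied to (lambda y. y) normalizes to lambda y. y:
i1 = ("lam", "x", ("var", "x"))
i2 = ("lam", "y", ("var", "y"))
print(normalize(("app", i1, i2)) == i2)   # True
```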

Categorical Semantics

When the simply typed lambda calculus is augmented with product types, pairing, and projection operators (and subject to βη-equivalence), it becomes the internal language of Cartesian closed categories (CCCs). This profound connection was first recognized by Joachim Lambek.[11] In this correspondence, any given CCC gives rise to a corresponding lambda calculus where its objects are the basic types, and its morphisms are the terms. Conversely, the simply typed lambda calculus, equipped with product types and pairing operators over a collection of base types and terms, naturally forms a CCC. In this CCC, the types serve as the objects, and the equivalence classes of terms constitute the morphisms.

The system includes typing rules for pairing, projection, and a unit term. Given two terms ⁠s : σ⁠ and ⁠t : τ⁠, the term ⁠(s, t)⁠ has type ⁠σ × τ⁠. Conversely, given a term ⁠u : τ1 × τ2⁠, there are terms ⁠π1(u) : τ1⁠ and ⁠π2(u) : τ2⁠, where the ⁠πi⁠ are the projections out of the Cartesian product. The unit term, of type 1, is written ⁠()⁠ and pronounced 'nil'; it is the final object. The equational theory is extended accordingly, yielding the identities:

π1(s, t) = s
π2(s, t) = t
(π1(u), π2(u)) = u
t = ()    whenever t : 1

The last identity says that any term ⁠t⁠ of type 1 is equal to ⁠nil⁠.

This structure can be further abstracted into a category by considering the types as the objects. The morphisms from type ⁠σ⁠ to ⁠τ⁠ are then understood as equivalence classes of pairs ⁠(x : σ, t : τ)⁠, where ⁠x⁠ is a variable of type ⁠σ⁠ and ⁠t⁠ is a term of type ⁠τ⁠, containing no free variables apart from (potentially) ⁠x⁠. The set of terms in the language is the closure of these pairs under the operations of abstraction and application.

This correspondence can be extended to encompass "language homomorphisms" and functors between the category of Cartesian closed categories and the category of simply typed lambda theories. Furthermore, certain aspects of this correspondence can be generalized to closed symmetric monoidal categories through the use of a linear type system.

Proof-theoretic Semantics

A profound connection exists between the simply typed lambda calculus and the implicational fragment of propositional intuitionistic logic, specifically the implicational propositional calculus. This relationship is illuminated by the Curry–Howard isomorphism: here, terms of the calculus precisely mirror proofs in natural deduction, and the inhabited types correspond directly to the tautologies of this logical system.

Following his logistic method, Church's 1940 paper[1] (pp. 58, 60) presented an axiom schema. In 1949, Henkin[3] added type domains, such as the natural numbers or the real numbers. In 1996, Henkin (p. 144) described how Church's logistic method could serve as a foundation of mathematics, including Peano arithmetic and real analysis,[4] via model theory.

Alternative Syntaxes

The presentation of the simply typed lambda calculus described above is not the sole approach. An alternative involves dispensing with type annotations entirely, rendering its syntax indistinguishable from the untyped lambda calculus. The assurance of well-typedness is then maintained through Hindley–Milner type inference. This inference algorithm is both terminating and complete: if a term can be typed, the algorithm will successfully compute its type. More accurately, it computes the term's principal type, as a single, untyped term (like ⁠λx. x⁠) can often possess multiple types (e.g., ⁠int → int⁠, ⁠bool → bool⁠, and so forth). These are all instances of a more general principal type, such as ⁠α → α⁠.
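The principal-type computation can be sketched via unification (an illustration in the spirit of Hindley–Milner restricted to simple types, not the full algorithm: the occurs check and many details are omitted, terms are nested tuples, type variables are strings like "t0", and arrows are ("->", dom, cod)):

```python
import itertools

fresh = (f"t{i}" for i in itertools.count())   # fresh type-variable supply

def apply(s, t):
    """Apply substitution s to type t, following chains of bindings."""
    if isinstance(t, str):
        return apply(s, s[t]) if t in s else t
    return ("->", apply(s, t[1]), apply(s, t[2]))

def unify(s, t1, t2):
    """Extend substitution s so that t1 and t2 become equal (no occurs check)."""
    t1, t2 = apply(s, t1), apply(s, t2)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        return {**s, t1: t2}
    if isinstance(t2, str):
        return {**s, t2: t1}
    s = unify(s, t1[1], t2[1])
    return unify(s, t1[2], t2[2])

def infer(ctx, e, s):
    """Synthesize a type for an unannotated term, threading substitution s."""
    kind = e[0]
    if kind == "var":
        return ctx[e[1]], s
    if kind == "lam":                       # guess a fresh type for the binder
        a = next(fresh)
        body_ty, s = infer({**ctx, e[1]: a}, e[2], s)
        return ("->", a, body_ty), s
    f_ty, s = infer(ctx, e[1], s)           # application: unify with arrow
    a_ty, s = infer(ctx, e[2], s)
    r = next(fresh)
    s = unify(s, f_ty, ("->", a_ty, r))
    return r, s

ty, s = infer({}, ("lam", "x", ("var", "x")), {})
print(apply(s, ty))    # ('->', 't0', 't0'): the principal type alpha -> alpha
```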

Another syntactical perspective on the simply typed lambda calculus is based on bidirectional type checking. This approach necessitates more type annotations than Hindley–Milner inference but is generally simpler to articulate. The type system is bifurcated into two judgments: one for checking and one for synthesis. These are represented as ⁠Γ ⊢ e ⇐ τ⁠ (checking) and ⁠Γ ⊢ e ⇒ τ⁠ (synthesis), respectively. Operationally, the checking judgment ⁠Γ ⊢ e ⇐ τ⁠ takes ⁠Γ⁠, ⁠e⁠, and ⁠τ⁠ as inputs. In contrast, the synthesis judgment ⁠Γ ⊢ e ⇒ τ⁠ accepts only ⁠Γ⁠ and ⁠e⁠, producing the type ⁠τ⁠ as its output. The derivation of these judgments follows these rules:

[1] x : σ ∈ Γ  ⟹  Γ ⊢ x ⇒ σ

[2] c is a constant of type T  ⟹  Γ ⊢ c ⇒ T

[3] Γ, x : σ ⊢ e ⇐ τ  ⟹  Γ ⊢ λx. e ⇐ σ → τ

[4] Γ ⊢ e1 ⇒ σ → τ and Γ ⊢ e2 ⇐ σ  ⟹  Γ ⊢ e1 e2 ⇒ τ

[5] Γ ⊢ e ⇒ τ  ⟹  Γ ⊢ e ⇐ τ

[6] Γ ⊢ e ⇐ τ  ⟹  Γ ⊢ (e : τ) ⇒ τ

It’s worth noting that rules [1]–[4] bear a strong resemblance to rules (1)–(4) presented earlier, with the key difference being the deliberate choice between checking and synthesis judgments. These choices can be understood as follows:

  • If ⁠x : σ⁠ is present in the context, we can synthesize the type ⁠σ⁠ for ⁠x⁠.

  • The types of term constants are fixed and readily synthesizable.

  • To verify that ⁠λx. e⁠ possesses the type ⁠σ → τ⁠ within a specific context, we extend that context by adding ⁠x : σ⁠ and then proceed to check if ⁠e⁠ conforms to type ⁠τ⁠.

  • If ⁠e1⁠ synthesizes the type ⁠σ → τ⁠ (within a given context), and ⁠e2⁠ checks against type ⁠σ⁠ (in the same context), then the application ⁠e1 e2⁠ synthesizes type ⁠τ⁠.

Observe that the synthesis rules compute a type as output, while the checking rules consume one as input. Crucially, rule [3] requires no explicit type annotation on the lambda abstraction, as the type of the bound variable can be inferred from the type against which the function is being checked. Finally, rules [5] and [6] mediate between synthesis and checking:

  • To check if ⁠e⁠ has type ⁠τ⁠, it suffices to synthesize type ⁠τ⁠ for it.

  • If ⁠e⁠ checks against type ⁠τ⁠, then the explicitly annotated term ⁠(e : τ)⁠ synthesizes type ⁠τ⁠.

Because of these two rules coercing between synthesis and checking, any well-typed but unannotated term can be made to check in the bidirectional system by inserting enough type annotations. In fact, annotations are needed only at β-redexes.
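The two judgments translate into two mutually recursive functions (an illustrative sketch, not the article's own code: terms are tuples ("var", x), ("lam", x, body), ("app", f, a), ("ann", e, ty); types are base-type strings or ("->", dom, cod)):

```python
def synth(ctx, e):
    """Synthesis judgment: Gamma |- e => tau; returns tau."""
    kind = e[0]
    if kind == "var":                      # rule [1]
        return ctx[e[1]]
    if kind == "app":                      # rule [4]
        f_ty = synth(ctx, e[1])
        assert f_ty[0] == "->", "applying a non-function"
        check(ctx, e[2], f_ty[1])
        return f_ty[2]
    if kind == "ann":                      # rule [6]: (e : tau) synthesizes tau
        check(ctx, e[1], e[2])
        return e[2]
    raise TypeError("cannot synthesize; add an annotation")

def check(ctx, e, ty):
    """Checking judgment: Gamma |- e <= tau; raises on failure."""
    if e[0] == "lam":                      # rule [3]: binder needs no annotation
        assert ty[0] == "->", "lambda checked against non-arrow type"
        check({**ctx, e[1]: ty[1]}, e[2], ty[2])
        return
    assert synth(ctx, e) == ty             # rule [5]: fall back to synthesis

# lambda x. x checks against a -> a without annotating x:
check({}, ("lam", "x", ("var", "x")), ("->", "a", "a"))
# and an annotated identity synthesizes its type:
print(synth({}, ("ann", ("lam", "x", ("var", "x")), ("->", "a", "a"))))
```

Note that the unannotated lambda appears only in the checking function, which is exactly why annotations are needed only where a lambda sits in synthesizing position, i.e. at β-redexes.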

General Observations

When considered with its standard semantics, the simply typed lambda calculus is strongly normalizing: every sequence of reductions terminates.[10] This is a consequence of the typing rules, which rule out recursion: no fixed-point combinator can be assigned a type, and looping terms such as ⁠Ω = (λx. x x)(λx. x x)⁠ cannot be constructed. Recursion can be reintroduced, either through a dedicated operator ⁠fixα⁠ of type ⁠(α → α) → α⁠ or through general recursive types, but either addition sacrifices strong normalization.

Unlike its untyped predecessor, the simply typed lambda calculus is not Turing complete. All programs written within the simply typed lambda calculus are guaranteed to halt. The untyped lambda calculus, on the other hand, harbors programs that fail to terminate, and furthermore, lacks any general procedure to definitively determine whether a given program will halt.

Important Results

  • Tait established in 1967 that β-reduction is strongly normalizing.[10] As a direct consequence, βη-equivalence is decidable. Later, in 1979, Statman demonstrated that the normalization problem is not elementary recursive,[12] a finding later simplified by Mairson.[13] This problem is known to reside within the ⁠ℰ4⁠ set of the Grzegorczyk hierarchy.[14] A purely semantic proof of normalization, utilizing the technique of normalization by evaluation, was presented by Berger and Schwichtenberg in 1991.[15]

  • The unification problem for βη-equivalence has been proven to be undecidable. Gérard Huet demonstrated in 1973 that unification in the third order is undecidable.[16] This result was subsequently refined by Baxter in 1978[17] and further by Goldfarb in 1981,[18] who showed that even second-order unification is already undecidable. A significant development occurred in 2006 when Colin Stirling announced the decidability of higher-order matching (a form of unification where only one term contains existential variables), with a full proof published in 2009.[19]

  • Natural numbers can be encoded using terms of type ⁠(o → o) → (o → o)⁠, known as Church numerals. Helmut Schwichtenberg showed in 1975 that within λ→, the representable functions over Church numerals are precisely the extended polynomials: polynomials augmented with a conditional operator.[20]

  • A complete model of λ→ is constructed by interpreting base types as sets and function types as set-theoretic function spaces. Harvey Friedman proved in 1975 that this interpretation is complete for βη-equivalence, provided that the base types are interpreted by infinite sets.[21] Richard Statman, in 1983, demonstrated that βη-equivalence represents the maximal equivalence that remains "typically ambiguous" – meaning it is closed under type substitutions (this is known as Statman's Typical Ambiguity Theorem).[22] A corollary of this theorem is the existence of the finite model property, implying that finite sets are sufficient to distinguish terms that are not identified by βη-equivalence.

  • In 1973, G. D. Plotkin introduced logical relations as a means to characterize elements within a model that are definable by lambda terms.[23] By 1993, Achim Jung and Jerzy Tiuryn established that a generalized form of logical relation (specifically, Kripke logical relations with varying arity) precisely characterizes lambda definability.[24] Plotkin and Statman later conjectured that it is decidable whether a given element of a model, generated from finite sets, is definable by a lambda term (the Plotkin–Statman conjecture). However, this conjecture was disproven by Ralph Loader in 2001.[25]
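The Church-numeral encoding mentioned in the results above can be interpreted directly in Python, reading a numeral n as the function taking f to its n-fold composite (an illustrative sketch; all names are made up for this example):

```python
# Church numerals: n = lambda f: lambda x: f applied n times to x.
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))

def to_int(n) -> int:
    """Read back a Church numeral by applying it to integer successor."""
    return n(lambda k: k + 1)(0)

# Addition and multiplication on Church numerals:
add = lambda m: lambda n: (lambda f: lambda x: m(f)(n(f)(x)))
mul = lambda m: lambda n: (lambda f: m(n(f)))

three = succ(succ(succ(zero)))
print(to_int(add(three)(three)))   # 6
print(to_int(mul(three)(three)))   # 9
```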

Notes

  • ^ Alonzo Church (1956) Introduction to Mathematical Logic pp.47-68.[2]

  • ^ a b Church 1940, p.57 denotes variables with subscripts indicating their type: ⁠aα, bα, ..., zα, ... ¯aα, ¯bα, ..., ¯zα, ...⁠[1]

  • ^ Church 1940, p.57: the second and third primitive symbols listed, ( and ), denote scope: ⁠λ, (, ), Noo, Aooo, Πo(oα), ια(oα)⁠[1]

  • ^ a b Church 1940, p.60: ⁠Noo, Aooo, Πo(oα), ια(oα)⁠ are four constants representing negation, disjunction, universal quantification, and selection, respectively.[1]

  • ^ Church 1940, p.59[1] Henkin 1949 p.160;[3] Henkin 1996 p.144[4]

  • ^ Church 1940, p.57[1]

  • ^ Church 1940 p.58 lists 24 abbreviated formulas.[1]

  • ^ The four typing rules are also stated below in words; ⁠Γ⁠ denotes the typing environment.[5]

  • ^ The symbol ⁠=β⁠ denotes beta equivalence; ⁠t[x := u]⁠ is the result of substituting ⁠u⁠ for ⁠x⁠ in ⁠t⁠.

  • ^ The symbol ⁠=η⁠ denotes eta equivalence: a term ⁠t⁠ of function type is equivalent to its eta-expansion ⁠λx. t x⁠, provided ⁠x⁠ is not free in ⁠t⁠.

  • ^ a b c d e f g h i j Church, Alonzo (June 1940). "A formulation of the simple theory of types". Journal of Symbolic Logic. 5 (2): 56–68. doi:10.2307/2266170. JSTOR 2266170. S2CID 15889861. Archived from the original (PDF) on 12 January 2019.

  • ^ Church, Alonzo (1956) Introduction to Mathematical Logic

  • ^ a b Leon Henkin (Sep 1949) "The Completeness of the First-Order Functional Calculus" p.160

  • ^ a b Leon Henkin (Jun 1996) "The Discovery of My Completeness Proofs"

  • ^ "Hedonistic Learning: Learning for the fun of it" (Last updated on November 25, 2021 14:00 UTC) "Understanding typing judgments"

  • ^ Pfenning, Frank, "Church and Curry: Combining Intrinsic and Extrinsic Typing" (PDF), p. 1, retrieved 26 February 2022

  • ^ Curry, Haskell B (1934-09-20). "Functionality in Combinatory Logic". Proceedings of the National Academy of Sciences of the United States of America. 20 (11): 584–90. Bibcode:1934PNAS...20..584C. doi:10.1073/pnas.20.11.584. ISSN 0027-8424. PMC 1076489. PMID 16577644. (presents an extrinsically typed combinatory logic, later adapted by others to the lambda calculus)[6]

  • ^ Reynolds, John (1998). Theories of Programming Languages. Cambridge, England: Cambridge University Press. pp. 327, 334. ISBN 9780521594141.

  • ^ Norman Ramsey (Spring 2019) "Reduction Strategies for Lambda Calculus"

  • ^ a b c Tait, W. W. (August 1967). "Intensional interpretations of functionals of finite type I". The Journal of Symbolic Logic. 32 (2): 198–212. doi:10.2307/2271658. ISSN 0022-4812. JSTOR 2271658. S2CID 9569863.

  • ^ Lambek, J. (1986). "Cartesian closed categories and typed λ-calculi". Combinators and Functional Programming Languages. Lecture Notes in Computer Science. Vol. 242. Springer. pp. 136–175. doi:10.1007/3-540-17184-3_44. ISBN 978-3-540-47253-7.

  • ^ Statman, Richard (1 July 1979). "The typed λ-calculus is not elementary recursive". Theoretical Computer Science. 9 (1): 73–81. doi:10.1016/0304-3975(79)90007-0. hdl:2027.42/23535. ISSN 0304-3975.

  • ^ Mairson, Harry G. (14 September 1992). "A simple proof of a theorem of Statman". Theoretical Computer Science. 103 (2): 387–394. doi:10.1016/0304-3975(92)90020-G. ISSN 0304-3975.

  • ^ Statman, Richard (July 1979). "The typed λ-calculus is not elementary recursive". Theoretical Computer Science. 9 (1): 73–81. doi:10.1016/0304-3975(79)90007-0. hdl:2027.42/23535. ISSN 0304-3975.

  • ^ Berger, U.; Schwichtenberg, H. (July 1991). "An inverse of the evaluation functional for typed lambda -calculus". [1991] Proceedings Sixth Annual IEEE Symposium on Logic in Computer Science. pp. 203–211. doi:10.1109/LICS.1991.151645. ISBN 0-8186-2230-X. S2CID 40441974.

  • ^ Huet, Gérard P. (1 April 1973). "The undecidability of unification in third order logic". Information and Control. 22 (3): 257–267. doi:10.1016/S0019-9958(73)90301-X. ISSN 0019-9958.

  • ^ Baxter, Lewis D. (1 August 1978). "The undecidability of the third order dyadic unification problem". Information and Control. 38 (2): 170–178. doi:10.1016/S0019-9958(78)90172-9. ISSN 0019-9958.

  • ^ Goldfarb, Warren D. (1 January 1981). "The undecidability of the second-order unification problem". Theoretical Computer Science. 13 (2): 225–230. doi:10.1016/0304-3975(81)90040-2. ISSN 0304-3975.

  • ^ Stirling, Colin (22 July 2009). "Decidability of higher-order matching". Logical Methods in Computer Science. 5 (3): 1–52. arXiv:0907.3804. doi:10.2168/LMCS-5(3:2)2009. S2CID 1478837.

  • ^ Schwichtenberg, Helmut (1 September 1975). "Definierbare Funktionen imλ-Kalkül mit Typen". Archiv für mathematische Logik und Grundlagenforschung (in German). 17 (3): 113–114. doi:10.1007/BF02276799. ISSN 1432-0665. S2CID 11598130.

  • ^ Friedman, Harvey (1975). "Equality between functionals". Logic Colloquium. Lecture Notes in Mathematics. Vol. 453. Springer. pp. 22–37. doi:10.1007/BFb0064870. ISBN 978-3-540-07155-6.

  • ^ Statman, R. (1 December 1983). "λ-definable functionals and βη conversion". Archiv für mathematische Logik und Grundlagenforschung. 23 (1): 21–26. doi:10.1007/BF02023009. ISSN 1432-0665. S2CID 33920306.

  • ^ Plotkin, G.D. (1973). Lambda-definability and logical relations (PDF) (Technical report). Edinburgh University. Retrieved 30 September 2022.

  • ^ Jung, Achim; Tiuryn, Jerzy (1993). "A new characterization of lambda definability". Typed Lambda Calculi and Applications. Lecture Notes in Computer Science. Vol. 664. Springer. pp. 245–257. doi:10.1007/BFb0037110. ISBN 3-540-56517-5.

  • ^ Loader, Ralph (2001). "The Undecidability of λ-Definability". Logic, Meaning and Computation. Springer Netherlands. pp. 331–342. doi:10.1007/978-94-010-0526-5_15. ISBN 978-94-010-3891-1.
