Curry-Howard Isomorphism

I searched the Internet and I can't find any explanation of CHI that doesn't quickly degenerate into a lecture on logical theory, which is sharply over my head. (These people talk as if "intuitionistic propositional calculus" were a phrase that actually means something to normal people!)

Roughly speaking, CHI says that types are theorems, and programs are proofs of these theorems. But what the hell does that even mean?

So far I have figured this out:

  • Consider id :: x -> x . Its type says, "given that X is true, we can conclude that X is true." Seems like a reasonable theorem to me.

  • Now consider foo :: x -> y . As any Haskell programmer will tell you, this is impossible. You cannot write this function. (Well, not without cheating.) Read as a theorem, it says: "given that any X is true, we can conclude that any Y is true." This is obviously nonsense. And, sure enough, you cannot write this function.

  • In general, a function's arguments can be taken as "things assumed to be true", and its result type as "a thing that is true if all the others are". If an argument itself has a function type, say x -> y , we can take it as the assumption that X implies Y, i.e. that given a true X we get a true Y.

  • For example, (.) :: (y -> z) -> (x -> y) -> x -> z can be read as "assuming that Y implies Z, that X implies Y, and that X is true, we can conclude that Z is true." Which seems logically reasonable to me.
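The two theorems above can be checked simply by writing the programs that prove them; a sketch, using fresh names to avoid clashing with the Prelude:

```haskell
-- A proof of "X implies X": the identity function.
identity :: x -> x
identity a = a

-- A proof of the syllogism "(Y implies Z) and (X implies Y) and X,
-- therefore Z": function composition.
compose :: (y -> z) -> (x -> y) -> x -> z
compose f g a = f (g a)
```

The fact that these definitions type-check is exactly the fact that the theorems hold.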

Now, what the hell does Int -> Int mean? o_o

The only reasonable answer I can think of is this: if you have a function X → Y → Z, then the type signature says "assuming you can construct a value of type X and another of type Y, you can construct a value of type Z". And the body of the function describes exactly how you would do that.

That seems to make sense, but it's not very interesting. So there must be more to it than that ...
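One small observation that falls out of this reading: "if X and Y then Z" and "X implies (Y implies Z)" are the same theorem, and currying and uncurrying are the two directions of the proof. A sketch (primed names to avoid the Prelude's curry/uncurry):

```haskell
-- (X ∧ Y) → Z entails X → (Y → Z) ...
curry' :: ((x, y) -> z) -> x -> y -> z
curry' f a b = f (a, b)

-- ... and vice versa.
uncurry' :: (x -> y -> z) -> (x, y) -> z
uncurry' f (a, b) = f a b
```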

+42
types logic haskell curry-howard
Apr 18 '12
3 answers

The Curry-Howard isomorphism simply states that types correspond to propositions and values correspond to proofs.

Int -> Int doesn't mean much of interest as a logical proposition. When interpreting something as a logical proposition, you are only interested in whether the type is inhabited (has any values) or not. So Int -> Int just means "given an Int , I can give you an Int ", and that, of course, is true. There are many different proofs of it (corresponding to the many different functions of that type), but when taking it as a logical proposition, you don't care which.
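For instance, each of the following is a distinct proof of the (boring) proposition Int -> Int ; read as a proposition, all that matters is that at least one exists:

```haskell
-- Three different inhabitants (proofs) of Int -> Int.
proof1, proof2, proof3 :: Int -> Int
proof1 = id        -- the laziest possible proof
proof2 = (+ 1)     -- another perfectly good proof
proof3 n = n * n   -- and another
```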

That doesn't mean interesting propositions can't involve such functions; it's just that this particular type is rather boring as a proposition.

For an instance of a function type that is not completely polymorphic yet has logical meaning, consider p -> Void (for some p), where Void is an uninhabited type: a type with no values (except for ⊥ in Haskell, but I'll get to that later). The only way to obtain a value of type Void is if you can prove a contradiction (which, of course, is impossible), and since having a Void means you have proved a contradiction, you can get any value from it (i.e. there is a function absurd :: Void -> a ). Accordingly, p -> Void corresponds to ¬p: it means "p implies falsehood".
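In Haskell, Void and absurd live in Data.Void, and a few small proofs about negation can be written directly (a sketch; Not is a local alias, not a standard type):

```haskell
import Data.Void (Void, absurd)

-- ¬p read as "p implies falsehood".
type Not p = p -> Void

-- From p and ¬p together, anything follows (ex falso quodlibet).
contradiction :: p -> Not p -> a
contradiction p notP = absurd (notP p)

-- Double-negation introduction: p implies ¬¬p.
doubleNeg :: p -> Not (Not p)
doubleNeg p notP = notP p
```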

Intuitionistic logic is just the particular logic that ordinary functional languages correspond to. Importantly, it is constructive: in principle, a proof of a -> b gives you an algorithm for computing a b from an a , which is not true in ordinary classical logic (because of the law of excluded middle, which will tell you that something is either true or false, but not why).
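A classic illustration of that constructive flavor: the law of excluded middle itself has no proof as a (total) Haskell program, but its double negation does. A sketch, where Not p abbreviates p -> Void:

```haskell
import Data.Void (Void)

type Not p = p -> Void

-- No total program has type Either p (Not p), but this one
-- inhabits its double negation: handed a refuter of "p ∨ ¬p",
-- we derive a contradiction from it.
nnLEM :: Not (Not (Either p (Not p)))
nnLEM k = k (Right (\p -> k (Left p)))
```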

Although functions like Int -> Int don't say much as propositions, we can make statements about them with other propositions. For example, we can declare a type of equalities between two types (using a GADT):

    data Equal a b where
      Refl :: Equal a a

If we have a value of type Equal a b , then a is the same type as b : Equal a b corresponds to the proposition a = b. The problem is that we can only talk about equality of types this way. But if we had dependent types, we could easily generalize this definition to work over arbitrary values, and then Equal a b would correspond to the proposition that the values a and b are identical. So, for example, we could write:

    type IsBijection (f :: a -> b) (g :: b -> a) =
      forall x. Equal (f (g x)) (g (f x))

Here f and g are ordinary functions, so f could easily have type Int -> Int . Again, Haskell cannot do this; you need dependent types for such things.
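Even without dependent types, the type-level Equal is already usable as a proposition in Haskell; pattern matching on Refl is what "uses" the proof. A sketch:

```haskell
{-# LANGUAGE GADTs #-}

data Equal a b where
  Refl :: Equal a a

-- Equality is symmetric and transitive; matching on Refl tells
-- the type checker that the two types really are the same.
sym :: Equal a b -> Equal b a
sym Refl = Refl

trans :: Equal a b -> Equal b c -> Equal a c
trans Refl Refl = Refl

-- A proof of Equal a b licenses converting an a into a b.
cast :: Equal a b -> a -> b
cast Refl x = x
```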

Typical functional languages are not really suitable for writing proofs, not only because they lack dependent types, but also because of ⊥, which, having every type a , acts as a proof of any proposition. But total languages such as Coq and Agda exploit the correspondence to act as proof systems as well as dependently typed programming languages.
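To see why ⊥ spoils the proof reading, note that the following Haskell definition type-checks at every type, so read as a proof it "proves" any proposition whatsoever (names are illustrative):

```haskell
-- ⊥ inhabits every type in Haskell: a non-terminating definition
-- passes the type checker at type a for any a.
bogus :: a
bogus = bogus

-- Laziness even lets us pass such a "proof" around harmlessly,
-- so long as we never force it.
ignoreProof :: a -> String
ignoreProof _ = "never evaluated the proof"
```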

+38
Apr 18 '12 at 15:33

Perhaps the best way to understand what it means is to start (or at least try) using types as propositions and programs as proofs. It is best to learn a language with dependent types, for example Agda (it is written in Haskell and similar to Haskell). There are various articles about and courses on that language. Learn You an Agda is incomplete, but it tries to simplify things, the way the LYAHFGG book does.

Here is an example of a simple proof:

    {-# OPTIONS --without-K #-}  -- we are consistent

    module Equality where

    -- Peano arithmetic.
    --
    -- ℕ-formation: ℕ is a set.
    --
    -- ℕ-introduction: o ∈ ℕ,
    --                 a ∈ ℕ | (1 + a) ∈ ℕ.
    --
    data ℕ : Set where
      o  : ℕ
      1+ : ℕ → ℕ

    -- Axiom for _+_.
    --
    -- Form of ℕ-elimination.
    --
    infixl 6 _+_
    _+_ : ℕ → ℕ → ℕ
    o    + m = m
    1+ n + m = 1+ (n + m)

    -- The identity type for ℕ.
    --
    infix 4 _≡_
    data _≡_ (m : ℕ) : ℕ → Set where
      refl : m ≡ m

    -- Useful property.
    --
    cong : {m n : ℕ} → m ≡ n → 1+ m ≡ 1+ n
    cong refl = refl

    -- Proof _of_ mathematical induction:
    --
    -- P 0, ∀ x. P x → P (1 + x) | ∀ x. P x.
    --
    ind : (P : ℕ → Set) → P o → (∀ n → P n → P (1+ n)) → ∀ n → P n
    ind P P₀ _    o      = P₀
    ind P P₀ next (1+ n) = next n (ind P P₀ next n)

    -- Associativity of addition using mathematical induction.
    --
    +-associative : (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
    +-associative m n p = ind P P₀ is m
      where
        P : ℕ → Set
        P i = (i + n) + p ≡ i + (n + p)
        P₀ : P o
        P₀ = refl
        is : ∀ i → P i → P (1+ i)
        is i Pi = cong Pi

    -- Associativity of addition using (dependent) pattern matching.
    --
    +-associative′ : (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
    +-associative′ o      _ _ = refl
    +-associative′ (1+ m) n p = cong (+-associative′ m n p)

There you can see the proposition (m + n) + p ≡ m + (n + p) as a type and its proof as a function. There are more convenient techniques for such proofs (for example, preorder reasoning; combinators in Agda play a role similar to tactics in Coq).

What else can be proved:

  • head ∘ init ≡ head for vectors here .

  • That your compiler produces a program whose execution yields the same value as interpreting the same (source) program, here, for Coq. That book is also a good read on language modeling and program verification.

  • Anything else that can be proved in constructive mathematics, since Martin-Löf type theory is equivalent in expressive power to ZFC. In fact, the Curry-Howard isomorphism can be extended to physics and to algebraic topology.

+2
Apr 18 '12 at 18:08

The only reasonable answer I can think of is this: if you have a function X → Y → Z, then the type signature says "assuming you can construct a value of type X and another of type Y, you can construct a value of type Z". And the body of the function describes exactly how you would do that. That seems to make sense, but it's not very interesting. So there must be more to it than that ...

Well, yes, there is a lot more to it, because it has many consequences and opens up many questions.

First of all, your discussion of CHI is framed exclusively in terms of implication/function types ( -> ). You don't mention it, but you have probably also seen how conjunction and disjunction correspond to product and sum types, respectively. But what about the other logical operators, such as negation, universal quantification, and existential quantification? How do we translate proofs involving them into programs? It turns out roughly like this:

  • Negation corresponds to first-class continuations. Don't ask me to explain this.
  • Universal quantification over propositional (not individual) variables corresponds to parametric polymorphism. So, for example, the polymorphic function id really has the type forall a. a -> a .
  • Existential quantification over propositional variables corresponds to several things related to data or implementation hiding: abstract data types, module systems, and dynamic dispatch. GHC's existential types are related to this.
  • Universal and existential quantification over individual variables lead to dependent type systems.
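For the conjunction and disjunction cases mentioned above, the proofs are ordinary pair and Either manipulations:

```haskell
-- Conjunction commutes: (a ∧ b) → (b ∧ a).
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Disjunction commutes: (a ∨ b) → (b ∨ a).
orComm :: Either a b -> Either b a
orComm (Left x)  = Right x
orComm (Right y) = Left y

-- Distribution: a ∧ (b ∨ c) → (a ∧ b) ∨ (a ∧ c).
distrib :: (a, Either b c) -> Either (a, b) (a, c)
distrib (x, Left y)  = Left (x, y)
distrib (x, Right z) = Right (x, z)
```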

In addition, it also means that all kinds of proofs about logics translate immediately into proofs about programming languages. For example, the decidability of intuitionistic propositional logic implies the termination of all programs in the simply typed lambda calculus.

Now, what the hell does Int → Int mean? o_o

It's a type or, alternatively, a proposition. In f :: Int -> Int , f is the "program" (in a sense that admits both functions and constants as "programs") or, alternatively, the proof. The semantics of the language must either provide f as a primitive inference rule or demonstrate how f is a proof constructible from such rules and premises.

These rules are often specified as equational axioms, which define the basic elements of a type, together with rules that let you prove which other programs inhabit the type. For example, switching from Int to Nat (the natural numbers from 0 up), we could have the following rules:

  • Axiom: Zero :: Nat ( Zero is a primitive proof of Nat )
  • Rule: x :: Nat ==> Succ x :: Nat
  • Rule: x :: Nat, y :: Nat ==> x + y :: Nat
  • Rule: x + Zero ==> x
  • Rule: Succ x + y ==> Succ (x + y)

These rules are enough to prove many theorems about the addition of natural numbers; those proofs will also be programs.
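A minimal sketch of those rules as a Haskell datatype. Addition here recurses on the first argument, so x + Zero ==> x becomes a provable theorem about the definition rather than one of its clauses:

```haskell
-- Nat as a datatype: Zero and Succ are the introduction rules.
data Nat = Zero | Succ Nat deriving (Eq, Show)

-- Addition, following "Succ x + y ==> Succ (x + y)" with the
-- base case "Zero + y ==> y".
add :: Nat -> Nat -> Nat
add Zero     y = y
add (Succ x) y = Succ (add x y)
```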

+2
Apr 18 '12
