Proving the composition law for the Maybe Applicative

So, I wanted to manually prove the composition law for the Maybe Applicative, which is:

u <*> (v <*> w) = pure (.) <*> u <*> v <*> w 

I used these steps to prove this:

 u <*> (v <*> w)                    -- left-hand side of the law
 = (Just f) <*> (v <*> w)           -- assume u ~ Just f
 = fmap f (v <*> w)
 = fmap f (Just g <*> w)            -- assume v ~ Just g
 = fmap f (fmap g w)
 = fmap (f . g) w

 pure (.) <*> u <*> v <*> w         -- right-hand side of the law
 = Just (.) <*> u <*> v <*> w
 = fmap (.) u <*> v <*> w
 = fmap (.) (Just f) <*> v <*> w    -- replacing u with Just f
 = Just (f .) <*> v <*> w
 = Just (f .) <*> Just g <*> w      -- replacing v with Just g
 = fmap (f .) (Just g) <*> w
 = Just (f . g) <*> w
 = fmap (f . g) w

Is that right? What really worries me is that, in order to continue the proof, I assume that u and v are functions wrapped in the Just data constructor. Is that acceptable? Is there a better way to prove this?
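
For concreteness, the law can also be written down as an executable property; checking it with QuickCheck is only a sanity test over sampled inputs, not a proof (Fun/applyFun are used here just so the generated functions have a Show instance):

 import Test.QuickCheck

 -- Composition law for Maybe, stated as a testable property.
 prop_composition :: Maybe (Fun Int Int) -> Maybe (Fun Int Int) -> Maybe Int -> Bool
 prop_composition fu fv w =
     (u <*> (v <*> w)) == (pure (.) <*> u <*> v <*> w)
   where
     u = applyFun <$> fu
     v = applyFun <$> fv

 main :: IO ()
 main = quickCheck prop_composition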

+6
4 answers

Applicative functor expressions are simply function applications in the context of a functor. Hence:

 pure f <*> pure a <*> pure b <*> pure c
 -- is the same as:
 pure (f a b c)

We want to prove that:

 pure (.) <*> u <*> v <*> w == u <*> (v <*> w) 

Consider:

 u = pure f
 v = pure g
 w = pure x

Therefore, the left side:

 pure (.) <*> u <*> v <*> w
 pure (.) <*> pure f <*> pure g <*> pure x
 pure ((.) f g x)
 pure ((f . g) x)
 pure (f (g x))
 pure f <*> pure (g x)
 pure f <*> (pure g <*> pure x)
 u <*> (v <*> w)

For Maybe we know that pure = Just. Therefore, if u, v, and w are all Just values, we know that the composition law holds.
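
For reference, the Maybe instance in base amounts to essentially the following (written here as standalone functions so the sketch compiles next to the real instance; the other methods are elided):

 -- pure just wraps a value in Just:
 pureMaybe :: a -> Maybe a
 pureMaybe = Just

 -- (<*>) applies the wrapped function via fmap, and short-circuits on Nothing:
 apMaybe :: Maybe (a -> b) -> Maybe a -> Maybe b
 apMaybe (Just f) m  = fmap f m
 apMaybe Nothing  _m = Nothing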

However, what if any of them is Nothing? We know that:

 Nothing <*> _       = Nothing
 _       <*> Nothing = Nothing

Therefore, if any of them is Nothing, the whole expression becomes Nothing (except in the second case, when the first argument is undefined), and since Nothing == Nothing, the law still holds.
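
For instance, taking the case where v is Nothing (the functions and values here are arbitrary illustrative choices), both sides of the law reduce to Nothing:

 -- One concrete Nothing case: both sides evaluate to Nothing, so this is True.
 checkNothingCase :: Bool
 checkNothingCase = (u <*> (v <*> w)) == (pure (.) <*> u <*> v <*> w)
   where
     u = Just (+ 1)
     v = Nothing :: Maybe (Int -> Int)
     w = Just 5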

Finally, what about undefined (aka bottom) values? We know that:

 (Just f) <*> (Just x) = Just (f x)

Therefore, evaluating any of the following expressions will crash the program:

 (Just f) <*> undefined
 undefined <*> (Just x)
 undefined <*> Nothing

However, the following expression will result in Nothing:

 Nothing <*> undefined 

In every case, the composition law holds.
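
A quick way to confirm the claims about undefined is a small program like the sketch below; evaluate and try from Control.Exception force the results so the exception can be observed:

 import Control.Exception (SomeException, evaluate, try)

 main :: IO ()
 main = do
   -- Nothing on the left never inspects its right argument, so this prints Nothing:
   print ((Nothing :: Maybe (Int -> Int)) <*> undefined)
   -- Just on the left must inspect its right argument, so forcing this throws
   -- Prelude.undefined, which is printed as a Left value:
   r <- try (evaluate (Just (+ 1) <*> (undefined :: Maybe Int)))
   print (r :: Either SomeException (Maybe Int))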

+9

The rules generated by the definition of Maybe are

 x :: a
 -----------------
 Just x :: Maybe a

and

 a type
 ------------------
 Nothing :: Maybe a

As well as

 a type
 -----------
 bottom :: a

If these are the only rules that produce a value of type Maybe A, then we can always invert them (read them from the bottom up) in proofs, as long as we are exhaustive. This is what justifies case analysis on a value of type Maybe A.

You did a case analysis on two cases, but you were not exhaustive: it is possible that u or v is actually Nothing, or bottom.
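
A sketch of what an exhaustive constructor-level check looks like in plain Haskell (this fixes one arbitrary choice of functions and value per case and ignores bottom; a real proof treats the functions as fully general, as in the Agda answer elsewhere on this page):

 -- Both sides of the composition law, compared for given u, v, w.
 compositionHolds :: Eq a => Maybe (b -> a) -> Maybe (c -> b) -> Maybe c -> Bool
 compositionHolds u v w =
     (u <*> (v <*> w)) == (pure (.) <*> u <*> v <*> w)

 -- Enumerate all eight Just/Nothing combinations.
 allEightCases :: Bool
 allEightCases = and
     [ compositionHolds u v w
     | u <- [Nothing, Just (+ 1)]
     , v <- [Nothing, Just (* 2)]
     , w <- [Nothing, Just (3 :: Int)]
     ]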

+3

A useful tool to learn for proving things about Haskell code is Agda. Here is a short proof of what you want to prove:

 data Maybe (A : Set) : Set where
   Just    : (a : A) -> Maybe A
   Nothing : Maybe A

 _<*>_ : {A B : Set} -> Maybe (A -> B) -> Maybe A -> Maybe B
 Just f  <*> Just a  = Just (f a)
 Just f  <*> Nothing = Nothing
 Nothing <*> a       = Nothing

 pure : {A : Set} -> (a : A) -> Maybe A
 pure a = Just a

 data _≡_ {A : Set} (x : A) : A -> Set where
   refl : x ≡ x

 _∘_ : {A B C : Set} -> (B -> C) -> (A -> B) -> A -> C
 _∘_ f g = λ z -> f (g z)

 maybeAppComp : {A B C : Set} -> (u : Maybe (B -> A)) -> (v : Maybe (C -> B)) -> (w : Maybe C)
              -> (u <*> (v <*> w)) ≡ (((pure _∘_ <*> u) <*> v) <*> w)
 maybeAppComp (Just f) (Just g) (Just w) = refl
 maybeAppComp (Just f) (Just g) Nothing  = refl
 maybeAppComp (Just f) Nothing  (Just w) = refl
 maybeAppComp (Just f) Nothing  Nothing  = refl
 maybeAppComp Nothing  (Just g) (Just w) = refl
 maybeAppComp Nothing  (Just a) Nothing  = refl
 maybeAppComp Nothing  Nothing  (Just w) = refl
 maybeAppComp Nothing  Nothing  Nothing  = refl

This illustrates a couple of points that others have made:

  • Which definitions you use matters for the proof, and they should be made explicit. In my case, I did not want to use the Agda standard library.
  • Case analysis is the key to carrying out such proofs.
  • In fact, the proof becomes trivial once the case analysis is done. The Agda compiler / proof system is able to fill in the proofs for you.
+3

You translated uses of (<*>) into fmap. The other answers also do some pattern matching.

Usually you need to unfold the definitions of the functions to reason about them, rather than just assume what they do. (You assumed that (pure f) <*> x reduces to fmap f x.)

For example, (<*>) for Maybe is defined as ap in Control.Applicative (or can be proved equivalent to it for any Monad, even if you define it separately); ap comes from Monad, where it is defined as liftM2 id, and liftM2 is defined as follows:

 liftM2 f m1 m2 = do
   x <- m1
   y <- m2
   return $ f x y
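
For the reduction below it may help to see the same definition with the do-block desugared to (>>=); here is a sketch, with a primed name so it does not clash with the real Control.Monad version:

 -- liftM2 with explicit binds; this is what the monad laws below manipulate.
 liftM2' :: Monad m => (a -> b -> c) -> m a -> m b -> m c
 liftM2' f m1 m2 = m1 >>= \x -> m2 >>= \y -> return (f x y)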

So, reduce both the left and right sides to see that they are equivalent:

 u <*> (v <*> w)
 = liftM2 id u (liftM2 id v w)
 = do u1 <- u
      v1 <- do v1 <- v
               w1 <- w
               return $ id v1 w1
      return $ id u1 v1
 = do u1 <- u
      v1 <- do v1 <- v
               w1 <- w
               return $ v1 w1
      return $ u1 v1
 -- associativity law: (see [1])
 = do u1 <- u
      v1 <- v
      w1 <- w
      x  <- return $ v1 w1
      return $ u1 x
 -- right identity: x' <- return x; f x' == f x
 = do u1 <- u
      v1 <- v
      w1 <- w
      return $ u1 $ v1 w1

Now the right side:

 pure (.) <*> u <*> v <*> w
 = liftM2 id (liftM2 id (liftM2 id (pure (.)) u) v) w
 = do g <- do f <- do p  <- pure (.)
                      u1 <- u
                      return $ id p u1
              v1 <- v
              return $ id f v1
      w1 <- w
      return $ id g w1
 = do g <- do f <- do p  <- return (.)
                      u1 <- u
                      return $ p u1
              v1 <- v
              return $ f v1
      w1 <- w
      return $ g w1
 -- associativity law:
 = do p  <- return (.)
      u1 <- u
      f  <- return $ p u1
      v1 <- v
      g  <- return $ f v1
      w1 <- w
      return $ g w1
 -- right identity: x' <- return x; f x' == f x
 = do u1 <- u
      v1 <- v
      w1 <- w
      return $ ((.) u1 v1) w1
 -- (f . g) x == f (g x)
 = do u1 <- u
      v1 <- v
      w1 <- w
      return $ u1 $ v1 w1

That's it.

[1] http://www.haskell.org/haskellwiki/Monad_laws

+2

Source: https://habr.com/ru/post/970588/

