What does it mean that "you do computations in Haskell by declaring what something is instead of declaring how you get it"?

Recently I have been trying to learn a functional programming language, and I chose Haskell.

Now I am reading Learn You a Haskell, and I found a description there that seems to sum up the Haskell philosophy: you do computations in Haskell by declaring what something is instead of declaring how you get it. I'm not sure I understand it exactly.

Suppose I want to get the sum of a list.

In the imperative way, as I understand it, you get the sum by adding up all the elements one by one, so the code would look like this (not Haskell, but Python):

    sum = 0
    for i in l:
        sum += i
    print(sum)

In the declarative way, as I understand it, the sum is the first element plus the sum of the remaining elements, so the code would look like this:

    sum' :: (Num a) => [a] -> a
    sum' [] = 0
    sum' (x:xs) = x + sum' xs

But I'm not sure whether I've really understood it. Can anyone help? Thanks.

+6
3 answers

Imperative and functional are two different approaches to solving problems.

Imperative (Python) has you specify the actions the computer must take to get what you want. For example, you might tell the computer: "Knead the dough, then put it in the oven. Turn on the oven. Bake for 10 minutes."

Functional (Haskell, Clojure) has you specify solutions. You would rather tell the computer: "I have flour, eggs, and water. I need bread." The computer happens to know about dough, but it does not know about bread, so you tell it that "bread is dough that has been baked." The computer, already knowing what baking is, now knows how to make bread. You sit at the table for 10 minutes while the computer does the work for you, and then you enjoy delicious fresh bread straight from the oven.
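To connect the metaphor back to Haskell, here is a toy sketch (all names here are invented for illustration; they are not from the question). We only declare what bread is, and the definitions the computer already "knows" do the rest:

    data Ingredient = Flour | Eggs | Water

    newtype Dough = Dough [Ingredient]
    newtype Bread = Bread Dough

    -- The computer already "knows" dough and baking:
    mixDough :: [Ingredient] -> Dough
    mixDough = Dough

    bake :: Dough -> Bread
    bake = Bread

    -- We declare only WHAT bread is: dough that has been baked.
    bread :: Bread
    bread = bake (mixDough [Flour, Eggs, Water])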

You can see a similar difference in how engineers and mathematicians work. An engineer looks at a problem and gives the workers a step-by-step plan for solving it. A mathematician defines the problem (solve for x) and the shape of the solution (x = -----), and may then apply any number of known, proven solutions to smaller problems (2x - 1 = ----- => 2x = ----- + 1) until finally arriving at the answer.

It is no coincidence that functional languages are used mostly in academia; not because they are hard to learn, but because mathematical thinkers are less common outside universities. The quote you read tried to capture this difference in ways of thinking, cleverly contrasting "how" and "what." I personally believe people understand words by relating them to things they already know, so I hope the bread metaphor makes the difference clear to you.

EDIT: It is worth noting that when you micromanage the computer, you do not know whether you will actually have bread at the end (you may have baked it too long and burned it, or not added enough flour). This is not a problem in functional languages, where you know exactly what each definition gives you. There is no trial and error in a functional language, because everything you write is correct (although not always useful, like accidentally solving for t instead of x).

+9

The imperative ("how") part of the explanation goes like this.

The imperative example shows you, step by step, how to compute the sum. Only at some point can you convince yourself that it really produces the sum of the elements of the list. For example, it is not obvious why sum = 0 at first, or whether it should be 0 at all; whether you iterate over the right indices; or what sum += i gives you.

    sum = 0      # why 0? it may become clear if you consider
                 # what happens in the loop, but not on its own
    for i in l:
        sum += i # what do we get? it becomes clear only after the loop
                 # ends; at no step of the iteration do you have
                 # *the sum of the list*, so the step on its own
                 # is not meaningful

The declarative example is very different in this regard. Here you start by declaring that the sum of the empty list is 0. That is already part of the answer to what the sum is. Then you add the equation for non-empty lists: the sum of a non-empty list is the sum of its tail with the head element added to it. That is a declaration of what the sum is. You can then show inductively that it produces the sum for any finite list.
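To make the inductive reading concrete, here is the declaration expanded by hand on a small list (a sketch, repeating the question's definition; every step is justified by exactly one of the two equations):

    sum' :: Num a => [a] -> a
    sum' []     = 0
    sum' (x:xs) = x + sum' xs

    -- Equational reasoning, one declared equation per step:
    --   sum' [1,2,3]
    --     = 1 + sum' [2,3]            -- second equation
    --     = 1 + (2 + sum' [3])        -- second equation
    --     = 1 + (2 + (3 + sum' []))   -- second equation
    --     = 1 + (2 + (3 + 0))         -- first equation
    --     = 6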

Notice that proof part. In this case it is obvious, but in more complex algorithms it is not, so proving correctness becomes an essential step. And recall that the imperative version made sense only as a whole.

Here is another way to compute the sum, where, I hope, the declarative style and the provability become even clearer:

    sum [] = 0    -- the sum of the empty list is 0
    sum [x] = x   -- the sum of a one-element list is that element
    sum xs = sum $ p xs where
        -- the sum of any other list is the sum of the list reduced with p
        p (x:y:xs) = x+y : p xs   -- p reduces the list by replacing
                                  -- a pair of elements with their sum
        p xs = xs                 -- if there was no pair of elements,
                                  -- leave the list as is

Here we can convince ourselves that:

1. p makes the list shorter, so the computation of the sum terminates;
2. p produces a list of partial sums, so by repeatedly summing the ever shorter lists we eventually arrive at a list of just one element;
3. since (+) is associative, the value obtained by repeatedly applying p is the same as the sum of all the elements of the original list;
4. we can show that the depth of nested applications of (+) is smaller than in the simple implementation (roughly log n instead of n).

In other words, the order in which the elements are added does not matter, so we can first sum the elements of [a,b,c,d,e] in pairs (a+b, c+d), which gives us the shorter list [a+b,c+d,e], whose sum is the same as the sum of the original list, and which can now be reduced in the same way: [(a+b)+(c+d),e], then [((a+b)+(c+d))+e].
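To check point 3 empirically, a quick property test can compare the pairwise sum with the Prelude's sum (a sketch assuming the QuickCheck package is available; pairSum is the answer's sum renamed to avoid clashing with the Prelude):

    import Test.QuickCheck (quickCheck)

    pairSum :: Num a => [a] -> a
    pairSum []  = 0
    pairSum [x] = x
    pairSum xs  = pairSum (p xs)
      where
        p (x:y:rest) = x + y : p rest
        p rest       = rest

    -- Associativity of (+) means pairwise reduction agrees
    -- with the usual left-to-right sum.
    prop_pairSum :: [Integer] -> Bool
    prop_pairSum xs = pairSum xs == sum xs

    main :: IO ()
    main = quickCheck prop_pairSum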

+4
Robert Harper claims in his blog that "declarative" doesn't mean anything. I suppose he is talking about a precise definition there, which I usually find narrower than the meaning a word carries in practice, but the post is still worth reading, and it hints that you may not find as clear an answer as you would wish.

Still, everybody talks about "declarative", and it seems that when we do, we usually mean roughly the same thing; i.e., show several people two different APIs/languages/programs and ask them which is more declarative, and they will usually pick the same one.

The confusing part for me at first was that your declarative sum

    sum' [] = 0
    sum' (x:xs) = x + sum' xs

can also be viewed as instructions for how to get the result; they are just different instructions. It is also worth noting that the sum function in the Prelude is not actually defined this way, because this particular way of computing the sum is inefficient. So the "what, not how" distinction is clearly not that simple.
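For comparison, here is a definition in the spirit of what efficient implementations do (a sketch; GHC's actual Prelude definition differs in its details). A strict left fold avoids building up a long chain of suspended additions:

    import Data.List (foldl')

    -- Accumulates strictly, so no thunk chain is built.
    sumStrict :: Num a => [a] -> a
    sumStrict = foldl' (+) 0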

So the explanation "what, not how" seems unsatisfying to me. Instead, I think of declarative as a "how" that additionally has some nice properties. My current intuition is that these properties are something like the following:

  • A thing is more declarative if it does not mutate any state.

  • A thing is more declarative if you can apply mathematical transformations to it and its meaning seems to remain intact. Considering your declarative sum again: since we know that + is commutative, there is good reason to expect that writing it as sum' xs + x gives the same result (see the sketch after this list).

  • A declarative thing can be decomposed into smaller parts that still mean something on their own. For example, x and sum' xs still have the same meaning when taken separately, whereas sum += i from the Python version means nothing on its own.

  • A thing is more declarative if it does not depend on the flow of time. For example, CSS does not describe the style of a web page at page-load time; it describes the style of the web page at any point in time, even as the page changes.

  • A thing is more declarative if you do not need to think about the flow of the program.
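Here is a minimal sketch of the transformation from the second point: since (+) is commutative, flipping the operands should not change the result (sum'' is an invented name for the transformed version):

    sum' :: Num a => [a] -> a
    sum' []     = 0
    sum' (x:xs) = x + sum' xs

    sum'' :: Num a => [a] -> a
    sum'' []     = 0
    sum'' (x:xs) = sum'' xs + x   -- operands flipped

    main :: IO ()
    main = print (sum' [1,2,3] == sum'' [1,2,3])   -- True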

Other people may have different intuitions, or may even know a definition that I do not, but I hope these properties are useful to some extent on their own.

+3

Source: https://habr.com/ru/post/969728/

