Map / Reduce: any theoretical basis beyond "how-to"?

For a while, I thought you just needed to map each element into a monoid, and then the reduce would fold the results according to the monoid multiplication.
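
A minimal sketch of that naive picture, in Scala. The names Monoid, intSum, and mapReduce are illustrative, not any library's API; this is just "map into a monoid, then fold by the monoid operation":

    // A monoid: a type with an associative operation and a neutral element.
    trait Monoid[A] {
      def empty: A                  // neutral element
      def combine(x: A, y: A): A   // associative operation
    }

    val intSum: Monoid[Int] = new Monoid[Int] {
      def empty = 0
      def combine(x: Int, y: Int) = x + y
    }

    // "Map into the monoid, then reduce by the monoid multiplication."
    def mapReduce[A, B](xs: List[A], m: Monoid[B])(f: A => B): B =
      xs.map(f).foldLeft(m.empty)(m.combine)

    // Word count, the classic example: map every word to 1, add them up.
    mapReduce(List("to", "be", "or", "not", "to", "be"), intSum)(_ => 1)  // 6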

Firstly, that is not exactly how monoids work, and secondly, it is not exactly how map / reduce actually works.

Namely, take the ubiquitous "count". If there is nothing to count, any map / reduce engine will return an empty data set, not the neutral element. Bummer.
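
The mismatch shows up even in plain Scala collections, which behave analogously to a distributed engine here (Spark's RDD.reduce, for instance, likewise throws on an empty RDD, while RDD.fold takes an explicit zero):

    // reduce has no neutral element to fall back on: it fails on empty input.
    List.empty[Int].reduce(_ + _)   // throws UnsupportedOperationException
    // fold is the honest monoidal reduction: it returns the neutral element.
    List.empty[Int].fold(0)(_ + _)  // 0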

In addition, in a monoid the operation is defined for two elements. We can easily extend it to finite sequences or, thanks to associativity, to finite ordered sets. But there is no way to extend it to arbitrary "collections" unless we have a σ-algebra.
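
The finite-sequence extension via associativity is exactly what licenses arbitrary partitioning in an engine: reducing each partition and then combining the partial results must equal one sequential fold. A small check of that property, with an arbitrary chunk size of 3 chosen for illustration:

    val xs = (1 to 10).toList
    val sequential  = xs.foldLeft(0)(_ + _)            // one sequential fold: 55
    val partitioned = xs.grouped(3).toList             // arbitrary partitioning
      .map(chunk => chunk.foldLeft(0)(_ + _))          // reduce each partition
      .foldLeft(0)(_ + _)                              // combine partial results
    assert(sequential == partitioned)                  // holds by associativity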

So what is the theory here? I tried to figure it out myself, but could not; and I tried Googling for it, but found nothing.

+4
1 answer

I think the right way to think about map / reduce is not as an independent computing paradigm, but as a control-flow constructor, similar to while. You can view while as a program constructor with two arguments: a predicate and an arbitrary program body. Similarly, the map / reduce construct has two arguments, named map and reduce, each of which is a function. Thus, just as with while, the useful questions to ask are how to prove the correctness of the constructed programs with respect to preconditions and postconditions. As usual, these questions include (a) termination and run-time performance, and (b) preservation of invariants.
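
A sketch of this view, assuming nothing beyond the answer itself; whileLoop and mapReduceProgram are hypothetical names, not any engine's API. Both are program constructors that take two pieces of client code and build a program:

    // while as a constructor: (predicate, body) => program over state S.
    def whileLoop[S](pred: S => Boolean, body: S => S): S => S =
      s => if (pred(s)) whileLoop(pred, body)(body(s)) else s

    // map/reduce as a constructor: (map, reduce) => program over a collection.
    // reduceOption makes the empty-input behavior explicit (None, not a zero).
    def mapReduceProgram[A, B](map: A => B, reduce: (B, B) => B): List[A] => Option[B] =
      xs => xs.map(map).reduceOption(reduce)

    mapReduceProgram[String, Int](_ => 1, _ + _)(List("a", "b", "c"))  // Some(3)

The correctness questions then mirror those for while: termination and run-time cost on one hand, and invariants of the supplied functions on the other (for instance, associativity of reduce, so that the result does not depend on how the input was partitioned).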

0

Source: https://habr.com/ru/post/1386871/

