According to the Google MapReduce paper:
When a reduce worker has read all intermediate data, it sorts it by the intermediate keys so that all occurrences of the same key are grouped together.
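To make that concrete, here is a minimal toy sketch (in Java, not the actual Google implementation) of what the reduce worker's sort-and-group step does: intermediate pairs are ordered by key so that all values for the same key end up together before the reduce function runs.

```java
import java.util.*;

public class ShuffleSortSketch {
    // Group intermediate (key, value) pairs by key, the way a reduce worker
    // groups its input after reading and sorting all intermediate data.
    static SortedMap<String, List<Integer>> groupByKey(List<Map.Entry<String, Integer>> pairs) {
        SortedMap<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return grouped;
    }

    public static void main(String[] args) {
        // Intermediate output of the map phase for a word-count style job.
        List<Map.Entry<String, Integer>> intermediate = List.of(
                Map.entry("apple", 1), Map.entry("banana", 1), Map.entry("apple", 1));
        // Prints {apple=[1, 1], banana=[1]}: occurrences of the same key are together.
        System.out.println(groupByKey(intermediate));
    }
}
```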
The MongoDB documentation says:
The map/reduce engine may invoke reduce functions iteratively; therefore, these functions must be idempotent.
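As an illustration of why that matters: if reduce can be re-invoked on its own output, a reduce whose result can be fed back in as just another value (such as a sum) gives the same answer whether it runs once over all the values or several times over partial groups. This is a language-agnostic sketch written in Java; MongoDB's actual reduce functions are written in JavaScript.

```java
import java.util.List;

public class IdempotentReduceSketch {
    // A "sum" reduce: its output can be fed back in as just another value.
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        int once = reduce("apple", List.of(1, 2, 3, 4));        // one pass over everything
        int partA = reduce("apple", List.of(1, 2));             // first partial reduce
        int partB = reduce("apple", List.of(3, 4));             // second partial reduce
        int again = reduce("apple", List.of(partA, partB));     // reduce invoked again on its own output
        System.out.println(once == again);                      // true: safe to re-invoke
    }
}
```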
So, in MapReduce as defined in the Google paper, the reduce task starts processing key/value pairs only once all the data for a particular key has been transferred to the reducer. But, as Tomas said, MongoDB seems to invoke the reduce function iteratively, possibly multiple times for the same key.
In the MapReduce proposed by Google, either the Map or the Reduce tasks will be processing KV pairs at a given time, but in the MongoDB implementation the Map and Reduce tasks process KV pairs concurrently. The MongoDB approach can be inefficient because the nodes are not used effectively, and there is a chance that the Map and Reduce slots in the cluster are full and new jobs cannot be started.
The catch in Hadoop is that, although reduce tasks do not process KV pairs until the mappers have processed the data, reduce tasks can be spawned before the mappers finish. The parameter is "mapreduce.job.reduce.slowstart.completedmaps", which defaults to "0.05", and its description says "Fraction of the number of maps in the job which should be complete before reduces are scheduled for the job."
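If you want reducers to be scheduled later, for example only after most of the maps are done, you can raise that fraction in the job configuration. A hypothetical Hadoop driver snippet follows; the property name and the 0.05 default are real, but the 0.80 value and the job setup are just for illustration.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SlowstartDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Default is 0.05: reducers may be scheduled once 5% of the maps are done.
        // 0.80 is an arbitrary value chosen here for illustration.
        conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 0.80f);
        Job job = Job.getInstance(conf, "slowstart-demo");
        // ... set mapper/reducer classes and input/output paths, then submit the job.
    }
}
```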
Here you would need to move all the values (with the same key) to the same machine, where they will be summed. Moving the data to the function seems to be the opposite of what MapReduce is supposed to do (move the computation to the data).
Also, data locality is considered only for the map tasks, not the reduce tasks. For reduce tasks, the data has to be moved from the mappers on different nodes to the reducers for aggregation.
Just my 2c.