You are reading the book correctly. The reducer does not store all values in memory. As you iterate over the Iterable of values, each object instance is reused, so only one instance exists at a time.
For example, in the following code the ArrayList objs will have the expected size after the loop, but every element will be the same object, because the Text instance val is reused on each iteration.
public static class ReducerExample extends Reducer&lt;Text, Text, Text, Text&gt; {
    @Override
    public void reduce(Text key, Iterable&lt;Text&gt; values, Context context) {
        ArrayList&lt;Text&gt; objs = new ArrayList&lt;Text&gt;();
        for (Text val : values) {
            objs.add(val); // stores a reference to the single, reused instance
        }
        // objs has the expected size, but every element points to the same object
    }
}
(If for some reason you need to keep each val for later use, you should make a deep copy of it before storing it.)
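To see why storing references fails and why a deep copy fixes it, here is a plain-Java sketch with no Hadoop dependency. The Holder class and reusingIterable method are hypothetical stand-ins I am using to simulate Hadoop's behavior: Holder plays the role of the mutable Text (including its copy constructor, analogous to new Text(value)), and reusingIterable mimics the reducer's value iterator, which hands back one shared instance whose contents change on every next() call.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ReusePitfall {
    // Stand-in for Hadoop's Text: a simple mutable holder (assumption, not Hadoop API).
    static class Holder {
        String value;
        Holder(String v) { value = v; }
        Holder(Holder other) { value = other.value; } // copy constructor, like new Text(value)
    }

    // Iterable that reuses ONE Holder instance, mimicking the reducer's value iterator.
    static Iterable<Holder> reusingIterable(List<String> data) {
        return () -> new Iterator<Holder>() {
            final Holder shared = new Holder(""); // the single reused instance
            int i = 0;
            public boolean hasNext() { return i < data.size(); }
            public Holder next() { shared.value = data.get(i++); return shared; }
        };
    }

    public static void main(String[] args) {
        List<String> data = Arrays.asList("a", "b", "c");

        // Pitfall: storing references. Every slot ends up holding the last value seen.
        List<Holder> refs = new ArrayList<>();
        for (Holder h : reusingIterable(data)) refs.add(h);
        System.out.println(refs.get(0).value + refs.get(1).value + refs.get(2).value); // ccc

        // Fix: deep-copy each value before storing it.
        List<Holder> copies = new ArrayList<>();
        for (Holder h : reusingIterable(data)) copies.add(new Holder(h));
        System.out.println(copies.get(0).value + copies.get(1).value + copies.get(2).value); // abc
    }
}
```

The copy in the second loop is exactly the deep copy the answer recommends: in a real reducer you would write objs.add(new Text(val)) instead.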
Of course, even a single value can be larger than available memory. In that case, the recommendation is to cut the data down in the preceding Mapper so that individual values never grow that large.
UPDATE: See pages 199-200 of Hadoop: The Definitive Guide, 2nd edition.
This code snippet makes it clear that the same key and value objects are used on each invocation of the map() method -- only their contents are changed (by the reader's next() method). This can be a surprise to users, who might expect keys and values to be immutable. This causes problems when a reference to a key or value object is retained outside the map() method, as its value can change without warning. If you need to do this, make a copy of the object you want to hold on to. For example, for a Text object, you can use its copy constructor: new Text(value). The situation is similar with reducers. In this case, the value objects in the reducer's iterator are reused, so you need to copy any that you need to retain between calls to the iterator.