I would say that this is more related to efficiency: a head can easily be replicated by piping the output of hadoop fs -cat to the Linux head command.
hadoop fs -cat /path/to/file | head
This is efficient because head closes the underlying stream after the desired number of lines has been output, so only the first part of the file is actually read out of HDFS rather than the whole thing.
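For example, to print just the first 10 lines (a minimal sketch; -n is the standard head option for a line count, and /path/to/file is a placeholder):
hadoop fs -cat /path/to/file | head -n 10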
Using tail this way would be significantly less efficient, since you would need to transfer the entire file (all HDFS blocks) just to find the final x lines.
hadoop fs -cat /path/to/file | tail
The hadoop fs -tail command, as you note, works on the last kilobyte of the file: Hadoop can efficiently locate the last block, seek to the position of the final kilobyte, and transfer just that. Piping through tail cannot easily do this.
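If you want the last few lines rather than the raw final kilobyte, you can combine the two (a sketch that assumes those lines all fit within the kilobyte that hadoop fs -tail emits):
hadoop fs -tail /path/to/file | tail -n 5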
Chris White Nov 04 '13 at 23:37