Limiting the number of output files means limiting the number of reducers. You can do this by setting the mapred.reduce.tasks property from the Hive shell. Example:
hive> set mapred.reduce.tasks = 5;
But this may hurt the performance of your query. Alternatively, you can run the getmerge command from the HDFS shell once your query has finished. It takes a source directory and a destination file as input and concatenates the files in src into a single local destination file.
Usage:
bin/hadoop fs -getmerge <src> <localdst>
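Putting both steps together, a session might look like this. This is only a sketch: the paths /user/hive/output and merged.txt, and the table my_table, are hypothetical placeholders, and the commands assume a configured Hadoop/Hive installation on your PATH.

```shell
# Hypothetical end-to-end sketch: cap reducers for this session only,
# write the query result to an HDFS directory, then merge the part files.

# Run the query with at most 5 reducers (may slow the query down)
hive -e "set mapred.reduce.tasks = 5;
         INSERT OVERWRITE DIRECTORY '/user/hive/output'
         SELECT * FROM my_table;"

# Concatenate all part-* files from the HDFS directory into one local file
bin/hadoop fs -getmerge /user/hive/output merged.txt
```

Note that getmerge writes to the local filesystem, not back to HDFS, so it is best suited to results small enough to fit on one machine.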
HTH,
Tariq