This will concatenate all the files in input_hdfs_dir and write the output back to HDFS at output_hdfs_file. Do keep in mind that all the data is brought back to the local system and then uploaded to HDFS again, although no temporary files are created; this happens on the fly using a UNIX pipe.
Also, this won't work with non-text files such as Avro or ORC.
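The approach above can be sketched with the HDFS shell (paths are illustrative; these commands need a running Hadoop cluster):

```shell
# Variant 1: merge to a local file, then upload it back to HDFS.
# This does create a local temporary file.
hadoop fs -getmerge /input_hdfs_dir /tmp/merged.txt
hadoop fs -put /tmp/merged.txt /output_hdfs_file

# Variant 2: stream through a UNIX pipe with no temporary file;
# "-put -" reads the data from stdin.
hadoop fs -cat '/input_hdfs_dir/*' | hadoop fs -put - /output_hdfs_file
```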
For binary files, you could do something like this (if you have Hive tables mapped on the directories):
insert overwrite table tbl select * from tbl
Depending on your configuration, this could also create more than one file. To create a single file, either set the number of reducers to 1 explicitly using mapreduce.job.reduces=1, or set the Hive property hive.merge.mapredfiles=true.
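In Hive, both options can be set per session before running the insert (tbl is the illustrative table name from above):

```sql
-- Either force a single reducer for this job:
SET mapreduce.job.reduces=1;
-- or let Hive merge the small output files after the job finishes:
SET hive.merge.mapredfiles=true;

INSERT OVERWRITE TABLE tbl SELECT * FROM tbl;
```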
The part-r-nnnnn files are generated after the reduce phase, which is what the 'r' in the middle designates. If you have one reducer running, you will have an output file named part-r-00000. If the number of reducers is 2, then you're going to have part-r-00000 and part-r-00001, and so on. Since the Hadoop framework has been designed to run on commodity machines, output that is too large to fit into a single machine's memory gets split across these files. As per MRv1, you have a limit of 20 reducers to work on your logic. You may have more, but that needs to be customised in the configuration file mapred-site.xml.
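The naming scheme can be sketched as follows: the partition (reducer) number is zero-padded to five digits after the part-r- prefix (the helper method name is mine, not a Hadoop API):

```java
public class PartFileNames {
    // Reduce outputs are named part-r-NNNNN, where NNNNN is the
    // zero-padded number of the reducer that produced the file.
    static String reduceOutputName(int partition) {
        return String.format("part-r-%05d", partition);
    }

    public static void main(String[] args) {
        // A job with 3 reducers produces three such files:
        for (int i = 0; i < 3; i++) {
            System.out.println(reduceOutputName(i));
        }
    }
}
```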
Coming to your question: you may either use getmerge, or set the number of reducers to 1 in the driver code.
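A minimal driver sketch showing the relevant call (class and job names are illustrative; the key line is setNumReduceTasks(1), which forces a single part-r-00000 output file):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SingleFileDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "single-output-file");
        // One reducer => one output file (part-r-00000)
        job.setNumReduceTasks(1);
        // ... set mapper/reducer classes and input/output paths here, then:
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```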
Besides my previous answer, I have one more approach for you, which I was trying a few minutes ago.
You may use a custom OutputFormat, which looks like the code given below:
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class VictorOutputFormat extends FileOutputFormat<StudentKey, PassValue> {

    @Override
    public RecordWriter<StudentKey, PassValue> getRecordWriter(
            TaskAttemptContext tac) throws IOException, InterruptedException {
        // Step 1: get the job's output path
        Path currPath = FileOutputFormat.getOutputPath(tac);
        // Build the full path with a fixed file name, so every task
        // writes to the same file instead of its own part-r-nnnnn
        Path fullPath = new Path(currPath, "Aniruddha.txt");
        // Create the file in the file system
        FileSystem fs = currPath.getFileSystem(tac.getConfiguration());
        FSDataOutputStream fileOut = fs.create(fullPath, tac);
        return new VictorRecordWriter(fileOut);
    }
}
Just have a look at the line where fullPath is built. I have used my own name as the output file name, and I have tested the program with 15 reducers; the file still remains the same. So getting a single output file instead of two or more is possible, but to be very clear, the size of the output file must not exceed the size of primary memory, i.e. the output file must fit into the memory of the commodity machine, else there might be a problem with splitting the output file.
Thanks!!