You can try this: Export/Import works for all file formats in Hive, including Parquet. It is a general approach that you can adapt slightly to your requirements, for example loading from the local filesystem or across clusters.
Note: when you run the steps individually you can hard-code the values instead of using the $ variables; when you run them from a script, pass the HDFS path, schema, and table name as parameters. That way you can export/import any number of tables just by passing different parameters (see the script sketch after the steps below).
- Step 1: hive -S -e "export table $schema_file1.$tbl_file1 to '$HDFS_DATA_PATH/$tbl_file1';" # -- Executed on the source cluster; the export is written to HDFS.
- Step 2: # -- The export contains both data and metadata. Zip it and scp it to the target cluster.
- Step 3: hive -S -e "import table $schema_file1.$tbl_file1 from '$HDFS_DATA_PATH/$tbl_file1';" # -- The first import throws an error because the table does not exist, but it automatically creates the table.
- Step 4: hive -S -e "import table $schema_file1.$tbl_file1 from '$HDFS_DATA_PATH/$tbl_file1';" # -- The second import loads the data without any error, since the table now exists.
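Below is a minimal sketch of the kind of wrapper script the note above describes. The script name, argument names, the local /tmp staging paths, and the ssh/scp transfer step are all assumptions added for illustration, not part of the original steps; adapt them to your environment.

```bash
#!/bin/bash
# export_import_table.sh -- hypothetical wrapper around Steps 1-4 above.
# Usage: ./export_import_table.sh <schema> <table> <hdfs_export_path> <target_host>
set -euo pipefail

schema_file1="$1"      # Hive database (schema) name
tbl_file1="$2"         # table name
HDFS_DATA_PATH="$3"    # HDFS directory to export into, e.g. /tmp/hive_exports
target_host="$4"       # edge node of the target cluster (assumed reachable via ssh/scp)

# Step 1: export the table (data + metadata) to an HDFS directory on the source cluster
hive -S -e "export table ${schema_file1}.${tbl_file1} to '${HDFS_DATA_PATH}/${tbl_file1}';"

# Step 2: pull the export to the local filesystem, archive it, and copy it to the target cluster
hdfs dfs -get "${HDFS_DATA_PATH}/${tbl_file1}" "/tmp/${tbl_file1}"
tar -czf "/tmp/${tbl_file1}.tar.gz" -C /tmp "${tbl_file1}"
scp "/tmp/${tbl_file1}.tar.gz" "${target_host}:/tmp/"

# Steps 3-4: on the target cluster, unpack, upload to HDFS, then import twice
ssh "${target_host}" bash -s <<EOF
set -euo pipefail
tar -xzf "/tmp/${tbl_file1}.tar.gz" -C /tmp
hdfs dfs -mkdir -p "${HDFS_DATA_PATH}"
hdfs dfs -put -f "/tmp/${tbl_file1}" "${HDFS_DATA_PATH}/"
# First import may error because the table does not exist yet, but it creates the table
hive -S -e "import table ${schema_file1}.${tbl_file1} from '${HDFS_DATA_PATH}/${tbl_file1}';" || true
# Second import loads the data now that the table exists
hive -S -e "import table ${schema_file1}.${tbl_file1} from '${HDFS_DATA_PATH}/${tbl_file1}';"
EOF
```

For example, you might call it as ./export_import_table.sh my_db my_table /tmp/hive_exports target-edge-node (all placeholder values).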
Thanks,
Kumar