We are looking for a way to create an external Hive table that reads data from Parquet files according to a Parquet/Avro schema.
In other words, how can we create a Hive table from a Parquet/Avro schema?
Thanks :)
Try using the Avro schema:
CREATE TABLE avro_test
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
  STORED AS AVRO
  TBLPROPERTIES ('avro.schema.url'='myHost/myAvroSchema.avsc');

CREATE EXTERNAL TABLE parquet_test
  LIKE avro_test
  STORED AS PARQUET
  LOCATION 'hdfs://myParquetFilesPath';
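This works because CREATE TABLE ... LIKE copies the column definitions that the AvroSerDe derived from the .avsc file, while STORED AS PARQUET and LOCATION override the storage format and point the external table at the existing Parquet files. As a minimal sanity check (a sketch assuming the table names above; the LIMIT size is arbitrary), you can confirm that the schema and format were picked up:

-- Inspect the columns Hive derived from the Avro schema,
-- plus the storage format and location of the external table
DESCRIBE FORMATTED parquet_test;

-- Read a few rows directly from the Parquet files at LOCATION
SELECT * FROM parquet_test LIMIT 10;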
The same question is addressed here: Dynamically create an external Hive table using an Avro schema on Parquet data
Source: https://habr.com/ru/post/1238334/