java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

I have Hadoop 2.7.1 and apache-hive-1.2.1 installed on Ubuntu 14.04.

  • Why does this error occur?
  • Is a metastore required?
  • When we type the hive command in the terminal, which XML configuration files are read internally, and in what order are they processed?
  • Is any other configuration required?

When I run the hive command in the terminal, it throws the exception shown below.

$ hive

Logging initialized using configuration in jar:file:/usr/local/hive/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:520)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
    ... 8 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
    ... 14 more
Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:520)
    at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
    at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
    at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:365)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:394)
    at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:291)
    at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:624)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
    ... 19 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
    at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
    at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
    at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
    at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:286)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:426)
    at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
    at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
    at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
    at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
    at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
    ... 48 more
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
    ... 66 more
Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
    at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
    at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
    ... 68 more

To resolve the error above, I created hive-site.xml with:

<configuration>
   <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/home/local/hive-metastore-dir/warehouse</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost:3306/hivedb?createDatabaseIfNotExist=true</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>user</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>password</value>
   </property>
</configuration>

I also added the following environment variables to my ~/.bashrc file; however, the error persists:

 # HIVE home directory configuration
 export HIVE_HOME=/usr/local/hive/apache-hive-1.2.1-bin
 export PATH="$PATH:$HIVE_HOME/bin"
11 answers

---
I made the modifications below, and now I can start the Hive shell without any errors:

1. ~/.bashrc

Add the following environment variables to the end of your ~/.bashrc file (for example, open it with sudo gedit ~/.bashrc):

 # Java home directory configuration
 export JAVA_HOME="/usr/lib/jvm/java-9-oracle"
 export PATH="$PATH:$JAVA_HOME/bin"

 # Hadoop home directory configuration
 export HADOOP_HOME=/usr/local/hadoop
 export PATH=$PATH:$HADOOP_HOME/bin
 export PATH=$PATH:$HADOOP_HOME/sbin

 export HIVE_HOME=/usr/lib/hive
 export PATH=$PATH:$HIVE_HOME/bin
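After saving, reload the file so the new variables take effect in the current shell (a standard step, not specific to this setup):

 source ~/.bashrc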

2. hive-site.xml

You need to create this file (hive-site.xml) in Hive's conf directory and add the details below:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
   <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>root</value>
   </property>
   <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>root</value>
   </property>
   <property>
      <name>datanucleus.autoCreateSchema</name>
      <value>true</value>
   </property>
   <property>
      <name>datanucleus.fixedDatastore</name>
      <value>true</value>
   </property>
   <property>
      <name>datanucleus.autoCreateTables</name>
      <value>true</value>
   </property>
</configuration>
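The configuration above assumes MySQL is running on localhost and that the user/password in hive-site.xml can access the metastore database. If you prefer a dedicated account instead of root, a minimal sketch (hiveuser and hivepassword are hypothetical names; ConnectionUserName and ConnectionPassword would have to be changed to match; MySQL 5.x syntax):

 mysql> CREATE DATABASE IF NOT EXISTS metastore;
 mysql> GRANT ALL PRIVILEGES ON metastore.* TO 'hiveuser'@'localhost' IDENTIFIED BY 'hivepassword';
 mysql> FLUSH PRIVILEGES;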

3. You also need to put the MySQL JDBC connector jar (mysql-connector-java-5.1.28.jar) into Hive's lib directory.
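For example, assuming the connector jar has already been downloaded to the current directory (the download location and the exact version are assumptions; use whatever matches your MySQL server):

 # copy the JDBC driver to where Hive picks up its classpath jars
 cp mysql-connector-java-5.1.28.jar $HIVE_HOME/lib/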

4. The following components must be installed on Ubuntu to run the Hive shell:

  4.1 MySQL
  4.2 Hadoop
  4.3 Hive
  4.4 Java

5. Execution steps:

 1. Start all Hadoop services:  start-all.sh
 2. Run the jps command to check whether all Hadoop services are up and running:  jps
 3. Run the hive command to enter the Hive shell:  hive
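For reference, on a pseudo-distributed Hadoop 2.x setup, jps typically lists daemons like these (the process IDs are illustrative):

 $ jps
 2481 NameNode
 2617 DataNode
 2798 SecondaryNameNode
 2951 ResourceManager
 3087 NodeManager
 3412 Jps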
---

Starting the Hive metastore service worked for me. First, start the metastore service:

  $ hive --service metastore 

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_install_manually_book/content/validate_installation.html
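That command blocks the terminal; if you want the metastore to keep running in the background, one common approach is (the log file path is an arbitrary choice):

 $ nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &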

Secondly, run the following commands:

  $ schematool -dbType mysql -initSchema
  $ schematool -dbType mysql -info

https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool
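You can sanity-check that the schema was actually created by listing the metastore tables in MySQL (the database name is whatever your ConnectionURL points at):

 mysql> USE metastore;
 mysql> SHOW TABLES;   -- should include tables such as DBS, TBLS and VERSION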

---

In my case, when I tried

 $ hive --service metastore 

I got

MetaException(message: Version information not found in metastore.)

The tables required for the metastore were missing in MySQL. Manually create the tables and restart the Hive metastore:

 cd $HIVE_HOME/scripts/metastore/upgrade/mysql/

 < Log in to MySQL >

 mysql> drop database IF EXISTS <metastore db name>;
 mysql> create database <metastore db name>;
 mysql> use <metastore db name>;
 mysql> source hive-schema-2.x.x.mysql.sql;
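Equivalently, the schema file can be loaded non-interactively from the shell (the user and database names are placeholders, as above):

 mysql -u <user> -p <metastore db name> < hive-schema-2.x.x.mysql.sql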

The metastore database name must match the database name specified in the hive-site.xml connection URL property.

Which hive-schema-2.x.x.mysql.sql file to use depends on the versions available in that directory; it contains many older schema files, so pick the latest one that matches your Hive version.

Now try hive --service metastore again. If everything is fine, simply start Hive from the terminal:

 $ hive

I hope the above answer helps you.

---

If you are just experimenting in local mode, you can delete the metastore database and recreate it:

 rm -rf metastore_db/
 $HIVE_HOME/bin/schematool -initSchema -dbType derby
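Note that the embedded Derby metastore_db directory is created in whatever working directory Hive was started from, so run the commands above from that same directory; a quick way to confirm you are in the right place:

 ls metastore_db derby.log   # both are created in the directory Hive was launched from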
---

Buried in the middle of the stack trace, lost in the reflection "junk", you can find the root cause:

The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.

---

Run Hive in debug mode:

hive -hiveconf hive.root.logger=DEBUG,console

and then execute

show tables

and you may find the actual problem in the console output.
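The same thing can be done non-interactively with the CLI's -e option, which runs a single statement and exits:

 hive -hiveconf hive.root.logger=DEBUG,console -e 'show tables;'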

---

I also ran into this problem. I restarted Hadoop and then ran the hadoop dfsadmin -safemode leave command.

Now start Hive; it should work.
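If you want to check first whether the NameNode is actually stuck in safe mode (in Hadoop 2.x, hdfs dfsadmin is the preferred spelling of the deprecated hadoop dfsadmin):

 hdfs dfsadmin -safemode get     # prints whether safe mode is ON or OFF
 hdfs dfsadmin -safemode leave   # force the NameNode out of safe mode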

---

I used a MySQL database for the Hive metastore. Please follow these steps:

  • in hive-site.xml, the metastore connection URL must be correct:
 <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true&amp;useSSL=false</value>
 </property>
  • go to mysql: mysql -u hduser -p
  • then run drop database metastore
  • then exit MySQL and execute schematool -initSchema -dbType mysql

Now the error should be gone.
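Put together, the whole sequence looks roughly like this (hduser and metastore are the user and database names from the steps above):

 $ mysql -u hduser -p
 mysql> drop database metastore;
 mysql> exit;
 $ schematool -initSchema -dbType mysql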

---

I solved this problem by removing --deploy-mode cluster from the spark-submit command. By default, spark-submit uses client mode, which has the following advantages:

 1. It opens a Netty HTTP server and distributes all jars to the worker nodes.
 2. The driver program runs on the master node, which means dedicated resources for the driver process.

In cluster mode:

  1. The driver runs on a worker node.
  2. All the jars need to be placed either in a common folder of the cluster, accessible to all worker nodes, or in a folder on each worker node.

In cluster mode, the job could not access the Hive metastore because the Hive jars were not accessible to the nodes in the cluster.
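A sketch of the two submission modes (my_app.py and the paths are hypothetical; --files is the standard spark-submit flag for shipping a config file such as hive-site.xml to the cluster):

 # client mode (default): the driver runs where spark-submit is launched,
 # so it sees the local hive-site.xml and Hive jars
 spark-submit --master yarn --deploy-mode client my_app.py

 # cluster mode: the driver runs on a worker node, so the Hive config and
 # jars must be shipped along or already present on every node
 spark-submit --master yarn --deploy-mode cluster \
     --files /usr/local/hive/conf/hive-site.xml \
     my_app.py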

---

Perhaps your Hive metastore schema versions are incompatible! That was my situation.

First, I ran

  $ schematool -dbType mysql -initSchema 

and then I got

Error: Duplicate key name 'PCS_STATS_IDX' (state=42000, code=1061)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!

Then I ran

  $ schematool -dbType mysql -info 

and found this error:

Hive distribution version:   2.3.0
Metastore schema version:    1.2.0
org.apache.hadoop.hive.metastore.HiveMetaException: Metastore schema version is not compatible. Hive version: 2.3.0, database schema version: 1.2.0


So I reset my Hive metastore, and then it was done:

  • delete the MySQL metastore database (named hive_db in my case)
  • run schematool -dbType mysql -initSchema to initialize the metastore schema
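If you need to keep the existing metadata rather than wiping it, schematool can also upgrade the schema in place; upgrading from 1.2.0 here, to match the versions reported above:

 $ schematool -dbType mysql -upgradeSchemaFrom 1.2.0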

---

Delete metastore_db in your Hadoop directory, format HDFS with hadoop namenode -format, then try restarting Hadoop with start-all.sh.
