C3P0 + MySQL: received error reading communication packets

An error that many people have encountered, with the following message appearing in the MySQL logs:

[Warning] Aborted connection 38 to db: 'database_name' user: 'root' host: 'localhost' (Got an error reading communication packets)

In my case, the database is accessed locally through a Java client using the com.mysql.jdbc.Driver driver and the well-known C3P0 pool. My MySQL server is configured to accept a fairly large number of connections, and the max_allowed_packet value is 64M. Here is an excerpt from my my.cnf file (MySQL configuration):

 [mysqld]
 max_allowed_packet = 64M
 thread_concurrency = 8
 thread_cache_size = 8
 thread_stack = 192K
 query_cache_size = 0
 query_cache_type = 0
 max_connections = 1024
 back_log = 50
 innodb_thread_concurrency = 6
 innodb_lock_wait_timeout = 120
 log_warnings

and

 [mysqldump]
 quick
 quote-names
 max_allowed_packet = 64M

The User table in my database has the following simple structure:

 CREATE TABLE `User` (
   `uid` varchar(255) COLLATE utf8_bin NOT NULL,
   `name` varchar(255) COLLATE utf8_bin DEFAULT NULL,
   `mail` varchar(255) COLLATE utf8_bin DEFAULT NULL,
   `password` varchar(255) COLLATE utf8_bin DEFAULT NULL,
   `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
   `maxParallelTasks` tinyint(4) NOT NULL DEFAULT '10',
   `maxModels` int(11) NOT NULL DEFAULT '2000',
   `maxBibTeX` int(11) NOT NULL DEFAULT '2000',
   PRIMARY KEY (`uid`) USING BTREE,
   UNIQUE KEY `mail` (`mail`) USING BTREE,
   KEY `uid` (`uid`) USING BTREE,
   KEY `index_user_name` (`name`) USING BTREE,
   KEY `index_user_mail` (`mail`) USING BTREE,
   KEY `index_user_maxModels` (`maxModels`) USING BTREE,
   KEY `index_user_maxBibTeX` (`maxBibTeX`) USING BTREE,
   KEY `index_user_maxParallelTasks` (`maxParallelTasks`) USING BTREE
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin

When I INSERT values into this table using the mysql client (i.e. mysql -u root -p), I do not get any warnings in the logs. However, when I try the same thing from the Java side, the above warning appears in the logs quite a few times! On the Java side, my connections are obtained from the C3P0 connection pool, which is configured as follows:

 datasource = new ComboPooledDataSource();
 datasource.setJdbcUrl(connectURI);
 datasource.setMaxPoolSize(1000);
 datasource.setMinPoolSize(20);
 datasource.setInitialPoolSize(50);
 datasource.setNumHelperThreads(6);
 datasource.setTestConnectionOnCheckin(true);
 datasource.setTestConnectionOnCheckout(true);

The statement is first prepared:

 PreparedStatement ps = connection.prepareStatement(getSql()); 

then parameterized:

 ps.setString(1, user.getUid());
 ps.setString(2, user.getName());
 ps.setString(3, user.getMail());
 ps.setString(4, user.getHashedPass());

and then executed:

 int update = ps.executeUpdate(); 

The prepared statement is then closed:

 ps.close(); 

and the SQL connection is closed:

 if (connection != null) {
     try {
         if (!connection.isClosed()) {
             connection.close();
         }
     } catch (SQLException ex) {
         throw new DbException(ex);
     }
 }
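As an aside, the prepare/execute/close sequence shown here can be made leak-proof with try-with-resources (Java 7+), which guarantees that resources close in reverse order of acquisition, so the PreparedStatement always closes before the Connection. The sketch below uses a stand-in AutoCloseable instead of real JDBC objects so that it runs without a database; the ordering logic is the same:

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {
    // Records the order in which resources were closed.
    static final List<String> closed = new ArrayList<String>();

    // Stand-in for java.sql.Connection / java.sql.PreparedStatement.
    static class Resource implements AutoCloseable {
        private final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        // The connection is acquired first, the statement second;
        // try-with-resources closes them in reverse order on exit.
        try (Resource connection = new Resource("connection");
             Resource statement = new Resource("statement")) {
            // the real code would call ps.executeUpdate() here
        }
        System.out.println(closed); // [statement, connection]
    }
}
```

With real JDBC objects the same pattern removes the need for the explicit isClosed() check, since close() is called exactly once per resource even when an exception is thrown.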

The general procedure seems legitimate (to me). According to the MySQL manual at http://dev.mysql.com/doc/refman/5.0/en/communication-errors.html , this warning may mean that:

The client program did not call mysql_close() before exiting.

or:

The client program suddenly terminated in the middle of data transfer.

For the purposes of my application, I use C3P0 to write to and read from these databases with more than 200 parallel threads. I have no memory leaks, and I have verified that data is actually written to and retrieved from the database as usual. Does anyone else have a similar experience?

Finally, I include my version of MySQL, as well as other information that may be useful for troubleshooting:

 mysql> show variables like "%version%";
 +-------------------------+-------------------+
 | Variable_name           | Value             |
 +-------------------------+-------------------+
 | protocol_version        | 10                |
 | version                 | 5.1.37-1ubuntu5.5 |
 | version_comment         | (Ubuntu)          |
 | version_compile_machine | x86_64            |
 | version_compile_os      | debian-linux-gnu  |
 +-------------------------+-------------------+

and

 Java version: 1.6.0_22, (build 1.6.0_22-b04) 
2 answers

An attempt to dig deeper into the cause of this warning...

First, I noticed that the number of warning messages I get in my log file is equal or very close to the minimum pool size, as specified by datasource.setMinPoolSize(int); in my tests the initial pool size was set equal to the minimum size each time.

In addition, stepping through with the debugger, I noticed that the warnings are logged as soon as the unit tests complete. These facts led me to conclude that the connections pooled by C3P0 are not actually being closed before they are dropped. The data source must be closed when it is no longer going to be used, by calling the com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource#close() method. Of course, this should be done only at the end of the unit test, so in my case the fix looks like this:

 @AfterClass
 public static void tearDownClass() throws Exception {
     // Close the pool
     DataSourceFactory.getInstance().close();
 }

In my application, I added a shutdown hook so that the data source is closed before execution completes. Before this, the connection pool was left untouched and the driver had to close the connections itself, which triggered the warning in the MySQL log. This solved the problem for all INSERT operations, but unfortunately not for SELECTs. Note that I close all result sets, statements, prepared statements and connections, and checked the source code with FindBugs (a static-analysis tool). The workaround makes me suspect that something is wrong with C3P0 again, but I cannot be sure. With the following, the warnings disappear...

 @AfterClass
 public static void tearDownClass() throws Exception {
     Thread.sleep(100000); // wait for 100 secs
     DataSourceFactory.getInstance().close();
 }

Note that before the 100-second period expires (and the data source closes), no warnings appear in the logs! So it is datasource.close() that triggers them...
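The shutdown hook mentioned above can be sketched as follows. DataSourceFactory is the poster's own (unpublished) wrapper, so this stand-alone version only prints what the hook would do in its place:

```java
public class PoolShutdownHook {
    public static void main(String[] args) {
        // Register a JVM shutdown hook that closes the pool before exit.
        // In the real application the run() body would call
        // DataSourceFactory.getInstance().close();  (the poster's own factory)
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("closing C3P0 data source");
            }
        }));
        System.out.println("application work happens here");
        // On normal JVM exit the hook runs, the pooled connections are
        // closed cleanly, and MySQL logs no aborted-connection warning.
    }
}
```

The hook also runs on SIGTERM, so the pool is shut down cleanly even when the process is stopped externally.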


Well, it is good to see a question that contains practically all the required information and hints at the possible places where the system may be running into problems. Here is what I would check:

  • Try closing the PreparedStatement object before closing the Connection object and see if this fixes the problem. This may seem unnecessary, but it is required by some JDBC drivers, most notably Oracle's (and MySQL's as well). The rationale is that a Connection object should not be closed until derived objects, such as ResultSet and Statement objects, have been closed first (in that order). This is partly down to how the JDBC driver is written.
  • Make sure that the machine(s) involved can actually handle the load you want. If the underlying network infrastructure simply cannot handle the specified number of connections, then it is entirely possible that connections are being dropped. You might want to look at a wire dump and draw conclusions from it if the first recommendation does not help.
  • Use the debugUnreturnedConnectionStackTraces flag to detect connection-pool leaks. This also requires unreturnedConnectionTimeout to be positive. The flag enables capturing a stack trace whenever the application fails to return a connection to the pool; the stack trace will point to the place in the code where the connection was originally checked out.
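The leak-detection settings from the last bullet can be applied to a ComboPooledDataSource as below. This is a configuration sketch only (it assumes the c3p0 jar on the classpath and a connectURI defined elsewhere, and the 30-second timeout is an illustrative value):

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;

ComboPooledDataSource ds = new ComboPooledDataSource();
ds.setJdbcUrl(connectURI);
// Treat any connection checked out for more than 30 s as leaked
// and forcibly reclaim it (value is illustrative, tune to your workload).
ds.setUnreturnedConnectionTimeout(30);
// Log the stack trace of the original checkout, so the leak can be
// traced back to the code path that forgot to close the connection.
ds.setDebugUnreturnedConnectionStackTraces(true);
```

Both settings are intended for debugging only; c3p0's documentation recommends turning them off in production because reclaiming connections this way masks the underlying leak.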

Source: https://habr.com/ru/post/1339673/
