I have read around in various places and heard a lot of dubious claims, ranging from "Statement is preferable to PreparedStatement in all cases, if only for performance" all the way to "PreparedStatement should be used exclusively for batch statements and nothing else."
However, those (mostly online) discussions seem to have blind spots, so let me describe a specific scenario.
We have an application built around an event-driven architecture (EDA) with a pool of database connections. Events come in; some of them require persistence, some do not. Some are generated artificially (e.g. update/reset something every X minutes). Some events arrive and are processed sequentially, while other kinds of events (also requiring persistence) can (and will) be processed concurrently.
Apart from those artificially created events, there is no structure in how events requiring persistence arrive.
The application was developed a long time ago (around 2005) and supports several DBMSs. A typical event handler (where persistence is required) does the following (a minimal code sketch follows the list):
- get connection from pool
- prepare SQL query
- execute prepared statement
- handle the result set, if applicable, close it
- close the prepared statement
- prepare another statement, if necessary, and handle it the same way
- return connection to pool
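For reference, here is a minimal sketch of what such a handler looks like with plain JDBC; the table, column names and `persist` method are made up purely for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class EventHandler {
    private final DataSource pool;

    public EventHandler(DataSource pool) {
        this.pool = pool;
    }

    // Hypothetical persistence step: prepare, bind, execute once, close, return connection.
    public void persist(long eventId, String payload) throws SQLException {
        try (Connection con = pool.getConnection();                        // get connection from pool
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO events (id, payload) VALUES (?, ?)")) {  // prepare SQL statement
            ps.setLong(1, eventId);
            ps.setString(2, payload);
            ps.executeUpdate();                                            // execute exactly once
        }   // statement closed here, connection goes back to the pool
    }
}
```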
If an event requires batch processing, the statement is prepared once and the addBatch / executeBatch methods are used. That is an obvious performance win, and those cases are not what this question is about.
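For comparison, the batch case looks roughly like this (again with hypothetical table and column names); the statement is prepared once and reused for every row in the batch:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

public class BatchPersister {
    private final DataSource pool;

    public BatchPersister(DataSource pool) {
        this.pool = pool;
    }

    // Prepare once, add every row to the batch, then execute the whole batch in one go.
    public void persistAll(List<String> payloads) throws SQLException {
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO events (payload) VALUES (?)")) {
            for (String payload : payloads) {
                ps.setString(1, payload);
                ps.addBatch();
            }
            ps.executeBatch(); // single round trip for all queued rows
        }
    }
}
```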
Recently, I came across the opinion that the whole idea of preparing (parsing) a statement, executing it once and closing it is in fact a misuse of PreparedStatement, provides zero performance benefit regardless of whether client-side or server-side prepared statements are used, and that typical DBMSs (Oracle, DB2, MSSQL, MySQL, Derby, etc.) will not even put such a statement into their prepared statement cache (or at least their JDBC driver / data source will not by default).
In addition, I had to test certain scenarios in a dev environment against MySQL, and the Connector/J usage advisor seems to agree with that opinion. For every prepared statement that is executed only once, calling close() logs:
PreparedStatement created, but used 1 or fewer times. It is more efficient to prepare statements once, and re-use them many times
Given the application design outlined above, keeping a PreparedStatement instance cache that holds every single SQL statement used by any event, for every connection in the pool, sounds like a bad choice (see the sketch below).
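To make that concrete, such a hand-rolled cache would have to look roughly like the following hypothetical sketch, one map of SQL text to PreparedStatement per pooled connection, which is exactly the bookkeeping I would rather avoid:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-connection cache: every distinct SQL string used by any event handler
// stays pinned here for the lifetime of the connection.
public class StatementCache {
    private final Map<String, PreparedStatement> cache = new HashMap<>();
    private final Connection connection;

    public StatementCache(Connection connection) {
        this.connection = connection;
    }

    public PreparedStatement get(String sql) throws SQLException {
        PreparedStatement ps = cache.get(sql);
        if (ps == null || ps.isClosed()) {
            ps = connection.prepareStatement(sql);
            cache.put(sql, ps);
        }
        return ps;
    }

    // Must be called before the connection is discarded, or the statements leak.
    public void close() throws SQLException {
        for (PreparedStatement ps : cache.values()) {
            ps.close();
        }
        cache.clear();
    }
}
```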
Can anyone elaborate on this in more detail? Is the prepare, execute (once), close pattern a flaw that should essentially be discouraged?
P.S. Explicitly setting useUsageAdvisor=true and cachePrepStmts=true for Connector/J, with either useServerPrepStmts=true or useServerPrepStmts=false, still produces the performance warning when close() is called on PreparedStatement instances for every SQL statement that is executed only once.
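For reference, the properties were passed via the JDBC URL, roughly as in this sketch; host, port, schema and credentials are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DevConnection {
    // Connector/J URL used for the dev tests; host, port and schema are placeholders.
    private static final String URL = "jdbc:mysql://localhost:3306/devdb"
            + "?useUsageAdvisor=true"
            + "&cachePrepStmts=true"
            + "&useServerPrepStmts=true"; // also tried false; the warning appears either way

    public static Connection open() throws SQLException {
        return DriverManager.getConnection(URL, "user", "password");
    }
}
```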