I have a Hibernate application that performs parallel inserts and updates (via Session.saveOrUpdate) on records with the same assigned primary key. These transactions are fairly long, averaging perhaps 15 seconds (data is collected from remote sources and stored as it arrives). My database isolation level is Read Committed, and I use MySQL with InnoDB.
The problem is that this scenario produces excessive lock waits that end in timeouts, either through deadlocks or simply through the length of the transactions. This leads me to a few questions:
- Does the database engine release its locks only when the transaction commits?
- If so, should I shorten my transactions?
- If so, would it be good practice to use separate read and write transactions, where the write transaction is kept short and executed only after all the data has been collected (the bulk of my transaction time is spent collecting the data)?
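To illustrate the idea behind the last question, here is a minimal, self-contained sketch (not Hibernate code): the InnoDB row lock is modeled with a hypothetical ReentrantLock, and the slow remote collection happens with no lock held, so the lock is only taken for the brief write/commit step. The class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class ShortWriteDemo {
    // Stand-in for the database row lock held by an open transaction.
    static final ReentrantLock rowLock = new ReentrantLock();

    // Long-running collection runs with NO lock held
    // (imagine slow remote fetches inside this loop).
    static List<String> collect() {
        List<String> data = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            data.add("chunk-" + i);
        }
        return data;
    }

    // The write "transaction" is short: acquire, write, release.
    static int writeShortTransaction(List<String> data) {
        rowLock.lock();
        try {
            return data.size(); // pretend this is the insert/update
        } finally {
            rowLock.unlock();   // the "commit": lock released immediately
        }
    }

    public static void main(String[] args) {
        List<String> data = collect();           // no contention during the slow part
        int written = writeShortTransaction(data);
        System.out.println("rows written: " + written);
        System.out.println("lock held after commit: " + rowLock.isLocked());
    }
}
```

With this split, another thread competing for the same row waits at most for the brief write phase instead of the full 15-second collection.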
Edit:
Here is a simple test that approximates my scenario. Since I am dealing with long transactions, the commit happens long after the first flush; so, to illustrate the situation, I left the commit out of the test:
```java
@Entity
static class Person {
    @Id
    Long id = Long.valueOf(1);
    @Version
    private int version;
}

@Test
public void updateTest() {
    for (int i = 0; i < 5; i++) {
        new Thread() {
            public void run() {
                Session s = sf.openSession();
                Transaction t = s.beginTransaction();
                Person p = new Person();
                s.saveOrUpdate(p);
                s.flush();
                // commit deliberately omitted to simulate a long transaction
            }
        }.start();
    }
}
```
And the SQL statements this test produces, with the second insert left waiting:
```sql
select id, version from person where id=?
insert into person (version, id) values (?, ?)
select id, version from person where id=?
insert into person (version, id) values (?, ?)
```