Lots of "COMMIT;" entries in the PostgreSQL slow query log

I am trying to optimize a PostgreSQL 9.1 database for the Rails application I am developing. In postgresql.conf I set

log_min_duration_statement = 200 

Then I use PgBadger to parse the log file. The statement that takes up the most time today is:

 COMMIT; 

I get no more information than that, and I am confused about what it is. Does anyone know what I can do to get more details about these COMMIT queries? All other queries show the variables used in the statement (SELECT, UPDATE, etc.), but the COMMIT statements do not.

+4
3 answers

COMMIT is a perfectly valid statement whose purpose is to commit the current pending transaction. Because of what it actually does - making sure the data is really flushed to disk - it is quite likely to take up most of the time.

How can you make the application run faster? Right now your code most likely runs in so-called autocommit mode, where every statement is implicitly committed. If you explicitly wrap large blocks of statements in BEGIN TRANSACTION; ... COMMIT; blocks, you will make your application much faster and reduce the number of commits, as sketched below. Good luck!
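To illustrate, here is a minimal sketch of the difference (the events table and its payload column are invented for the example):

    -- Autocommit: each INSERT is its own transaction, so each one
    -- pays for a separate WAL flush (fsync) at its implicit commit.
    INSERT INTO events (payload) VALUES ('a');
    INSERT INTO events (payload) VALUES ('b');
    INSERT INTO events (payload) VALUES ('c');

    -- Batched: all three INSERTs share one transaction, so there is
    -- only a single flush when the explicit COMMIT runs.
    BEGIN;
    INSERT INTO events (payload) VALUES ('a');
    INSERT INTO events (payload) VALUES ('b');
    INSERT INTO events (payload) VALUES ('c');
    COMMIT;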

+6

As @mvp notes, if COMMIT is slow, the usual cause is slow fsync(), because each transaction commit must flush the data to disk - usually with fsync(). However, that is not the only possible cause of slow commits. You might:

  • have slow fsync()s, as already noted
  • have slow checkpoints stalling I/O
  • have commit_delay set - I have not confirmed that delayed commits are logged as slow statements, but it seems plausible

If fsync() is slow, your best option is to restructure your work so that it runs in fewer, bigger transactions, so you pay for fewer commits overall. A reasonable alternative is to use commit_delay to group commits; it improves overall throughput by batching flushes, but actually slows down individual transactions.
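A hedged sketch of what that grouping might look like in postgresql.conf (the values are purely illustrative, not recommendations; check the documentation for your version before changing them):

    # postgresql.conf - illustrative values, not tuning advice
    commit_delay = 1000     # microseconds to wait before flushing the WAL,
                            # so concurrent commits can share one fsync()
    commit_siblings = 5     # only apply the delay if at least this many
                            # other transactions are active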

Better yet, fix the root of the problem. Move to a RAID controller with a battery-backed write-back cache, or to high-quality power-loss-safe SSDs. You see, ordinary disks can usually manage less than one fsync() per rotation, i.e. between 5,400 and 15,000 per minute depending on the drive - at best only 90 to 250 commits per second. With many transactions and many commits this limits your throughput considerably, and that is the best case, when all they are doing is flushing trivial commits. By contrast, if you have a durable write cache on a RAID controller or SSD, the OS does not need to verify that the data is really on the platters; it only has to make sure it has reached the durable write cache - which is massively faster, because that cache is usually just power-protected RAM.

Perhaps fsync() is not the real problem; it could be slow checkpoints. The best way to find out is to check the logs for complaints about checkpoints occurring too frequently or taking too long. You can also enable log_checkpoints to record how long checkpoints take and how often they run.
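Enabling that is a one-line change in postgresql.conf (a configuration reload is enough to pick it up):

    # postgresql.conf
    log_checkpoints = on    # write a log line for every checkpoint,
                            # including how long it took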

If checkpoints take too long, consider tuning checkpoint_completion_target (see the docs) so the checkpoint's writes are spread over more of the interval. If they happen too frequently, increase checkpoint_segments.
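As a sketch, the relevant settings on 9.1 would look something like this (example values only, not recommendations):

    # postgresql.conf - example values only
    checkpoint_completion_target = 0.9   # spread checkpoint I/O over 90% of the
                                         # checkpoint interval instead of bursting
    checkpoint_segments = 16             # allow more WAL segments between
                                         # checkpoints, so they happen less often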

See Tuning Your PostgreSQL Server on the PostgreSQL wiki for more information.

+7

Try logging every statement for a couple of days, and then look at what happens in the transaction before the COMMIT statement:

log_min_duration_statement = 0
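To make it easier to piece the logged statements back into their transactions, it may also help to include the backend PID in each log line; a possible sketch (the exact prefix is just an example):

    # postgresql.conf - optional, to group statements by session
    log_line_prefix = '%m [%p] '   # %m = timestamp with milliseconds,
                                   # %p = backend process ID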

+1

Source: https://habr.com/ru/post/1442935/

