Improve SQLite UPDATE-per-second performance?

My question comes directly from this one, although I am only interested in UPDATE operations and nothing else.

I have an application written in C/C++ that makes heavy use of SQLite, mainly SELECT/UPDATE queries, at a very frequent interval (about 20 queries every 0.5-1 seconds).

My database is small, about 2500 records at the moment. Here is the table structure:

    CREATE TABLE player (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name VARCHAR(64) UNIQUE,
        stats VARBINARY,
        rules VARBINARY
    );

Until now I did not use transactions, because I was improving the code and wanted stability first, not performance.

Then I measured the performance of my database by simply running 10 UPDATE queries like the following (in a loop, with different values each time):

    -- executed 10 times, with different bound values each iteration
    UPDATE player SET stats = ? WHERE (name = ?)

where stats is a JSON string of exactly 150 characters and name is 5 to 10 characters long.

Without transactions, the result is unacceptable: about 1 full second in total (0.096 s per update).

With transactions, the time drops by a factor of about 7.5: roughly 0.11-0.16 seconds in total (0.013 s per update).

I tried removing most of the database and/or reordering/deleting columns to check whether that changes anything, but it does not: I get the above numbers even when the database contains only 100 records (verified).

Then I tried to play with the PRAGMA parameters:

    PRAGMA synchronous = NORMAL;
    PRAGMA journal_mode = MEMORY;

This gave me lower times, but not consistently; more like 0.08-0.14 seconds.

    PRAGMA synchronous = OFF;
    PRAGMA journal_mode = MEMORY;

This finally gave me very low times, about 0.002-0.003 seconds, but I do not want to use it: my application saves to the database every second, and with these settings there is a high probability of database corruption on an OS crash or power failure.
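For reference, here is a minimal sketch (the helper name apply_pragmas is mine, not from the original code) of how such pragmas can be issued through the C API right after opening the connection. Pragmas like synchronous are per-connection, so they must be re-applied on every new connection:

    #include <sqlite3.h>
    #include <stdio.h>

    /* Minimal sketch: apply the pragmas quoted above once, right
     * after sqlite3_open(). sqlite3_exec() runs both statements. */
    static int apply_pragmas(sqlite3 *db)
    {
        char *err = NULL;
        int rc = sqlite3_exec(db,
            "PRAGMA synchronous = NORMAL;"
            "PRAGMA journal_mode = MEMORY;",
            NULL, NULL, &err);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "PRAGMA failed: %s\n", err);
            sqlite3_free(err);
        }
        return rc;
    }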

My SQLite C code for the queries (comments, error handling, and unrelated parts omitted):

    // start the transaction
    sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);

    // prepare the query
    sqlite3_stmt *statement = NULL;
    int out = sqlite3_prepare_v2(db, query.c_str(), -1, &statement, NULL);

    // bindings
    for (size_t x = 0, sz = bindings.size(); x < sz; x++) {
        out = sqlite3_bind_text(statement, x + 1,
                                bindings[x].text_value.c_str(),
                                bindings[x].text_value.size(),
                                SQLITE_TRANSIENT);
        ...
    }

    // execute; a successful UPDATE step returns SQLITE_DONE, not SQLITE_OK
    out = sqlite3_step(statement);
    if (out != SQLITE_DONE) {
        // finalize the statement, never mind the error
        if (statement != NULL) {
            sqlite3_finalize(statement);
        }
    }

    // end the transaction
    sqlite3_exec(db, "END TRANSACTION", NULL, NULL, NULL);

As you can see, this is a fairly typical table, the record count is small, and I perform a simple UPDATE exactly 10 times. Is there anything else I can do to reduce the UPDATE time? I am using the latest SQLite, 3.16.2.

NOTE: The timings are taken around a single END TRANSACTION call. The queries are executed within one transaction, and I am using a prepared statement.
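To make that note concrete, here is a minimal sketch of the pattern in question: the UPDATE is prepared once, then reset and re-bound for each of the 10 rows inside a single transaction. The update_players name and the stats/names arrays are placeholders, and error checks are omitted:

    #include <sqlite3.h>

    /* Sketch: 10 updates through one prepared statement inside one
     * transaction. `stats` and `names` stand in for the real data. */
    void update_players(sqlite3 *db, const char *stats[10],
                        const char *names[10])
    {
        sqlite3_stmt *stmt = NULL;
        sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);
        sqlite3_prepare_v2(db, "UPDATE player SET stats = ? WHERE name = ?",
                           -1, &stmt, NULL);
        for (int i = 0; i < 10; i++) {
            sqlite3_bind_text(stmt, 1, stats[i], -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(stmt, 2, names[i], -1, SQLITE_TRANSIENT);
            sqlite3_step(stmt);           /* SQLITE_DONE on success */
            sqlite3_reset(stmt);          /* ready the statement for reuse */
            sqlite3_clear_bindings(stmt); /* optional: clear old values */
        }
        sqlite3_finalize(stmt);
        sqlite3_exec(db, "END TRANSACTION", NULL, NULL, NULL);
    }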

UPDATE:

I have performed several tests with transactions enabled and disabled, counting varying numbers of updates. I ran the tests with the following settings:

    VACUUM;
    PRAGMA synchronous = NORMAL;  -- default: FULL
    PRAGMA journal_mode = WAL;    -- default: DELETE
    PRAGMA page_size = 4096;      -- default: 1024

Results:

no transactions (10 updates)

  • 0.30800 seconds (0.03080 per update)
  • 0.30200 seconds
  • 0.36200 seconds
  • 0.28600 seconds

no transactions (100 updates)

  • 2.64400 seconds (0.02644 per update)
  • 2.61200 seconds
  • 2.76400 seconds
  • 2.68700 seconds

no transactions (1000 updates)

  • 28.02800 seconds (0.02803 per update)
  • 27.73700 seconds
  • ..

with transactions (10 updates)

  • 0.12800 seconds (0.01280 per update)
  • 0.08100 seconds
  • 0.16400 seconds
  • 0.10400 seconds

with transactions (100 updates)

  • 0.08800 seconds (0.00088 per update)
  • 0.09100 seconds
  • 0.05000 seconds
  • 0.10100 seconds

with transactions (1000 updates)

  • 0.08900 seconds (0.0000890 per update)
  • 0.15000 seconds
  • 0.11000 seconds
  • 0.09100 seconds

My conclusion is that with transactions, the per-query time cost is negligible. The times might grow with a truly enormous number of updates, but I am not interested in those numbers. There is literally no time difference between 10 and 1000 updates per transaction. However, I wonder whether this is a hardware limitation of my machine and there is not much I can do: it seems I cannot go below ~100 milliseconds for one transaction, whether it contains 10 or 1000 updates, even when using WAL.

Without transactions, there is a fixed time cost of about 0.025 seconds per update.

3 answers

With such a small amount of data, the time for the database operation itself is not significant; what you are measuring is the transaction overhead (the time it takes to force a write to disk), which depends on the OS, file system, and hardware.

If you can live with its restrictions (mainly: no network file systems), you can get asynchronous writes by enabling WAL mode.
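A minimal sketch of what enabling WAL looks like in C (the file name player.db is a placeholder). In WAL mode with synchronous = NORMAL, a power failure can lose the most recent commits, but unlike synchronous = OFF with a MEMORY journal it should not corrupt the database file:

    #include <sqlite3.h>

    /* Sketch: enable WAL right after opening the connection.
     * journal_mode = WAL is persistent, so it survives reopening. */
    int open_wal(sqlite3 **db)
    {
        if (sqlite3_open("player.db", db) != SQLITE_OK)
            return 1;
        sqlite3_exec(*db, "PRAGMA journal_mode = WAL;", NULL, NULL, NULL);
        /* With WAL, NORMAL syncs only at checkpoints instead of on
         * every commit, which removes most per-transaction fsync cost. */
        sqlite3_exec(*db, "PRAGMA synchronous = NORMAL;", NULL, NULL, NULL);
        return 0;
    }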


You can still reduce the effective time per update by putting more of them into a single transaction. In your first example, each transaction took about 0.10 seconds to complete, which is pretty close to the transaction time for inserting 10 records. What results do you get if you put 100 or 1000 updates into one transaction? (See the sketch below.)
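A sketch of the suggested experiment, timing n updates committed as a single transaction. The run_updates helper is hypothetical and stands for the prepared-statement loop from the question:

    #include <sqlite3.h>
    #include <time.h>

    /* Hypothetical helper assumed to exist: runs n prepared UPDATEs. */
    void run_updates(sqlite3 *db, int n);

    /* Wall-clock seconds for n updates committed as one transaction. */
    double time_batch(sqlite3 *db, int n)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);
        run_updates(db, n);
        sqlite3_exec(db, "END TRANSACTION", NULL, NULL, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }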

In addition, SQLite expects about 60 transactions per second on an average hard drive, while you are getting only about 10. Could your disk be the problem here?

https://sqlite.org/faq.html#q19
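(The arithmetic behind that figure, per the linked FAQ: a 7200 RPM drive makes 7200 / 60 = 120 platter rotations per second, and a safely committed transaction needs about two full rotations, so an ordinary disk tops out near 120 / 2 = 60 transactions per second.)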


Try adding an index to your database:

    CREATE INDEX IDXname ON player (name);

Source: https://habr.com/ru/post/1263799/

