MySQL: How many UPDATES per second can an average box support?

We have a constant stream of simple updates for one MySQL table (storing information about user activities). Let's say every second we group them into batch updates.

I'd like a ballpark idea of when MySQL on a typical 8 GB quad-core box starts to have trouble keeping up with updates arriving every second. For instance, how many rows can I update per second?

This is a back-of-the-envelope exercise to decide whether I should start with MySQL in the early days of our application (to simplify development), or whether MySQL would fall over so quickly that it isn't worth going down that path at all.

+4
3 answers

The only way to get a decent figure is to benchmark your specific use case. There are too many variables, and there is no way around that.

It won't take long to knock up a bash script or a small demo application and run it through jmeter; that should give you a good idea.

I used jmeter when benchmarking a similar use case. The difference was that I was measuring write throughput in terms of INSERTs. The most useful knob I found while experimenting was the innodb_flush_log_at_trx_commit setting. If you use InnoDB and do not need full ACID durability for your use case, set it to 0. It makes a huge difference to INSERT throughput and will most likely do the same in your UPDATE case. Note, though, that with this setting changes are only flushed to disk once per second, so if your server loses power or crashes you could lose up to a second's worth of data.

On my 8 GB quad-core machine, for my use case:

  • innodb_flush_log_at_trx_commit=1 resulted in 80 INSERTS per second
  • innodb_flush_log_at_trx_commit=0 resulted in 2000 INSERTS per second
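For reference, innodb_flush_log_at_trx_commit is a dynamic global variable in MySQL, so you can flip it at runtime to compare the two modes before committing it to your config. A minimal sketch:

```sql
-- Trade durability for throughput: flush the InnoDB log to disk
-- roughly once per second instead of at every transaction commit.
SET GLOBAL innodb_flush_log_at_trx_commit = 0;

-- To make it permanent, put this under [mysqld] in my.cnf:
--   innodb_flush_log_at_trx_commit = 0
```

Remember the trade-off described above: with 0 you can lose up to about a second of committed transactions on a crash or power loss.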

These numbers will probably bear no relation to your use case, which is why you need to benchmark it yourself.

+5

It is not easy to give a general answer to this question. The numbers you are asking for depend heavily not only on the hardware of your database server and on MySQL itself, but also on server and client configuration, the network, and, not least, on your database and table design.

Generally speaking, with MySQL set up on modern server hardware and update operations using unique keys, I would not expect problems below roughly 200 updates per second fired from localhost; at least that is what I get out of my six-year-old WinXP test environment. A bare install on a new system should scale far higher. If you expect to go far beyond that, a single server may not be enough. MySQL can be tweaked in many ways, which is why so many companies rely heavily on it.

Just a few basics:

  • If the fields you want to update carry large indexes, UPDATE statements are much slower because each statement has to write not only the data but also the index entries.
  • If your UPDATE statement cannot use an index, the server may take much longer to locate the rows that need to be updated.
  • Slow memory and/or slow hard disks can also degrade overall server performance.
  • A slow network connection slows down communication between the client and server.
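As a rough illustration of the first two points, here is a small, self-contained sketch using Python's bundled SQLite as a stand-in (the index effect on UPDATE lookups is the same in spirit, but the absolute numbers will differ wildly from MySQL, so benchmark MySQL itself for real figures):

```python
import sqlite3
import time

# Stand-in demo: how an index changes the cost of finding rows to UPDATE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activity (user_id INTEGER, hits INTEGER)")
conn.executemany("INSERT INTO activity VALUES (?, 0)",
                 ((i,) for i in range(100_000)))

def time_updates(n):
    """Run n single-row UPDATEs keyed on user_id and return elapsed seconds."""
    start = time.perf_counter()
    for uid in range(n):
        conn.execute("UPDATE activity SET hits = hits + 1 WHERE user_id = ?",
                     (uid,))
    conn.commit()
    return time.perf_counter() - start

no_index = time_updates(200)    # each UPDATE scans the whole table
conn.execute("CREATE INDEX idx_user ON activity (user_id)")
with_index = time_updates(200)  # each UPDATE is a cheap index lookup
print(f"no index: {no_index:.3f}s  with index: {with_index:.3f}s")
```

The gap grows with table size, which is exactly why an UPDATE that cannot use an index hurts more and more as your activity table fills up.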

Entire books have been written about this, so I will stop here and simply suggest some further reading if you are interested!

+2

A lot of this depends on the quality of the code you use to push data to the database.

If you write your batch as a separate INSERT query for each value, i.e.

 INSERT INTO table (field) VALUES (value_1);
 INSERT INTO table (field) VALUES (value_2);
 ...
 INSERT INTO table (field) VALUES (value_n);

your performance will crash and burn.

If instead you insert multiple values using a single INSERT, i.e.

 INSERT INTO table (field) VALUES (value_1), (value_2), ... (value_n);

you will find that you can easily insert many records per second.

As an example, I wrote a quick application that added LDAP account request information to a storage buffer. Inserting one row at a time (i.e. LDAP_field, LDAP_value ), the whole script took 10 seconds to complete. When I concatenated the values into a single INSERT query, the script ran in about 2 seconds from start to finish, including the overhead of starting and committing the transaction.
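A sketch of how such batching might look in client code. Python is used purely for illustration, the table and column names are made up, and `%s` placeholders are built for a parameterized driver call rather than concatenating raw values into the SQL (which would invite injection):

```python
def build_batch_insert(table, column, values):
    """Build one multi-row INSERT statement plus its flat parameter list.

    Sending one statement with n value groups avoids n round trips and
    n per-statement commits -- the effect described in the answer above.
    """
    if not values:
        raise ValueError("nothing to insert")
    placeholders = ", ".join(["(%s)"] * len(values))
    sql = f"INSERT INTO {table} ({column}) VALUES {placeholders}"
    return sql, list(values)

sql, params = build_batch_insert("activity_log", "value", ["a", "b", "c"])
# sql    -> "INSERT INTO activity_log (value) VALUES (%s), (%s), (%s)"
# params -> ["a", "b", "c"]
```

You would then hand `sql` and `params` to your driver's execute call in one shot instead of looping over single-row INSERTs.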

Hope this helps.

+2

Source: https://habr.com/ru/post/1439138/
