A large number of inserts per second leads to high processor load

I have a PHP script that inserts a new row into a MySQL database on each run (with a relatively small amount of data). I get more than 20 queries per second, and this makes my processor scream for help.

I am using INSERT DELAYED with the MyISAM engine (although I have just noticed that INSERT DELAYED is not working for me).

My main concern is the load on my processor, so I started looking for ways to store this data in a more CPU-friendly way.

My first idea was to write this data to an hourly log file and, once an hour, extract the data from the log and insert it into the database in one go.

Perhaps a better idea is to use a NoSQL DB instead of log files, and then once an hour insert the data from NoSQL into MySQL.

I have not tested either of these ideas yet, so I don't really know whether they would reduce the CPU load. I wanted to ask if someone could help me find the solution with the lowest impact on my processor.

+4
6 answers

OK guys, I managed to significantly reduce the CPU load using the APC cache.

I do it like this:

I store the data in memory using the APC cache, with a TTL of 70 seconds:

 apc_store('prfx_SOME_UNIQUE_STRING', $data, 70); 

Once a minute I iterate over all entries in the cache:

 $apc_list = apc_cache_info('user');
 foreach ($apc_list['cache_list'] as $apc) {
     if ((substr($apc['info'], 0, 5) == 'prfx_') && ($val = apc_fetch($apc['info']))) {
         $values[] = $val;
         apc_delete($apc['info']);
     }
 }

then insert $values into the DB in one batch,
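A minimal sketch of that final step, assuming $values holds the strings collected from APC above; the connection details and the table/column names (log_table, data) are made up for illustration:

```php
<?php
// Build one multi-row INSERT from the rows collected out of APC,
// instead of issuing one INSERT per row.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

$rows = [];
foreach ($values as $val) {
    // Escape each value and build one tuple per cached entry
    $rows[] = "('" . $db->real_escape_string($val) . "')";
}

if ($rows) {
    // A single statement carrying all rows collected this minute
    $db->query('INSERT INTO log_table (data) VALUES ' . implode(',', $rows));
}
```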

and the processor continues to smile.


+4

I had a very similar problem recently, and my solution was simply to use batch inserts. This sped things up by about 50 times, thanks to the reduced MySQL connection overhead, and the amount of re-indexing was also significantly reduced. Storing the rows in a file and then doing one larger insert (100-300 individual rows) at once is probably a good idea. To speed things up even more, disable indexing while you insert:

 ALTER TABLE tablename DISABLE KEYS;
 -- insert statements here
 ALTER TABLE tablename ENABLE KEYS;

Doing a batch insert reduces the number of running instances of the PHP script, reduces the number of MySQL handles open at any moment (a significant improvement), and reduces the amount of index updating.
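For reference, the batched statement itself can be a single multi-row INSERT; the table and column names below are illustrative, not from the original answer:

```sql
-- One statement carrying many rows: one connection, one parse,
-- and the indexes are updated in bulk rather than once per row
INSERT INTO tablename (col_a, col_b) VALUES
    (1, 'first'),
    (2, 'second'),
    (3, 'third');
```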

+4

I would put sleep(1); at the top of your PHP script, before each insert at the top of your loop, where 1 = 1 second. This limits the loop to one cycle per second.

This way it will regulate how hard the processor is loaded; it would be ideal if you also wrote only a small number of records on each run.

You can read more about the sleep function here: http://php.net/manual/en/function.sleep.php
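A minimal sketch of the suggestion; do_insert() is a hypothetical stand-in for the actual insert code, not a function from the question:

```php
<?php
// Throttled worker loop: at most one small batch of inserts per second
while (true) {
    sleep(1);      // pause for 1 second each cycle
    do_insert();   // write a small number of records per run
}
```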

0

It's hard to say without profiling both methods. If you write to a log file first, you could simply make things worse, taking your operations from N to N*2. You do gain a slight advantage from writing everything to a file and inserting it as a batch, but remember that as the log file fills up, its load/write time increases.

To reduce the load on the database, look at using memcache for your database reads, if you are not doing so already.
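A sketch of such a read-through cache, assuming the Memcached PHP extension; the key scheme and the load_from_db() helper are made-up illustrations:

```php
<?php
// Read-through cache: hit memcached first, fall back to MySQL on a miss
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

function get_row(Memcached $mc, int $id)
{
    $key = 'row_' . $id;
    $row = $mc->get($key);
    if ($row === false) {           // cache miss: read from MySQL once
        $row = load_from_db($id);   // hypothetical DB helper
        $mc->set($key, $row, 60);   // cache the result for 60 seconds
    }
    return $row;
}
```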

All in all though, you are probably best off trying both and measuring which is faster.

0

Since you were trying INSERT DELAYED, I assume you do not need the data to be up to the second. If you want to stick with MySQL, you can try using replication and the BLACKHOLE table type. By declaring a table as BLACKHOLE on one server and then replicating it to a MyISAM or other table type on another server, you can smooth out the CPU and I/O spikes. A BLACKHOLE table is essentially just a replication log file, so inserts into it are very fast and light on the system.
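A sketch of that setup; the table name and columns below are assumptions for illustration, not from the original answer:

```sql
-- On the write-heavy server: BLACKHOLE stores nothing locally,
-- it only writes each insert to the binary log for replication
CREATE TABLE events (
    created_at DATETIME NOT NULL,
    payload    VARCHAR(255)
) ENGINE = BLACKHOLE;

-- On the replica: the same table, but with an engine that keeps the data
CREATE TABLE events (
    created_at DATETIME NOT NULL,
    payload    VARCHAR(255)
) ENGINE = MyISAM;
```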

0

Thanks for the posts. I solved my main problem of the processor being overloaded.

0

Source: https://habr.com/ru/post/1397054/

