I run this command every 30 minutes from a bash script (on CentOS 6) to delete files that are about an hour old. The problem is that find uses 45% of my CPU at all times. Is there any way to optimize it? At any given time the cache holds about 200 thousand files.
find /dev/shm/cache -type f -mmin +59 -exec rm -f {} \;
You can try starting the process at a lower priority using nice:
nice -n 19 find ...
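For the record, a hypothetical crontab entry combining nice with the command from the question might look like this (the 30-minute schedule comes from the question; everything else about the entry is an illustrative assumption, not from the original post):

```shell
# Hypothetical crontab entry (illustrative): run every 30 minutes
# at the lowest CPU priority. Note that /dev/shm lives in RAM, so
# the cost here is CPU time spent spawning rm, not disk I/O.
*/30 * * * * nice -n 19 find /dev/shm/cache -type f -mmin +59 -exec rm -f {} \;
```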
Also, find has a -delete action, which is faster than -exec because it removes each file directly instead of spawning an rm process:
find /dev/shm/cache -type f -mmin +59 -delete
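A quick way to see -delete in action, sketched here on a throwaway directory (the file names are made up for illustration):

```shell
# Sketch: -delete removes only files older than the -mmin cutoff.
dir=$(mktemp -d)
touch "$dir/fresh.dat"                   # modified just now
touch -d '2 hours ago' "$dir/stale.dat"  # backdated past the cutoff

find "$dir" -type f -mmin +59 -delete    # same predicate, no rm processes

ls "$dir"                                # only fresh.dat is left
rm -rf "$dir"
```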
Alternatively, if your version of find does not support -delete (as @chepner points out, not all do): the problem with your command is that rm is started once per file, which means roughly 200,000 process launches per run. Instead, let rm receive many file names in a single invocation, which drastically reduces the number of processes the OS has to create, by terminating the -exec clause with + instead of ;:
find /dev/shm/cache -type f -mmin +59 -exec rm -f {} +
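The difference is easy to observe with echo standing in for rm (a sketch on a scratch directory; the file names are illustrative):

```shell
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"

# With \; the command runs once per file: three echo invocations,
# hence three lines of output.
find "$dir" -type f -exec echo {} \;

# With + the names are batched into one invocation (xargs-style):
# a single echo prints all three names on one line.
find "$dir" -type f -exec echo {} +

rm -rf "$dir"
```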
As janos says, -delete is the way to go here; but for the record, the + form of -exec is worth knowing whenever you want to run a command over many files at once, for example:
-exec grep foo {} +
Source: https://habr.com/ru/post/1613553/