I achieved 80 MB/s of random read performance on a "real" disk (spindle). Here are my findings.
So, first determine how much traffic you need to push out to users and how much memory you need on the server.
You can skip the disk setup recommendations below if you already have RAID5 set up.
Let's look at an example: a dedicated server with 1 Gb/s bandwidth and 3 × 2 TB drives. Reserve the first disk for the OS and tmp. From the other 2 disks you can create a software RAID (for me it worked better than the on-board hardware RAID); alternatively, you can split the files across independent disks. The idea is to spread the read and write load evenly across the disks. Software RAID-0 is the best option.
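As a provisioning sketch (the device names /dev/sdb and /dev/sdc and the mount point /raidmount are my assumptions, not from the original; check your layout with lsblk first), the two data disks could be striped with mdadm like this:

```sh
# ASSUMED device names -- verify with lsblk before running; this destroys data.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Format the array and mount it where the large files will live.
mkfs.ext4 /dev/md0
mkdir -p /raidmount
mount /dev/md0 /raidmount

# Persist the array and the mount across reboots.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /raidmount ext4 defaults 0 2' >> /etc/fstab
```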
Nginx conf. There are two ways to achieve a high level of performance using nginx.
Use directio:

    aio on;
    directio 512;
    output_buffers 1 8m;
This option requires that you have a good amount of RAM: about 12-16 GB.
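Putting option 1 together, a minimal sketch of the relevant server block might look like this (the listen port, server_name, and root path are placeholders I added, not from the original):

```nginx
server {
    listen 80;
    server_name files.example.com;   # placeholder name
    root /raidmount;                 # placeholder path to the striped mount

    location /downloads/ {
        aio on;                # asynchronous reads so workers don't block on the disks
        directio 512;          # O_DIRECT for files larger than 512 bytes, bypassing the page cache
        output_buffers 1 8m;   # one 8 MB output buffer per connection
    }
}
```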
Use userland IO:

    output_buffers 1 2m;
Make sure you set the readahead to 4-6 MB on the software RAID mount (or on each independent disk mount):

    blockdev --setra 4096 /dev/md0
This option makes optimal use of the system file cache and requires much less RAM: about 8 GB.
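For option 2 the nginx side is simpler. A minimal sketch (placeholder names again; the `sendfile off;` line is my reading of "userland IO", i.e. nginx reads files into its own buffers and lets the kernel page cache plus the readahead set above do the heavy lifting):

```nginx
server {
    listen 80;
    server_name files.example.com;   # placeholder name
    root /raidmount;                 # placeholder path to the striped mount

    location /downloads/ {
        sendfile off;          # read through userspace buffers ("userland IO")
        output_buffers 1 2m;   # smaller 2 MB buffer per connection
    }
}
```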
General notes:
You may also want to use bandwidth throttling so that hundreds of connections can share the available bandwidth. Each download connection uses about 4 MB of active RAM.
    limit_rate_after 2m;
    limit_rate 100k;
Both of the above solutions scale easily to 1k+ simultaneous users on a 3-disk server, assuming you have 1 Gb/s of bandwidth and each connection is throttled to about 1 Mb/s. One additional setting is needed to optimize writes to disk without affecting reads much.
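The 1k-user figure is simple arithmetic; a quick sketch with the numbers from the text above (note that limit_rate 100k is 100 KB/s, i.e. roughly 0.8 Mb/s, which I round to ~1 Mb/s per connection here):

```python
# Back-of-the-envelope capacity check for the throttled setup.
link_bps = 1_000_000_000       # 1 Gb/s uplink
per_conn_bps = 1_000_000       # ~1 Mb/s per throttled connection

max_users = link_bps // per_conn_bps
print(max_users)               # → 1000 concurrent downloads saturate the link

buffer_mb = 4                  # active RAM per download connection (from the note above)
print(max_users * buffer_mb)   # → 4000 MB of RAM just for output buffers
```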
Make all uploads go to the main OS disk, on a mount such as /tmpuploads. This way heavy reads are not disturbed intermittently by upload writes. Then move the file out of /tmpuploads using dd with oflag=direct. Something like:
    dd if=/tmpuploads/<myfile> of=/raidmount/uploads/<myfile> oflag=direct bs=8196k
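As a sketch, the move step could be wrapped in a small helper (the function name and paths are hypothetical; only the dd invocation comes from the answer):

```shell
# move_upload: hypothetical helper that copies a finished upload onto the
# RAID array with O_DIRECT writes, then removes the staging copy.
# Usage: move_upload /tmpuploads/myfile /raidmount/uploads/myfile
move_upload() {
    src="$1"
    dst="$2"
    # oflag=direct bypasses the page cache, so the bulk write does not
    # evict hot read data; the large block size keeps the write sequential.
    dd if="$src" of="$dst" oflag=direct bs=8196k && rm -f "$src"
}
```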