It looks like you updated your question, which makes my original answer (the TFM excerpt below) obsolete.
I doubt this is possible within curl itself, since each instance of curl acts independently of the others.
You can write a script that spawns curl instances and sets a per-job limit derived from the number of jobs, but that limit will not adjust dynamically. Alternatively, you could approximate a global cap by forcing all of your curl commands through a specific port or network interface and then throttling that with QoS.
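A minimal sketch of the static per-job approach: divide a total budget evenly across the jobs and pass each share to --limit-rate. The total rate and URLs here are placeholders, and the `echo` prints the commands that would run (drop it to actually download):

```shell
#!/bin/sh
# Hypothetical global cap of 1024 KB/s, split evenly across the jobs.
TOTAL_KBPS=1024
URLS="http://example.com/a http://example.com/b http://example.com/c http://example.com/d"

N=$(echo $URLS | wc -w)          # number of parallel jobs
PER_JOB=$((TOTAL_KBPS / N))      # static share per job, set once at launch

for u in $URLS; do
  # Each curl instance gets its own fixed cap; finished jobs do NOT
  # release their share to the others -- this is what "not dynamic" means.
  echo curl --limit-rate "${PER_JOB}K" -O "$u" &
done
wait
```

Note the trade-off: if one download finishes early, the remaining jobs keep their original caps, so the aggregate rate drops below the budget.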
However, you should probably just find a download utility that processes a job queue and can limit the overall speed itself.
From TFM (man curl):

--limit-rate <speed>
Specify the maximum transfer rate that you want to use. This feature is useful if you have a limited channel and you do not want the transfer to use all of your bandwidth.

The given speed is measured in bytes/second, unless a suffix is appended. Appending 'k' or 'K' counts the number as kilobytes, 'm' or 'M' makes it megabytes, while 'g' or 'G' makes it gigabytes. Examples: 200K, 3m and 1G. The given rate is the average speed counted during the entire transfer, which means that curl might use higher transfer speeds in short bursts, but over time it uses no more than the given rate. If you also use the -Y, --speed-limit option, that option takes precedence.
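A quick local sanity check of the flag described above, using a file:// URL as a stand-in for a real download (the 1K cap and temp files are arbitrary placeholders):

```shell
#!/bin/sh
# Create a tiny source file and "download" it through curl with a rate cap.
src=$(mktemp)
dst=$(mktemp)
printf 'hello' > "$src"

# --limit-rate caps the average transfer rate; trivially satisfied here,
# but the same invocation works for http:// URLs.
curl -s --limit-rate 1K -o "$dst" "file://$src"

cat "$dst"   # prints: hello
```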