Is all memory marked as copy-on-write copied after a single change to one piece of data?

My question is perhaps poorly worded and stems from my amateurish understanding of memory management.

My concern is this: I have a Perl script that forks many times. As I understand it, fork is copy-on-write, as described on the perldoc fork page. Each of the children then calls system() , forking again to invoke an external program. The output of the external program is read back by the child and then discarded, except for a saved file that the parent must pick up and process after all the children exit.
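The workflow described above can be sketched roughly as follows. This is a minimal illustration, not the asker's actual script; the file names, the loop count, and the `echo` standing in for the external program are all placeholders:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);   # scratch directory for per-child files
my @pids;

for my $i (1 .. 3) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # child: run an external program and save its output to a file
        system("echo result-$i > $dir/child-$i.out");
        exit 0;
    }
    push @pids, $pid;              # parent: remember the child
}
waitpid($_, 0) for @pids;          # wait for all children to exit

my @results;                       # parent: collect and process the files
for my $i (1 .. 3) {
    open my $fh, '<', "$dir/child-$i.out" or die "open: $!";
    chomp(my $line = <$fh>);
    push @results, $line;
}
print "@results\n";
```

Every child inherits the parent's address space copy-on-write at the moment of its fork(), which is what raises the memory question below.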

Here, as I see it, is the vulnerability in this situation. The worst-case scenario in my mind: for each of the children, as soon as any new data is written, the entire copy-on-write memory is copied. If so, I will quickly run into memory problems after spawning just a few forks.

Or, on the contrary, does copy-on-write copy only the smallest piece of memory that contains the changed data? In that case, what is this quantum of memory? How is its size determined?

I am not sure whether the specifics of what I am asking depend on the language or on some lower-level process.

+4
2 answers

Memory is organized in pages, usually 4K each (this can be configured to other values and depends on the hardware, but 4K is the norm on Intel platforms with mainstream operating systems). When the child process writes to a copy-on-write page, only that page is copied.
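You can confirm the page size on your own system with the POSIX sysconf interface (available from Perl's core POSIX module). A minimal check:

```perl
use strict;
use warnings;
use POSIX qw(sysconf _SC_PAGESIZE);

my $page_size = sysconf(_SC_PAGESIZE);
print "page size: $page_size bytes\n";   # typically 4096 on x86 hardware
```

On most x86 Linux systems this reports 4096, though other platforms (for example some ARM64 systems) use larger pages.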

+6

Yes, forking will increase your memory footprint. If this is a problem, use a module such as Parallel::ForkManager or Forks::Super , which can throttle the number of active background processes. Limiting the number of active forks is also a good idea when your processes are CPU-bound, I/O-bound, or might otherwise overuse some limited resource on your machine.

 use Forks::Super MAX_PROC => 10, ON_BUSY => 'block';
 ...
 $pid = fork();   # blocks if there are already 10 child processes
 ...              # unblocks when one of the children finishes
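If you would rather not pull in a CPAN module, the same throttling idea can be hand-rolled with the built-in fork and waitpid: block before forking whenever the number of live children reaches the limit, and reap one first. A minimal sketch (the limit and task count are arbitrary, and the child body is left empty as a placeholder for real work):

```perl
use strict;
use warnings;

my $MAX_PROC = 10;                # cap on simultaneous children
my $active   = 0;

for my $task (1 .. 25) {
    if ($active >= $MAX_PROC) {   # at the cap: block until one child exits
        waitpid(-1, 0);
        $active--;
    }
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # child: do the real work for $task here
        exit 0;
    }
    $active++;                    # parent: one more child in flight
}

while ($active > 0) {             # reap the remaining children
    waitpid(-1, 0);
    $active--;
}
```

This is essentially what the ON_BUSY => 'block' setting above does for you, without the module's extra features such as job queues and timeouts.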
+6
source

Source: https://habr.com/ru/post/1336194/

