My question is perhaps poorly worded and stems from my amateurish understanding of memory management.
My concern is this: I have a Perl script that forks many times. As I understand it from the perldoc fork page, fork() is copy-on-write. Each of the children then calls system(), forking again to run an external program. The external program's output is read back into the child and dumped to a save file, to be picked up and processed by the parent after all the children exit.
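To make the structure concrete, here is a minimal sketch of the kind of script I mean; the program name, file names, and child count are placeholders, not my real code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $num_children = 4;                 # placeholder count
    my @pids;

    for my $i (1 .. $num_children) {
        my $pid = fork();                 # COW copy of the parent, per perldoc
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: run the external program (qx// forks again, much like
            # system()), read its output back, then dump it to a save file
            # for the parent to pick up later.
            my $output = `some_external_program`;
            open my $fh, '>', "save_$i.dat" or die "open: $!";
            print $fh $output;
            close $fh;
            exit 0;
        }
        push @pids, $pid;                 # parent remembers each child
    }

    waitpid($_, 0) for @pids;             # wait for all children to exit

    for my $i (1 .. $num_children) {      # parent processes the save files
        open my $fh, '<', "save_$i.dat" or die "open: $!";
        # ... process the contents ...
        close $fh;
    }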
Here is why this situation feels fragile to me. Consider the worst case as I imagine it: for each child, as soon as any new data comes into existence, the entire copy-on-write image gets copied. If that is the case, I will quickly run into memory problems after spawning only a few forks.
Or, alternatively, does copy-on-write copy only the smallest piece of memory that contains the data in question? If so, what are these quanta of memory? How is their size determined?
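For what it's worth, assuming the quantum in question is the hardware page, one way to check the page size from Perl on a POSIX system is:

    use POSIX qw(sysconf _SC_PAGESIZE);
    my $page = sysconf(_SC_PAGESIZE);     # commonly 4096 bytes on Linux/x86
    print "page size: $page bytes\n";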
I am not sure whether the specifics of what I am asking are language-dependent or governed by some lower-level process, such as the operating system.