How is data shared between FastCGI processes?

I wrote a simple Perl script that I run through FastCGI on Apache. The application loads a set of XML data files that are used to look up values based on the parameters of an incoming request. As far as I understand, if I want to increase the number of simultaneous requests my application can handle, I need to allow FastCGI to spawn multiple processes. Will each of these processes hold a duplicate copy of the XML data in memory? Is there a way to configure things so that only one copy of the XML data is loaded into memory while still increasing the capacity to handle concurrent requests?

2 answers

As pilcrow correctly answered, FastCGI does not provide any special way to share data between processes, and his answer lists the traditional ways to reduce memory usage.

Another possibility is a persistent, non-FastCGI process that reads the XML once and acts as a data server for the FastCGI processes. How effective this is depends on how complex the queries are and how much data has to be shipped back and forth, but it would keep only one copy of the data in memory.
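A minimal sketch of that data-server idea, shown in Python rather than the Perl of the original post: one long-lived process holds the parsed data and workers query it over a socket, so only the requested values (not the whole dataset) cross process boundaries. The dataset, key names, and protocol here are all illustrative assumptions, not taken from the original application.

```python
# One persistent process holds a single parsed copy of the data; worker
# processes (simulated here by the query() function) ask it for individual
# values over a TCP socket instead of each loading their own copy.
import json
import socket
import socketserver
import threading

# Stand-in for the data parsed once from the XML files (illustrative).
LOOKUP = {"sku-1": "widget", "sku-2": "gadget"}

class LookupHandler(socketserver.StreamRequestHandler):
    def handle(self):
        key = self.rfile.readline().decode().strip()
        # Only the requested value crosses the socket, not the dataset.
        self.wfile.write(json.dumps(LOOKUP.get(key)).encode() + b"\n")

def query(port, key):
    """What each FastCGI worker would do per request: connect, ask, read."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(key.encode() + b"\n")
        return json.loads(s.makefile().readline())

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), LookupHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

found = query(port, "sku-1")
missing = query(port, "no-such-key")
server.shutdown()
print(found, missing)
```

In a real deployment the server would parse the XML at startup and listen on a Unix socket; whether the round trip is cheaper than per-process copies depends on query complexity, as noted above.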


Memory is not shared between individual FastCGI processes any more than it is between ordinary, separate processes; that is, for our purposes the data is not shared.

(FastCGI itself offers no special mechanism for sharing the XML data between processes.)

However, you can reduce memory usage by not keeping the entire XML in memory at once. You could, for example, preprocess the XML once into an on-disk database such as a "tied" GDBM file keyed on the values you search for, so that each process reads only the entries it actually needs.
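The tied-GDBM idea above can be sketched as follows. The original answer is about Perl's `GDBM_File` tie; this illustration uses Python's standard `dbm` module, which provides the same key/value-file pattern. The XML snippet, file path, and key names are made-up examples, not from the original post.

```python
# Sketch: preprocess the XML once into an on-disk key/value database, then
# have each worker open it read-only and fetch only the entries it needs.
# Python's dbm stands in for Perl's tied GDBM_File here.
import dbm
import os
import tempfile
import xml.etree.ElementTree as ET

# Illustrative stand-in for one of the downloaded XML data files.
xml_doc = "<items><item id='sku-1'>widget</item><item id='sku-2'>gadget</item></items>"

path = os.path.join(tempfile.mkdtemp(), "lookup.db")

# One-time preprocessing step (run once, not per request).
with dbm.open(path, "c") as db:
    for item in ET.fromstring(xml_doc):
        db[item.get("id")] = item.text

# Per-request lookup in a worker: open read-only, touch only one key,
# so the process never holds the whole dataset in memory.
with dbm.open(path, "r") as db:
    value = db[b"sku-1"].decode()

print(value)
```

The OS page cache then effectively gives you the "one shared copy" the question asks about, since every process reads the same file.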


Source: https://habr.com/ru/post/1745041/
