I wrote a simple Perl script that I run through FastCGI on Apache. The application loads a set of XML data files, which it searches for values matching the parameters of each incoming request. As I understand it, to increase the number of simultaneous requests my application can handle, I need to let FastCGI spawn many processes. Will each of these processes hold a duplicate copy of the XML data in memory? Is there some configuration that would let me keep a single copy of the XML data in memory while still handling concurrent requests?
As pilcrow correctly answered, FastCGI itself provides no special mechanism for sharing data between processes; his answer lists the traditional ways of reducing memory usage.
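One of those traditional techniques is to parse the data once in the parent process, before the workers are forked, so that the children inherit it copy-on-write. This is a minimal sketch using only `fork` from core Perl (in a real deployment a manager such as FCGI::ProcManager would do the forking); the `%lookup` hash is a stand-in for the parsed XML:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parse the data set ONCE in the parent, before forking workers.
# Forked children inherit these pages copy-on-write. Caveat: Perl
# updates reference counts even on reads, which dirties some pages,
# so the sharing is real but imperfect.
my %lookup = (foo => 1, bar => 2);   # stand-in for the parsed XML data

my @pids;
for my $n (1 .. 3) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                 # child: plays the role of one worker
        # a worker only READS %lookup; it never copies or re-parses it
        exit($lookup{foo} == 1 ? 0 : 1);
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;
print "workers done\n";
```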
Another possibility is a persistent, non-FastCGI process that reads the XML files and acts as a data server for the FastCGI processes. How well this works depends on how complex the queries are and how much data has to be shipped back and forth, but it does keep a single copy of the data in memory.
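The data-server idea can be sketched with a Unix domain socket and only core modules. The socket path and the wire protocol (one newline-terminated key per connection, one value back) are invented for illustration; `%data` stands in for the parsed XML:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Socket;
use IO::Socket::UNIX;

my $path = "/tmp/xml_lookup_$$.sock";   # hypothetical socket path
unlink $path;

# The persistent server holds the only in-memory copy of the data.
my %data = (widget => '42');            # stand-in for the parsed XML

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {                        # server process
    my $srv = IO::Socket::UNIX->new(
        Type => SOCK_STREAM, Local => $path, Listen => 5,
    ) or die "listen: $!";
    while (my $conn = $srv->accept) {   # one lookup per connection
        chomp(my $key = <$conn>);
        my $val = defined $data{$key} ? $data{$key} : 'NOT FOUND';
        print $conn "$val\n";
        close $conn;
        last if $key eq 'QUIT';
    }
    exit 0;
}

# Client side: what each FastCGI worker would do per request,
# instead of parsing the XML itself. Retry until the server binds.
my $cli;
for (1 .. 50) {
    $cli = IO::Socket::UNIX->new(Type => SOCK_STREAM, Peer => $path);
    last if $cli;
    select undef, undef, undef, 0.1;
}
die "connect: $!" unless $cli;
print $cli "widget\n";
chomp(my $answer = <$cli>);
print "widget => $answer\n";

# Shut the server down and clean up.
my $q = IO::Socket::UNIX->new(Type => SOCK_STREAM, Peer => $path)
    or die "connect: $!";
print $q "QUIT\n";
waitpid($pid, 0);
unlink $path;
```

Whether this beats per-process copies depends on the query pattern: a cheap key lookup over a socket wins easily, while queries that return large slices of the data may spend the savings on serialization.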
Memory is shared between individual FastCGI processes only to the same extent as between any ordinary, separate processes, which is to say: not in any way that helps us here.
(FastCGI itself knows nothing about your XML data, so it has no setting that would share one copy of it between processes.)
If it is parsing the XML that costs the memory (rather than the raw data itself), you could pre-process the XML once into an on-disk keyed format such as a GDBM file, which each process then opens read-only. The data stays on disk, cached once by the operating system's buffer cache, instead of being duplicated in every process's heap.
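A sketch of that tie approach. SDBM_File (always in the Perl core) is used here so the example runs anywhere; GDBM_File, as suggested above, exposes the identical `tie` interface. The cache path and the `%parsed` hash are illustrative stand-ins:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;
use SDBM_File;   # core module; GDBM_File has the same tie interface

my $dbfile = "/tmp/xml_cache_$$";   # hypothetical cache location

# One-time conversion: flatten the parsed XML into key/value pairs.
{
    my %parsed = (widget => '42', gadget => '7');   # stand-in for the XML
    tie my %db, 'SDBM_File', $dbfile, O_RDWR | O_CREAT, 0666
        or die "tie (write): $!";
    %db = %parsed;
    untie %db;
}

# Each FastCGI worker opens the file read-only; the OS buffer cache
# keeps one shared copy of the pages, not one copy per process.
tie my %db, 'SDBM_File', $dbfile, O_RDONLY, 0666
    or die "tie (read): $!";
my $val = $db{widget};
print "widget => $val\n";
untie %db;
unlink "$dbfile.pag", "$dbfile.dir";   # SDBM stores its data in two files
```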
Source: https://habr.com/ru/post/1745041/