Although I usually recommend that users store their data in a database rather than in flat files, this case is an exception. In general, a database adds little overhead compared with files, while providing greater flexibility of access and eliminating many locking problems. However, unless you expect your pages to run noticeably slower for users who access the same session from multiple browser windows, concurrency will not be a big problem; in other words, using any database will be slower than the file-based approach.
(Also, if you intend to have a large cluster of web servers, more than 200 or so, sharing the same sessions, then yes, a distributed database can outperform a clustered file system on a SAN.)
You should also think about how often the session is actually written. By default, the handler writes the data back to disk on every request, whether it has changed or not. For a session this large, I suggest writing your own session save handler which not only writes the serialized session data to a file but also keeps a hash of that serialized data: when you read the session in, store the hash in a static variable; in the write handler, compute a new hash and compare it with the one captured at read time, and only write the session out if it has changed. You can take this further by using heuristics to split the session into parts that are updated frequently and parts that change rarely, and writing them to separate files. A sketch of the basic idea follows.
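A minimal sketch of such a handler, assuming PHP 8+ and the standard SessionHandlerInterface; the class name and the choice of md5 for the change check are illustrative, not prescribed:

```
<?php
// Minimal sketch: a file-based save handler that skips the disk write
// when the serialized session data has not changed since it was read.
class HashCheckingSessionHandler implements SessionHandlerInterface
{
    private string $savePath;
    private ?string $readHash = null; // hash of the data as it was read in

    public function open(string $path, string $name): bool
    {
        $this->savePath = $path !== '' ? $path : sys_get_temp_dir();
        return true;
    }

    public function close(): bool
    {
        return true;
    }

    public function read(string $id): string|false
    {
        $file = $this->savePath . '/sess_' . $id;
        $data = is_readable($file) ? (string) file_get_contents($file) : '';
        $this->readHash = md5($data); // remember what was loaded
        return $data;
    }

    public function write(string $id, string $data): bool
    {
        // Only hit the disk if the serialized data actually changed.
        if ($this->readHash !== null && md5($data) === $this->readHash) {
            return true;
        }
        $file = $this->savePath . '/sess_' . $id;
        return file_put_contents($file, $data, LOCK_EX) !== false;
    }

    public function destroy(string $id): bool
    {
        $file = $this->savePath . '/sess_' . $id;
        if (is_file($file)) {
            unlink($file);
        }
        return true;
    }

    public function gc(int $max_lifetime): int|false
    {
        $removed = 0;
        foreach (glob($this->savePath . '/sess_*') as $file) {
            if (filemtime($file) + $max_lifetime < time()) {
                unlink($file);
                $removed++;
            }
        }
        return $removed;
    }
}

session_set_save_handler(new HashCheckingSessionHandler(), true);
session_start();
```

Note that this sketch omits the per-session flock() locking that the built-in files handler performs, so two requests for the same session can still race each other; a production handler would want that back.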
Using compression here will not really help performance.
Of course there is plenty of scope for optimization in the OS-level settings too, but you do not say what your OS is. Assuming it is POSIX and your system is not already on its knees, your main performance hits will be:
the latency of accessing the data file, and parsing (unserializing) the data
(the time to actually read the file is relatively small, and writes should be buffered).
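If you want to see where the time actually goes on your own system, a rough micro-benchmark along these lines can help; the path and payload are purely illustrative:

```
<?php
// Rough micro-benchmark: compare the cost of reading a session-sized
// file with the cost of unserializing its contents.
$file = sys_get_temp_dir() . '/sess_benchmark';
file_put_contents($file, serialize(array_fill(0, 100000, 'payload')));

$t0 = microtime(true);
$raw = file_get_contents($file);
$t1 = microtime(true);
$data = unserialize($raw);
$t2 = microtime(true);

printf("read: %.4f ms, unserialize: %.4f ms\n",
    ($t1 - $t0) * 1000, ($t2 - $t1) * 1000);
```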
As long as there is enough filesystem cache, the file will be read from memory rather than from disk, so latency will be low.