gc_probability and gc_divisor let you control how likely it is that session garbage collection (GC) runs on any given request.
Since GC (like everything else) has a cost, you usually do not want it to run on every web request your server handles: that would mean every page view, every AJAX call, every image or JS file served by the server makes GC do its work.
So, depending on the server's actual load and usage, the administrator has to make a reasonable estimate of how often GC should run: once every 100, 10,000, or 1,000,000 requests.
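As a rough sketch of how the two directives combine (the 1/1000 value below is purely an illustrative assumption, not a recommendation): on each request that starts a session, GC fires with probability gc_probability / gc_divisor.

```php
<?php
// Illustrative only: with 1/1000, roughly 0.1% of requests trigger GC.
ini_set('session.gc_probability', '1');
ini_set('session.gc_divisor', '1000');

session_start(); // GC may (or may not) fire here, before the session opens
```

The same two values can of course be set once in php.ini instead of per script.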
But there is a flaw in the OP's original premise: the assumption that garbage collection only affects idle sessions. As I read the manual, garbage collection can affect ANY session, not just an idle one:
session.gc_maxlifetime integer: specifies the number of seconds after which data will be seen as "garbage" and potentially cleaned up.
So the session's lifetime (idle or not) is governed by gc_maxlifetime, while the moment GC actually runs (hence the "potentially" in the docs) is governed by gc_probability and gc_divisor.
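Tying the two halves together in a minimal sketch (PHP's defaults of 1440 seconds and 1/100 are used here just for illustration): gc_maxlifetime only marks data as expired; it is actually removed on whichever later request happens to run GC.

```php
<?php
// Defaults shown for illustration: data idle for more than 1440 s (24 min)
// becomes eligible for deletion, but is only swept on the ~1% of requests
// where GC actually runs.
ini_set('session.gc_maxlifetime', '1440');
ini_set('session.gc_probability', '1');
ini_set('session.gc_divisor', '100');

session_start();
```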
To sum up, my (late) answer to the question would be: under normal conditions I would not have GC run on every request (the 1/1 scenario you mentioned), because:
- it seems seriously wasteful: at any real scale you would likely end up with thousands (if not more) of IFs for every time you actually hit the THEN;
- ANY user would be logged out after 60 minutes, not just inactive ones.
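For contrast, the 1/1 setup from the question would look roughly like this (a sketch; the 60-minute lifetime echoes the figure above):

```php
<?php
// The 1/1 scenario: GC runs on EVERY request that starts a session.
// Each page view, AJAX call, or PHP-served asset pays the GC cost, and any
// session idle longer than 60 minutes is swept on the very next request.
ini_set('session.gc_probability', '1');
ini_set('session.gc_divisor', '1');
ini_set('session.gc_maxlifetime', '3600');

session_start();
```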