Save all user actions as session values and write them to the database at the end of the session

Usually I record all user activity, page views and related values, by executing a query on every page load, in PHP.

  • Now I am thinking about keeping all of these values in the session and executing the query once, at the end of the session. But I don't know how to handle this "end of session" event in PHP.

I was thinking of a background process that stores all session values immediately before the session timeout, but I know nothing about multithreading in PHP, although I have heard that workarounds are possible.

  • I would like to detect that a user has gone inactive without a user-triggered event (for example, a page load) and save all session values at the end of the session, before the timeout.

This would take some load off the database engine: as the user browses through many pages, I currently run a query like

update pageviews set numberofviews=numberofviews+1 

for each page load; with the session approach I would increment a session value instead and write it out once.
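A minimal sketch of the session-side counting this amounts to (the helper name and the flush threshold are my own assumptions, not from the question):

```php
<?php
// Buffer page views in the session and flush them in one query instead of
// running an UPDATE on every page load. bufferPageView() and the threshold
// of 25 are illustrative assumptions.

/** Count one view in the session; return true when it is time to flush. */
function bufferPageView(array &$session, int $threshold = 25): bool
{
    $session['pending_views'] = ($session['pending_views'] ?? 0) + 1;
    return $session['pending_views'] >= $threshold;
}

// Typical use on each request:
// session_start();
// if (bufferPageView($_SESSION)) {
//     $pdo->prepare('UPDATE pageviews SET numberofviews = numberofviews + ?')
//         ->execute([$_SESSION['pending_views']]);
//     $_SESSION['pending_views'] = 0;
// }
```

Note that this still loses whatever is buffered when the session ends before the threshold is reached, which is exactly the "end of session" problem being asked about.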

  • Question: How can I handle the end of a session in PHP without any user activity?

Solutions to the underlying problem are also welcome, even if they do not answer this exact question. (I haven't started writing code yet; my question is not about fixing bugs.)

Possible solutions:

I found PHP's session save handler function, session_set_save_handler, but the documentation says nothing about whether null can be passed for its callback parameters.

Will I lose the standard functionality if I pass null for the callback parameters?

I want to change only the $destroy session callback. I need help with this.


+4
source
8 answers

You cannot rely on the session destroy handler, because it is only called when you explicitly call session_destroy(), which will not happen if the user simply closes the browser and lets the session expire. Since there is no way to capture a session-end event in that scenario, you will have to check periodically whether a session has been idle for a certain period. The easiest way to do this is to keep a field in the session containing the last access time, then have a script that runs periodically (via cron, or piggybacked on your other PHP scripts) and checks this field for all active sessions; for any session whose timestamp is older than xx seconds, run your update query.

Note that to do this with standard PHP sessions, which are stored on the file system, you will actually have to open the session files and parse them directly. The alternative is to store the sessions in the database, but that is probably not what you want to do, given that you are trying to reduce the load on the db.
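A rough sketch of that file-scanning cron script, assuming the default files-based session handler (the idle limit, directory fallback, and what you do with the payload are all assumptions):

```php
<?php
// Scan the session directory for idle sessions and flush them to the DB.
// MAX_IDLE and the fallback path are assumptions; the decode/UPDATE step
// is left as a comment because it depends on your schema.

const MAX_IDLE = 1200; // treat sessions idle for 20 minutes as ended

/** True if a session file has not been written for at least $maxIdle seconds. */
function isIdle(int $mtime, int $now, int $maxIdle = MAX_IDLE): bool
{
    return ($now - $mtime) >= $maxIdle;
}

$dir = session_save_path() ?: '/var/lib/php/sessions';

foreach (glob($dir . '/sess_*') ?: [] as $file) {
    // A session file's mtime is updated on every write, so an old mtime
    // means the session has been idle at least that long.
    if (!isIdle(filemtime($file), time()) || !is_readable($file)) {
        continue;
    }
    $raw = file_get_contents($file); // serialized $_SESSION payload
    // ... decode $raw (session_decode format), run your UPDATE, then:
    // unlink($file);
}
```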

Another possibility that is less involved:

  • Add a timestamp of the last session access
  • Check this timestamp every time you update the session
  • If the timestamp is older than xx seconds, run the update query against the db and reset the timestamp to the current time.

This way you update the database every so often rather than on every page load. The drawback of this approach is that any activity that happens after the last flush and before the session expires will never make it into the db.
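The timestamp check described above might look like this (the function name and the 300-second interval are assumptions):

```php
<?php
// Flush session data to the DB only when the last flush is older than
// $interval seconds, instead of on every page load.

function dueForFlush(array $session, int $now, int $interval = 300): bool
{
    return ($now - ($session['last_flush'] ?? 0)) >= $interval;
}

// Per request:
// session_start();
// if (dueForFlush($_SESSION, time())) {
//     // run the UPDATE query here, then:
//     $_SESSION['last_flush'] = time();
// }
```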

+3
source

Sessions were not designed for this, so what you are trying to achieve is not possible with session mechanics alone.

With standard PHP sessions, your options are somewhat limited and depend on session storage:

  • By default, sessions are stored in files. If you turn off session garbage collection, you can run a cron job that parses session files older than a threshold you define. This is not perfect, but it is certainly the easiest option. Note that file-based sessions are painful anyway: they write to disk on every request.
  • Sessions in Memcached. With no good way to collect records older than x, and the fact that Memcached simply drops data when it expires, there is not much to work with here.
  • Sessions in the database: you already have an update on every request.

So your options with standard sessions are limited at best. My recommendation: replace the standard session with your own implementation, but that means you will no longer access the session through $_SESSION, and you will need to rewrite a lot of your code.

I replaced the regular session with a singleton that stores session data in APC/Memcached as well as in the DB. Session data is written to the database only if I request it explicitly ( Session::persistentStore($key,$value) ), or if, while handling a request, the APC/Memcached copy of the session shows that it has not been written for long enough. This greatly limits the number of writes. With this replacement you can also force sessions to be flushed to the database: fetch from the db the session IDs of sessions that have not been updated in x minutes, fetch the APC/Memcached data for each of them, and if there is no newer information there either (indicating that the session is about to end), save it to the db.
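The answerer's class is not shown; a minimal sketch of the same idea using PHP's SessionHandlerInterface, with APCu standing in for APC/Memcached and an assumed sessions(id, data, updated_at) table, could look like this:

```php
<?php
// Sketch only, not the answerer's actual singleton. Live session data is
// kept in APCu; it is written through to the DB only when it has not been
// persisted for $persistAfter seconds. Table and key names are assumptions.

class CachedSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $pdo, private int $persistAfter = 300) {}

    /** True when the last DB write is old enough to warrant another one. */
    public static function needsPersist(int $lastWrite, int $now, int $after): bool
    {
        return ($now - $lastWrite) >= $after;
    }

    public function open(string $path, string $name): bool { return true; }
    public function close(): bool { return true; }

    public function read(string $id): string
    {
        return apcu_fetch("sess_$id") ?: '';
    }

    public function write(string $id, string $data): bool
    {
        apcu_store("sess_$id", $data);
        $last = apcu_fetch("sess_{$id}_persisted") ?: 0;
        if (self::needsPersist($last, time(), $this->persistAfter)) {
            $this->pdo->prepare(
                'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())'
            )->execute([$id, $data]);
            apcu_store("sess_{$id}_persisted", time());
        }
        return true;
    }

    public function destroy(string $id): bool
    {
        apcu_delete("sess_$id");
        return true;
    }

    public function gc(int $max_lifetime): int|false { return 0; }
}

// Install it before session_start():
// session_set_save_handler(new CachedSessionHandler($pdo), true);
```

Unlike the singleton described above, this keeps the $_SESSION interface, at the cost of losing the explicit persistentStore() control.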

+2
source

Just a wild thought: you could fire off an HTTP request from JavaScript to save your session data when the window unloads. The problem is that browsers usually abort requests once the page is closed, so the request may never reach the server, but I think it is worth testing. (Modern browsers offer navigator.sendBeacon for exactly this case.)

+1
source

Workarounds aside, every solution you can find for this problem will be less efficient than simply recording each user action on each request.

+1
source

What problem are you actually trying to solve? I do not see why you would not just run the update query against the database on every request. It should not take much more than 0.0001 seconds.

I would definitely go with updating on each request. It is one line of code, does not put excessive load on the database, and is much more efficient than the workarounds.

The proposed method is difficult to test, creates undesirable load, and can even introduce global state. You can avoid all of this by simply shipping the usual solution. Sessions are (...) when it comes to determining when they have "ended".

+1
source

I was thinking of a background process that stores all session values just before the session timeout, but I know nothing about multithreading in PHP, although I heard some workaround is possible.

You can do this quite easily if you use DB-based sessions.

  • No multithreading required
  • Write a cron task that runs every minute or every five minutes and:
    • selects all records that are about to expire
    • saves them to another table

The key is to use the cron job and not attempt to handle the session-expiry event during an HTTP request.
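Assuming DB-backed sessions in a hypothetical table sessions(id, data, last_access), the cron task might be as simple as this sketch (table names and the 20-minute lifetime are assumptions):

```php
<?php
// Cron sketch: move sessions that are about to expire into an archive
// table in one transaction. Run it every minute or so.

/** Sessions last accessed before this moment are considered expired. */
function expiryCutoff(int $now, int $lifetime = 1200): int
{
    return $now - $lifetime;
}

// $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
// $cutoff = expiryCutoff(time());
// $pdo->beginTransaction();
// $pdo->prepare('INSERT INTO session_archive (id, data, last_access)
//                SELECT id, data, last_access FROM sessions WHERE last_access < ?')
//     ->execute([$cutoff]);
// $pdo->prepare('DELETE FROM sessions WHERE last_access < ?')->execute([$cutoff]);
// $pdo->commit();
```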

+1
source

You are talking about deferring statistics writes and session management. Your real problem, I think, is avoiding a write query against the database on every HTTP request.

Some tips that may be helpful for such problems:

  • Be careful with concurrent access to a session. We usually assume a session is accessed by only one process at a time, but that is wrong: say you are doing a multi-part upload with JavaScript opening several AJAX requests to the server; each of those requests uses the same session. The same happens when you open 10 tabs on the same site in your browser.
  • Adding an update query to every request is expensive. Select queries are fast and usually benefit from caching on the SQL side. Update queries are slow, may trigger index rebuilds, and usually mean flushing caches on the SQL server. They may also imply a separate SQL connection if you use different database users for read-only and read-write access.
  • The session should not grow too large, since it must be loaded on every request, and every time you write something to the session it is locked (see the first point) and the data is written to the physical store (files by default). So session writes slow down your requests.
  • A NoSQL backend such as MongoDB or Redis can be a good place to store statistics on demand. They have the tools to provide atomic updates and locks, and can do this work faster than a relational database (but you could also check the performance of in-memory tables in your relational database). You can use these tools to track activity and avoid SQL updates, treating them as the primary store for live statistics, and asynchronously aggregate statistics from them into your SQL database. Sessions are temporary objects anyway, and these NoSQL backends can also make good session stores (if you write your own session save handlers).
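One concrete way to limit the locking issue from the first point (this is standard PHP, not something from the list above): release the session lock as soon as you are done with the session.

```php
<?php
// With file-based sessions, session_start() takes an exclusive lock on the
// session file that is held until the script ends, so parallel AJAX
// requests for the same session run one at a time. Closing the session
// early releases the lock.

session_start();
$userId = $_SESSION['user_id'] ?? null; // read/write the session first

session_write_close(); // persist and unlock; later $_SESSION writes are lost

// Slow work (DB queries, API calls) now no longer blocks sibling requests.
```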

EDIT: an example of the atomic-update problem under concurrent use: executing a single query that increments the counter (SET counter = counter + 1) is not the same as reading the counter, incrementing it in PHP, and then executing an update query with SET counter = mynewvalue. The second form is incorrect when several processes work on the same data.
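The lost-update race from that edit can be demonstrated deterministically with plain variables standing in for two concurrent requests:

```php
<?php
// Two "requests" each read the shared counter, then write back read + 1.
// This mirrors SELECT-then-UPDATE; a single atomic
// UPDATE ... SET counter = counter + 1 would not have this problem.

$counter = 0;

$readA = $counter; // request A reads 0
$readB = $counter; // request B reads 0 before A has written

$counter = $readA + 1; // A writes 1
$counter = $readB + 1; // B also writes 1: A's increment is lost

// $counter ends up 1, not 2.
```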

+1
source

A little off track, but you can implement your own garbage collection.

  • Configure PHP to store sessions in a directory, preferably on tmpfs, with garbage collection disabled.

  • Write a periodic PHP script that finds all session files older than 20 minutes, opens them, writes their contents to the database, and deletes them.

0
source

Source: https://habr.com/ru/post/1383168/
