This is more a question of application architecture / design than of programming, so there is no single correct answer in theory.
In practice, however, Redis, Memcached, and most similar stores are not meant to hold very large (or rapidly growing) data sets.
As an in-memory NoSQL store, Redis keeps the whole dataset in RAM, optionally persisting a copy to disk. So while there is no hard limit on the amount of data you can save, ideally it should always fit within the memory you plan to allocate to Redis.
The simplest solution that covers all cases is to write user activity data to Redis as it is created and use Redis to serve notifications and similar views. Then run a cron job that trims all activity older than a specified number of days (or beyond a predefined number of entries per user) from Redis, archiving it to a regular database.
When a user wants their full notification history (an infrequent, non-critical request), accept the small speed penalty and pull it from the database, bypassing Redis.
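A minimal sketch of that write-through pattern. A plain dict stands in for Redis here so the example is self-contained; in a real deployment the `recent` dict would be redis-py `lpush`/`ltrim`/`lrange` calls against a capped list, and `db` would be your relational store. All names (`add_activity`, `get_notifications`, `MAX_RECENT`) are hypothetical:

```python
import json

MAX_RECENT = 100  # cap on per-user activity kept in the fast store

db = []        # stand-in for the relational database (full history)
recent = {}    # stand-in for Redis: user_id -> list of recent activities

def add_activity(user_id, activity):
    """Write-through: every activity goes to the database,
    and only the newest MAX_RECENT are mirrored in the fast store."""
    db.append((user_id, json.dumps(activity)))
    # Redis equivalent: LPUSH user:{id}:activity <json>
    #                   LTRIM user:{id}:activity 0 MAX_RECENT-1
    items = recent.setdefault(user_id, [])
    items.insert(0, activity)
    del items[MAX_RECENT:]

def get_notifications(user_id, full_history=False):
    """Recent notifications come from the fast store; the full
    history (an infrequent request) falls back to the database."""
    if not full_history:
        return recent.get(user_id, [])
    return [json.loads(a) for uid, a in db if uid == user_id]
```

The cron job from the answer would then simply delete database rows older than the retention window; the `LTRIM` already keeps the Redis side bounded.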
Alternative: again, the best solution should be chosen based on your application's actual numbers, but you could do this:
- Store all actions in the database
- Whenever a user logs in, fetch all of their actions from the database and cache them in Redis
- On logout, delete that user's actions / notifications from Redis
- When adding an activity, some extra logic is needed to check whether the affected user is currently online. In both cases the activity must be written to the database, but if the affected user is online, also push it onto their hash (or list) in Redis.
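The login/logout flow above might be sketched like this. As before, a dict stands in for Redis so the example runs standalone; `login`, `logout`, and `add_activity` are hypothetical names, and in practice the online check would be key existence (`EXISTS user:{id}:activity`) or a separate presence key:

```python
db = []      # stand-in for the relational store: (user_id, activity) rows
cache = {}   # stand-in for Redis; presence of a key == user is online

def login(user_id):
    # On login, hydrate the user's cache entry from the database.
    # Redis equivalent: RPUSH the user's rows into user:{id}:activity
    cache[user_id] = [a for uid, a in db if uid == user_id]

def logout(user_id):
    # On logout, drop the cached copy; the database stays authoritative.
    cache.pop(user_id, None)

def add_activity(target_user_id, activity):
    # Always persist to the database...
    db.append((target_user_id, activity))
    # ...and only push to Redis if the affected user is online.
    if target_user_id in cache:
        cache[target_user_id].append(activity)
```

The trade-off versus the first approach: Redis only ever holds data for currently online users, so memory use tracks concurrent sessions rather than total user count, at the cost of a hydration query on every login.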