MongoDB caching counters?

I am writing a hit counter for products on a website that uses MongoDB as its database engine.

It says here that Mongo keeps most frequently used data in memory, i.e. it has a built-in in-memory caching mechanism.

So can I just lean on this built-in caching and naively increment the counters on every visit, or do I need another caching layer in a high-traffic environment?

+4
2 answers

They are two different things. MongoDB uses a simple paged memory management system that, by design, keeps the most frequently accessed parts of the memory-mapped disk space in memory.

As a result, it will help you most with counters that are read often but change rarely. Unfortunately, for website hit counters those two things are mutually exclusive: the counter changes on every visit. Still, because incrementing a counter generally does not cause MongoDB to move the document holding it on disk, the read caching will remain fairly effective.

The main problem is your writes: basically, doing one increment per visit is not going to be very cost effective. I suggest a strategy where your webapp caches incoming visits and only pushes counter updates every X visits or every Y seconds, whichever comes first (a sketch follows below). Your main goal here is to reduce the number of writes per second, so you definitely do not want one database write per hit.
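Something like this minimal Python/pymongo sketch of the batching idea; the collection name, field name, and thresholds are placeholders of mine, not from the question:

```python
# Batched counter updates: count hits in memory, flush to MongoDB
# every X visits or every Y seconds, whichever comes first.
# Names and thresholds below are illustrative assumptions.
import threading
import time
from collections import Counter

from pymongo import MongoClient

FLUSH_EVERY_HITS = 100      # "every X visits"
FLUSH_EVERY_SECONDS = 5.0   # "every Y seconds"

client = MongoClient("mongodb://localhost:27017")
hits = client["shop"]["product_hits"]

_pending = Counter()        # in-process cache of not-yet-flushed hits
_lock = threading.Lock()
_last_flush = time.monotonic()


def record_hit(product_id):
    """Count a hit in memory; write to MongoDB only when a threshold is reached."""
    global _last_flush
    with _lock:
        _pending[product_id] += 1
        overdue = time.monotonic() - _last_flush >= FLUSH_EVERY_SECONDS
        if sum(_pending.values()) < FLUSH_EVERY_HITS and not overdue:
            return
        batch = dict(_pending)
        _pending.clear()
        _last_flush = time.monotonic()
    # One $inc per product per flush instead of one write per visit.
    for pid, n in batch.items():
        hits.update_one({"_id": pid}, {"$inc": {"count": n}}, upsert=True)
```

The trade-off is that any hits accumulated since the last flush are lost if the process dies, which is usually acceptable for a hit counter.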

+5

Although I have never worked on the kind of system you describe, I would do the following (assuming I read your question correctly and that you really just want to increment a counter for every visit):

  • Use the $inc operator to perform the increment atomically, or use upserts with modifiers to create the document structure if it does not already exist (see the sketch after this list).
  • Use an appropriate write concern to speed up updates, if that is safe for your case (i.e. with a write concern of NONE, your update call returns immediately and you simply trust Mongo to persist it to disk). Whether that is safe depends on the use case: if you are counting millions of hits, one lost hit is unlikely to be a problem.
  • If the scale of the data you store is truly huge, look at using sharding to partition the writes.
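A minimal pymongo sketch of the first two points; the collection and field names are placeholders of mine, and w=0 ("unacknowledged") is the modern equivalent of the old WriteConcern.NONE:

```python
# Atomic, fire-and-forget counter increment with an upsert.
# Collection and field names are illustrative assumptions.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")

# w=0 means the driver does not wait for the server to acknowledge
# the write, so update_one() returns immediately.
hits = client["shop"].get_collection(
    "product_hits", write_concern=WriteConcern(w=0)
)


def record_hit(product_id):
    # $inc is atomic on the server; upsert=True creates the counter
    # document on the first visit instead of failing.
    hits.update_one(
        {"_id": product_id},
        {"$inc": {"count": 1}},
        upsert=True,
    )
```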
+1

Source: https://habr.com/ru/post/1369621/

