How much overhead does an Elasticsearch index have?

I have a single-node cluster. The machine has 8 GB of RAM, of which 6 GB goes to the ES process. There are 531 shards (522 indices) on this node, and most of the shards contain virtually no data.

Here are the stats:

Total documents: 265,743
Deleted documents: 27,069
Total size: 136,923,957 bytes (130.5 MB)
Fielddata: 250,632 bytes
Filter cache: 9,984 bytes
Segments: 82 (memory_in_bytes: 3,479,988)

The heap is 5.9 GB, of which 5.6 GB is used.

If I create a few more indices in the cluster, the node starts garbage-collecting constantly and eventually goes OOM. I know there is a lot wrong with this configuration (only one node, 6 GB of the 8 GB given to ES).

I want to understand how the memory is being used. The documents, filter cache, and fielddata together account for almost nothing, yet the whole heap is in use.
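A rough sketch of how this breakdown can be pulled from the node stats API; it assumes the node listens on localhost:9200, uses the Python requests library, and uses ES 1.x field names (in 2.x, filter_cache becomes query_cache):

    # Pull the heap and per-area memory numbers from a single-node cluster.
    import requests

    resp = requests.get("http://localhost:9200/_nodes/stats/jvm,indices")
    node = next(iter(resp.json()["nodes"].values()))  # only one node here

    jvm = node["jvm"]["mem"]
    idx = node["indices"]

    print("heap used / max:", jvm["heap_used_in_bytes"], "/", jvm["heap_max_in_bytes"])
    print("docs / deleted :", idx["docs"]["count"], "/", idx["docs"]["deleted"])
    print("store size     :", idx["store"]["size_in_bytes"])
    print("fielddata      :", idx["fielddata"]["memory_size_in_bytes"])
    print("filter cache   :", idx["filter_cache"]["memory_size_in_bytes"])
    print("segments       :", idx["segments"]["count"],
          "/", idx["segments"]["memory_in_bytes"], "bytes")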

1 answer

In my personal experience with ES 1.x and 2.x, the overhead per shard is not trivial and usually falls in the range of a few MB per shard. As far as I understand, this is memory reserved for indexing buffers, state metadata, references to Lucene objects, cache structures, and so on.

Basically, a chunk of memory is reserved per shard so that it can index quickly and populate caches when needed. I don't know how much of this still holds in 5.x.
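As an illustration of how to get some of that reserved memory back on a node like this, one option is to close the indices that hold no documents; a closed index keeps its data on disk but releases the in-memory structures of its shards. A rough sketch (assumes the node on localhost:9200 and the Python requests library; adjust the host and add error handling as needed):

    # Close every index that currently contains zero documents.
    import requests

    BASE = "http://localhost:9200"
    indices = requests.get(BASE + "/_stats/docs").json()["indices"]

    for name, stats in indices.items():
        if stats["primaries"]["docs"]["count"] == 0:
            print("closing empty index:", name)
            requests.post(BASE + "/" + name + "/_close")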


Source: https://habr.com/ru/post/1012219/

