I have a single-node cluster. The machine has 8 GB of RAM, of which 6 GB is given to the ES process. There are 531 shards (522 indices) on this node, and most of the shards contain virtually no data.
Here are the stats:
Total documents: 265,743
Deleted documents: 27,069
Total size: 136,923,957 bytes (130.5 MB)
Fielddata: 250,632 bytes
Filter cache: 9,984 bytes
Segments: 82 total, 3,479,988 bytes in memory (~3.3 MB)
The heap is 5.9 GB, of which 5.6 GB is used.
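For reference, these numbers can be pulled from the cluster's REST API. A minimal sketch, assuming the node listens on the default localhost:9200:

```shell
# Per-index document counts, deleted docs, and on-disk store size
curl -s 'localhost:9200/_cat/indices?v&h=index,docs.count,docs.deleted,store.size'

# Shard count per index (each shard is a Lucene index with its own heap overhead)
curl -s 'localhost:9200/_cat/shards?v'

# Heap usage plus fielddata, filter cache, and segment memory for the node
curl -s 'localhost:9200/_nodes/stats/jvm,indices?pretty'
```

The `_nodes/stats` output is where the fielddata, filter cache, and segment memory figures above come from.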
If I create a few more indices in the cluster, the node goes into constant GC and eventually hits OOM. I know there is a lot wrong with this configuration (only one node, 6 GB of heap out of 8 GB of RAM).
I want to understand where the memory goes. The documents themselves, the filter cache, and the fielddata add up to almost nothing, yet the whole heap is consumed.