"Too many open files" when flushing data in elasticsearch

I am dumping an elasticsearch index using Elasticsearch-Exporter on OSX Mavericks:

node /usr/bin/node_modules/elasticsearch-exporter/exporter.js -j ${esIndexName} -f esbackup

I have an application that runs two nodes, which together with the exporter's node makes three nodes in total. The node created by the elasticsearch command is the master node. When I run the export command against my index, I get this after a few seconds of successful downloading:

2014-05-07T14:31:38.325-0700 [elasticsearch[Rancor][[es][1]: Lucene Merge Thread #0]] [WARN] merge.scheduler [][] - [Rancor] [es][1] failed to merge
java.io.FileNotFoundException: /private/var/data/core/elasticsearch_me/nodes/0/indices/es/1/index/_f_es090_0.tip (Too many open files)

I tried the following:

launchctl limit maxfiles 10000

sudo launchctl limit maxfiles 40000 65000

elasticsearch soft nofile 32000

elasticsearch hard nofile 32000

adding -XX:-MaxFDLimit to my application's JVM arguments

None of these solves my problem. Sometimes the export finishes without errors, but most of the time I hit this error. Does anyone have any ideas or hints as to what my problem might be?
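One way to see whether any of these settings actually take effect is to count the descriptors the Elasticsearch process holds while the export runs. A minimal sketch, assuming the standard 1.x main class name for the pgrep pattern (adjust it to match your process):

# Find the Elasticsearch PID and count its open file descriptors
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n 1)
lsof -p "$ES_PID" | wc -l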

Edit:

$ launchctl limit
cpu         unlimited    unlimited
filesize    unlimited    unlimited
data        unlimited    unlimited
stack       8388608      67104768
core        0            unlimited
rss         unlimited    unlimited
memlock     unlimited    unlimited
maxproc     709          1064
maxfiles    10000        10240

$ sudo launchctl limit
Password:
cpu         unlimited    unlimited
filesize    unlimited    unlimited
data        unlimited    unlimited
stack       8388608      67104768
core        0            unlimited
rss         unlimited    unlimited
memlock     unlimited    unlimited
maxproc     709          1064
maxfiles    40000        65000


This error means elasticsearch is running out of open file descriptors, a common problem on the Mac. From the ES configuration documentation:

File Descriptors

Make sure to increase the number of open files descriptors on the machine (or for the user running elasticsearch). Setting it to 32k or even 64k is recommended.

In order to test how many open files the process can open, start it with -Des.max-open-files set to true. This will print the number of open files the process can open on startup.

Alternatively, you can retrieve the max_file_descriptors for each node using the Nodes Info API, with:

curl localhost:9200/_nodes/process?pretty
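To pull just the relevant value out of that response, a simple filter on the field name shown above is enough:

curl -s 'localhost:9200/_nodes/process?pretty' | grep max_file_descriptors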

Note that the limits apply per user, so make sure you raise them for the user that actually runs ES (and if you start ES via sudo, check the limits for root instead).

There is also a mailing-list thread about this exact error on Mac OSX: http://elasticsearch-users.115913.n3.nabble.com/quot-Too-many-open-files-quot-error-on-Mac-OSX-td4034733.html. To raise the limits to 32k/64k:

In /etc/launchd.conf put:

limit maxfiles 32000 64000
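/etc/launchd.conf is only read at boot, so a reboot is needed for that line to take effect. To change the limit for the running system without rebooting, the same setting can be applied directly (this mirrors the sudo command already tried in the question):

sudo launchctl limit maxfiles 32000 64000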


Make sure in your ~/.bashrc file you are not setting the ulimit with something like "ulimit -n 1024".  

Open a new terminal, and run:

launchctl limit maxfiles
ulimit -a
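Both commands should now report the raised limits. If ulimit -a still shows a low "open files" value, you can raise it for the current shell session before starting elasticsearch (assuming the hard limit permits it):

ulimit -n 32000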

Finally, start elasticsearch with the JVM flag that stops the JVM from capping its own file descriptor limit:

elasticsearch -XX:-MaxFDLimit
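If your startup script does not pass extra arguments through to the JVM, the flag can instead be supplied via the environment; the stock 1.x launch scripts read ES_JAVA_OPTS (a sketch, assuming a standard install):

ES_JAVA_OPTS="-XX:-MaxFDLimit" elasticsearch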

After these changes, my Mac Elasticsearch node reports:

curl http://localhost:9200/_nodes/process?pretty

{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "XXXXXXXXXXXXXXXXXXXXXXX" : {
      "name" : "Marrina Smallwood",
      "transport_address" : "inet[XX.XX.XX.XX:9300]",
      "host" : "MacBook-Pro-Retina.local",
      "ip" : "XX.XX.XX.XX",
      "version" : "1.1.1",
      "build" : "f1585f0",
      "http_address" : "inet[/XX.XX.XX.XX:9200]",
      "process" : {
        "refresh_interval" : 1000,
        "id" : 538,
        "max_file_descriptors" : 32768,
        "mlockall" : false
      }
    }
  }
}
