Elasticsearch not responding after index recovery

ES stopped responding to any queries after the index was lost (for an unknown reason). After the server reboots, ES tries to recover the index, but as soon as it has read the whole index (only about 200 MB) it stops responding again. The last error I saw was SearchPhaseExecutionException[Failed to execute phase [query_fetch], all shards failed]. I run ES as a single node on a virtual server. The index has only one shard with about 3 million documents (200 MB).

How can I restore this index?

Here is the ES log:

[2014-06-21 18:43:15,337][WARN ][bootstrap                ] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line
[2014-06-21 18:43:15,554][WARN ][common.jna               ] Unknown mlockall error 0
[2014-06-21 18:43:15,759][INFO ][node                     ] [Crimson Cowl] version[1.1.0], pid[1031], build[2181e11/2014-03-25T15:59:51Z]
[2014-06-21 18:43:15,759][INFO ][node                     ] [Crimson Cowl] initializing ...
[2014-06-21 18:43:15,881][INFO ][plugins                  ] [Crimson Cowl] loaded [], sites [head]
[2014-06-21 18:43:21,957][INFO ][node                     ] [Crimson Cowl] initialized
[2014-06-21 18:43:21,958][INFO ][node                     ] [Crimson Cowl] starting ...
[2014-06-21 18:43:22,275][INFO ][transport                ] [Crimson Cowl] bound_address {inet[/10.0.0.13:9300]}, publish_address {inet[/10.0.0.13:9300]}
[2014-06-21 18:43:25,385][INFO ][cluster.service          ] [Crimson Cowl] new_master [Crimson Cowl][UJNl8hGgRzeFo-DQ3vk2nA][esubuntu][inet[/10.0.0.13:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-21 18:43:25,438][INFO ][discovery                ] [Crimson Cowl] elasticsearch/UJNl8hGgRzeFo-DQ3vk2nA
[2014-06-21 18:43:25,476][INFO ][http                     ] [Crimson Cowl] bound_address {inet[/10.0.0.13:9200]}, publish_address {inet[/10.0.0.13:9200]}
[2014-06-21 18:43:26,348][INFO ][gateway                  ] [Crimson Cowl] recovered [2] indices into cluster_state
[2014-06-21 18:43:26,349][INFO ][node                     ] [Crimson Cowl] started

After deleting the other index on the same node, ES responds to requests, but it still does not recover the index. Here is the log:

[2014-06-22 08:00:06,651][WARN ][bootstrap                ] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line
[2014-06-22 08:00:06,699][WARN ][common.jna               ] Unknown mlockall error 0
[2014-06-22 08:00:06,774][INFO ][node                     ] [Baron Macabre] version[1.1.0], pid[2035], build[2181e11/2014-03-25T15:59:51Z]
[2014-06-22 08:00:06,774][INFO ][node                     ] [Baron Macabre] initializing ...
[2014-06-22 08:00:06,779][INFO ][plugins                  ] [Baron Macabre] loaded [], sites [head]
[2014-06-22 08:00:08,766][INFO ][node                     ] [Baron Macabre] initialized
[2014-06-22 08:00:08,767][INFO ][node                     ] [Baron Macabre] starting ...
[2014-06-22 08:00:08,824][INFO ][transport                ] [Baron Macabre] bound_address {inet[/10.0.0.3:9300]}, publish_address {inet[/10.0.0.3:9300]}
[2014-06-22 08:00:11,890][INFO ][cluster.service          ] [Baron Macabre] new_master [Baron Macabre][eWDP4ZSXSGuASJLJ2an1nQ][esubuntu][inet[/10.0.0.3:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-22 08:00:11,975][INFO ][discovery                ] [Baron Macabre] elasticsearch/eWDP4ZSXSGuASJLJ2an1nQ
[2014-06-22 08:00:12,000][INFO ][http                     ] [Baron Macabre] bound_address {inet[/10.0.0.3:9200]}, publish_address {inet[/10.0.0.3:9200]}
[2014-06-22 08:00:12,645][INFO ][gateway                  ] [Baron Macabre] recovered [1] indices into cluster_state
[2014-06-22 08:00:12,647][INFO ][node                     ] [Baron Macabre] started
[2014-06-22 08:05:01,284][WARN ][index.engine.internal    ] [Baron Macabre] [wordstat][0] failed engine
java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.index.ParallelPostingsArray.<init>(ParallelPostingsArray.java:35)
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:254)
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:279)
        at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
        at org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:307)
        at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:324)
        at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:185)
        at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:171)
        at org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:248)
        at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:253)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:453)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1529)
        at org.elasticsearch.index.engine.internal.InternalEngine.innerIndex(InternalEngine.java:532)
        at org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:470)
        at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:744)
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:228)
        at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:197)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
[2014-06-22 08:05:02,168][WARN ][cluster.action.shard     ] [Baron Macabre] [wordstat][0] sending failed shard for [wordstat][0], node[eWDP4ZSXSGuASJLJ2an1nQ], [P], s[INITIALIZING], indexUUID [LC3LMLxgS3CkkG_pvfTeSg], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
[2014-06-22 08:05:02,169][WARN ][cluster.action.shard     ] [Baron Macabre] [wordstat][0] received shard failed for [wordstat][0], node[eWDP4ZSXSGuASJLJ2an1nQ], [P], s[INITIALIZING], indexUUID [LC3LMLxgS3CkkG_pvfTeSg], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
[2014-06-22 08:53:22,253][INFO ][node                     ] [Baron Macabre] stopping ...
[2014-06-22 08:53:22,267][INFO ][node                     ] [Baron Macabre] stopped
[2014-06-22 08:53:22,267][INFO ][node                     ] [Baron Macabre] closing ...
[2014-06-22 08:53:22,272][INFO ][node                     ] [Baron Macabre] closed
[2014-06-22 08:53:23,667][WARN ][bootstrap                ] jvm uses the client vm, make sure to run `java` with the server vm for best performance by adding `-server` to the command line
[2014-06-22 08:53:23,708][WARN ][common.jna               ] Unknown mlockall error 0
[2014-06-22 08:53:23,777][INFO ][node                     ] [Living Totem] version[1.1.0], pid[2137], build[2181e11/2014-03-25T15:59:51Z]
[2014-06-22 08:53:23,777][INFO ][node                     ] [Living Totem] initializing ...
[2014-06-22 08:53:23,781][INFO ][plugins                  ] [Living Totem] loaded [], sites [head]
[2014-06-22 08:53:25,828][INFO ][node                     ] [Living Totem] initialized
[2014-06-22 08:53:25,828][INFO ][node                     ] [Living Totem] starting ...
[2014-06-22 08:53:25,885][INFO ][transport                ] [Living Totem] bound_address {inet[/10.0.0.3:9300]}, publish_address {inet[/10.0.0.3:9300]}
[2014-06-22 08:53:28,913][INFO ][cluster.service          ] [Living Totem] new_master [Living Totem][D-eoRm7fSrCU_dTw_NQipA][esubuntu][inet[/10.0.0.3:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-22 08:53:28,939][INFO ][discovery                ] [Living Totem] elasticsearch/D-eoRm7fSrCU_dTw_NQipA
[2014-06-22 08:53:28,964][INFO ][http                     ] [Living Totem] bound_address {inet[/10.0.0.3:9200]}, publish_address {inet[/10.0.0.3:9200]}
[2014-06-22 08:53:29,433][INFO ][gateway                  ] [Living Totem] recovered [1] indices into cluster_state
[2014-06-22 08:53:29,433][INFO ][node                     ] [Living Totem] started
[2014-06-22 08:58:05,268][WARN ][index.engine.internal    ] [Living Totem] [wordstat][0] failed engine
java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:261)
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:279)
        at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
        at org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:307)
        at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:324)
        at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:185)
        at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:171)
        at org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:248)
        at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:253)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:453)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1529)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1199)
        at org.elasticsearch.index.engine.internal.InternalEngine.innerIndex(InternalEngine.java:523)
        at org.elasticsearch.index.engine.internal.InternalEngine.index(InternalEngine.java:470)
        at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:744)
        at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:228)
        at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:197)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
[2014-06-22 08:58:06,046][WARN ][cluster.action.shard     ] [Living Totem] [wordstat][0] sending failed shard for [wordstat][0], node[D-eoRm7fSrCU_dTw_NQipA], [P], s[INITIALIZING], indexUUID [LC3LMLxgS3CkkG_pvfTeSg], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
[2014-06-22 08:58:06,047][WARN ][cluster.action.shard     ] [Living Totem] [wordstat][0] received shard failed for [wordstat][0], node[D-eoRm7fSrCU_dTw_NQipA], [P], s[INITIALIZING], indexUUID [LC3LMLxgS3CkkG_pvfTeSg], reason [engine failure, message [OutOfMemoryError[Java heap space]]]
2 answers

Elasticsearch is running out of heap space while it replays the shard during recovery, which is why the node stops responding. A couple of things to try:

  • Give it more memory. The heap size is controlled by the ES_HEAP_SIZE environment variable; the default is fairly small, so try something like 1 GB, or 1.6 GB if the machine has the RAM for it. The variable is picked up by the Elasticsearch startup scripts in the bin directory of the installation; on Linux these are elasticsearch and elasticsearch.in.sh (see the example after this list).
  • Failing that, reduce the amount of data that has to be recovered - for example, delete the broken index and reindex the documents from their original source.
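
A minimal sketch of raising the heap before starting the node; the 1600m value is only an example, not a recommendation for this exact VM - pick roughly half of the RAM the machine actually has:

    # Set the heap for the next start of the node; elasticsearch.in.sh
    # reads ES_HEAP_SIZE and uses it for both min and max heap.
    export ES_HEAP_SIZE=1600m
    # Start Elasticsearch from its installation directory.
    ./bin/elasticsearch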

I had the same problem and could not get the index to recover either. In the end I removed the broken index data and reindexed everything from the original source.

On Linux:

  • the Elasticsearch data directory was usr/local/var/elasticsearch/ in my installation (yours may differ),
  • stop the node, delete the folder of the broken index, and start ES again (see the sketch after this list).
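
If the node still answers HTTP requests, the broken index can also be dropped through the REST API instead of deleting files by hand; a sketch, with the host and index name taken from the log above:

    # Delete the failed index (name and address come from the log above),
    # then reindex the documents from their original source.
    curl -XDELETE 'http://10.0.0.3:9200/wordstat'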

Source: https://habr.com/ru/post/1545534/
