I upgraded my Elasticsearch cluster from 1.1 to 1.2, and I now get errors when indexing a few large documents:
{ "error": "IllegalArgumentException[Document contains at least one immense term in field=\"response_body\" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms. The prefix of the first immense term is: '[7b 22 58 48 49 5f 48 6f 74 65 6c 41 76 61 69 6c 52 53 22 3a 7b 22 6d 73 67 56 65 72 73 69]...']", "status": 500 }
The index template:
{ "template": "partner_requests-*", "settings": { "number_of_shards": 1, "number_of_replicas": 1 }, "mappings": { "request": { "properties": { "asn_id": { "index": "not_analyzed", "type": "string" }, "search_id": { "index": "not_analyzed", "type": "string" }, "partner": { "index": "not_analyzed", "type": "string" }, "start": { "type": "date" }, "duration": { "type": "float" }, "request_method": { "index": "not_analyzed", "type": "string" }, "request_url": { "index": "not_analyzed", "type": "string" }, "request_body": { "index": "not_analyzed", "type": "string" }, "response_status": { "type": "integer" }, "response_body": { "index": "not_analyzed", "type": "string" } } } } }
I searched the documentation and found nothing about a maximum field size. Going by the core types section, I don't understand why I should "correct the analyzer" for a not_analyzed field.
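The closest mapping option I could find is ignore_above, which as I read the docs makes Elasticsearch skip indexing values longer than the given number of characters rather than failing. A sketch of how it might look on this field (the 10922 threshold is my guess: 32766 bytes divided by a worst case of 3 bytes per UTF-8 character):

"response_body": {
  "index": "not_analyzed",
  "type": "string",
  "ignore_above": 10922
}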
elasticsearch
jlecour Jun 03 '14 at 16:06