I am new to Elasticsearch. I want to search for a substring that contains digits and characters like "/" and "-". For example, I create an index with default settings and one indexed field:
curl -XPUT "http://localhost:9200/test/" -d ' { "mappings" : { "properties": { "test_field": { "type": "string" } } } } '
Then add some data to my index:
curl -XPOST "http://localhost:9200/test/test_field" -d '{ "test_field" : "14/21-35" }'
curl -XPOST "http://localhost:9200/test/test_field" -d '{ "test_field" : "1/1-35" }'
curl -XPOST "http://localhost:9200/test/test_field" -d '{ "test_field" : "1/2-25" }'
After refreshing the index, I run a search. I want to find documents whose "test_field" begins with "1/1". My request:
curl -X GET "http://localhost:9200/test/_search?pretty=true" -d '{"query":{"query_string":{"query":"1/1*"}}}'
returns no hits. If I remove the star symbol, the response contains two hits: "1/1-35" and "1/2-25". If I try to escape the slash character with a backslash ("1\/1*"), the result is the same.
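To make the escaping attempt concrete, this is roughly what that request looks like (note that inside the JSON string body the backslash itself has to be escaped, so a single Lucene escape becomes "\\"):

```shell
# Same search as above, with the slash escaped for the Lucene query parser.
# The backslash is doubled because it sits inside a JSON string.
curl -X GET "http://localhost:9200/test/_search?pretty=true" -d '{"query":{"query_string":{"query":"1\\/1*"}}}'
```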
When the "-" character is present in my query, I must escape this special Lucene character. So I submit the following search request:
curl -X GET "http://localhost:9200/test/_search?pretty=true" -d '{"query":{"query_string":{"query":"*1\-3*"}}}'
and it returns a parsing error. If I double-escape the minus ("\\-"), I get no results at all.
I have no idea how the search is performed when the query contains these characters. Maybe I'm doing something wrong.
I tried using an nGram filter in a custom analyzer, but it did not meet my search requirements.
Has anyone else encountered this problem?
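For reference, the custom analyzer I experimented with was roughly along these lines (the index name, gram sizes, and filter name here are just an illustration, not my exact settings):

```shell
# Hypothetical nGram setup: a keyword tokenizer plus an nGram token filter,
# so that substrings like "1/1" are indexed as their own tokens.
# min_gram/max_gram values are only an example.
curl -XPUT "http://localhost:9200/test_ngram/" -d ' {
  "settings": {
    "analysis": {
      "filter": {
        "substring_filter": {
          "type": "nGram",
          "min_gram": 2,
          "max_gram": 10
        }
      },
      "analyzer": {
        "substring_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase", "substring_filter"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "test_field": {
        "type": "string",
        "analyzer": "substring_analyzer"
      }
    }
  }
} '
```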