I have a 10 megabyte JSON file with the following structure (10k entries):
{
    "entry_1": {
        "description": "...",
        "offset": "...",
        "value": "...",
        "fields": {
            "field_1": { "offset": "...", "description": "..." },
            "field_2": { "offset": "...", "description": "..." }
        }
    },
    "entry_2": { ... },
    ...
}
I want to implement an autocomplete input field that extracts matching entries from this file as quickly as possible while searching across multiple attributes: for example, find all entry names, field names, and descriptions that contain a given substring.
Method 1:
I tried flattening the nesting into an array of strings:
"entry_1|descrption|offset|value|field1|offset|description", "entry_1|descrption|offset|value|field2|offset|description", "entry2|..."
and then matched the partial query against each string case-insensitively. The request took about 900 ms.
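Roughly, the flattening and matching look like this (a simplified sketch; the exact loop is not important, only its cost):

var index = [];
Object.keys(DATA).forEach(function (entryName) {
    var entry = DATA[entryName];
    Object.keys(entry.fields).forEach(function (fieldName) {
        var field = entry.fields[fieldName];
        // One pipe-delimited, lowercased line per entry/field pair.
        index.push([
            entryName, entry.description, entry.offset, entry.value,
            fieldName, field.offset, field.description
        ].join('|').toLowerCase());
    });
});

// Called on every keystroke with the partial input.
function search(query) {
    var q = query.toLowerCase();
    return index.filter(function (line) {
        return line.indexOf(q) !== -1;
    });
}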
Method 2:
I tried an XPath-based JSON query (using defiant.js).
var snapshot = Defiant.getSnapshot(DATA);
var found = JSON.search(snapshot, '//*[contains(fields, "substring")]');
That request took about 600 ms, and it only searches a single attribute (fields).
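Covering more attributes presumably means a larger XPath expression along these lines (an untested sketch; the attribute names are just the ones from the structure above), which I expect to be at least as slow:

var found = JSON.search(snapshot,
    '//*[contains(description, "substring") or contains(offset, "substring") or contains(value, "substring")]');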
Are there any other options that would get me down to roughly 100 ms? I have control over the file format, so I can convert it to XML or any other format; the only requirement is speed.