Typeahead.js, localStorage and a large json file

I have a 1 MB JSON file. I tried to implement typeahead.js with a simple example:

    <div class="container">
      <p class="example-description">Prefetches data, stores it in localStorage, and searches it on the client:</p>
      <input id="my-input" class="typeahead" type="text" placeholder="input a country name">
    </div>
    <script type="text/javascript">
      // wait for the DOM to be ready
      $(function() {
        // apply typeahead to the text input box
        $('#my-input').typeahead({
          name: 'products',
          // data source
          prefetch: '../php/products.json',
          // max number of items listed in the dropdown
          limit: 10
        });
      });
    </script>

But when I run it, Chrome reports:

Uncaught QuotaExceededError: Failed to execute 'setItem' in 'Storage': Setting "__products__itemHash" exceeded the quota.

What can I do? I am using typeahead.min.js

1 answer

You see this error because typeahead's prefetch feature stores its data in localStorage, and the serialized dataset exceeds the browser's per-origin storage quota.
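You can see the same failure mode with a plain localStorage write wrapped in a try/catch. This is a minimal sketch, not part of typeahead.js; the helper name and the storage parameter are my own (passing the storage object in makes it testable outside a browser):

```javascript
// storage: any object with a setItem(key, value) method, e.g. window.localStorage.
// Returns true if the write succeeded, false if the browser rejected it
// (Chrome throws a QuotaExceededError when the per-origin limit is hit).
function trySetItem(storage, key, value) {
  try {
    storage.setItem(key, value);
    return true;
  } catch (e) {
    return false;
  }
}
```

This is what typeahead's prefetch effectively does internally without the guard, which is why the quota error surfaces as an uncaught exception.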

First of all, storing 1 MB of data on the client side is not great from the user's point of view.

That said, you can still work around the problem with multiple datasets. It is just a workaround and may not be the most elegant solution, but it works well.

The data I tested with was over 1 MB: a JSON object whose `friends` key holds a large array of name strings. (The original answer showed a screenshot of the data here.)

You can view the sample here (it takes some time to open)

Procedure:

  • First, load all the data with $.getJSON.
  • Split the data into chunks of 10,000 items (just a magic number that worked for me across browsers).
  • Create a Bloodhound dataset for each chunk and collect them all in an array.
  • Initialize typeahead with that array.

Code:

    $.getJSON('data.json').done(function (data) { // fetch the entire data set
      var dataSources = [];
      var items = data['friends'];
      var i, j, tempArray, chunkSize = 10000;

      // break the data into chunks of 10,000
      for (i = 0, j = items.length; i < j; i += chunkSize) {
        tempArray = items.slice(i, i + chunkSize);
        var d = $.map(tempArray, function (item) {
          return { item: item };
        });
        dataSources.push(getDataSources(d)); // push one Bloodhound-backed dataset per chunk
      }

      initTypeahead(dataSources); // initialize typeahead
    });

    function getDataSources(data) {
      var dataset = new Bloodhound({
        datumTokenizer: Bloodhound.tokenizers.obj.whitespace('item'),
        queryTokenizer: Bloodhound.tokenizers.whitespace,
        local: data,
        // Limit each dataset to 1 suggestion: with 76,000 items there are
        // 8 chunks, and each chunk contributes 1, so 8 suggestions overall.
        limit: 1
      });
      dataset.initialize();

      return {
        displayKey: 'item',
        source: dataset.ttAdapter()
      };
    }

    function initTypeahead(data) {
      // this is where you pass the array of Bloodhound-backed datasets
      $('.typeahead').typeahead({ highlight: true }, data);
    }

I created a demo here with 20 elements and a chunkSize of 2 to show how multiple datasets work together. (Search for Sean or Benjamin.)
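The chunking step can be exercised on its own, without jQuery or Bloodhound. This is a minimal sketch with the demo's parameters (20 elements, chunkSize of 2, which yields 10 chunks); the `chunk` helper name is mine:

```javascript
// Split an array into consecutive slices of at most `size` elements.
// This mirrors the slice loop used before building the Bloodhound datasets.
function chunk(arr, size) {
  var chunks = [];
  for (var i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}
```

Each resulting slice would then be mapped to `{ item: ... }` objects and wrapped in its own Bloodhound dataset, as in the code above.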

Hope this helps.


Source: https://habr.com/ru/post/984081/
