1000 objects is child's play for Solr, so something is fishy if your reads are spending ~200 ms in Solr. However, your most immediate problem is that you are writing to Solr during what appears to be a GET request. Why would that happen? Are you saving a searchable object, which triggers Sunspot's automatic indexing? If you need to update models during a GET request (which should probably be done in a background job, if possible), you'll want to turn off automatic indexing in Sunspot:
searchable :auto_index => false
Then you need to explicitly call my_model.index in your controllers when you really want to update them in Solr.
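As a sketch of the two pieces together (the Post model, its fields, and the controller action are hypothetical; only `searchable :auto_index => false` and the explicit `index` call come from Sunspot's API):

```ruby
class Post < ActiveRecord::Base
  # Disable the after_save hook that would otherwise reindex on every write
  searchable :auto_index => false do
    text :title, :body   # hypothetical indexed fields
  end
end

class PostsController < ApplicationController
  # In a controller action (better yet, a background job), index explicitly
  def update
    @post = Post.find(params[:id])
    @post.update_attributes(params[:post])
    @post.index   # send the document to Solr; the commit happens separately
    redirect_to @post
  end
end
```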
Finally, that big write at the end is the Solr commit, which tells Solr to flush pending changes to disk and open a new searcher reflecting those changes. Commits are expensive. Sunspot::Rails defaults to committing at the end of any request that writes to Solr, but that behavior is aimed at the principle of least surprise for new Sunspot users, not at a real production application. You'll want to disable it in your config/sunspot.yml :
auto_commit_after_request: false
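For context, here is a sketch of where that setting sits in config/sunspot.yml (the hostname and port values are placeholders; only the auto_commit_after_request line comes from the answer above):

```yaml
production:
  solr:
    hostname: localhost   # placeholder
    port: 8983            # placeholder
    auto_commit_after_request: false
```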
Then you'll probably want to configure autoCommit in your solr/conf/solrconfig.xml. It is commented out in the default Sunspot Solr distribution, with an explanation alongside it. I've found that once a minute is a good starting point.
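As a sketch, an autoCommit block committing roughly once a minute might look like this (the maxTime value reflects the once-a-minute suggestion above; the surrounding updateHandler element follows the stock solrconfig.xml layout):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit pending changes at most once per minute -->
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```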
After making these changes, see whether your reads are still slow. I think it's quite possible they were slow because every search was preceded by a write and commit, forcing Solr to load a new searcher from disk each time. That never gives its internal caches a chance to warm up and generally puts it under tremendous stress.
Hope this helps!