I run a query against my Solr core and limit the results with a filter query like fq={!frange l=0.7}query($q). I know that Solr scores have no absolute meaning, but the 0.7 (just an example) is computed from the user input and some heuristics, and in practice this works quite well.
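To make that concrete, the full set of request parameters looks roughly like this (the field name, the query text and the 0.7 cutoff are only placeholders, not my real values):

    q=text:(some user input)
    fq={!frange l=0.7}query($q)
    fl=id,score

The frange filter re-evaluates the main query via query($q) and keeps only documents whose score is at least the lower bound l.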
The problem is this: I am updating quite a few documents in my core. The updated fields are only metadata fields that play no role in the search above. But since an update is internally a delete + insert, the IDF and document counts change, and with them the computed scores. Suddenly my query returns different results.
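As far as I understand it (assuming the default/classic Lucene similarity, not a custom one), the sensitive part is the IDF term, which depends on index-wide statistics, roughly:

    idf(t) = 1 + ln( numDocs / (docFreq(t) + 1) )

Because a delete + insert leaves the old copy marked as deleted until segments are merged, the document and term counts the similarity sees drift even though the visible content is unchanged, and with them every score and therefore my 0.7 cutoff.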
As Yonik explained to me here, this is by design. So my question is: what is the easiest and most lightweight way to keep the results and scores of my query stable?
Performing an optimize after each commit should solve the problem, but I am wondering whether there is anything simpler and cheaper.
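For reference, the optimize I mean is just the standard call to the update handler, for example (host and core name are placeholders):

    curl 'http://localhost:8983/solr/mycore/update?optimize=true'

That merges all segments and purges the deleted copies, so the statistics become consistent again, but it rewrites the whole index, which is exactly the cost I would like to avoid.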
Achim