Document ID Exception

I have a Solr 5.4.1 core that allows users to post comments. Through PHP, the comment is submitted with the following function:

 function solr_update($id, $comments) {
     $ch = curl_init("http://url:8983/solr/asdf/update?commit=true");
     $data = array(
         "id" => $id,
         "comments" => array("set" => $comments),
     );
     $data_string = json_encode(array($data));
     curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
     curl_setopt($ch, CURLOPT_POST, TRUE);
     curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-type: application/json'));
     curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
     echo curl_exec($ch);
 }

It works most of the time, and I get this response:

 {"responseHeader":{"status":0,"QTime":83}} 

Recently, however, I have started getting this response from curl_exec($ch):

 {"responseHeader":{"status":400,"QTime":6}, "error":{"msg":"Exception writing document id 376299 to the index; possible analysis error.","code":400}} 

I'm not sure what causes this, but when it happens, the whole core appears to be wiped, and I have to restore it from a backup (http://url:8983/solr/asdf/replication?command=restore&name=solr_backup_20161028).

If I try to open the core in the Solr admin UI (http://url:8983/solr/asdf), no statistics appear, and it says: "Luke is not configured." I can still run a query (http://url:8983/solr/asdf/select?q=*:*), but I don't see the document count, and I can't modify the index at all.

Am I doing something wrong that is corrupting my core?

Edit

Bounty Time. I really need help resolving this issue.

Edit2 - Server Logs

D:\Solr\solr-5.4.1\server\logs\solr.log.2

  2016-11-15 13:59:49.997 ERROR (qtp434176574-19) [   x:invoice] o.a.s.s.HttpSolrCall null:org.apache.lucene.index.IndexNotFoundException: no segments* file found in NRTCachingDirectory(MMapDirectory@D:\solr\solr-5.4.1\server\solr\asdf\data\restore.snapshot.solr_backup_20161104 lockFactory=org.apache.lucene.store.NativeFSLockFactory@17df8f10; maxCacheMB=48.0 maxMergeSizeMB=4.0): files: [_pej4.fdt, _pej4_Lucene50_0.doc, _pej4_Lucene50_0.tim, _wxvy.fdt, _wxvy_Lucene50_0.doc, _wxvy_Lucene50_0.tim]

In addition, I now have several "solr_backupYYYYMM" folders in my asdf/data folder. Can I manually delete them without causing problems? The only thing that, as far as I know, touches Solr after 5pm is a Python script I wrote that backs it up every day; as part of that script, it deletes any folder matching the YYYYMM pattern that is older than 7 days (so I don't run out of disk space). I took that part OUT of the script as of yesterday, in case it could be the cause of this problem. I'm just trying to rule everything out.


1 answer

It seems your problem is described here: http://lucene.472066.n3.nabble.com/Exception-writing-document-to-the-index-possible-analysis-error-td4174845.html

Since there is a whole discussion there, I will quote the two messages that sum it up:

"I think you tried indexing an empty string into a number field. It will not work. It must be a real number, or you need to leave the field out entirely."

"The _collection_id field is required in my schema, and my update request was not populating it."
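Based on those quotes, one defensive fix on the PHP side is to drop empty-string values before building the update JSON, so an empty string is never sent to a numeric field. This is only a sketch: the helper name and the "price_f" field are hypothetical and not taken from the question's schema, and any required-field checks depend on your own schema.

```php
<?php
// Build a Solr atomic-update payload, skipping fields whose value is an
// empty string or null (sending "" to a numeric field can trigger the
// 400 "possible analysis error" described above).
function solr_prepare_update($id, array $fields) {
    $doc = array("id" => $id);
    foreach ($fields as $name => $value) {
        if ($value === "" || $value === null) {
            continue; // omit the field entirely instead of sending ""
        }
        $doc[$name] = array("set" => $value);
    }
    return json_encode(array($doc));
}

// Hypothetical usage: "price_f" stands in for a numeric field.
echo solr_prepare_update("376299", array(
    "comments" => "looks good",
    "price_f"  => "",          // dropped, never reaches the index
));
// → [{"id":"376299","comments":{"set":"looks good"}}]
```

The resulting string can be passed to CURLOPT_POSTFIELDS exactly as in the question's solr_update() function.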

Your second problem, "IndexNotFoundException: no segments* file found in", occurs because after such an exception the IndexWriter dies, leaving write.lock and temporary index files in place. This issue is described under the heading "#3: Leave IndexWriter open" here: https://wilsonericn.wordpress.com/2011/12/14/my-first-5-lucene-mistakes
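If the IndexWriter died that way, the stale lock file has to be removed before the core can open the index again. A minimal sketch in PHP, assuming Solr has been stopped first; the path is taken from the question's log output and will differ on other machines:

```php
<?php
// Remove a stale Lucene write.lock left behind by a crashed IndexWriter.
// Only run this while Solr is stopped: deleting the lock under a live
// process can corrupt the index.
function remove_stale_lock($indexDir) {
    $lock = $indexDir . DIRECTORY_SEPARATOR . "write.lock";
    if (!file_exists($lock)) {
        return false; // nothing to clean up
    }
    return unlink($lock);
}

// Index directory from the question's log; adjust for your install.
remove_stale_lock('D:\\solr\\solr-5.4.1\\server\\solr\\asdf\\data\\index');
```

After removing the lock and restarting Solr, the core should load again, provided the segment files themselves are intact.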


Source: https://habr.com/ru/post/1259054/

