MongoDB - PHP - MongoCursorException "cursor not found"

I have 2 collections: A (3.8M docs) and B (1.7M docs)

I have a PHP script that I run from a shell that:

  • iterates over every entry in A
  • ~60% of the time, does a findOne on B (by _id)
  • does some basic math, accumulating the results into a PHP array

Once the loop over all documents in A is complete, it:

4) loops over the PHP array

5) upserts into collection C

A simplified sketch of the whole script is shown below.
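For reference, the script is shaped roughly like this (a simplified sketch using the legacy pecl mongo driver; the database name and field names like b_id, group, and value are placeholders, not my real schema):

```php
<?php
// Simplified sketch of the script described above (placeholder names).
$m  = new MongoClient();               // legacy pecl "mongo" driver
$db = $m->selectDB('mydb');

$stats = array();                      // the PHP array the math accumulates into

// (1) iterate over every entry in A
foreach ($db->A->find() as $docA) {
    // (2) ~60% of the time there is a reference to B, so findOne by _id
    $docB = isset($docA['b_id'])
        ? $db->B->findOne(array('_id' => $docA['b_id']))
        : null;

    // (3) basic math, accumulated in the PHP array
    $key = (string) $docA['group'];
    if (!isset($stats[$key])) {
        $stats[$key] = 0;
    }
    $stats[$key] += $docA['value'] + ($docB !== null ? $docB['value'] : 0);
}

// (4) loop over the PHP array and (5) upsert into collection C
foreach ($stats as $key => $total) {
    $db->C->update(
        array('_id' => $key),
        array('$set' => array('total' => $total)),
        array('upsert' => true)
    );
}
```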

During (1), I consistently get: PHP Fatal error: Uncaught exception 'MongoCursorException' with message 'cursor not found'. The last item processed was #8187 of 3872494.

real 1m25.478s user 0m0.076s sys 0m0.064s 

Running it again, without changing the code, the exception was thrown at item #19826/3872495

 real 3m19.144s user 0m0.120s sys 0m0.072s 

And again, at #8181/387249

 real 1m31.110s user 0m0.036s sys 0m0.048s 

Yes, I understand that I can (and probably should) catch the exception ... but ... why is it even thrown? Especially at such different elapsed times / depths into the collection.

If it helps, my setup is a 3-node replica set (2 data nodes + an arbiter). I took the secondary offline and tried again with just the primary running. Same results: different numbers of processed items and times, but it always throws the 'cursor not found' exception.

3 answers

Yes, I understand that I can (and probably should) catch the exception ...

Yes, this is definitely the first thing to do. There are dozens of legitimate reasons for this exception to be thrown. For example, what do you think happens when the primary goes offline and becomes unreachable?
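Even a minimal sketch like this (placeholder names; what you do in the catch is up to you) would stop a single lost cursor from killing the whole run and would tell you exactly when and why it died:

```php
<?php
// Minimal sketch: catch the exception instead of dying (placeholder names).
$m  = new MongoClient();
$db = $m->selectDB('mydb');

try {
    foreach ($db->A->find() as $doc) {
        // ... per-document work ...
    }
} catch (MongoCursorException $e) {
    // the cursor died server-side (timeout, failover, ...) — log it and decide
    // whether to retry, resume from the last _id you saw, or abort
    error_log('Lost cursor: ' . $e->getMessage());
}
```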

... why is it even thrown?

There are several potential causes, but let's drill into the specific error you are seeing.

  • The PHP driver documentation for MongoCursorException is here.
  • Quoting from that page: "The driver was trying to fetch more results from the database, but the database did not have a record of the query. This usually means that the cursor timed out on the server side ..."

The PHP MongoDB driver has two different timeouts:

  • Connection timeout
  • Cursor Timeout

You hit the cursor timeout: you can connect to the database, but your query is timing out.
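To make the distinction concrete, here is roughly where each knob lives in the legacy pecl mongo driver (the connection string, database name, and values are only examples):

```php
<?php
// Connection timeout: an option on the client itself.
$m = new MongoClient('mongodb://localhost:27017', array(
    'connectTimeoutMS' => 5000,    // give up connecting after 5s
));
$db = $m->selectDB('mydb');

// Cursor timeout: set per cursor, on the client side.
$cursor = $db->A->find();
$cursor->timeout(60000);           // wait up to 60s for each batch of results
```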

Possible fixes:

  • Extend the cursor timeout, or turn it off entirely so the cursor lives forever.
  • Do the work in batches: fetch the first 1000 _ids from A, process them, and record how far you got. Then fetch the next 1000 _ids greater than the last one you processed, and so on (see the sketch below).

I would suggest #2, along with exception handling. Even if they do not completely solve the problem, they will help you isolate and mitigate it.
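A rough sketch of what #2 could look like; the batch size, ordering by _id, and how you persist the checkpoint are all illustrative choices, not the only way to do it:

```php
<?php
// Process A in batches of 1000, keyed by _id, so no single cursor lives long
// enough to time out; record $lastId between batches/runs (placeholder names).
$m  = new MongoClient();
$db = $m->selectDB('mydb');

$batchSize = 1000;
$lastId = null;      // load this from a file or a small "progress" collection

do {
    $query = ($lastId !== null) ? array('_id' => array('$gt' => $lastId)) : array();
    $batch = $db->A->find($query)
                   ->sort(array('_id' => 1))
                   ->limit($batchSize);

    $count = 0;
    foreach ($batch as $doc) {
        // ... findOne on B, accumulate into the PHP array, etc. ...
        $lastId = $doc['_id'];
        $count++;
    }

    // persist $lastId here so the next run (or a retry after an exception)
    // picks up where this one left off
} while ($count === $batchSize);
```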


I know this is late, and it may not be your solution, but you could try using immortal(). As Gates VP noted, this page describes the exception:

The driver was trying to fetch more results from the database, but the database did not have a record of the query. This usually means that the cursor timed out on the server side: after a few minutes of inactivity, the database will kill a cursor (see MongoCursor::immortal() for information on preventing this).

I figured I would post the full description for anyone else who lands on this page, and because timeout() and immortal() are different: timeout() sets how long the client waits for a response, while immortal() prevents the server from killing the cursor due to inactivity.
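For example (legacy pecl mongo driver; the database and collection names are placeholders):

```php
<?php
$m  = new MongoClient();
$db = $m->selectDB('mydb');

$cursor = $db->A->find();
$cursor->immortal(true);    // server side: don't kill the cursor after inactivity
$cursor->timeout(30000);    // client side: wait up to 30s for each batch

foreach ($cursor as $doc) {
    // ... slow per-document work is now much less likely to hit
    //     "cursor not found" ...
}
// note: immortal cursors are not cleaned up by the server's inactivity timer,
// so make sure you iterate them to completion
```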


This may be a memory limit problem. Try giving the script more memory and see whether your results change; you can do that with the -d option: php -d memory_limit=256M yourscript.php

That is a lot of documents, and it sounds like you are building a fairly large array of objects. PHP functions such as memory_get_usage() let you profile memory allocation at runtime, and debugging extensions like Xdebug (or Zend's tooling) can help as well.
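For example, a rough way to watch memory growth while the loop runs (the logging interval and names are arbitrary):

```php
<?php
$m  = new MongoClient();
$db = $m->selectDB('mydb');

$i = 0;
foreach ($db->A->find() as $doc) {
    // ... existing per-document work ...
    if (++$i % 100000 === 0) {
        printf("%d docs, %.1f MB in use (peak %.1f MB)\n",
            $i,
            memory_get_usage(true) / 1048576,
            memory_get_peak_usage(true) / 1048576
        );
    }
}
```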


Source: https://habr.com/ru/post/893436/

