MongoDB - Querying for over 10 million records

First of all: I have already read a lot of posts about MongoDB query performance, but I have not found a good solution.

Inside the collection, the document structure is as follows:

{
    "_id" : ObjectId("535c4f1984af556ae798d629"),
    "point" : [
        -4.372925494081455,
        41.367710205649544
    ],
    "location" : [
        {
            "x" : -7.87297955453618,
            "y" : 73.3680160842939
        },
        {
            "x" : -5.87287143362673,
            "y" : 73.3674043270052
        }
    ],
    "timestamp" : NumberLong("1781389600000")
}

My collection already has an index:

db.collection.ensureIndex({timestamp:-1})

The query looks like this:

db.collection.find({ "timestamp" : { "$gte" : 1380520800000 , "$lte" : 1380546000000}})

Despite this, the response time is far too long, about 20-30 seconds (the exact time depends on the query parameters).

Any help is appreciated!

Thanks in advance.

EDIT: I have changed the query parameters, replacing them with real data.

The above query takes 46 seconds, and this is the information provided by the explain() function:

{
    "cursor" : "BtreeCursor timestamp_1",
    "isMultiKey" : false,
    "n" : 124494,
    "nscannedObjects" : 124494,
    "nscanned" : 124494,
    "nscannedObjectsAllPlans" : 124494,
    "nscannedAllPlans" : 124494,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 45,
    "nChunkSkips" : 0,
    "millis" : 46338,
    "indexBounds" : {
        "timestamp" : [
            [
                1380520800000,
                1380558200000
            ]
        ]
    },
    "server" : "ip-XXXXXXXX:27017"
}
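
For reference, output in this (pre-3.0) format comes from appending explain() to the cursor in the mongo shell; the bounds below are the ones reported in indexBounds above:

db.collection.find(
    { "timestamp" : { "$gte" : 1380520800000, "$lte" : 1380558200000 } }
).explain()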
1 answer

There is nothing wrong with the query itself: it scans 124494 index entries (nscanned) and every scanned document matches and is returned (n), so the index is being used exactly as intended. The time is spent reading and returning that many documents.

Fetching roughly 125,000 documents simply takes a while, especially when they have to be pulled from disk because the working set does not fit in RAM, so there is little you can do to make this particular query dramatically faster.

The better question is whether your application really needs all of those documents at once. Is anyone ever going to look at 124494 results in one go? If the answer is "no", consider the following options:

  • Pagination. If you are displaying the results to a user, show them one page at a time with next and previous buttons; MongoDB supports this with .limit(n) and .skip(n) (see the sketch after this list).
  • Projection. If you only need some of the fields, have MongoDB return just those fields instead of the whole document, which reduces the amount of data read and transferred.
  • If you really do need every document, make sure the working set fits in memory: add RAM, use faster disks, or consider sharding.
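
A minimal mongo shell sketch of the first two points; the page size, page number, and projected fields below are illustrative assumptions, not values taken from the question:

// Assumed page size and page number, for illustration only
var pageSize = 100;
var page = 0;

db.collection.find(
    { "timestamp" : { "$gte" : 1380520800000, "$lte" : 1380546000000 } },
    { "point" : 1, "timestamp" : 1 }    // projection: return only the fields you actually use (assumed)
).sort({ "timestamp" : -1 }).skip(page * pageSize).limit(pageSize)

Keep in mind that .skip() still walks over the skipped index entries, so for very deep pages it is usually better to remember the last timestamp of the previous page and continue from there with an extra $lt bound instead of skipping.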
