I ran mongodump against a 3-shard cluster holding a 600 GB database whose chunks were evenly distributed across all three shards.
My mongodump command was:
mongodump --db mydb123 --authenticationDatabase admin --journal -u root -p password123 -o mydb123
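For context, the chunk distribution on the source cluster can be confirmed from the config database through a mongos. This is only a sketch; the mongos host name is an illustrative assumption, since the post does not give it:

```shell
# Sketch: count chunks per shard for mydb123 on the source cluster.
# "mongos1" is an illustrative host name, not from the original post.
mongo --host mongos1:27017 -u root -p password123 --authenticationDatabase admin config --eval '
  db.chunks.aggregate(
    { $match: { ns: /^mydb123\./ } },
    { $group: { _id: "$shard", chunks: { $sum: 1 } } }
  ).result.forEach(printjson);
'
```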
I then copied the dump to a new cluster with 2 shards and ran mongorestore there. The restored database is now only 80 GB. I guess that much is expected (compaction). But here is my problem: on the new 2-shard cluster, running sh.status() shows no CHUNKS for this database. My mongorestore command was:
mongorestore -u root -p newpass123 --authenticationDatabase admin --verbose /data/db/backups/new_dir/mydumpfile
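For completeness, one way to check the restored size would be db.stats() through the new cluster's mongos. A sketch, with an illustrative host name (the post does not give one):

```shell
# Sketch: report the restored database's size on the new cluster.
# "newmongos" is an illustrative host name, not from the original post.
mongo --host newmongos:27017 -u root -p newpass123 --authenticationDatabase admin mydb123 \
  --eval 'printjson(db.stats(1024 * 1024 * 1024))'   # report sizes in GB
```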
The mongorestore command completed without errors. The actual sh.status() output is shown below:
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id": 1, "version": 3, "minCompatibleVersion": 3, "currentVersion": 4, "clusterId": ObjectId("52efaaa0a83668acafc3bcb0") }
  shards:
    { "_id": "sh1", "host": "sh1/hfdvmprmongodb1:27000,hfdvmprmongodb2:27000" }
    { "_id": "sh2", "host": "sh2/hdbmf,hfdvmprmongodb2:27001" }
  databases:
    { "_id": "admin", "partitioned": false, "primary": "config" }
    { "_id": "test", "partitioned": false, "primary": "sh1" }
    { "_id": "pricing", "partitioned": true, "primary": "sh2" }
    { "_id": "mokshapoc", "partitioned": true, "primary": "sh1" }
mongos> isBalancerRunning()
11:09:39.242 ReferenceError: isBalancerRunning
mongos> sh.isBalancerRunning()
To summarize: after mongorestore there are no CHUNKS shown for this database, and its size is 80 GB (it was 600 GB when mongodump ran). Is the restored data complete and safe to use?

Version details:
MongoDB: 2.4.6

Any help is appreciated.
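For reference, my understanding is that mongodump/mongorestore of a database does not carry over the sharding metadata itself, so the collections may need to be sharded again on the new cluster. A sketch of what that would look like (the mongos host, collection name, and shard key are illustrative assumptions, not from the original setup):

```shell
# Sketch: re-enable sharding for the database on the new cluster's mongos.
# "newmongos", "mycoll", and the { _id: 1 } shard key are illustrative.
mongo --host newmongos:27017 -u root -p newpass123 --authenticationDatabase admin admin --eval '
  sh.enableSharding("mydb123");
  sh.shardCollection("mydb123.mycoll", { _id: 1 });
'
```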