Neo4j: import a large Cypher dump

I have a large dump (millions of nodes and relationships) from a Neo4j 2.2.5 database in Cypher format (created with `neo4j-shell -c dump`), which I am trying to import into a 3.0.3 instance.

However, the import process (`neo4j-shell < dump.cypher`) slows down sharply after a few minutes, to only a couple of records per second.

Is there a way to speed this process up? I tried upgrading the database as described in the manual, but the new instance refuses to start because of a store format mismatch.

2 answers

Neo4j 3.0 ships with the `bin/neo4j-admin` tool for exactly this purpose.

Try `bin/neo4j-admin import --mode database --from /path/to/db`

see http://neo4j.com/docs/operations-manual/current/deployment/upgrade/#upgrade-instructions
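Note that even after the import, the 3.0 server still has to migrate the older store format on first start; if it refuses with a store format mismatch (as in the question), the following setting in `conf/neo4j.conf` allows the migration:

```
# conf/neo4j.conf (Neo4j 3.0)
dbms.allow_format_migration=true
```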

The Cypher dump is not useful for a large database; it is intended only for small setups (a few thousand nodes), demos, and the like.

FYI: in Neo4j 3.0, the Cypher export procedures from APOC are much better suited to large Cypher dumps.
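Assuming the APOC plugin is installed on the 3.0 instance (and file export is enabled in the server config, e.g. `apoc.export.file.enabled=true`), a whole-graph export looks roughly like this; the output path here is a placeholder:

```cypher
// Export the entire graph as Cypher statements to a file.
// Requires the APOC plugin; the path is an example.
CALL apoc.export.cypher.all("/tmp/graph-dump.cypher", {})
```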

Update

You can also try going from 2.2 to 2.3 first, for example using `neo4j-shell`:

add `allow_store_upgrade=true` to your `neo4j.properties` in 2.3,

and then run: `bin/neo4j-shell -path /path/to/db -config conf/neo4j.properties -c quit`

Once that finishes, that copy of your database is on version 2.3.

Then you can use `neo4j-admin import ...`
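Putting those steps together, the whole 2.2 → 2.3 → 3.0 path is roughly the following sketch (all paths are placeholders for your own install and store directories):

```shell
# 1. In the Neo4j 2.3 install, enable the in-place store upgrade:
#      conf/neo4j.properties:  allow_store_upgrade=true
# 2. Open and close the 2.2 store with the 2.3 neo4j-shell to migrate it:
bin/neo4j-shell -path /path/to/db -config conf/neo4j.properties -c quit
# 3. From the Neo4j 3.0 install, copy the migrated store into the new instance:
bin/neo4j-admin import --mode database --from /path/to/db
```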


I recently had the same symptom when my CSV import slowed to a crawl. My `LOAD CSV` Cypher script created too many relationships.

So I split the load into two passes: create the nodes first, then the relationships (handling the most heavily connected nodes separately).
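A sketch of that two-pass load — the CSV file names and column names here are made up for illustration:

```cypher
// Pass 1: create the nodes only.
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:///people.csv" AS row
MERGE (:Person {id: row.id, name: row.name});

// Pass 2: match the existing nodes and create the relationships.
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "file:///knows.csv" AS row
MATCH (a:Person {id: row.from}), (b:Person {id: row.to})
CREATE (a)-[:KNOWS]->(b);
```

Creating relationships between already-indexed nodes is far cheaper than interleaving node and relationship creation in one statement.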

Back to your problem: first, try increasing the memory available to the JVM. The `conf` directory of your Neo4j install contains a wrapper file with the memory settings near the top.
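In Neo4j 2.x that file is `conf/neo4j-wrapper.conf`; the relevant lines look like this (the values are examples — size them to your machine):

```
# conf/neo4j-wrapper.conf (Neo4j 2.x; in 3.0 use dbms.memory.heap.* in neo4j.conf)
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
```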

Finally, you could export the data from the old instance to several CSV files and import those into the new server.
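With Neo4j 3.0's bulk importer, that re-import would look roughly like the following (the file names are placeholders, and the target database must be empty):

```shell
bin/neo4j-admin import --mode csv \
  --nodes /path/to/nodes.csv \
  --relationships /path/to/rels.csv
```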


Source: https://habr.com/ru/post/1011733/

