Berkeley DB Java Edition - tuning for large amounts of data

I need to load over 1 billion keys into Berkeley DB, so I want to tune it in advance to get the best performance. With the standard configuration it takes me about 15 minutes to load 1,000,000 keys, which is too slow. Is there a proper way to configure, for example, Berkeley DB's B+Tree (node size, etc.)?

(As a comparison: after tuning Tokyo Cabinet, it loads 1 billion keys in 25 minutes.)

P.S. I am looking for tips on how to configure it in code, not deployment parameters for the running system (e.g. JVM size, etc.).

+3
2 answers

If TokyoCabinet is loading 1B keys in 25 minutes, what are your key/value sizes? Are the writes actually reaching the disks, or only the filesystem cache? Is it really "loading" 1B keys durably? That works out to ~666,666 inserts per second, which is suspiciously fast for anything that has to touch a disk.

I would suggest reading Gustavo Duarte's posts on how memory and storage actually work; they explain what has to happen before data is truly on disk, and why TokyoCabinet can look that fast. The most likely explanation is that it is writing into the OS page cache and not flushing (fdsync()-ing) the data to disk.

Disclaimer: I work for Oracle on Oracle Berkeley DB (a direct competitor of TokyoCabinet), so keep that in mind when reading my comments.

Berkeley DB gives you full transactional guarantees, and the durability (the "D" in ACID) is configurable, so you can trade it for speed explicitly rather than implicitly.
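
To make that durability point concrete, here is a minimal sketch; it is my own illustration rather than code from the answer, and the environment path and the choice of COMMIT_NO_SYNC are assumptions:

    import com.sleepycat.je.Durability;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    import java.io.File;

    public class DurabilityExample {
        public static void main(String[] args) {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            // COMMIT_NO_SYNC: commits do not force the log to disk, so a crash can
            // lose the most recent transactions; this is roughly the trade-off a
            // non-syncing store makes implicitly. Use COMMIT_SYNC for full durability.
            envConfig.setDurability(Durability.COMMIT_NO_SYNC);

            Environment env = new Environment(new File("/tmp/bdb-env"), envConfig);
            // ... open databases and run transactions here ...
            env.close();
        }
    }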

With Berkeley DB Java Edition (BDB-JE), the main things worth tuning for a bulk load are listed below (a configuration sketch follows the list):

  • deferred write / no transactions: skip the per-record transactional overhead during the load; writes are buffered in memory and flushed explicitly when the load is done
  • insertion order and node size: feed the keys in sorted order so the B-tree (which keeps keys sorted) appends to the rightmost leaf instead of splitting random nodes, and consider a larger node size (fan-out)
  • log file size: the default .jdb log file size is 10MiB; for a dataset this large, raise it to 100MiB or more so the load is not constantly rolling over small log files
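
A minimal configuration sketch along the lines of the list above; it is my own illustration, not code from the answer, and the concrete values (the /tmp/bdb-env path, 100MiB log files, a node size of 512) are assumptions to be tuned for your data:

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    import java.io.File;

    public class BulkLoadSetup {
        public static void main(String[] args) {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(false);                        // no per-record transaction overhead
            envConfig.setConfigParam("je.log.fileMax", "104857600");  // 100MiB log files instead of the 10MiB default

            Environment env = new Environment(new File("/tmp/bdb-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setDeferredWrite(true);   // buffer writes; flush explicitly with sync()
            dbConfig.setNodeMaxEntries(512);   // larger B-tree node size (fan-out) than the default

            Database db = env.openDatabase(null, "keys", dbConfig);

            // ... insert the (ideally pre-sorted) keys here ...

            db.sync();    // make the deferred writes durable before closing
            db.close();
            env.close();
        }
    }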

There are other knobs as well, but those are the ones that matter most for a load of this size.

Hope that helps, good luck.

+6

BDB-JE is very sensitive to how often you commit. If every put is committed individually (the default with auto-commit), the transaction log is synced far too often and throughput collapses. Group the inserts and commit once per batch of, say, 100,000 records; that alone makes a huge difference.
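
A sketch of that batching idea, assuming a transactional environment and database; the helper name and the byte[]-pair input type are my own illustration, with the 100,000 batch size taken from the answer:

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.Transaction;

    import java.util.Map;

    public class BatchLoader {
        /** Inserts all entries, committing once per 100,000 records instead of once per put.
         *  Assumes env and db were opened with setTransactional(true). */
        static void load(Environment env, Database db, Iterable<Map.Entry<byte[], byte[]>> entries) {
            Transaction txn = env.beginTransaction(null, null);
            long count = 0;
            for (Map.Entry<byte[], byte[]> e : entries) {
                db.put(txn, new DatabaseEntry(e.getKey()), new DatabaseEntry(e.getValue()));
                if (++count % 100_000 == 0) {   // batch size from the answer; tune as needed
                    txn.commit();
                    txn = env.beginTransaction(null, null);
                }
            }
            txn.commit();                        // commit the final partial batch
        }
    }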

0

Source: https://habr.com/ru/post/1755355/

