Super Fast File Storage Engine

I basically have one giant table (about 1,000,000,000,000 records) in the database with these fields:

id, block_id, record

id is unique; block_id is not unique: there are at most about 10k rows sharing the same block_id, each with a different record.

To simplify my work related to the database, I have an API similar to this:

Engine e = new Engine(...);
// this method must be thread safe, but with fine-grained locking (per block_id) to improve concurrency
e.add(block_id, "asdf"); // "asdf" is up to 1 KB max

// this must concatenate all the records already added for this block_id; the result won't be bigger than 10 MB (worst case), <5 MB on average
String s = e.getConcatenatedRecords(block_id);

If I map each block_id to a file (I have not done this yet), then each record becomes a line in that file, and I can still keep this API.
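For illustration, here is a minimal sketch of that flat-file mapping, assuming one append-only file per block_id under a base directory and one lock object per block_id for the fine-grained locking mentioned above. The Engine method names come from the question; the file layout and locking details are my own assumptions, not a tested design:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: one append-only file per block_id, one lock object per block_id.
public class FlatFileEngine {
    private final Path baseDir;
    private final ConcurrentHashMap<Long, Object> locks = new ConcurrentHashMap<>();

    public FlatFileEngine(Path baseDir) throws IOException {
        this.baseDir = baseDir;
        Files.createDirectories(baseDir);
    }

    // One lock per block_id: writers to different blocks never block each other.
    private Object lockFor(long blockId) {
        return locks.computeIfAbsent(blockId, id -> new Object());
    }

    private Path fileFor(long blockId) {
        // A real implementation would shard this into subdirectories,
        // since ~10^12 records at ~10k per block means ~10^8 files.
        return baseDir.resolve(Long.toString(blockId));
    }

    public void add(long blockId, String record) throws IOException {
        synchronized (lockFor(blockId)) {
            // Each record (<= 1 KB) becomes one line in the block's file.
            Files.write(fileFor(blockId),
                    (record + "\n").getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    public String getConcatenatedRecords(long blockId) throws IOException {
        synchronized (lockFor(blockId)) {
            Path f = fileFor(blockId);
            if (!Files.exists(f)) return "";
            // Worst case ~10 MB per block, so reading the whole file into memory is fine.
            return new String(Files.readAllBytes(f), StandardCharsets.UTF_8);
        }
    }
}

Note that the lock map grows with the number of distinct block_ids the process touches; with very many blocks you would want to bound or stripe it.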

But will I actually gain performance by using flat files compared to a well-tuned PostgreSQL database (at least for this particular scenario)?

How often will getConcatenatedRecords be called compared to add?


To clarify: the API has to be Java, along the lines of the signatures above.

EDIT: I am open to any kind of project (SQL or NoSQL), it does not have to be a relational database. I have looked at projects such as mongodb, h2database and orientdb, but I am not sure any of them is a clear fit for this scenario.


Is postgres itself really the bottleneck here, or is it the way the table is laid out? Have you tried CLUSTER to physically order the table by block_id? Did that change the read performance?


Does it have to stay in PostgreSQL, or are other storage systems an option?

OpenStack Swift:

In my experience, flat files are very fast for this kind of write-once, read-many workload. In my own benchmarks (with ab), serving the data out of PostgreSQL was noticeably slower than serving it from files. Keep in mind, though, that with flat files you give up the ACID guarantees, so you have to deal with crashes and partial writes yourself.

On the other hand, PostgreSQL gives you indexes and transactions for free, which you would have to reimplement yourself on top of flat files. So measure it for your own workload first and then decide whether the raw speed is worth losing that.
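For comparison, here is a minimal sketch of the same API kept on top of the question's table, using JDBC and PostgreSQL's string_agg to do the concatenation server-side. Only the (id, block_id, record) columns come from the question; the table name records, the database-generated id, the index on block_id and the JDBC wiring are my assumptions:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch only: same add/getConcatenatedRecords API backed by PostgreSQL.
public class PostgresEngine {
    private final Connection conn;

    public PostgresEngine(Connection conn) {
        this.conn = conn;
    }

    public void add(long blockId, String record) throws SQLException {
        // id is assumed to be generated by the database (e.g. a sequence).
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO records (block_id, record) VALUES (?, ?)")) {
            ps.setLong(1, blockId);
            ps.setString(2, record);
            ps.executeUpdate();
        }
    }

    public String getConcatenatedRecords(long blockId) throws SQLException {
        // string_agg concatenates all rows of the block on the server, in id order;
        // an index on block_id is assumed so this does not scan the whole table.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT string_agg(record, chr(10) ORDER BY id) FROM records WHERE block_id = ?")) {
            ps.setLong(1, blockId);
            try (ResultSet rs = ps.executeQuery()) {
                String s = rs.next() ? rs.getString(1) : null;
                return s != null ? s : "";
            }
        }
    }
}

Whether this or the flat-file version wins is exactly the question above; the honest answer is to benchmark both with realistic block sizes and concurrency.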


Source: https://habr.com/ru/post/1524752/

