Filesystem vs SQLite: storing up to 10M files

I would like to store up to 10 million files, up to 2 TB in total. The only properties I need are the file names and their contents (data).

The maximum file size is 100 MB, and most files are under 1 MB. I need the ability to delete files, and read/write speed should be the priority; storage efficiency, recovery features, and integrity guarantees are not needed.

I thought about NTFS, but most of its features are unneeded overhead that cannot be disabled, among them: creation date, modification date, attributes, journaling and, of course, permissions.

Because of these built-in file-system features I don't need, would you suggest using SQLite for this requirement? Or is there an obvious flaw I should know about? (For instance, would deleting files become a daunting task?)

(SQLite will be used via the C API.)

My goal is to pick the more suitable solution for better performance. Thanks in advance - Doori Bar

2 answers

If your primary requirement is performance, go with the plain file system. DBMSs are not well suited to handling large BLOBs, so SQLite is not an option for you (I don't know why everybody treats SQLite as a plug for every hole).

To improve the performance of NTFS (or whatever file system you choose), don't put all the files into a single folder; group them into subfolders by the first N characters of their names, or by extension.

There are also other file systems on the market, and some of them may let you disable unneeded features to improve performance; it is worth checking a comparison of them.

For example (assuming reasonably random file names), grouping by the first four characters gives you up to 26^4 NTFS subfolders, AAAA through ZZZZ, so each folder holds only a small fraction of the files.


SQLite handles this case better than you might expect. For blobs of roughly 10 KB, SQLite is about 35% faster than the file system. Quoting the SQLite documentation:

SQLite reads and writes small blobs (for example, thumbnail images) 35% faster¹ than the same blobs can be read from or written to individual files on disk using fread() or fwrite().

Furthermore, a single SQLite database holding 10-kilobyte blobs uses about 20% less disk space than storing the blobs in individual files.

The performance difference arises (we believe) because when working from an SQLite database, the open() and close() system calls are invoked only once, whereas open() and close() are invoked once for each blob when using blobs stored in individual files. It appears that the overhead of calling open() and close() is greater than the overhead of using the database. The size reduction arises from the fact that individual files are padded out to the next multiple of the filesystem block size, whereas blobs are packed more tightly into an SQLite database.

The measurements in this article were made during the week of 2017-06-05 using a version of SQLite between 3.19.2 and 3.20.0. You may expect future versions of SQLite to perform even better.

You can reproduce these read/write measurements yourself with the kvtest program from the SQLite source tree.


Source: https://habr.com/ru/post/1766797/

