Cassandra LeveledCompactionStrategy and a High SSTables-per-Read Value

We use Cassandra 2.0.17, and we have a table with 50% selects, 40% updates and 10% inserts (no deletions).

To get high read performance on such a table, we found that LeveledCompactionStrategy is recommended (the claim being that 99% of reads will be served from a single SSTable). Yet every day, when I run nodetool cfhistograms, I see more and more SSTables per read. On the first day we had 1, then 1, 2, 3, ...
and this morning I see this:

ubuntu@ip:~$ nodetool cfhistograms prodb groups | head -n 20                                                                                                                                
prodb/groups histograms

SSTables per Read
1 sstables: 27007
2 sstables: 97694
3 sstables: 95239
4 sstables: 3928
5 sstables: 14
6 sstables: 0
7 sstables: 19

DESCRIBE TABLE groups returns this:

CREATE TABLE groups (
  ...
) WITH
  bloom_filter_fp_chance=0.010000 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.100000 AND
  gc_grace_seconds=172800 AND
  index_interval=128 AND
  read_repair_chance=0.000000 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'LeveledCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};

Is this normal? If so, we lose the advantage of using LeveledCompaction, which, as the documentation describes, should guarantee that 99% of reads are served from a single sstable.

Answer:

Regarding your use case: note that LCS is recommended for workloads that are about 90% reads and 10% writes; yours is closer to 50/50.

Even so, with LCS the single-sstable guarantee only holds while compaction keeps up with the write load. If data arrives faster than sstables can be promoted up through the levels, reads start touching more and more sstables. You can see how far behind compaction is in the output of nodetool cfstats.

For example:

SSTables in each level: [2042/4, 10, 119/100, 232, 0, 0, 0, 0, 0]

This line shows the number of sstables in each level: [L0, L1, L2, ...]. Each level is ten times the size of the previous one: L1 should hold up to 10 sstables, L2 up to 100, L3 up to 1000, and so on. When a level holds more sstables than it should, the count is shown as current/limit, so 2042/4 means L0 contains 2042 sstables where at most 4 are expected.
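For illustration, a small Python sketch that checks counts against those per-level limits (assuming the default L0 threshold of 4 sstables):

# Per-level sstable limits under LCS: L0 is kept to ~4 sstables,
# and each level above may hold ten times more than the previous.
def max_sstables(level):
    return 4 if level == 0 else 10 ** level

# Counts taken from the cfstats example above.
observed = [2042, 10, 119, 232, 0, 0, 0, 0, 0]

for level, count in enumerate(observed):
    limit = max_sstables(level)
    status = "BEHIND" if count > limit else "ok"
    print("L%d: %4d / %-4d %s" % (level, count, limit, status))

This flags exactly the levels that cfstats prints with a slash: L0 (2042/4) and L2 (119/100).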

Sstables land in L0 as memtables are flushed, and their key ranges can overlap, so a read may have to consult every sstable in L0; in the levels above, sstables do not overlap, so a read touches at most one sstable per level. A node that is 2000 sstables behind in L0, as in the example, will therefore serve reads far more slowly than it would under STCS.
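To put a number on it: in the worst case a read checks every sstable in L0 plus at most one per non-empty higher level (bloom filters prune most candidates, but each one still costs work). A quick sketch with the counts from the example:

# Worst-case sstables one read may have to consult under LCS:
# all of L0 (overlapping ranges) + one per non-empty higher level.
levels = [2042, 10, 119, 232, 0, 0, 0, 0, 0]

worst = levels[0] + sum(1 for count in levels[1:] if count > 0)
print("worst-case sstables per read:", worst)  # 2042 + 3 = 2045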

Nodetool cfstats will tell you whether LCS is keeping up on your cluster. Run it regularly, say every 15 minutes, and watch the first number in that list. If it stays near its limit, compaction is coping; if it keeps growing, the node cannot compact as fast as you write, and reads degrade towards (or below) STCS behaviour. With roughly half of your operations being writes rather than the 10% LCS is designed for, that is quite likely, and in that case LCS is simply the wrong strategy for this table.
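A minimal sketch of such a check in Python (keyspace and table names are taken from the question; the L0 threshold of 4 and the parsing of the cfstats output are assumptions):

import re
import subprocess

KEYSPACE, TABLE = "prodb", "groups"  # names from the question

# Grab the per-level sstable counts from `nodetool cfstats`.
out = subprocess.check_output(
    ["nodetool", "cfstats", "%s.%s" % (KEYSPACE, TABLE)]).decode()
match = re.search(r"SSTables in each level:\s*\[([^\]]+)\]", out)

if match:
    # Entries look like "2042/4" (over the limit) or just "10".
    counts = [int(e.split("/")[0]) for e in match.group(1).split(",")]
    l0 = counts[0]
    print("sstables in L0: %d" % l0)
    if l0 > 4:  # L0 should hover around 4; growth means compaction is behind
        print("WARNING: LCS is not keeping up with the write load")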

Note also that starting with 2.1, when L0 falls behind, Cassandra compacts L0 with STCS until it catches up, which softens exactly this failure mode. Upgrading is worth considering.
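And if you do conclude that LCS is the wrong fit here, switching the table back to STCS is a single schema change; a minimal sketch with the DataStax Python driver (contact point assumed, keyspace and table from the question):

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])      # assumed contact point
session = cluster.connect("prodb")

# Switch the table back to SizeTieredCompactionStrategy; existing
# sstables are reorganized by later compactions, not rewritten at once.
session.execute(
    "ALTER TABLE groups WITH compaction = "
    "{'class': 'SizeTieredCompactionStrategy'}")

cluster.shutdown()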


Source: https://habr.com/ru/post/1656771/

