Determining the optimal encoding scale for a BigDecimal

I need to encode a BigDecimal compactly in a ByteBuffer, to replace my current (garbage) encoding scheme (the BigDecimal written as a UTF-8 String, with a prefix byte giving the String's length).

Given that a BigDecimal is really an integer value (in the mathematical sense) with an associated scale, I plan to write the scale as one byte, followed by the VLQ-encoded unscaled integer. This should comfortably cover the range of expected values (i.e., a maximum scale of 127).
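The scheme described above can be sketched as follows. This is a minimal illustration under two assumptions not stated in the question: the unscaled value fits in a `long` (real BigDecimals can hold arbitrary-precision `BigInteger` unscaled values), and the VLQ uses the common zig-zag-plus-varint form (7 payload bits per byte, high bit as the continuation flag); the class name `BigDecimalCodec` is hypothetical.

```java
import java.math.BigDecimal;
import java.nio.ByteBuffer;

public class BigDecimalCodec {

    // Writes one signed scale byte, then the unscaled value as a
    // zig-zag VLQ. Assumes the unscaled value fits in a long.
    public static void encode(BigDecimal value, ByteBuffer buf) {
        int scale = value.scale();
        if (scale < Byte.MIN_VALUE || scale > Byte.MAX_VALUE)
            throw new IllegalArgumentException("scale out of range: " + scale);
        buf.put((byte) scale);
        long v = value.unscaledValue().longValueExact();
        long zz = (v << 1) ^ (v >> 63);      // zig-zag: signed -> unsigned
        while ((zz & ~0x7FL) != 0) {
            buf.put((byte) ((zz & 0x7F) | 0x80));  // continuation bit set
            zz >>>= 7;
        }
        buf.put((byte) zz);                  // final byte, high bit clear
    }

    public static BigDecimal decode(ByteBuffer buf) {
        int scale = buf.get();
        long zz = 0;
        int shift = 0;
        byte b;
        do {
            b = buf.get();
            zz |= (long) (b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        long v = (zz >>> 1) ^ -(zz & 1);     // undo zig-zag
        return BigDecimal.valueOf(v, scale);
    }
}
```

With this layout, a value like 1E+10 (unscaled value 1, scale -10) costs only two bytes: one for the scale and one VLQ byte.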

My question is this: for a large value such as 10,000,000,000, it is clearly better to encode an unscaled value of 1 with a scale of -10 than an unscaled value of 10,000,000,000 with a scale of 0 (which takes up more bytes). How do I determine the optimal scale for a given BigDecimal? In other words, how can I determine the minimum possible scale at which a BigDecimal can be represented without any rounding?
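For what it's worth, the JDK has a method for exactly this: `BigDecimal.stripTrailingZeros()` returns a numerically equal BigDecimal with the smallest scale that needs no rounding (for 10,000,000,000 that is unscaled value 1, scale -10). A small sketch (the helper name `minimalScale` is my own; the zero check works around older JDKs, where `stripTrailingZeros()` left values like 0.000 unchanged):

```java
import java.math.BigDecimal;

public class MinimalScale {

    // Returns a numerically equal BigDecimal with the smallest scale
    // that represents the value exactly (no rounding).
    public static BigDecimal minimalScale(BigDecimal value) {
        // Before Java 8, stripTrailingZeros() returned values that are
        // numerically zero (e.g. 0.000) unchanged, so normalize zero.
        return value.signum() == 0 ? BigDecimal.ZERO
                                   : value.stripTrailingZeros();
    }
}
```

For example, `minimalScale(new BigDecimal("10000000000"))` yields 1E+10 (scale -10), and `minimalScale(new BigDecimal("123.4500"))` yields 123.45 (scale 2).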

Please do not include the term “premature optimization” in your responses :-)


Source: https://habr.com/ru/post/899778/

