Redis - Data storage approach in Redis :: JSON string or serialized POJO

I have a class as shown below:

  public class Person {
      public String name;
      public String age;
  }

I'm a little confused about how I should save a Person object in Redis:

Should I use Java serialization/deserialization for the object, or should I convert it to JSON before saving (and convert back when reading)?

Any thoughts on the points below:

  • The cost of Java serialization/deserialization vs. the cost of converting to and from JSON
  • The memory footprint in Redis of a JSON string vs. a serialized object
  • Compression: stream vs. data

    What kind of compression, if any, do we need? Applying DATA compression seems a bit far-fetched, since we are using a Redis hash.

Some of the assumptions:

  • The POJO contains many instance variables
  • A Redis hash will be used to store the object (see the sketch after this list)
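
A minimal sketch of the two variants under comparison, assuming the Jedis client and Jackson (neither is prescribed by the question; the key and field names are invented for illustration). It writes the same Person once as a JSON string and once as Java-serialized bytes into Redis hash fields:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import redis.clients.jedis.Jedis;

    public class PersonStorageSketch {

        // Serializable is only needed for the Java-serialization variant.
        public static class Person implements Serializable {
            public String name;
            public String age;
        }

        public static void main(String[] args) throws Exception {
            Person p = new Person();
            p.name = "Alice";
            p.age = "30";

            ObjectMapper mapper = new ObjectMapper();

            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Variant 1: store the object as a JSON string in a hash field.
                jedis.hset("persons:json", "alice", mapper.writeValueAsString(p));
                Person fromJson = mapper.readValue(
                        jedis.hget("persons:json", "alice"), Person.class);

                // Variant 2: store the Java-serialized bytes in a binary hash field.
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                    oos.writeObject(p);
                }
                jedis.hset("persons:ser".getBytes(), "alice".getBytes(), bos.toByteArray());
                byte[] raw = jedis.hget("persons:ser".getBytes(), "alice".getBytes());
                Person fromBytes;
                try (ObjectInputStream ois =
                        new ObjectInputStream(new ByteArrayInputStream(raw))) {
                    fromBytes = (Person) ois.readObject();
                }

                System.out.println(fromJson.name + " / " + fromBytes.name);
            }
        }
    }

Since a Redis hash is planned anyway, a third option worth keeping in mind is to flatten the POJO into one hash field per instance variable and skip whole-object serialization entirely.
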
2 answers

You should consider using MessagePack: it is fully compatible with Redis and Lua, and it works as an excellent compressed form of JSON: http://msgpack.org/

It does mean a bit of Lua code to pack and unpack the values, but the cost should be small. Here is an example: http://gists.fritzy.io/2013/11/06/store-json-as-msgpack
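
A minimal sketch of that idea driven from Java, assuming the Jedis client; the Lua scripts rely on the cjson and cmsgpack libraries that ship with Redis scripting, and the key/field names are made up for the example (error handling for missing fields is omitted):

    import redis.clients.jedis.Jedis;

    public class MsgPackInRedisSketch {

        // Store: decode the JSON argument in Lua and keep it in the hash as MessagePack.
        private static final String PACK_SCRIPT =
            "return redis.call('HSET', KEYS[1], ARGV[1], cmsgpack.pack(cjson.decode(ARGV[2])))";

        // Load: unpack the MessagePack value and hand it back to the client as JSON.
        private static final String UNPACK_SCRIPT =
            "return cjson.encode(cmsgpack.unpack(redis.call('HGET', KEYS[1], ARGV[1])))";

        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                String json = "{\"name\":\"Alice\",\"age\":\"30\"}";
                jedis.eval(PACK_SCRIPT, 1, "persons:msgpack", "alice", json);
                Object back = jedis.eval(UNPACK_SCRIPT, 1, "persons:msgpack", "alice");
                System.out.println(back); // JSON reconstructed from the MessagePack bytes
            }
        }
    }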

Here is a small benchmark, although it is lacking some data: https://gist.github.com/muga/1119814

Nevertheless, it could be a great option for you: you can use it from different languages, it is fully supported in Redis, and it is modeled on JSON.


Answer: you have to measure it for your use cases and environment. I would try JSON first, as it is more universal and less problematic, i.e. easier to debug and to recover corrupted data.

Performance. JSON serialization is fast, so in many scenarios it will not be your bottleneck; most likely that will be disk or network IO (see the java serialization benchmarking results). Avoid standard Java serialization, as it is slow. Kryo is an option for binary output. If you need a cross-platform binary format, consider the database's internal format or, for example, Google Protocol Buffers.
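
A minimal Kryo sketch (the Person class, class registration and buffer handling are assumptions for illustration, not part of the original answer; registration is required by default in Kryo 5):

    import java.io.ByteArrayOutputStream;

    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;

    public class KryoSketch {

        public static class Person {
            public String name;
            public String age;
        }

        public static void main(String[] args) {
            Kryo kryo = new Kryo();
            kryo.register(Person.class);

            Person p = new Person();
            p.name = "Alice";
            p.age = "30";

            // Serialize to a compact binary form suitable for a Redis hash value.
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            Output output = new Output(bos);
            kryo.writeObject(output, p);
            output.close();
            byte[] bytes = bos.toByteArray();

            // Deserialize.
            Person copy = kryo.readObject(new Input(bytes), Person.class);
            System.out.println(copy.name + ", " + bytes.length + " bytes");
        }
    }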

Compression. Google uses Snappy for compression with low CPU cost; Snappy is also used in Cassandra, Hadoop and Hypertable. Some tests of JVM compressors: Compression test using the Calgary corpus dataset.
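
A hedged sketch using the snappy-java library (payload and class name invented for illustration); note that very small values may not shrink at all because of the format overhead, so measure on your real data:

    import java.nio.charset.StandardCharsets;

    import org.xerial.snappy.Snappy;

    public class SnappySketch {
        public static void main(String[] args) throws Exception {
            // A JSON payload as it might be stored in a Redis hash field.
            String json = "{\"name\":\"Alice\",\"age\":\"30\",\"notes\":\"...many more fields...\"}";
            byte[] raw = json.getBytes(StandardCharsets.UTF_8);

            // Compress before writing to Redis, decompress after reading.
            byte[] compressed = Snappy.compress(raw);
            byte[] restored = Snappy.uncompress(compressed);

            System.out.println(raw.length + " -> " + compressed.length + " bytes");
            System.out.println(new String(restored, StandardCharsets.UTF_8).equals(json));
        }
    }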


Source: https://habr.com/ru/post/957722/

