I implemented a Hazelcast service that stores its data in local MapDB instances via MapStoreFactory and newMapLoader, so that the keys can be reloaded after a cluster restart:
public class HCMapStore<V> implements MapStore<String, V> {

    private final DB db;
    private final Map<String, V> map;

    /**
     * Pass in the MapDB instance, e.g. created via
     * DBMaker.newFileDB(new File("mapdb")).closeOnJvmShutdown().make()
     */
    public HCMapStore(DB db) {
        this.db = db;
        this.map = db.createHashMap("someMapName").<String, V>makeOrGet();
    }

    // some other store methods are omitted

    @Override
    public void delete(String k) {
        logger.info("delete, " + k);
        map.remove(k);
        db.commit();
    }

    // MapLoader methods

    @Override
    public V load(String key) {
        logger.info("load, " + key);
        return map.get(key);
    }

    @Override
    public Set<String> loadAllKeys() {
        logger.info("loadAllKeys");
        return map.keySet();
    }

    @Override
    public Map<String, V> loadAll(Collection<String> keys) {
        logger.info("loadAll, " + keys);
        Map<String, V> partialMap = new HashMap<>();
        for (String k : keys) {
            partialMap.put(k, map.get(k));
        }
        return partialMap;
    }
}
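For context, a store like this is typically wired up per map through the map-store section of the Hazelcast XML config, using a factory so each member builds its store on top of its own local MapDB file. A minimal sketch; the factory class name and package are assumptions, not part of my actual setup:

```xml
<hazelcast>
  <map name="someMapName">
    <map-store enabled="true">
      <!-- factory creating one HCMapStore per member, backed by a local MapDB file -->
      <factory-class-name>com.example.HCMapStoreFactory</factory-class-name>
    </map-store>
  </map>
</hazelcast>
```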
The problem I'm currently facing is that the loadAllKeys method of Hazelcast's MapLoader interface requires ALL keys of the entire cluster to be returned, but each node stores ONLY the objects that belong to it.
Example: I have two nodes and 8 objects are stored, so e.g. 5 objects land in the MapDB of node1 and 3 in the MapDB of node2. Which object belongs to which node is decided by Hazelcast, I think. Now on restart, node1 will return 5 keys from loadAllKeys and node2 will return 3. Hazelcast decides to ignore the 3 elements and the data is "lost".
What could be a good solution to this?
Update for the bounty: Here is what I asked on the hc mailing list, which yielded 2 options (I'll add 1 more), and I would like to know if something like this is possible with Hazelcast 3.2 or 3.3:
Currently, the MapStore interface only receives data and updates from the local node. Would it be possible to have the MapStore interface notified of every storage action in the entire cluster? Or is this perhaps already possible with some listener magic? Maybe I can force Hazelcast to put all objects into one partition and keep 1 copy on every node.
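On the "1 copy on every node" idea: Hazelcast can keep backup copies of every partition via the map's backup-count setting, so with n members a backup-count of n-1 makes every member hold all entries. A sketch for the two-node example above (whether this interacts with MapStore the way I need is exactly what I'm asking; the map name is the one from my code):

```xml
<map name="someMapName">
  <!-- with 2 members, 1 sync backup means every member holds every entry -->
  <backup-count>1</backup-count>
</map>
```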
If I restart e.g. 2 nodes, then the MapStore interface is correctly called with my local databases, first for node1 and then for node2. But when both nodes join, the data of node2 is discarded, since Hazelcast assumes that only the master node can be right. Could I teach Hazelcast to accept the data from both nodes?
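To make option 3 concrete: what I would want is for loadAllKeys to effectively answer with the union of what all local stores know, so neither node's entries are discarded. A minimal, Hazelcast-free sketch of just that merge logic (the two HashMaps stand in for the two local MapDB files; all names are illustrative, this is not an API Hazelcast offers out of the box):

```java
import java.util.*;

public class MergeSketch {

    // the union of all local key sets: no node's entries get discarded
    static Set<String> loadAllKeys(List<Map<String, String>> localStores) {
        Set<String> all = new TreeSet<>();
        for (Map<String, String> store : localStores) {
            all.addAll(store.keySet());
        }
        return all;
    }

    public static void main(String[] args) {
        // stand-ins for the two local MapDB-backed stores after a restart
        Map<String, String> node1Store = new HashMap<>();
        Map<String, String> node2Store = new HashMap<>();
        for (int i = 1; i <= 5; i++) node1Store.put("k" + i, "v" + i);
        for (int i = 6; i <= 8; i++) node2Store.put("k" + i, "v" + i);

        Set<String> keys = loadAllKeys(Arrays.asList(node1Store, node2Store));
        System.out.println(keys.size()); // 8, i.e. nothing is lost
    }
}
```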