Instead of trying to synchronize cached data between two server instances, why not centralize the caching using something like memcached/couchbase or redis? Distributed caching with something like ehcache is far more complicated and error-prone, IMO, than centralizing the cached data in a caching server like the ones mentioned.
As an addendum to my original answer: when deciding which caching approach to use (in-memory vs. centralized), you should also consider the volatility of the data being cached.
If the data is stored in the database but doesn't change after the servers load it, then you don't need any synchronization between servers at all. Just let each server load that static data into memory from the source at startup and then go about its merry way doing whatever it does. The data won't change, so there's no need to introduce a complex pattern for keeping it in sync across servers. A minimal sketch of this approach is below.
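Here's what that could look like in Java; the class and the reference data are hypothetical, just to show the shape of the pattern:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Each server loads immutable reference data once at startup.
// No cross-node synchronization is needed because the data never changes.
public class StaticReferenceCache {
    private static final Map<String, String> COUNTRY_NAMES = new ConcurrentHashMap<>();

    // Call once during application startup (e.g., from a ServletContextListener),
    // passing rows queried from the source database.
    public static void load(Map<String, String> rows) {
        COUNTRY_NAMES.putAll(rows);
    }

    public static String countryName(String code) {
        return COUNTRY_NAMES.get(code);
    }
}
```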
If there is some real level of volatility in the data (say you're caching entity lookups from the database to save database hits), I still think centralized caching is a better approach than distributed, synchronized caching. Just make sure you put an appropriate TTL on the cached data so that it gets refreshed naturally from time to time. Beyond that, when the update path for a particular entity runs, simply delete the cached entry from the centralized store and let it be reloaded from the database the next time that data is requested. This, IMO, is better than trying to do a true write-through cache, where you write to the backing store and to the cache together. The database itself may make changes to the data (e.g., via defaulted values), and your cached data could then disagree with what's in the database. A sketch of the TTL-plus-eviction pattern follows.
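As an illustration only, assuming Redis via the Jedis client; the key scheme, the 5-minute TTL, and the two db helper stubs are made up for the example:

```java
import redis.clients.jedis.Jedis;

// TTL-based caching plus delete-on-update: the database stays the source
// of truth, and evicted entries reload lazily on the next read.
public class UserCache {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getUser(String id) {
        String key = "user:" + id;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                  // cache hit
        }
        String fresh = loadUserFromDb(id);  // cache miss: hit the database
        jedis.setex(key, 300, fresh);       // cache with a 5-minute TTL
        return fresh;
    }

    public void updateUser(String id, String data) {
        writeUserToDb(id, data);            // write to the database first
        jedis.del("user:" + id);            // evict; next read reloads fresh data
    }

    // Hypothetical stubs standing in for real data-access code.
    private String loadUserFromDb(String id) { return "..."; }
    private void writeUserToDb(String id, String data) { }
}
```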
EDIT
A question came up in the comments about the benefits of centralized caching over a replicated/distributed in-memory cache. I'll offer my opinion, but first a standard disclaimer. Centralized caching is not a cure-all. It aims to solve specific problems associated with in-jvm-memory caching. Before evaluating whether to switch to it, you should understand what your problems actually are in the first place and whether they match the benefits of centralized caching. Centralized caching is an architectural change and comes with its own issues/caveats. Don't switch just because someone says it's better than what you're doing; make sure the cure fits the problem.
Ok, now on to what problems centralized caching can solve, in my opinion, versus in-jvm-memory (and possibly replicated) caching. I'm going to list two, though I'm sure there are more. My two big ones are Total Memory and Data Synchronization issues.
Starting with Total Memory. Say you're doing standard entity caching to protect your relational database from undue stress, and say you have a lot of data to cache in order to really protect it; we're talking in the range of many GBs. If you do in-jvm-memory caching and you have, say, 10 application server boxes, you'd need to buy that extra memory ($$$) ten times, once for each box that caches in jvm memory. On top of that, you'll have to allocate a larger heap to your JVMs to accommodate the cached data. I'm of the opinion that JVM heaps should be small and streamlined to ease garbage collection. If you have big chunks of Old Gen that can't be collected, you're going to stress your garbage collector when it goes into a full GC and tries to reclaim something from that bloated Old Gen space. You want to avoid long GC pause times, and bloating your Old Gen won't help with that. Plus, if your memory requirement climbs above a certain threshold and you happen to be running 32-bit machines for your app tier, you'll have to upgrade to 64-bit machines, which can be another prohibitive cost.
Now, if you decide to centralize the cached data instead (using something like Redis or Memcached), you could significantly reduce the overall memory footprint of the cached data, because it's shared across a couple of boxes instead of duplicated on every box in the app-server tier (caching the data on 10 app boxes means roughly 10 times the RAM of one shared copy). You'd probably want a clustered approach (both technologies support it) with at least two servers, to give you high availability and avoid a single point of failure in your caching layer (more on that in a second). By needing only a couple of machines to support the required memory footprint, you can save considerable $$$. Also, you can now tune the app boxes and the cache boxes differently, since they serve different purposes: the app boxes tuned for high throughput and low heap, the cache boxes for large memory. And having smaller heaps will definitely help the overall throughput of the app-tier boxes.
Now, one quick point about centralized caching in general. You have to build your application so that it can survive without the cache if it goes away entirely for some period of time. With traditional entity caching, that means when the cache becomes completely unreachable you just hit your database directly for every request. Not awesome, but not the end of the world either. A sketch of that fallback is below.
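Something along these lines, again assuming Jedis; the db helper stub is hypothetical:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisConnectionException;

// Degrade gracefully when the cache is unreachable: fall back to the
// database rather than failing the request.
public class ResilientLookup {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String lookup(String key) {
        try {
            String cached = jedis.get(key);
            if (cached != null) {
                return cached;
            }
        } catch (JedisConnectionException e) {
            // Cache is down: log it and fall through to the database.
        }
        return loadFromDb(key); // slower, but the request still succeeds
    }

    // Hypothetical stub standing in for real data-access code.
    private String loadFromDb(String key) { return "..."; }
}
```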
Ok, now for the Data Synchronization issues. With replicated in-jvm-memory caching, you need to keep the caches in sync. A change to cached data on one node has to be replicated to the other nodes and reconciled with their cached data. This approach is a little scary in that if, for some reason (a network failure, say), one of the nodes falls out of sync, then when a request hits that node the user will see data that doesn't match what's currently in the database. Even worse, if they make another request and it lands on a different node, they'll see different data again, which is confusing. Centralizing the data eliminates this problem. Now, you could argue that a centralized cache needs concurrency control around updates to the same cached key: if two concurrent updates come in for the same key, how do you make sure they don't stomp on each other? My thought here is to not even bother; when an update happens, drop the item from the cache (and write through to the database) and let it be reloaded on the next read. It's safer and easier. If you don't want to do that, you can instead use CAS (Check-And-Set) functionality for optimistic concurrency control, if you really do want to update both the cache and the db on updates. A sketch of a CAS-style update follows.
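memcached exposes check-and-set directly via its cas command; in Redis the closest equivalent is the WATCH/MULTI/EXEC optimistic pattern. A minimal sketch with Jedis, assuming string values and a made-up key:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

// Optimistic concurrency in Redis via WATCH/MULTI/EXEC, roughly
// equivalent to memcached's check-and-set.
public class OptimisticUpdate {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Returns true only if the key still held `expected` when we wrote.
    public boolean compareAndSet(String key, String expected, String newValue) {
        jedis.watch(key);                   // EXEC aborts if key changes after this
        if (!expected.equals(jedis.get(key))) {
            jedis.unwatch();                // value already changed; give up
            return false;
        }
        Transaction tx = jedis.multi();
        tx.set(key, newValue);
        return tx.exec() != null;           // null reply => a concurrent write won
    }
}
```

On a false return you'd typically re-read the current value and retry the update.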
So, to summarize: you can save money and tune your app-tier machines better if you centralize the data they cache. You also get better accuracy of that data, because there are fewer data synchronization issues to deal with. Hope this helps.