I know this is unfair to the Terracotta guys, but has anyone tried using Hazelcast to run scheduled tasks in a clustered environment?
The simplest implementation I can imagine has the following architecture:
- Use a Hazelcast global lock so that only one server fires the Quartz triggers.
- Run the actual jobs as DistributedTasks. (This can come later; for the time being, heavy scheduled jobs will have to take care of launching a DistributedTask themselves.)
- As soon as the server holding the lock shuts down, another server acquires the lock.
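The election step above can be sketched in plain Java. This is a minimal, single-JVM simulation: `ReentrantLock` stands in for the distributed lock (Hazelcast's `ILock` implements the same `java.util.concurrent.locks.Lock` interface, so the pattern is identical against a real cluster), and an `AtomicInteger` stands in for "started the Quartz scheduler". The class and field names are my own, not from the post.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MasterElectionSketch {
    // Stand-in for Hazelcast.getLock("quartz"). Hazelcast's distributed
    // ILock implements java.util.concurrent.locks.Lock, so the election
    // pattern below carries over unchanged to a real cluster.
    private final Lock masterLock;
    private final AtomicInteger masters; // how many nodes believe they are master

    MasterElectionSketch(Lock masterLock, AtomicInteger masters) {
        this.masterLock = masterLock;
        this.masters = masters;
    }

    // Each node calls this once at startup, mirroring the overridden
    // SchedulerFactoryBean.start(): the thread blocks on the lock, and only
    // the winner would go on to start the Quartz scheduler.
    void start() {
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                masterLock.lock();         // blocks until the current master releases it
                masters.incrementAndGet(); // this node now fires the triggers
            }
        });
        t.setDaemon(true); // losing nodes stay blocked; don't keep the JVM alive
        t.start();
    }

    public static void main(String[] args) throws InterruptedException {
        Lock lock = new ReentrantLock();
        AtomicInteger masters = new AtomicInteger();
        for (int i = 0; i < 3; i++) {
            new MasterElectionSketch(lock, masters).start();
        }
        Thread.sleep(500);
        // Only one of the three simulated nodes ever acquires the lock.
        System.out.println("masters=" + masters.get());
    }
}
```

With a real Hazelcast lock, a node crash (not just a clean shutdown) also releases the lock, which is what makes the failover in the third bullet work.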
I believe this would be a great advantage for people who already run Hazelcast, since they would avoid all the environmental overhead of rolling out Terracotta everywhere.
At the moment I have coded the simplest solution: a single node is responsible for firing the Quartz triggers. Since I only use cron triggers, this may be an acceptable solution as long as I take care of creating DistributedTasks for the heavy trigger jobs.
Here is my org.springframework.scheduling.quartz.SchedulerFactoryBean extension that does this:
@Override
public void start() throws SchedulingException {
    new Thread(new Runnable() {
        @Override
        public void run() {
            // Block until this node owns the cluster-wide lock; only the
            // lock holder actually starts the scheduler and fires triggers.
            final Lock lock = getLock();
            lock.lock();
            log.warn("This node is the master Quartz");
            SchedulerFactoryBean.super.start();
        }
    }).start();
    log.info("Starting..");
}

@Override
public void destroy() throws SchedulerException {
    super.destroy();
    // Release the lock so another node can become the master.
    getLock().unlock();
}
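The "hand heavy work to the cluster" part mentioned above can also be sketched. This is a hypothetical stand-in, not the actual Hazelcast API usage: a local `ExecutorService` plays the role of `Hazelcast.getExecutorService()`, to which you would submit the callable wrapped in a `com.hazelcast.core.DistributedTask` so that any member can run it, not just the Quartz master. The class names `HeavyJobDispatch` and `HeavyWork` are my own.

```java
import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HeavyJobDispatch {
    // Stand-in for Hazelcast.getExecutorService(); in the real setup the
    // callable would be wrapped in a DistributedTask before submission.
    static final ExecutorService CLUSTER = Executors.newFixedThreadPool(2);

    // The work must be Serializable in the real Hazelcast case, because it
    // may travel over the wire to another member.
    static class HeavyWork implements Callable<String>, Serializable {
        @Override
        public String call() {
            return "done"; // placeholder for the actual heavy job
        }
    }

    // Called from the cron-triggered Quartz job on the master node: the
    // trigger only dispatches, and the cluster does the heavy lifting.
    static Future<String> dispatch() {
        return CLUSTER.submit(new HeavyWork());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(dispatch().get()); // prints "done"
        CLUSTER.shutdown();
    }
}
```

This keeps the master node cheap: it only evaluates cron triggers and submits tasks, so losing it mid-job doesn't lose the job's execution host.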
Please let me know if I am missing something big and if it can be done.
I have added two files to GitHub. Here is the RAMJobStore extension:
https://github.com/mufumbo/quartz-hazelcast/blob/master/src/main/java/com/mufumbo/server/scheduler/hazelcast/HazelcastRAMJobStore.java
And here is the Spring SchedulerFactoryBean extension:
https://github.com/mufumbo/quartz-hazelcast/blob/master/src/main/java/com/mufumbo/server/scheduler/hazelcast/SchedulerFactoryBean.java