After some searching, I found that there are two ways to handle auto scaling (AS) for job workers: through the AS API, or by managing the scaling yourself.
One way is to manage the health of the server directly from the worker. Quite a few sites do this, and it is effective: when your worker finds no more jobs, or detects redundancy in the system, it marks the server it is running on as unhealthy. The AS API then comes along and automatically cleans it up after a while.
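As a minimal sketch of that flow (assuming boto3; `is_redundant`, the idle limit of 5, and the other names are my own illustrative choices, not from the original answer):

```python
def is_redundant(consecutive_empty_polls: int, idle_limit: int = 5) -> bool:
    """True once the worker has polled an empty queue often enough
    in a row to consider itself surplus."""
    return consecutive_empty_polls >= idle_limit

def mark_self_unhealthy(instance_id: str) -> None:
    """Report this instance as unhealthy to its Auto Scaling group;
    the AS API will then terminate and replace or remove it."""
    import boto3  # imported here so the decision logic above has no AWS dependency
    autoscaling = boto3.client("autoscaling")
    autoscaling.set_instance_health(
        InstanceId=instance_id,
        HealthStatus="Unhealthy",
        ShouldRespectGracePeriod=True,
    )
```

The worker's receive loop would count consecutive empty receives and call `mark_self_unhealthy` with its own instance ID once `is_redundant` comes back true.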
With this method you would also have a scaling policy based on your SQS queue size over a period of time (for example: if the SQS queue holds more than 100 messages for 5 minutes, add 2 servers; if it exceeds 500 messages for 10 minutes, increase capacity by 50%). Scaling in is done from code rather than through an active policy.
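That kind of policy can be sketched as a pure decision function plus an SQS depth probe (a sketch assuming boto3; the thresholds and the 50% step mirror the example above and are tuning knobs, not fixed rules):

```python
def scale_out_delta(depth: int, current_capacity: int) -> int:
    """How many servers to add for a given queue depth: over 500 messages
    grows the pool by 50% (at least 1); over 100 adds a flat 2."""
    if depth > 500:
        return max(current_capacity // 2, 1)
    if depth > 100:
        return 2
    return 0

def queue_depth(queue_url: str) -> int:
    """Approximate number of visible messages in the SQS queue."""
    import boto3
    sqs = boto3.client("sqs")
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])
```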
This method works with zero-size clusters, so you can scale your cluster down to no servers at all when it is not being used, which makes it quite cost-effective.
Benefits:
- Easy setup
- Using AWS API Features
- Probably the fastest way to configure
- Using AWS managed APIs to manage cluster size for you
Disadvantages:
- It is difficult to manage outside of the full AWS API; for example, once a new server is launched you cannot easily find it again without running a full API describe call across all of your instances. There are other cases where the AWS AS API gets in your way and makes life a little harder if you want some element of self-control over your cluster.
- Amazon decides what is best for your wallet. You are relying on the Amazon API to scale properly; for many that is an advantage, but for some it is a drawback.
- The worker must contain your server-pool code, which means the worker is not generic and cannot simply be moved to another cluster without configuration changes.
With this in mind, there is a second option: DIY. You use the EC2 spot instance and on-demand instance APIs to build your own AS API based on your custom rules. This is pretty simple to explain:
- You have a CLI script that, when launched, starts, say, 10 servers
- You have a cronjob that, when it detects certain conditions, terminates servers or spins up more
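The cronjob's reconcile step could look like this (a sketch assuming boto3; `jobs_per_server`, the bounds, the AMI, and the instance type are placeholders you would pick for your own workload):

```python
import math

def desired_pool_size(depth: int, jobs_per_server: int = 50,
                      min_servers: int = 0, max_servers: int = 10) -> int:
    """Map SQS queue depth to a target pool size, clamped to bounds."""
    target = math.ceil(depth / jobs_per_server)
    return max(min_servers, min(target, max_servers))

def reconcile(running_ids: list, target: int, ami: str, instance_type: str) -> None:
    """Launch or terminate EC2 instances until the pool matches `target`."""
    import boto3
    ec2 = boto3.client("ec2")
    if len(running_ids) < target:
        n = target - len(running_ids)
        ec2.run_instances(ImageId=ami, InstanceType=instance_type,
                          MinCount=n, MaxCount=n)
    elif len(running_ids) > target:
        ec2.terminate_instances(InstanceIds=running_ids[target:])
```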
Benefits:
- Simplicity and ease of use.
- Can create generic workers.
- One server pool can manage multiple clusters
- You can make your own rules, and it is not difficult to pull numbers from AWS metrics and compare them against thresholds and time ranges to decide whether something should change.
Disadvantages:
- Difficult to make multi-region (not so bad for SQS, since SQS is single-region)
- It is hard to cope with capacity and workload errors in a region on your own.
- You must rely on your own server's uptime and your own code to make sure the cronjob keeps running as it should, provisioning servers when they are needed and tearing them down when they are not.
So the choice really comes down to what is more convenient for the end user. I personally weighed the two and built a small self-contained server pooler that works for me, but at the same time I am tempted to try and get this working on the AWS AS API itself.
Hope this helps people.
EDIT: Please note that with either of these methods you will still need a function on your side to predict how you should bid, so you will need to call the spot price history API for your instance type (via EC2) and calculate your bid from it.
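A sketch of that bid calculation (assuming boto3; the 20% headroom over the recent peak is my own assumption, not a recommendation from the original answer):

```python
def suggest_bid(recent_prices: list, headroom: float = 1.2) -> float:
    """Bid the recent peak spot price plus a safety margin."""
    return round(max(recent_prices) * headroom, 4)

def recent_spot_prices(instance_type: str, hours: int = 24) -> list:
    """Spot price history for one EC2 instance type over the last N hours."""
    import boto3
    from datetime import datetime, timedelta, timezone
    ec2 = boto3.client("ec2")
    resp = ec2.describe_spot_price_history(
        InstanceTypes=[instance_type],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.now(timezone.utc) - timedelta(hours=hours),
    )
    return [float(p["SpotPrice"]) for p in resp["SpotPriceHistory"]]
```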
EDIT 2: Another way to automatically detect redundancy in the system is to check the empty-responses metric for your SQS queue. This is the number of times your workers polled the queue and received no message back. It is quite effective if you use exclusive locks in your application during business hours.
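A sketch of reading that metric (assuming boto3; SQS publishes `NumberOfEmptyReceives` to CloudWatch, and the threshold here is a knob you would tune for your own workload):

```python
def is_overprovisioned(empty_receives: float, threshold: float) -> bool:
    """Pool is oversized when workers mostly poll an empty queue."""
    return empty_receives > threshold

def empty_receives_last_hour(queue_name: str) -> float:
    """Sum of empty SQS receives over the past hour, from CloudWatch."""
    import boto3
    from datetime import datetime, timedelta, timezone
    cw = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/SQS",
        MetricName="NumberOfEmptyReceives",
        Dimensions=[{"Name": "QueueName", "Value": queue_name}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=3600,
        Statistics=["Sum"],
    )
    points = resp["Datapoints"]
    return points[0]["Sum"] if points else 0.0
```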