EC2 instance failed in load balancer

I have an EC2 instance, and it works. I also have a load balancer with that instance associated with it.

  • Ping Target: HTTP:3001/healthCheck
  • Timeout: 5 seconds
  • Interval: 24 seconds
  • Unhealthy threshold: 2
  • Healthy threshold: 10

Now the instance is displayed as OutOfService. I even tried changing the listener ports, and that's about it. Everything worked until my EC2 instance rebooted. Any help would be greatly appreciated.

For information only: I have a Rails application running on port 3001, and one listener mapping HTTP:80 (load balancer) to HTTP:3001 (instance).

I also verified over SSH that the application itself is working.

+5
5 answers

The problem was that after the reboot, AWS assigned a new public IP to the EC2 instance, which I did not notice.

I was still SSHing into the old EC2 IP address, which is why my curl checks never failed.

(I am still curious why that old IP address remained active; it still responded when I last checked, even 15 days later.)

However, the bigger checkpoints (in general) are the ones SkyWalker pointed out.

Finally, here is what I had to do:

Along with the new IP, my .pem file also no longer worked. So I created a new instance with a new .pem file, configured the load balancer to point to this instance, and updated the security groups accordingly.

PS: I could not be more stupid.

+1

Suggestion #1:

If the current state of some or all of your instances is OutOfService, and the description field displays a message that the instance has failed at least the Unhealthy Threshold number of health checks consecutively, then the instances have failed the load balancer health check.

The problems to look for, their possible causes, and the steps you can take to resolve them are described at this link: Troubleshoot a Classic Load Balancer: Health checks.
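
You can also query the instance's health state and that description field from the CLI; a minimal sketch, assuming a Classic Load Balancer named my-load-balancer and an instance ID i-0123456789abcdef0 (both placeholders):

    aws elb describe-instance-health \
        --load-balancer-name my-load-balancer \
        --instances i-0123456789abcdef0

The output lists State, ReasonCode, and Description for each instance, which usually explains why it is OutOfService.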

Suggestion #2:

chrisa_pm gave some tips on this issue:

If you can confirm that your EC2 instance is reachable, you can remove it from your load balancer and add it back. The load balancer should recognize it within a few minutes.

Keep in mind that you need to confirm health the same way it is set in your Health Check configuration:

  • For HTTP:80 you need to specify a page that is actually reachable (e.g. index.html).
  • For TCP:80, only connectivity on TCP port 80 is required.
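
If you prefer the CLI, removing and re-adding the instance might look roughly like this, again with the placeholder names my-load-balancer and i-0123456789abcdef0:

    aws elb deregister-instances-from-load-balancer \
        --load-balancer-name my-load-balancer \
        --instances i-0123456789abcdef0
    aws elb register-instances-with-load-balancer \
        --load-balancer-name my-load-balancer \
        --instances i-0123456789abcdef0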

Suggestion #3:

qh2 suggests the following solution:

Create a startup service that deregisters your instance and then registers it again.

Example: awsloadbalancer file

    #!/bin/sh
    # chkconfig: 2345 95 20

When the instance is stopped, it drops out of load balancing; this script re-registers it on reboot.

 case "$1" in start) aws --region eu-west-1 elb deregister-instances-from-load-balancer --load-balancer-name test --instances i-3c339b7c aws --region eu-west-1 elb register-instances-with-load-balancer --load-balancer-name test --instances i-3c339b7c ;; stop) echo "stopping aws instances" ;; restart) echo "Restarting aws, nothing to do" ;; *) echo "Usage: $0 {start|stop|restart}" exit 1 ;; esac 

Create the file in /etc/init.d/ and then register it as a service.
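
Assuming a chkconfig-based distribution (which the script header implies), registering it could look roughly like this:

    sudo cp awsloadbalancer /etc/init.d/awsloadbalancer
    sudo chmod +x /etc/init.d/awsloadbalancer
    sudo chkconfig --add awsloadbalancer
    sudo chkconfig awsloadbalancer on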

Suggestion #4:

Kenneth Snyder also solved a specific problem with the ELB:

I also had a similar problem, but I was able to fix it.

I created a security group for the ELB that accepts requests on port 80 and forwards them to EC2 on port 80. The security group previously created for EC2 also had inbound rules for port 80 and RDP.

However, the instances still showed up as OutOfService under the ELB. Later, I added another inbound rule to the EC2 security group allowing port 80 from the SG that was created for the ELB, and it worked.

I think this means the ELB's SG needs to be allowed in the inbound rules of the individual instance's SG. Hope this helps.
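
For illustration, adding such a rule from the CLI might look like this; sg-0aaaa1111aaaa1111 (the instance's SG) and sg-0bbbb2222bbbb2222 (the ELB's SG) are placeholder IDs:

    # allow port 80 into the instance's SG from the ELB's SG
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0aaaa1111aaaa1111 \
        --protocol tcp \
        --port 80 \
        --source-group sg-0bbbb2222bbbb2222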

Link to the resource:

https://forums.aws.amazon.com/thread.jspa?messageID=733153

+2

Did you create a health check endpoint and specify it in the EC2 console? Something like this:

(health check configuration screenshot)

Pay attention to port 80 and the actual route. You probably have not set up port 3001 in your nginx/apache configuration.

In the rails application, create this action:

    class HealthCheckController < ActionController::Base
      def ping
        head :ok
      end
    end

and route:

    get 'health_check/ping'

The AWS load balancer will ping this endpoint, and if the response is 200 OK enough times (according to the Healthy threshold), it will consider the instance "healthy".
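
To sanity-check the endpoint from the instance itself (assuming the app listens on port 3001 and the route above resolves to /health_check/ping), something like:

    curl -i http://localhost:3001/health_check/ping
    # expect: HTTP/1.1 200 OK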

+1

I see some problems with your ELB health check configuration. Right now, the health check has to pass 10 times at 24-second intervals before the ELB starts sending requests to the instance. That means it takes

    24 seconds x 10 = 240 seconds  # ~4 minutes after reboot

Assuming your Unicorn starts faster than that and does not die after launching, you should reduce the interval and the healthy threshold:

  • Reduce the interval to 3-5 seconds.
  • Reduce the healthy threshold to 2-5 checks.

The above should help the ELB put the instance InService faster.

This assumes your server is correctly configured to serve /healthCheck on port 3001 to external hosts. If it is not, check your firewalls / security groups / server configuration.
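
For example, tightening the check from the CLI could look roughly like this (my-load-balancer is a placeholder, and the Timeout must stay below the Interval):

    aws elb configure-health-check \
        --load-balancer-name my-load-balancer \
        --health-check Target=HTTP:3001/healthCheck,Interval=5,Timeout=4,UnhealthyThreshold=2,HealthyThreshold=3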

+1

Verify that the app server's security group allows the ELB's security group to reach the health check endpoint on the port you specified in the health check.
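
One way to inspect those inbound rules from the CLI (sg-0aaaa1111aaaa1111 is a placeholder for the app server's security group ID):

    aws ec2 describe-security-groups \
        --group-ids sg-0aaaa1111aaaa1111 \
        --query 'SecurityGroups[].IpPermissions'

The rule for the health check port should list the ELB's security group under UserIdGroupPairs.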

0

Source: https://habr.com/ru/post/1259397/

