How to deploy Node.js in the cloud for high availability using multi-core, reverse proxies and SSL

I posted this on ServerFault, but the Node.js community there seems tiny, so I'm hoping this gets more attention here.

I have a Node.js application (0.4.9), and I'm researching how best to deploy and maintain it. I want to run it in the cloud (EC2 or RackSpace) with high availability. The application must run on HTTPS. I'll worry about multi-region (East / West / EU) later.

I read a lot about keep-alive (Upstart, Forever), multi-core utilities (Fugue, multi-node, Cluster) and proxy / load balancers (node-http-proxy, nginx, Varnish and Pound). However, I'm not sure how to combine the various utilities available to me.

Here is the setup I have in mind; I need to iron out a few questions and would like feedback.

  • Cluster is the most actively developed and apparently most popular multi-core utility for Node.js, so use it to run one node "cluster" per application server on an unprivileged port (say, 3000). Q1: Should I use Forever to keep the cluster alive, or is that superfluous?
  • Use one nginx per application server, listening on port 80, simply reverse-proxying to node on port 3000. Q2: Would node-http-proxy be better suited for this task, even though it doesn't gzip or serve static files quickly?
  • Have a minimum of 2x servers as described above, with an independent server acting as a load balancer across these boxes. Use Pound, listening on 443, to terminate HTTPS and pass HTTP to Varnish, which round-robins the load across the server IP addresses above. Q3: Should I use nginx for both roles instead? Q4: Should I consider the AWS or RackSpace load balancers instead (the latter does not terminate HTTPS)?

General questions:

  • Do you see a need for (2) above?
  • Where is the best place to terminate HTTPS?
  • If WebSockets are needed in the future, what nginx substitutions would you make?

I would love to hear how people have set up their current production environments and which combination of tools they prefer. Much appreciated.

+46
deployment
Aug 31 '11 at 15:15
4 answers

It has been several months since I asked this question, and there haven't been many answers. Both Samyak Bhuta and nponeccop had good suggestions, but I wanted to discuss the answers I have found to my own questions.

This is what I have settled on at the moment for a production system, though further improvements are always being made. I hope it helps anyone in a similar scenario.

  • Use Cluster to spawn as many child processes as you want handling incoming requests on your multi-core virtual or physical machine. It binds to a single port and makes maintenance easier. My rule of thumb is n - 1 Cluster workers, where n is the number of cores. You do not need Forever on top of this, since Cluster respawns worker processes that die. To have fault tolerance even at the Cluster parent level, make sure you use an Upstart script (or equivalent) to daemonize the Node.js application, and use Monit (or equivalent) to watch the PID of the Cluster parent and respawn it if it dies. You can try Upstart's respawn feature, but I prefer to have Monit watching things, so rather than split responsibilities I find it best to let Monit handle the respawn as well.

  • Use one nginx per application server, listening on port 80, simply reverse-proxying to your Cluster on whatever port you bound it to in (1). node-http-proxy can be used, but nginx is more mature, more featureful, and faster at serving static files. Run nginx lean (no access logging, no gzipping of tiny files) to minimize its overhead.
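As an illustration only, a lean per-app-server nginx vhost along these lines might look like the following (domain, paths, and ports are placeholders):

```nginx
# Hypothetical minimal vhost: proxy to the local cluster on port 3000,
# serve static assets directly, gzip only text-like responses.
server {
    listen 80;
    server_name example.com;   # placeholder

    gzip on;
    gzip_min_length 1024;      # skip tiny files
    gzip_types text/css application/javascript application/json;
    access_log off;            # run lean, as described above

    # Static files served by nginx, bypassing Node entirely.
    location /static/ {
        root /var/www/app;     # placeholder path
        expires 1d;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```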

  • Have at least 2x servers as described above and, if on AWS, use an ELB that terminates HTTPS/SSL on port 443 and communicates over HTTP on port 80 with the Node.js application servers. ELBs are simple and, if you wish, make auto-scaling easier. You could spin up multiple nginx instances instead, either sharing an IP or round-robined by your DNS provider, but for now I found that overkill. At that point, though, you would remove the nginx instance on each application server.

I do not need WebSockets yet, so nginx is still suitable; I will revisit this issue when WebSockets come into the picture.

Feedback is welcome.

+20
Jan 24 '12 at 16:12

You shouldn't worry about serving static files quickly. If your load is small, Node static file servers will do. If your load is large, it is better to use a CDN (Akamai, Limelight, CoralCDN).

Instead of Forever, you can use Monit.

Instead of nginx you can use HAProxy. It is known to work well with WebSockets. Consider also proxying flash sockets, as they are a good workaround until WebSocket support is ubiquitous (see Socket.io).
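For illustration, an HAProxy setup along these lines might run in TCP mode so WebSocket (and flash-socket) traffic passes through unmodified; the names, addresses, and timeouts below are placeholders, not a recommended production config:

```haproxy
# Sketch: round-robin TCP load balancing across two Node backends.
defaults
    mode tcp               # pass bytes through; no HTTP rewriting
    timeout connect 5s
    timeout client  60s    # generous timeouts for long-lived sockets
    timeout server  60s

frontend www
    bind *:80
    default_backend nodes

backend nodes
    balance roundrobin
    server node1 10.0.0.1:3000 check
    server node2 10.0.0.2:3000 check
```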

HAProxy has some support for HTTPS load balancing, but not termination. You could try using stunnel to terminate HTTPS, but I think it is too slow.

Round-robin load balancing (or other statistical approaches) works pretty well in practice, so in most cases there is no need to know about the load on the other servers.

Consider also using ZeroMQ or RabbitMQ for communication between nodes.

+2
Sep 10

This is a great topic! Thanks to everyone who provided useful information.

I have been dealing with the same problems over the last few months, building the infrastructure for our startup.

As mentioned earlier, we needed a Node environment with multi-core support + WebSockets + vhosts.

As a result, we created a hybrid between our own cluster module and http-proxy and named it Drone; naturally, it is open-sourced:

https://github.com/makesites/drone

We have also released it as an AMI with Monit and Nginx:

https://aws.amazon.com/amis/drone-server

I found this topic while researching how to add SSL support to Drone. Thanks for the ELB recommendation, but I would not rely on a proprietary solution for something so important.

Instead, I extended the default proxy to handle all SSL requests. The configuration is minimal, as SSL requests are converted to plain HTTP, but I assume that is preferable when you are passing traffic between ports...

Feel free to look into it and let me know if it fits your needs. All feedback is welcome.

+2
Feb 12 '13

I have seen an AWS load balancer for load balancing and termination + node-http-proxy for the reverse proxy (if you want to run several services per box) + cluster.js for multi-core support and process-level failover, and it performs extremely well.

forever.js on top of cluster.js could be a good option if you want to be extremely cautious about failover, but it is hardly needed.

0
Dec 01 '11 at 17:40


