What is your experience using nginx and memcached to optimize your website?

We have a Java EE-based web application running on a Glassfish application server cluster. Inbound traffic will mostly be RESTful requests for views of our XML-based resources, but perhaps 5% of the traffic might be for JSON or XHTML/CSS views.

We are now exploring load-balancing options for distributing inbound traffic across the Glassfish instances in the cluster. We are also looking into how to offload the cluster using memcached, a distributed in-memory hash map whose keys would be the REST resource names (for example, "/user/bob", "/group/jazzlovers") and whose values would be the corresponding XML representations.
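For a concrete picture of that key/value scheme, here is a minimal sketch using the python-memcached client; the server address and the XML payload are made up for illustration:

    import memcache

    # Connect to a memcached instance (address assumed for illustration)
    mc = memcache.Client(['127.0.0.1:11211'])

    # Key is the REST resource URI, value is its XML representation
    mc.set('/user/bob', '<user><name>bob</name></user>', time=300)

    xml = mc.get('/user/bob')  # returns the cached XML, or None on a miss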

One approach that sounds promising is to kill both birds with one stone and use the lightweight, fast nginx HTTP server/reverse proxy. Nginx would handle each incoming request by first looking up its URI in memcached to see if an unexpired XML representation is already there. If not, nginx would pass the request on to one of the Glassfish instances. The nginx memcached module is described in this short write-up.

What is your overall impression of using nginx and memcached together? How happy are you with them? What resources did you find most useful for learning about them? If you tried them and they did not meet your needs, why not, and what did you use instead?

Note: here is a related question.

Update: I later asked the same question on ServerFault.com. The answers there mostly suggest alternatives to nginx (useful, but indirectly).

+4
1 answer

Assuming you have a pool of upstream application servers serving your users:

    upstream webservices {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        server 10.0.0.3:80;
    }

    server {
        ... default nginx stuff ...

        location /dynamic_content {
            # Look the request URI up in memcached first
            set $memcached_key $uri;
            memcached_pass localhost:11211;
            default_type text/html;
            # On a miss (404) or memcached outage (502), fall through
            error_page 404 502 = @dynamic_content_cache_miss;
        }

        location @dynamic_content_cache_miss {
            proxy_pass http://webservices;
        }
    }

What the nginx.conf fragment above does is direct all traffic for http://example.com/dynamic_content/* straight to the memcached server. If memcached has the content, your upstream servers see no traffic at all.

If the lookup fails with a 404 or 502 (the key is not in the cache, or memcached cannot be reached), nginx sends the request on to the upstream servers. Since the upstream definition contains three servers, you also get a transparent load-balancing proxy for free.
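As a sanity check of that hit/miss behavior, a hypothetical test run on the nginx host might look like this (the hostname, key, and payload are assumptions for illustration, not part of the setup above):

    import memcache
    import urllib2

    mc = memcache.Client(['127.0.0.1:11211'])

    # Seed the cache under the exact key nginx will look up ($uri)
    mc.set('/dynamic_content/user/bob', '<user><name>bob</name></user>')

    # This request is answered straight from memcached; no upstream is hit
    print urllib2.urlopen('http://example.com/dynamic_content/user/bob').read()

    # Delete the key and the same request falls through to the upstream pool
    mc.delete('/dynamic_content/user/bob')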

Now the only caveat is that you have to make sure your application servers keep the data in memcached fresh. I use nginx + memcached + web.py to build simple little systems that handle thousands of requests per minute on relatively modest hardware.

Generic pseudocode for such an application server handler looks like this in web.py:

    import web
    import memcache

    cache = memcache.Client(['127.0.0.1:11211'])  # client address assumed

    class some_page:
        def GET(self):
            output = 'Do normal page generation stuff'
            # Key must match nginx's $memcached_key, i.e. the request URI
            web_url = web.url().encode('ASCII')
            cache.set(web_url, str(output), seconds_to_cache_content)
            return output

The important thing to remember about the web.py pseudocode above is that content coming from memcached via nginx is served exactly as stored and cannot be altered on the way out. nginx deals in plain byte strings, not unicode. If you store unicode output in memcached, you will get at the very least weird characters at the beginning and end of your cached content.
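Building on the sketch above, keeping memcached fresh on writes can be as simple as overwriting or deleting the key from the handlers that modify a resource. This hypothetical fragment reuses the cache client and the seconds_to_cache_content placeholder from before and stores only plain byte strings:

    class some_page:
        def POST(self):
            output = 'Regenerate the page after the update'
            # Overwrite with a plain byte string; nginx serves it verbatim
            cache.set(web.url().encode('ASCII'), str(output),
                      seconds_to_cache_content)
            return output

        def DELETE(self):
            # Drop the stale entry; the next GET falls through to the app
            cache.delete(web.url().encode('ASCII'))
            return ''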

I use nginx and memcached for a sports site where we get huge traffic spikes that last only a few hours. I could not get by without nginx and memcached. Server load during our last big Fourth of July sporting event dropped from 70% to 0.6% after implementing the changes above. I cannot recommend this setup enough.

+9
