Assuming you have an application server upstream for your users:
upstream webservices {
    server 10.0.0.1:80;
    server 10.0.0.2:80;
    server 10.0.0.3:80;
}

server {
    ... default nginx stuff ...

    location /dynamic_content {
        set            $memcached_key $uri;
        memcached_pass localhost:11211;
        default_type   text/html;
        error_page     404 502 = @dynamic_content_cache_miss;
    }

    location @dynamic_content_cache_miss {
        proxy_pass http://webservices;
    }
}
The nginx.conf fragment above directs traffic for http://example.com/dynamic_content/* straight to the memcached server. If memcached has the content, your upstream servers never see the request at all.
If the memcached lookup fails with a 404 or 502 (the key is not in the cache, or memcached cannot be reached), nginx sends the request to the upstream servers instead. Since there are three servers in the upstream definition, you also get a transparent load-balancing proxy.
The only caveat is that your application servers have to keep the memcached data fresh. I use nginx + memcached + web.py to build simple, small systems that handle thousands of requests per minute on relatively modest hardware.
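One way to keep the cache fresh is to regenerate (or delete) the key the moment the underlying data changes, so nginx never serves a stale page. This is a sketch of that idea, not code from the answer: `DictCache`, `render_scoreboard`, and `update_score` are hypothetical stand-ins, with a plain dict in place of a real memcached client.

```python
# Write-through invalidation sketch: whenever the data behind a page
# changes, overwrite its cache key immediately so the next nginx hit
# serves fresh bytes (or falls through to the upstream on a miss).
# DictCache is a stand-in for a memcached client with the same calls.

class DictCache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl):
        self._store[key] = value  # ttl ignored in this stand-in

    def delete(self, key):
        self._store.pop(key, None)

    def get(self, key):
        return self._store.get(key)

cache = DictCache()

def render_scoreboard(scores):
    return 'Scores: ' + ', '.join('%s=%d' % kv for kv in sorted(scores.items()))

def update_score(scores, team, points):
    scores[team] = points
    # Regenerate the cached page on every write, keyed by the same URI
    # nginx uses as $memcached_key, so readers never see stale data.
    cache.set('/dynamic_content/scoreboard', render_scoreboard(scores), 60)

scores = {}
update_score(scores, 'home', 3)
update_score(scores, 'away', 7)
```

Deleting the key instead of regenerating it also works; the next request then 404s out of memcached and the upstream rebuilds the page.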
A typical application server handler looks roughly like this in web.py:
import web
import memcache  # python-memcached, or whichever client provides cache.set

cache = memcache.Client(['127.0.0.1:11211'])
seconds_to_cache_content = 300  # tune to how fresh the page must be

class some_page:
    def GET(self):
        output = 'Do normal page generation stuff'
        # The key must match nginx's $memcached_key, i.e. the request URI.
        web_url = web.url().encode('ASCII')
        cache.set(web_url, str(output), seconds_to_cache_content)
        return output
The important thing to remember about the web.py code above is that content served from memcached by nginx cannot be altered in any way before it reaches the client. nginx serves plain byte strings, not unicode. If you save unicode output to memcached, you will get stray garbage characters at the beginning and end of your cached content.
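A minimal illustration of the byte-string point, assuming nothing beyond the standard library: `fake_cache` and `cache_page` are made-up names, with a dict standing in for memcached. (In Python 2 terms this is str vs unicode; in modern Python, bytes vs str; the principle is the same.)

```python
# -*- coding: utf-8 -*-
# nginx ships whatever bytes memcached holds, verbatim. If the client
# stores a unicode object, the pickled wrapper around it ends up in the
# response. Encoding to a plain byte string before caching avoids that.

fake_cache = {}  # stand-in for a memcached client

def cache_page(uri, output):
    # Explicitly encode unicode output to UTF-8 bytes before storing,
    # so what sits in the cache is exactly what nginx should serve.
    if isinstance(output, str):
        output = output.encode('utf-8')
    fake_cache[uri] = output

cache_page('/dynamic_content/hello', u'H\u00e9llo, w\u00f6rld')
```

After the call, `fake_cache['/dynamic_content/hello']` holds raw UTF-8 bytes, safe for nginx to serve as-is.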
I use nginx and memcached for a sports site that gets huge traffic pulses lasting only a few hours. I could not do without nginx and memcached. Server load during our last big July 4th sporting event fell from 70% to 0.6% after implementing the changes above. I cannot recommend it enough.