A scalable way to share cached files across front-end servers

I have several backend servers constantly building and updating cached copies of the public parts of the API. Which backend builds what depends on the jobs waiting in the job queue.

At a given moment, backend server 1 might build:

/article/1.json
/article/5.json

while backend server 2 builds:

/article/3.json
/article/9.json
/article/6.json

I need to serve these files from the front-end servers. The cache is stored as plain files so that nginx can serve them directly, without going through the Rails stack.
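For illustration, the nginx side of each front end would look something like this (the cache path and the Rails upstream below are just placeholders):

    # Front-end vhost sketch: serve prebuilt JSON cache files straight
    # from disk, and fall back to the Rails app when no file exists yet.
    server {
        listen 80;

        location /article/ {
            root /var/cache/api;               # placeholder cache directory
            default_type application/json;
            try_files $uri @rails;             # serve the built file, else hit the app
        }

        location @rails {
            proxy_pass http://127.0.0.1:3000;  # placeholder Rails backend
        }
    }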

The problem is how to manage cache updates on front-end servers in a scalable way (adding new servers should be smooth).

Options I have considered:

  • NFS / S3 (but too slow)
  • Memcached (but it can't be served directly from nginx... or can it? see the sketch after this list)
  • CouchDB, serving the JSON straight out of it
  • Having the backends push the finished JSON into Redis
Any suggestions?

+4

Rather than pushing cache files out to the front ends, turn the problem around and pull: put a layer of caching reverse proxies in front of the app servers and let each one fetch and cache content on demand. This is the usual haproxy/squid/nginx scenario.

The topology looks like this:

internet -> load balancer -> caching server 1   --> numerous app servers
                         \-> caching server 2  -/

On a cache miss, a caching server fetches the page from an app server and keeps a local copy; on a hit, it answers straight from its own cache. Adding capacity is then just a matter of adding another caching server behind the load balancer.
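Each caching server could then be something as simple as this (nginx shown as one option for the proxy layer; the names, addresses and TTL are made up):

    # Pull-through cache: fetch pages from the app servers on demand
    # and keep a local copy for subsequent requests.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m max_size=1g;

    upstream app_servers {
        server app1.internal:8080;      # made-up app server addresses
        server app2.internal:8080;
    }

    server {
        listen 80;

        location / {
            proxy_cache api_cache;
            proxy_cache_valid 200 10m;  # made-up TTL
            proxy_pass http://app_servers;
        }
    }

A new caching server starts cold and fills itself from the app servers as requests arrive, which is what keeps scaling out smooth.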

+5

Even though S3 itself scales well, every fetch from it goes over HTTPS and is roughly 5-10 times slower than serving from a local server. S3 is a good place to store the cache, but a poor place to serve it from directly, so the usual move is to put Nginx in front of S3.

With a caching Nginx in front of S3, the first request for a file goes through to S3 and the response is cached; from then on, requests are answered from the local Nginx cache, and only misses ever reach S3.

In particular, pay attention to these two directives:

  • proxy_cache_lock (docs: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_lock)

  • proxy_cache_use_stale (docs: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale)

This Nginx S3 proxy configuration takes all of this into account: https://gist.github.com/mikhailov/9639593
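Stripped to the essentials, the configuration amounts to something like this (the bucket name, paths and TTL below are placeholders; the gist is the complete version):

    # Minimal sketch of an Nginx caching proxy in front of S3.
    proxy_cache_path /var/cache/nginx/s3 levels=1:2 keys_zone=s3_cache:10m inactive=60m;

    server {
        listen 80;

        location / {
            proxy_cache s3_cache;
            proxy_cache_valid 200 30m;                     # placeholder TTL
            proxy_cache_lock on;                           # collapse concurrent misses into one S3 fetch
            proxy_cache_use_stale error timeout updating;  # serve stale copies while refreshing
            proxy_set_header Host example-bucket.s3.amazonaws.com;
            proxy_pass https://example-bucket.s3.amazonaws.com;  # placeholder bucket
        }
    }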

+1
