If you cannot change the backend application as suggested, or if the authentication is simple, such as HTTP Basic auth, an alternative approach is to authenticate in Nginx itself.
All you need to do is implement this authentication step and choose a validity period for the cache; Nginx takes care of the rest according to the process flow below.
Nginx request handling as pseudocode:

```
if user is unauthorised then
    Nginx declines the request
else if cache is stale then
    Nginx gets the resource from the backend
    Nginx caches the resource
    Nginx serves the resource
else
    Nginx gets the resource from the cache
    Nginx serves the resource
end if
```
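For the simple HTTP Basic auth case, this flow can be sketched directly in an Nginx config. This is only an illustration; the cache zone name `backend_cache`, the credentials file path, and the backend address are assumptions, not part of the original setup:

```nginx
# Hypothetical cache zone; adjust path and size to your environment.
proxy_cache_path /var/cache/nginx/backend keys_zone=backend_cache:10m;

server {
    listen 80;

    location / {
        # Nginx declines unauthorised requests in the access phase,
        # before any cache lookup happens
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;  # assumed credentials file

        # validity period of the cache: successful responses live 10 minutes
        proxy_cache       backend_cache;
        proxy_cache_valid 200 10m;

        proxy_pass http://127.0.0.1:3000;  # backend application
    }
}
```

Because `auth_basic` runs in the access phase, an unauthorised user is rejected even when the resource is already cached.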
The downside is that, depending on the type of auth you have, you may need something like the Nginx Lua module to handle the logic.
EDIT
See the additional discussion and information in the comments. Without fully knowing how the backend application works, but looking at the config example the user anki-code provided on GitHub, which you referenced HERE, the configuration below avoids the problem where the backend application's authentication/authorization is not executed for previously cached resources.
I assume that the backend application returns an HTTP 403 code for unauthenticated users. I also assume that you have the Nginx Lua module available, since the GitHub configuration depends on it, although I note that the part you tested does not need this module.
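As an aside, if the Lua module is installed as a dynamic module (for example via the `libnginx-mod-http-lua` package on Debian/Ubuntu, or from an OpenResty build), it must be loaded at the top of `nginx.conf`. The exact filenames are distribution-dependent, so treat these paths as an assumption:

```nginx
# Load the Lua module (and its NDK dependency) if built dynamically.
load_module modules/ndk_http_module.so;
load_module modules/ngx_http_lua_module.so;
```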
Config:
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name 127.0.0.1;

    location / {
        proxy_pass http://127.0.0.1:3000;  # Metabase here
    }

    location ~ /api/card((?!/42/|/41/)/[0-9]*/)query {
        access_by_lua_block {
            -- HEAD request to a location excluded from caching to authenticate
            local res = ngx.location.capture(
                "/api/card/42/query",
                { method = ngx.HTTP_HEAD }
            )
            if res.status == 403 then
                return ngx.exit(ngx.HTTP_FORBIDDEN)
            else
                ngx.exec("@metabase")
            end
        }
    }

    location @metabase {
        # cache all card data except cards 42 and 41 (they have realtime data)
        if ($http_referer !~ /dash/) {  # cache only cards on a dashboard
            set $no_cache 1;
        }
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;
        proxy_pass http://127.0.0.1:3000;
        proxy_cache_methods POST;
        proxy_cache_valid 8h;
        proxy_ignore_headers Cache-Control Expires;
        proxy_cache cache_all;
        proxy_cache_key "$request_uri|$request_body";
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
        add_header X-MBCache $upstream_cache_status;
    }

    location ~ /api/card/\d+ {
        proxy_pass http://127.0.0.1:3000;
        if ($request_method ~ PUT) {
            # when the card was edited, reset the cache for this card
            access_by_lua 'os.execute("find /var/cache/nginx -type f -exec grep -q \\"".. ngx.var.request_uri .."/\\" {} \\\; -delete ")';
            add_header X-MBCache REMOVED;
        }
    }
}
```
With this in place, I expect a test with `$ curl 'http://localhost:3001/api/card/1/query'` to behave as follows:
First run (with the required cookie)

- The request hits `location ~ /api/card((?!/42/|/41/)/[0-9]*/)query`.
- In the Nginx access phase, a HEAD subrequest is issued to `/api/card/42/query`. This location is excluded from caching in the given configuration.
- The backend application returns a non-403 response, since the user is authenticated.
- The request is then handed off to the named location `@metabase`, which processes the actual request and returns the content to the user.
Second run (without the required cookie)

- The request hits `location ~ /api/card((?!/42/|/41/)/[0-9]*/)query`.
- In the Nginx access phase, a HEAD subrequest is issued to the backend at `/api/card/42/query`.
- The backend application returns a 403 Forbidden response because the user is not authenticated.
- The client receives the 403 response; the cached resource is never served.
If `/api/card/42/query` is resource-intensive, you can instead create a simple card whose query is used only to perform the authorization check.
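For example, assuming a hypothetical card 99 backed by a trivial `SELECT 1`-style query, the auth check could target it instead:

```nginx
# Hypothetical: card 99 exists only for the authorization check;
# its location is deliberately left uncached so every request
# reaches the backend and gets authenticated there.
location = /api/card/99/query {
    proxy_pass http://127.0.0.1:3000;
}
```

The `ngx.location.capture` call in the `access_by_lua_block` would then point at `/api/card/99/query` rather than `/api/card/42/query`.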
This seems like a straightforward way around the problem: the backend remains unchanged, and you configure your caching entirely in Nginx.