I initially said that this should work well for low-traffic sites, but if you think about it, no, this is a bad idea.
Each time you launch a Docker container, it adds a read-write layer on top of the image. Even if very little data is written, that layer exists, and every request creates a new one. When a single user visits a website, rendering the page can generate 10 to 1000 requests: for CSS, for JavaScript, for each image, for fonts, for AJAX calls, and each of them will create one of these read-write layers.
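To make the problem concrete, here is a minimal sketch of the "one container per request" pattern; the image name `myapp` and the handler wiring are assumptions for illustration, not from the original setup:

```python
import subprocess

def handle_request(path: str) -> str:
    # Every call starts a brand-new container, and with it a new
    # read-write layer on top of the (hypothetical) "myapp" image.
    result = subprocess.run(
        ["docker", "run", "myapp", "render", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# After a single page load that triggers dozens of asset requests,
# every one of those exited containers (and its writable layer) is
# still on disk, which you can see with:
#   docker ps -a --filter status=exited
```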
Right now there is no automatic cleanup of those read-write layers: they are kept even after the Docker container exits. By default, nothing is deleted.

Thus, even for a single low-traffic site, your disk usage will keep growing over time. You can add your own automatic cleanup, as sketched below.
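A rough sketch of two cleanup options, not a definitive setup (the `myapp` image name is again just a placeholder):

```python
import subprocess

# Option 1: let Docker delete the read-write layer as soon as the
# container exits by adding --rm to the per-request launch.
subprocess.run(
    ["docker", "run", "--rm", "myapp", "render", "/index.html"],
    check=True,
)

# Option 2: periodically prune whatever exited containers have piled up,
# e.g. from a scheduled job. -f skips the confirmation prompt.
subprocess.run(["docker", "container", "prune", "-f"], check=True)
```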
Then a second problem arises: anything uploaded to the website will not be available to any other request unless it is written to some kind of shared storage. That is fairly easy to do with S3 or with a separate, permanent database service, but it starts to expose the weakness of the "one new Docker container per request" approach: if you need some permanent services anyway, why not make the Docker containers longer-lived and more resilient?
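For the shared-storage route, here is a minimal sketch of pushing an upload to S3 via boto3 so it survives the container and is visible to every other request; the bucket name `my-site-uploads` is an assumption for illustration:

```python
import boto3

s3 = boto3.client("s3")

def save_upload(filename: str, data: bytes) -> str:
    # Write the uploaded bytes to shared storage instead of the
    # container's own writable layer, which disappears with it.
    key = f"uploads/{filename}"
    s3.put_object(Bucket="my-site-uploads", Key=key, Body=data)
    return key
```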