Why doesn't Firefox re-download images that are already on the page?

I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ

There is a Firefox behavior (and I think some other browsers do the same) that I would like to understand:

If I take any web page and insert the same image several times from JavaScript, the image is only downloaded ONCE, even if I send all the headers needed to say "never use cache" (see the article).
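For example, roughly like this (a minimal sketch of what I mean; the container id and image URL are just placeholders):

 // Insert the same image twice from JavaScript. Even with
 // "Cache-Control: no-store" on the image response, Firefox issues
 // only one network request for it during this page load.
 for (let i = 0; i < 2; i++) {
   const img = document.createElement('img');
   img.src = '/images/photo.jpg';
   document.getElementById('container').appendChild(img);
 }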

I know there are workarounds (for example, adding query strings to the end of the URL, etc.), but why does Firefox work like this? If I say that the image must not be cached, why is it still taken from the cache when I try to insert it again?

Also, what cache is used for this? (I think it's a memory cache)

Is this behavior the same for dynamic script inclusion, for example? ANSWER: NO :) I just tested it, and the same headers on a JS script make Firefox re-download it every time you add the script to the DOM.
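My test for the script case was roughly this (again a sketch; the script URL is a placeholder):

 // Same test with a script instead of an image: with "no-cache" headers
 // on the response, Firefox re-downloads the script for every insertion
 // into the DOM.
 for (let i = 0; i < 2; i++) {
   const s = document.createElement('script');
   s.src = '/js/test.js';
   document.head.appendChild(s);
 }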

PS: I know you're wondering WHY I would do this (adding the same image several times and forcing it to be re-fetched each time), but this is how our application works.

Thank you


Good answer: Firefox stores images for the current page load in its memory cache, even if you specify that they should not be cached.

You cannot change this behavior, which is odd, because it is not the same for JavaScript files, for example.

Can someone explain, or link to a document that describes, how the Firefox cache works?

+4
4 answers

What you are describing here is only partly related to caching.

The whole idea of embedding images by referencing them by URL is that you can reference the same URL several times and the browser will only load it once.

So basically, if you write the following HTML (say, in index.html):

 <img src="hello.jpg" />
 <img src="hello.jpg" />

In this case, the browser makes only 2 HTTP requests: one for index.html and one for hello.jpg.

What you need to do is "pretend" that the images are downloaded from different URLs. There are several ways to do this.

Probably the easiest solution is to append an extra query string to the end of the URL. For example, if you want to load the image twice, you can write:

 <img src="hello.jpg?1" />
 <img src="hello.jpg?2" />

This will cause the browser to send a separate HTTP request for each of the images: one for hello.jpg?1 and a second for hello.jpg?2. However, some web servers / browsers may still optimize this and send only one request, just for hello.jpg.
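If you are inserting the images from JavaScript rather than writing static HTML, the same trick applies; a rough sketch (the function name and counter are mine, not part of the question):

 let counter = 0;
 function addUncachedImage(url, container) {
   const img = document.createElement('img');
   // A unique query string makes each insertion look like a new URL to the browser.
   img.src = url + '?' + (++counter);
   container.appendChild(img);
 }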

If this does not work, you can also try serving the image through a script such as PHP, as suggested in another answer.

+4

I believe that caching only applies to loading the page another time: when you load your page a second time, images are not downloaded again if they are cached. But once a single image (referenced by a single URL) has been loaded, it is not affected by the "never use cache" flags within a single page load.

Using PHP, I would go with a script, e.g. image.php, that takes a hash (just a random code) as a parameter but always delivers the same image, while the browser will interpret each request as a different image.

So image.php?id=asdf and image.php?id=qwer return the same image, but they look like different images to the browser and are downloaded separately.
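The same idea sketched in Node.js instead of PHP, purely for illustration (the file name and port are made up): whatever id the browser sends, the script ignores it and always streams the same file.

 const http = require('http');
 const fs = require('fs');

 // Equivalent of image.php: ignore the ?id=... parameter, always serve hello.jpg.
 http.createServer((req, res) => {
   res.writeHead(200, { 'Content-Type': 'image/jpeg' });
   fs.createReadStream('hello.jpg').pipe(res);
 }).listen(8080);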

+1

Jaakko's solution should work fine for your needs, as long as your server side is happy to deal with the dummy part of the request.

This could be thought of as a bug, since you could argue that FF should really create a new HTTP request for each image; proper client-side caching could then stop those requests from actually hitting the network where they are not necessary. The definitive answer to whether this behavior is a bug or not should be in the HTML specification, but I do not know, as I have not checked it.

If I were you, I would change the approach a bit and use a workaround that you know will work in any browser you need to care about.

It sounds like you have a URI that returns random / varying responses. Another approach might be to use a single URI but respond from it with a 3xx redirect, using a variable / random Location header that points to the actual images. I have not tested it, but this could avoid the problem and still let you refer to a single URI. It would also allow the resulting images to be cached, which you cannot do if you have a single URI whose content is constantly changing.

 GET /image/random
   303 See Other
   Location: /image/xzkzkj3242hkjaha123

 GET /image/random
   303 See Other
   Location: /image/xlso847iuewrqb1231
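A rough sketch of such a redirecting endpoint, in Node.js just for illustration (the paths, port, and image list are invented):

 const http = require('http');

 // The concrete images can be served with normal caching headers;
 // only /image/random itself needs to stay uncacheable.
 const images = ['/image/a1.jpg', '/image/b2.jpg', '/image/c3.jpg'];

 http.createServer((req, res) => {
   if (req.url === '/image/random') {
     const target = images[Math.floor(Math.random() * images.length)];
     res.writeHead(303, { 'Location': target, 'Cache-Control': 'no-store' });
     res.end();
   } else {
     res.writeHead(404);
     res.end();
   }
 }).listen(8080);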

+1

Caching images within a single page load and using the cached version while rendering the page seems reasonable to me. But I noticed a different behavior in my application, which is very undesirable. I have a fully AJAX-driven application, and in the following AJAX flow:

  • I have http://url/image.jpg displayed on the page.
  • I upload a new image from the user interface, replacing image.jpg. The URL stays the same - http://url/image.jpg - but the image content changes. All of this happens in an AJAX request; no plain GET of the page is performed.
  • Firefox still shows the old image.jpg.

I think that for FF this is still the same single page load, so it sees no need to update the image (or even to ask the server whether the image has expired). After a plain GET of the same page, I got the new image.

IMHO, this can be considered a bug.

0

Source: https://habr.com/ru/post/1300175/

