What is the optimal way to maintain and load JavaScript files?

I hope someone with a lot of experience with global web applications can clarify some of the questions, assumptions, and possible misunderstandings that I have.

Take a hypothetical site with a large number of client-side / dynamic components, hundreds of thousands of users around the world, and all assets served from a single location (for example, central Europe).

  • If the application depends on popular JavaScript libraries, would it be better to compile them into one minified JS file (along with all the application-specific JavaScript), or to load them separately from Google's CDN?
  • Assetic vs. headjs: Does it make more sense to load one compiled JS file, or to load all the scripts in parallel (executing them in dependency order)?

My assumptions (please correct me):

Compiling all application-specific / local JS code into a single file, using CDNs such as Google's for popular libraries, etc., and loading everything through headjs in parallel seems optimal, but I'm not sure. Compiling the third-party JS together with the application JS into a single file on the server side seems to largely defeat the purpose of using a CDN, since the library is probably already cached somewhere along the line for the user anyway.
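
For context, the headjs pattern referred to here looks roughly like the following sketch (the file paths and jQuery version are illustrative, and it assumes head.min.js has already been included on the page):

    // scripts are downloaded in parallel but executed in the order given
    head.js(
        "https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js",
        "/js/app.min.js",                 // hypothetical application bundle
        function () {
            // runs once both files have loaded and executed
        }
    );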

In addition to the caching benefit, it may well be faster to download a third-party library from Google's CDN than from the central server hosting the application.

If a new version of a popular JS library is released with a big performance improvement, it is tested with the application and then deployed:

  • If all the JS is compiled into one file, every user will have to re-download that file, even though the application code itself has not changed.
  • If the third-party scripts are loaded from the CDN, the user only has to download the new library version from the CDN (or from a cache somewhere along the way); see the sketch after this list.
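
For concreteness, the setup being compared looks roughly like this (the jQuery version and file names are illustrative); bumping the library version only changes the CDN URL, while the application bundle and its cache entry stay untouched:

    <!-- third-party library from the CDN; only this URL changes on a version bump -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
    <!-- application-specific bundle; its URL (and cached copy) is unaffected -->
    <script src="/js/app.min.js"></script>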

Are any of the following legitimate concerns in a situation like the one described?

  • Some users (or browsers) can only open a certain number of connections to one hostname at a time, so pulling some scripts from a third-party CDN would lead to faster load times overall.
  • Some users may be using the application in a restricted environment, so the application's domain may be whitelisted while the CDN's domains are not. (If this is a realistic problem, is it even possible to attempt loading from the CDN and fall back to the central server on failure?)
+6
3 answers

Compiling all application-specific / local JS code into a single file

Since some of our key goals are to reduce the number of HTTP requests and to minimize request overhead, this is a very widely accepted best practice.

The main case where we might not do this is when there is a high probability of frequent cache invalidation, i.e. when we make frequent changes to our code. There are always trade-offs: serving a single file is likely to increase the rate of cache invalidation, while serving many separate files is likely to mean slower loads for users with an empty cache.

For this reason, inlining the occasional bit of page-specific JavaScript is not as evil as some say. In general, though, combining and minifying your JS down to a single file is an excellent first step.
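
A combine step can be as simple as the following Node sketch (the file names are hypothetical; a real build would also run a minifier such as UglifyJS over the result):

    // build.js - concatenate the local application scripts into one bundle
    var fs = require('fs');
    var files = ['js/core.js', 'js/widgets.js', 'js/forms.js']; // hypothetical sources
    var bundle = files.map(function (f) {
        return fs.readFileSync(f, 'utf8');
    }).join(';\n'); // the ';' guards against files that omit a trailing semicolon
    fs.writeFileSync('js/site.js', bundle); // minification would be a separate step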

using CDNs like Google's for popular libraries, etc.

If we are talking about libraries where the code we use is fairly immutable, i.e. unlikely to be subject to cache invalidation, I might lean a bit further towards saving HTTP requests by folding them into your monolithic local JS file. This would be the case for a large code base built, for example, around a specific version of jQuery. In cases like that, bumping the library version will almost certainly involve significant changes to your client application code as well, which negates the advantage of keeping them separate.

However, mixing in a second domain is an important win, as we do not want to needlessly run into the maximum-connections-per-hostname cap. Of course, a subdomain can serve that purpose too, but Google's domain has the advantage of being cookieless and is probably already in the client's DNS cache.

but loading all of them through headjs in parallel seems optimal

While there are advantages to the emerging crop of JavaScript loaders, we should bear in mind that using them penalises page startup, since the browser has to go and fetch our loader before the loader can request the rest of our assets. In other words, a user with an empty cache needs a full round trip to the server before any real loading can begin. Again, a "compile" step can come to the rescue; see require.js for a great hybrid implementation.
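
A minimal require.js sketch of that hybrid approach might look like this (the 'app/ui' module and the jQuery version are illustrative; for production, the r.js optimizer can then compile the modules down into a single file):

    // main.js - loaded via <script data-main="js/main" src="js/require.js"></script>
    require.config({
        paths: {
            // pull jQuery from Google's CDN (the ".js" extension is omitted by convention)
            jquery: 'https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min'
        }
    });

    // dependencies are fetched in parallel; the callback runs once they are ready
    require(['jquery', 'app/ui'], function ($, ui) {
        $(function () { ui.init(); });
    });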

The simplest way to make sure your scripts do not block UI painting is to place them at the end of your HTML. If you would rather place them elsewhere, the async and defer attributes now give you that flexibility. All modern browsers request assets in parallel, so unless you need to support particular flavours of legacy client this should not be a major consideration. The Browserscope network table is a great reference for this kind of thing. IE8 is predictably the main offender, still blocking image and iframe requests until scripts have loaded. Even back at 3.6, Firefox was fully parallelising everything except iframes.
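
As a sketch of the non-blocking placement described above (file names are illustrative), both variants below keep scripts out of the critical rendering path; with defer the files are fetched in parallel but executed in document order after parsing finishes:

    <!-- variant 1: plain script tags at the very end of <body> -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
    <script src="/js/site.js"></script>

    <!-- variant 2: defer in the <head>; execution order is still preserved -->
    <script defer src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
    <script defer src="/js/site.js"></script>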

Some users may be using the application in a restricted environment, so the application's domain may be whitelisted while the CDN's domains are not. (If this is a realistic problem, is it even possible to attempt loading from the CDN and fall back to the central server on failure?)

Working out whether the client machine can reach a remote host will always incur a serious penalty, because we have to wait for the connection attempt to fail before we can download our fallback. I would be much more inclined to host these assets locally.
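
For reference, the fallback pattern being asked about usually looks something like this sketch (the local path is an assumption); it also shows where the penalty comes from, since the document.write fallback only fires after the CDN request has failed:

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
    <script>
        // if the CDN is blocked or unreachable, window.jQuery is undefined,
        // so fall back to a locally hosted copy (hypothetical /js/ path)
        window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
    </script>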

+7
  • Many small JS files are better than a few large ones, for many reasons including change management, dependencies, and per-page requirements.
  • JavaScript / CSS / HTML and any other static content is served very efficiently by any current web server (Apache, IIS and many others); most of the time a single web server is more than capable of handling hundreds or thousands of requests per second, and in any case this static content is likely to be cached somewhere between the client and your server.
  • Using any external repository (one not controlled by you) for code you want to run in production is a NO-NO (for me and many others): you do not want a sudden, catastrophic and irreversible failure of your site's entire JavaScript functionality just because someone somewhere clicked commit without thinking or testing.
+4

Compiling all application-specific JS code into a single file, using CDNs such as Google's for popular libraries, etc., but loading them all through headjs in parallel seems optimal ...

I would say that this is mostly correct. Do not merge the various external libraries into one file, because, as you seem to already know, doing so would negate the caching benefit for the many users whose browsers already have the (separate) resources cached.

For your own application-specific JS code, one consideration is how often it will be updated. For example, if there is a core of functionality that will change infrequently but some smaller components that change regularly, it may make sense to compile (by which I assume you mean minify/compress) only the core into one file and continue to serve the smaller parts piecemeal, as in the sketch below.
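
A sketch of that split (the file names and cache-busting query string are illustrative):

    <!-- large, rarely changing core: compiled/minified once, cached for a long time -->
    <script src="/js/core.min.js?v=42"></script>
    <!-- small, page-specific parts served individually, so a change only invalidates one file -->
    <script src="/js/checkout.js"></script>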

Your decision should also take into account the size of your JS assets. If (and this is unlikely, but possible) you are serving a very large amount of JavaScript, combining it all into a single file could be counterproductive, as some clients (mobile devices, for example) have very strict limits on what they will cache. In that case you would be better served by several smaller assets.

These are just random tidbits for you to be aware of. The main point I wanted to make is that your first instinct (quoted above) is probably the right approach.

+1

Source: https://habr.com/ru/post/921861/

