Compiling all of the application's local/user JS code into a single file
Since some of our key goals are to reduce the number of HTTP requests and to minimize request overhead, this is a very widely accepted best practice.
The main case where we might not want to do this is when there is a high probability of frequent cache invalidation, i.e. whenever we change our code. There will always be trade-offs: serving a single file is likely to increase the rate of cache invalidation, while serving many individual files is likely to slow things down for users with an empty cache.
For this reason, inlining the occasional bit of page-specific JavaScript is not as evil as some make it out to be. In general, though, combining and minifying your JS into a single file is an excellent first step.
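As a rough sketch of that first step, assuming a Node build environment and the terser minifier (uglify-js or any comparable tool would serve equally well; the file names below are purely hypothetical):

```js
// build.js: concatenate the app's own JS and minify the result into one file.
const fs = require("fs");
const { minify } = require("terser");

// Hypothetical source files, listed in dependency order.
const files = ["src/app/utils.js", "src/app/widgets.js", "src/app/main.js"];
const combined = files.map(f => fs.readFileSync(f, "utf8")).join(";\n");

minify(combined).then(result => {
  // One minified file means one HTTP request for all of our local JS.
  fs.writeFileSync("public/js/app.min.js", result.code);
});
```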
using CDNs like Google's for popular libraries, etc.
If we are talking about libraries whose code in use is fairly immutable, that is, unlikely to suffer cache invalidation, I could be persuaded to save a few more HTTP requests by rolling them into your monolithic local JS file: for example, a large codebase built against a specific version of jQuery. In cases like that, bumping the library version will almost certainly involve significant changes to your client application code anyway, which negates the advantage of keeping them separate.
However, mixing in the Google domain is an important win, since we do not want to needlessly bump up against the browser's per-domain connection cap. Of course, a subdomain could serve the same purpose, but the Google domain has the advantage of being cookieless and of probably already being in the client's DNS cache.
but loading all of them through head.js in parallel seems optimal
While there are advantages to the emerging crop of JavaScript loaders, we should bear in mind that using them penalizes page start-up, since the browser must fetch our loader before the loader can request the rest of our assets. In other words, for a user with an empty cache, a full round trip to the server is required before any real loading begins. Again, a "compile" step can come to the rescue; see require.js for an excellent hybrid implementation.
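As a sketch of that hybrid approach (the module names and paths below are hypothetical): in development, require.js fetches each module with its own request, while the r.js optimizer can compile the same modules into a single file for production, keeping the loader ergonomics without the extra round trips.

```js
// main.js: the entry point referenced by a single
// <script data-main="js/main" src="js/require.js"></script> tag.
require.config({
  paths: {
    // Hypothetical mapping to a locally hosted, pinned copy of jQuery.
    jquery: "lib/jquery-1.7.2.min"
  }
});

require(["jquery", "app/dashboard"], function ($, dashboard) {
  // Both dependencies were requested in parallel; in a production build the
  // r.js optimizer bakes them into one compiled file instead.
  dashboard.init($);
});
```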
The best way to make sure your scripts do not block UI painting is to place them at the end of your HTML. If you would rather put them elsewhere, the async and defer attributes now offer you that flexibility. All modern browsers request assets in parallel, so unless you need to support particular flavors of legacy client, this should not be a major factor. Browserscope's network table is a great reference for this kind of thing. IE8 is predictably the main offender, still blocking image and iframe requests until scripts have loaded. Even back at 3.6, Firefox was already parallelizing everything except iframes.
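If moving the tags is not an option, a rough programmatic counterpart (file names hypothetical) is to inject the scripts from JS: injected scripts do not block parsing or painting, and in browsers that support ordered async, setting async = false on each one preserves execution order, much as defer does in markup.

```js
// Load several scripts without blocking rendering, while keeping their
// execution order (injected scripts default to async; async = false
// restores in-order execution in browsers that support it).
["/js/vendor.min.js", "/js/app.min.js"].forEach(function (src) {
  var s = document.createElement("script");
  s.src = src;
  s.async = false;
  document.getElementsByTagName("head")[0].appendChild(s);
});
```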
Some users may be running the application in a locked-down environment, so the application's own domain may be whitelisted while the CDN domains are not. (Assuming that is a realistic possibility and a genuine concern, is it even feasible to attempt the download from the CDN and fall back to the central server if it fails?)
Detecting whether the client machine can reach a remote host will always incur severe penalties, since we have to wait for the connection attempt to fail before we can load our fallback. I would be much more inclined to host these assets locally.
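For what the CDN-plus-fallback route looks like if it is attempted anyway, the usual pattern is a check placed immediately after the CDN reference (the local path and jQuery version below are only examples); note that the fallback only runs once the CDN request has already failed or timed out, which is exactly the delay described above.

```js
// Placed right after the <script> tag that references
// https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js
// If that request was blocked or failed, window.jQuery is undefined,
// so we write out a script tag pointing at a locally hosted copy.
window.jQuery || document.write(
  '<script src="/js/jquery-1.7.2.min.js"><\/script>'
);
```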