How to optimize a web application against latency caused by many asynchronous background requests?

I am developing a modular RIA based on an MVC architecture on the client side. Currently, the application is only about 10% complete, so it is not too late to make design changes.

The application is designed so that it initially loads with a very small footprint and then, depending on the actions the user performs, fetches large amounts of data asynchronously. This data potentially includes both data stored on my servers and data from third-party web services, including social networks and microblogging services.

However, what bothers me is this: can several heavy AJAX requests running in the background freeze the browser? Recently I noticed some serious latency problems in a few content-aggregation services, and when I analyzed their client-side code I was surprised that the client-side application itself was quite small, around 300 KB. Yet whenever the application started up, the browser (both Firefox and IE) froze and took several seconds to recover. Analyzing the asynchronous requests showed that the application was simultaneously fetching user content from Gmail, Facebook and Twitter, pushing it all into the DOM, and consuming far too much memory.

It would be great if someone could point me to some recommendations / best practices to prevent such problems. Would it be advisable to write a custom loader script that fetches the content in the background sequentially, in a given order of importance, rather than loading it all in parallel, which can lead to multiple callbacks executing at the same time? A rough sketch of what I mean is below.
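To make the idea concrete, here is a sketch of the kind of sequential loader I have in mind (jQuery-style; the loader function names, the endpoint and renderGmail() are placeholders, not real code):

    // Queue of loader functions, in decreasing order of importance.
    // Each loader receives a "done" callback and must call it when its
    // AJAX request completes, successfully or not.
    var loadQueue = [loadCoreContent, loadGmailWidget, loadFacebookWidget, loadTwitterWidget];

    function loadNext() {
        if (loadQueue.length === 0) return;
        var loader = loadQueue.shift();
        loader(loadNext);   // start the next loader only when this one is done
    }

    // Example of one such loader; the others would look similar.
    function loadGmailWidget(done) {
        $.ajax({
            url: '/proxy/gmail/unread',   // hypothetical endpoint
            dataType: 'json',
            success: function (data) { $('#gmail').html(renderGmail(data)); },
            complete: done                // always move on to the next loader
        });
    }

    loadNext();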

Any recommendations would be greatly appreciated.

2 answers

One solution (well, not a silver bullet for every case, but one solution) is to delegate content aggregation to the server side instead of doing everything in the end user's browser.

This can be done using an ESI gateway (Edge Side Includes). One option is Varnish ESI, but it does not cover the whole ESI specification. Another one, esigate.org, is also open source and possibly has better coverage (I have not verified that). With ESI, your application layout can be a combination of different blocks with different cache policies (TTLs) and different providers. The ESI server takes over part of the traffic that you originally offloaded to the end user's browser, so it will cost you more bandwidth, but at least you get more control over that software than over the various browsers acting as HTTP clients.
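For illustration, a layout assembled by an ESI gateway could look something like this (the backend URLs are invented; each included fragment gets its own cache lifetime through the Cache-Control headers its backend sends, and the exact syntax supported depends on the gateway you choose):

    <!-- Page skeleton served through the ESI gateway. Each block is a
         separate fragment with its own backend and cache policy. -->
    <html>
      <body>
        <div id="main">
          <!-- critical content, short TTL -->
          <esi:include src="http://backend.example.com/user/dashboard" />
        </div>
        <div id="social">
          <!-- decorative aggregated content, cached longer on the gateway -->
          <esi:include src="http://backend.example.com/aggregated/twitter" />
          <esi:include src="http://backend.example.com/aggregated/facebook" />
        </div>
      </body>
    </html>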

At the very least, this may improve the caching of the asynchronously loaded data on your server side, and thus speed up response times for the end user's browser (better response times, fewer parallel requests).

Now, on the user's side, in terms of priority on your page, you should definitely decide which content is the most important, the content the user can start working with right away, and which is just "decoration" (this assumes your service has a good ratio of information to noise; if your page contains nothing but social-network widgets, you will have problems).

I assume that since your application is a small static shell with lots of asynchronously loaded data, you use a lot of AJAX and not many full page changes. This means that once content is loaded, it stays on the page for a long time.

So having a lot of delayed, chained, parallel loads from social networks and other web services should not be a problem in itself. The content may not be there in the first 15 seconds, but if it stays on the page for the next 15 minutes, that is probably fine (if the most important content is already there, the user may not even notice that the decorative content was not available yet). One tip for IE6 (and sometimes IE7): use setTimeout() calls liberally to force page redraws, and you will see that the content that is already available shows up faster.
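As a sketch of that setTimeout() idea (the element ID and the list of fragments are invented for the example): instead of appending all the fetched fragments to the DOM in one go, yield back to the browser between insertions so it can repaint:

    // Already-fetched HTML fragments, in decreasing order of importance.
    var fragments = [gmailHtml, facebookHtml, twitterHtml];

    function insertNextFragment() {
        if (fragments.length === 0) return;
        $('#content').append(fragments.shift());
        // Yield to the browser so it can repaint before the next insertion;
        // this keeps older IE versions from appearing frozen during heavy DOM updates.
        setTimeout(insertNextFragment, 0);
    }

    setTimeout(insertNextFragment, 0);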

One last tip, for when you need to do regular AJAX polling for updated content. If you really poll 10 pieces of content every minute, you will always have problems with parallel loading and a lot of activity, the same problem as with the initial load. There are usually two things you can do about it.

The first is the COMET family of techniques, with long-lived HTTP connections (so the server can push data and/or you get faster responses, but only if your server is configured to handle that kind of HTTP traffic).

The second is adding a back-off factor to the polling interval: the first check is done after 1 minute, the next after 2 minutes, then 3, 15, 25, and so on, until in the end you only check once an hour, perhaps. You can also reduce the delay before the next check whenever the user shows some activity (some interaction with the page), because you can assume the user really wants fresh data only when he is actually doing something with your page. This saves some of the user's CPU and also reduces the load on your server.
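A rough sketch of that back-off idea (the /updates endpoint is invented, and a simple doubling schedule stands in for the 1/2/3/15/25 sequence above):

    var delay = 60 * 1000;             // start by checking after one minute
    var maxDelay = 60 * 60 * 1000;     // never wait longer than an hour

    function checkForUpdates() {
        $.getJSON('/updates', function (data) {
            // ...render the updated content here...
        });
        delay = Math.min(delay * 2, maxDelay);   // back off before the next check
        setTimeout(checkForUpdates, delay);
    }

    // Any user interaction suggests fresh data is wanted again,
    // so drop the delay back to its starting value.
    $(document).bind('click keydown', function () {
        delay = 60 * 1000;
    });

    setTimeout(checkForUpdates, delay);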


I also had page load delays because of Facebook / LinkedIn / etc. plugins.

The solution for me was asynchronous loading of the third-party JavaScript. It does not work for every web service / widget, but it does for many of them (the Like button, the Tweet button, etc.).

Example:

 <div id="tweet">
   <a href="http://twitter.com/share" class="twitter-share-button" data-count="horizontal">Tweet</a>
 </div>

 <script type="text/javascript">
   // Load Twitter's widgets.js asynchronously once the DOM is ready,
   // instead of blocking the initial page load with a synchronous <script> tag.
   $(function() {
     $.getScript('http://platform.twitter.com/widgets.js');
   });
 </script>

This code displays the "Tweet" button after the page (DOM) has loaded.

But first you have to check whether the third-party JavaScript uses document.write() or not. If it does, you cannot load it this way; only synchronous loading will work.

