Multiple small DOM operations versus one large DOM operation

This is more of a best-practices question. When applying an MVC-like pattern to a web application, the question often arises of how best to update the view.

For example, suppose I have to update View_1, which contains X elements. Is it better to:

A: iterate over each of the X elements whose computed value needs updating, and apply each DOM change at a very fine granularity,

or

B: use the data provided by the Model (or some other data structure) to regenerate the markup for the entire view and all of its child elements, and replace View_1's root element in a single DOM manipulation?

Correct me if I am wrong, but I have heard that rendering engines are usually more efficient at replacing a large portion of the DOM in one operation than at performing many smaller DOM operations. If so, approach B is superior. However, even with templating engines, I find it hard to avoid re-rendering markup for parts that have not changed.
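To make the two approaches concrete, here is a minimal sketch in plain JavaScript. The model shape (`{ id, text }` items) and function names are my own invention, not from any particular framework; the DOM calls that would consume each result are noted in comments.

```javascript
// Hypothetical model: a list of { id, text } items rendered as <li> elements.
const oldModel = [
  { id: 1, text: "alpha" },
  { id: 2, text: "beta" },
  { id: 3, text: "gamma" },
];
const newModel = [
  { id: 1, text: "alpha" },
  { id: 2, text: "BETA" },
  { id: 3, text: "gamma" },
];

// Approach A: compute only the fine-grained updates that are actually needed.
// In a real app each entry would become something like
//   document.getElementById("item-" + id).textContent = text;
function diffUpdates(oldItems, newItems) {
  const updates = [];
  newItems.forEach((item, i) => {
    if (!oldItems[i] || oldItems[i].text !== item.text) {
      updates.push({ id: item.id, text: item.text });
    }
  });
  return updates;
}

// Approach B: regenerate the full markup and replace the view's root in one go,
// e.g. viewRoot.innerHTML = renderAll(newModel);
function renderAll(items) {
  return items
    .map((item) => `<li id="item-${item.id}">${item.text}</li>`)
    .join("");
}

const updates = diffUpdates(oldModel, newModel); // only the changed element
```

With approach A only one element is touched; with approach B the browser discards and rebuilds all three `<li>` nodes even though two of them are unchanged.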

I looked at the source code of the Bespin project before it was renamed. I clearly remember that they implemented a render-loop mechanism in which DOM operations are queued and applied at fixed time intervals, much like games manage their frames. This resembles approach A, and I can see the rationale for it: small DOM operations applied this way keep the user interface responsive (especially important for a text editor). The application can also be made more efficient by updating only the elements that need to change; static text and purely aesthetic elements remain untouched.
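A render loop of that style can be sketched as a small queue that batches mutations and flushes them on a timer. This is my own illustrative version, not Bespin's actual code; the class and method names are invented.

```javascript
// Minimal sketch of a game-loop-style render queue: DOM mutations are queued
// during the frame and flushed together, so many small writes coalesce into
// one batch instead of interleaving with reads and forcing extra reflows.
class RenderQueue {
  constructor(flushIntervalMs = 16) {
    this.pending = [];
    this.flushIntervalMs = flushIntervalMs;
    this.timer = null;
  }

  // Queue a mutation: a zero-argument function that touches the DOM.
  enqueue(mutation) {
    this.pending.push(mutation);
    if (this.timer === null) {
      // In a browser, requestAnimationFrame would be a natural fit here.
      this.timer = setTimeout(() => this.flush(), this.flushIntervalMs);
    }
  }

  // Apply all queued mutations in one batch, in the order they arrived.
  flush() {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    const batch = this.pending;
    this.pending = [];
    batch.forEach((mutation) => mutation());
  }
}
```

Callers simply `enqueue(() => el.textContent = "...")`; nothing hits the DOM until the next flush.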

Those are my arguments for both sides. What do you think? Is there a happy medium somewhere in between, or is one approach far superior?

Are there any good books / articles / sites on this topic?

(Assume the web application in question is highly interactive, with many dynamic updates.)

+4
2 answers

After three years of working with various web technologies, I think I have finally found a great balance between the two approaches: the virtual DOM.

Libraries like virtual-dom, Elm, Mithril.js and, to some extent, Facebook's React maintain a lightweight abstraction of the actual DOM, determine what needs to change, and apply the smallest possible set of changes to the real DOM.

eg.

https://github.com/Matt-Esch/virtual-dom/tree/master/vdom
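The core idea can be shown with a toy diff over plain-object trees. This is not the virtual-dom library's actual API, just a self-contained illustration; the node shape (`{ tag, text, children }`) and patch types are invented for the example.

```javascript
// Toy virtual nodes: { tag, text } leaves or { tag, children } internal nodes.
// diff() compares two trees and returns the smallest patch list; a separate
// patch step would then walk the real DOM and apply only those entries.
function diff(oldNode, newNode, path = "0", patches = []) {
  if (!oldNode) {
    patches.push({ type: "CREATE", path, node: newNode });
  } else if (!newNode) {
    patches.push({ type: "REMOVE", path });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ type: "REPLACE", path, node: newNode });
  } else if (oldNode.text !== newNode.text) {
    patches.push({ type: "TEXT", path, text: newNode.text });
  } else {
    const oldKids = oldNode.children || [];
    const newKids = newNode.children || [];
    const len = Math.max(oldKids.length, newKids.length);
    for (let i = 0; i < len; i++) {
      diff(oldKids[i], newKids[i], path + "." + i, patches);
    }
  }
  return patches;
}

const oldTree = { tag: "ul", children: [
  { tag: "li", text: "alpha" },
  { tag: "li", text: "beta" },
]};
const newTree = { tag: "ul", children: [
  { tag: "li", text: "alpha" },
  { tag: "li", text: "BETA" },
]};

const patches = diff(oldTree, newTree); // one TEXT patch for the second <li>
```

You write code as if you re-render everything (approach B), but the library touches the DOM only where the trees differ (approach A's efficiency).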

0

It is true that rendering engines usually process one large batch of changes faster than many small changes.

tl;dr: A middle road would be ideal, and if possible, do the computation in a Web Worker.

Depending on the number of changes, you can try spawning a worker and computing the changes inside it, since JavaScript otherwise runs on, and blocks, the UI thread. You could use the following flow:

  • Create an object containing the relevant part of the DOM tree, along with its parent's identifier.
  • Serialize the object to a JSON string.
  • Start the worker.
  • Pass the string to the worker.
  • In the worker, receive and parse the string.
  • Apply the required changes to the parts of the tree you passed in.
  • Re-serialize the object.
  • Pass it back to the main thread.
  • Parse it and build the new tree.
  • Insert the new tree into the DOM again.
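The round trip above can be sketched as follows. The worker side is simulated as a plain function here so the data flow is visible in one file; in a browser you would create it with `new Worker(...)` and exchange these strings via `postMessage`/`onmessage`. All names (`serializeSubtree`, `workerComputeChanges`, the uppercase mutation) are illustrative.

```javascript
// Main thread: snapshot the relevant subtree as plain data, plus its parent id,
// and flatten it to a string for transfer.
function serializeSubtree(subtree, parentId) {
  return JSON.stringify({ parentId, subtree });
}

// "Worker" side: parse the string, mutate the plain-object tree off the UI
// thread, and re-serialize the result.
function workerComputeChanges(message) {
  const { parentId, subtree } = JSON.parse(message);
  // Stand-in for real change logic: uppercase every node's text.
  const walk = (node) => {
    if (node.text) node.text = node.text.toUpperCase();
    (node.children || []).forEach(walk);
  };
  walk(subtree);
  return JSON.stringify({ parentId, subtree });
}

// Main thread again: parse the reply; the caller would now rebuild the DOM
// subtree from this data and swap it in under `parentId` in one operation.
const reply = workerComputeChanges(
  serializeSubtree({ tag: "div", children: [{ tag: "p", text: "hello" }] }, "view-1")
);
const result = JSON.parse(reply);
```

Note that modern browsers structured-clone `postMessage` payloads automatically, so the explicit JSON stringify/parse steps are often unnecessary today; the answer predates that being widely available.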

This will be faster when there are many changes and a small tree. If it is a large tree with only a few changes, it will be faster to make the changes locally on a copy of the relevant DOM subtree, then update the DOM in one go.

Also read Google's articles about page speed:

https://code.google.com/speed/articles/

And especially this article:

https://code.google.com/speed/articles/javascript-dom.html

+2

Source: https://habr.com/ru/post/1389550/
