Ten years later, I started web development again. I created a page with 50 different input fields, in all variants, that control the contents of a div. Favoring performance over convenience, I wrote all the event handlers in native JS. Out of interest, I then reproduced the page using jQuery.
Now, in my native JS, I cannot group the event handlers for these input fields, even when they do similar things. Creating them in a loop would not save much code either, since a group never exceeds 1-3 related input fields. In the end, I have a whole bunch of handlers that look like this:
var input = document.getElementById('input');
// fires on every keystroke
input.addEventListener('input', function() {
    // ... update the div ...
});
// fires when the value is committed (e.g. on blur)
input.addEventListener('change', function() {
    // ... update the div ...
});
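For comparison, this is roughly how jQuery let me collapse that boilerplate; a minimal sketch, assuming the inputs share a placeholder class like controls (the class name and handler body are illustrative, not from my actual page):

// One handler bound to all matching inputs, for both events at once
$('.controls').on('input change', function() {
    // 'this' is the input that fired; update the div from its value
});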
The native JS for this test page is about 20 kB (unminified). The replicated version using jQuery instead of native JS is about 9 kB (unminified).
While researching jQuery, I found that every article advising against it includes some benchmark showing that, after x million iterations of a method, native JS was x seconds faster than jQuery.
The question I ask myself is: how relevant is this in real web applications? I mean, besides the fact that it took me four times as long to write the native JS, doesn't loading more than twice as much JS slow things down for the visitor far more than the theoretical few-millionths-of-a-second slower execution time of each jQuery method call?
user7492538