Are local functions a micro-optimization concern?

If I move an inner function outside of its enclosing function so that it is not re-created every time the outer function is called, is that a micro-optimization?

In this particular case, doMoreStuff is used only inside doStuff. Should I worry about local functions like these?

    function doStuff() {
        var doMoreStuff = function(val) {
            // do some stuff
        }

        // do something

        for (var i = 0; i < list.length; i++) {
            doMoreStuff(list[i]);

            for (var j = 0; j < list[i].children.length; j++) {
                doMoreStuff(list[i].children[j]);
            }
        }

        // do some other stuff
    }

For example, a more realistic case:

    function sendDataToServer(data) {
        var callback = function(incoming) {
            // handle incoming
        }

        ajaxCall("url", data, callback);
    }
+2
5 answers

Not sure if this falls under the category of micro-optimization. I would say no.

But it also depends on how often you call doStuff. If you call it often, then creating the same function over and over again is simply superfluous and will certainly add overhead.

If you do not want to have a "helper function" in the global scope, but still want to avoid re-creating it, you can wrap it like this:

    var doStuff = (function() {
        var doMoreStuff = function(val) {
            // do some stuff
        }

        return function() {
            // do something

            for (var i = 0; i < list.length; i++) {
                doMoreStuff(list[i]);
            }

            // do some other stuff
        }
    }());

Since the returned function is a closure, it has access to doMoreStuff. Note that the outer function is executed immediately ( (function(){...}()) ).

Or you can create an object that holds references to the functions:

    var stuff = {
        doMoreStuff: function() {...},
        doStuff: function() {...}
    };
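Filled in with the names from the question, that pattern might look roughly like this (a sketch only; `items` is just a placeholder name for whatever list you iterate over):

    var stuff = {
        doMoreStuff: function(val) {
            // do some stuff with val
        },
        doStuff: function(items) {
            for (var i = 0; i < items.length; i++) {
                // the helper is created only once, when the object
                // literal is evaluated, and reused on every call
                this.doMoreStuff(items[i]);
            }
            // do some other stuff
        }
    };

    stuff.doStuff(list); // 'list' as in the original question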

For more information on encapsulation, object creation patterns, and other related concepts, see the book JavaScript Patterns.

+4

It depends entirely on how often the function is called. If it is an OnUpdate function called 10 times per second, it is a worthwhile optimization. If it is called three times per page, it is a micro-optimization.

Nested function definitions are convenient, but they are never strictly necessary (they can always be replaced by passing extra arguments to a top-level function).

An example with a nested function:

    function somefunc() {
        var localvar = 5;
        var otherfunc = function() {
            alert(localvar);
        };
        otherfunc();
    }

The same thing, now with an argument instead of a closure:

    function otherfunc(localvar) {
        alert(localvar);
    }

    function somefunc() {
        var localvar = 5;
        otherfunc(localvar);
    }
0

This is absolutely a micro-optimization. The whole reason for having functions in the first place is to make your code cleaner, more maintainable, and more readable. Functions add a semantic boundary around sections of code. Each function should do one thing, and it should do it cleanly. So if you find that your functions perform several things at once, you have a candidate for refactoring into several routines.

Only optimize when something is actually too slow for you (and if it doesn't even work yet, it is too early to optimize. Period). Remember, nobody ever paid extra for a program that was faster than their needs/requirements...

Edit: Given that the program isn't finished yet, this is also premature optimization. Why is that bad? First, you spend time on things that may not matter in the end. Second, you have no baseline to tell whether your optimizations improved anything in a realistic sense. Third, you reduce maintainability and readability before you've even got the thing running, so it will be harder to get working than if you had gone with clean, concise code. Fourth, you don't know whether you will need doMoreStuff somewhere else in the program until you are finished and understand all your requirements (perhaps a long shot depending on the specifics, but not out of the question).

There is a reason Donald Knuth said that premature optimization is the root of all evil...

0

A quick "test" run on an average PC (I know there are plenty of unaccounted-for variables, so no comments about the obvious, please; it is interesting anyway):

    count = 0;
    t1 = +new Date();
    while (count < 1000000) {
        p = function(){};
        ++count;
    }
    t2 = +new Date();
    console.log(t2 - t1); // milliseconds

The loop itself could be tightened by moving the increment into the condition, for example (that shaves roughly 100 milliseconds off the run time, but it affects the runs with and without function creation equally, so it does not change the difference that matters).
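Just to illustrate that tweak (the timings below are for the test as written, not for this variant), the loop with the increment folded into the condition would look roughly like:

    count = 0;
    t1 = +new Date();
    while (count++ < 1000000) { // increment moved into the condition
        p = function(){};
    }
    t2 = +new Date();
    console.log(t2 - t1); // milliseconds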

Three runs of the test as written gave:

    913
    878
    890

Then, with the function-creation line commented out, three runs gave:

    462
    458
    464

So 1,000,000 creations of an empty function add about half a second on their own. Even if your original code runs 10 times per second on a handheld device (say the device's overall performance is 1/100 of this laptop, which is exaggerated; it is probably closer to 1/10, but it gives a good upper bound), that is equivalent to 1,000 function creations per second on this computer, which take about 1/2,000 of a second. So every second the handheld device adds the overhead of roughly 1/2,000 of a second of processing... half a millisecond per second is not very much.
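For reference, a back-of-the-envelope version of that arithmetic, using the measured numbers above (the 100x slowdown factor is the answer's own assumption):

    var withCreation    = 890; // ms per 1,000,000 iterations, from the runs above
    var withoutCreation = 462; // ms per 1,000,000 iterations with creation commented out
    var perCreationMs   = (withCreation - withoutCreation) / 1000000; // ~0.0004 ms each

    // 10 calls/sec on a device ~100x slower is roughly equivalent to
    // 1000 function creations/sec on this machine:
    var overheadPerSecMs = 1000 * perCreationMs; // ~0.4 ms, i.e. about 1/2000 of a second

    console.log(perCreationMs, overheadPerSecMs);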

From this primitive test I would conclude that on a PC it is definitely a micro-optimization, and if you are developing for less powerful devices, it almost certainly is too.

0

The original question was asked in 2011. Given the growth of Node.js since then, I thought it was worth revisiting. In a server environment, a few milliseconds here and there can make a difference; it can be the difference between staying responsive under load or not.

While inner functions are nice conceptually, they can get in the way of the JavaScript engine's optimizer. The following example illustrates this:

    function a1(n) { return n + 2; }
    function a2(n) { return 2 - n; }

    function a() {
        var k = 5;
        for (var i = 0; i < 100000000; i++) {
            k = a1(k) + a2(k);
        }
        return k;
    }

    function b() {
        function b1(n) { return n + 2; }
        function b2(n) { return 2 - n; }

        var k = 5;
        for (var i = 0; i < 100000000; i++) {
            k = b1(k) + b2(k);
        }
        return k;
    }

    function measure(label, fn) {
        var s = new Date();
        var r = fn();
        var e = new Date();
        console.log(label, e - s);
    }

    for (var i = 0; i < 4; i++) {
        measure('A', a);
        measure('B', b);
    }

Command to run the code:

 node --trace_deopt test.js 

Output:

    [deoptimize global object @ 0x2431b35106e9]
    A 128
    B 130
    A 132
    [deoptimizing (DEOPT eager): begin 0x3ee3d709a821 b (opt #5) @4, FP to SP delta: 72]
      translating b => node=36, height=32
        0x7fffb88a9960: [top + 64] <- 0x2431b3504121 ; rdi 0x2431b3504121 <undefined>
        0x7fffb88a9958: [top + 56] <- 0x17210dea8376 ; caller pc
        0x7fffb88a9950: [top + 48] <- 0x7fffb88a9998 ; caller fp
        0x7fffb88a9948: [top + 40] <- 0x3ee3d709a709; context
        0x7fffb88a9940: [top + 32] <- 0x3ee3d709a821; function
        0x7fffb88a9938: [top + 24] <- 0x3ee3d70efa71 ; rcx 0x3ee3d70efa71 <JS Function b1 (SharedFunctionInfo 0x361602434ae1)>
        0x7fffb88a9930: [top + 16] <- 0x3ee3d70efab9 ; rdx 0x3ee3d70efab9 <JS Function b2 (SharedFunctionInfo 0x361602434b71)>
        0x7fffb88a9928: [top + 8] <- 5 ; rbx (smi)
        0x7fffb88a9920: [top + 0] <- 0 ; rax (smi)
    [deoptimizing (eager): end 0x3ee3d709a821 b @4 => node=36, pc=0x17210dec9129, state=NO_REGISTERS, alignment=no padding, took 0.203 ms]
    [removing optimized code for: b]
    B 1000
    A 125
    B 1032
    A 132
    B 1033

As you can see, functions A and B initially ran at the same speed. Then, for some reason, a deoptimization event occurred, and from that point on B is almost an order of magnitude slower.

If you are writing code where performance matters, it is best to avoid inner functions.
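A minimal sketch of such a refactoring, hoisting the helpers out of the hot function while still keeping them out of the global scope (reusing the module pattern shown in the first answer; this is one option, not the only one):

    var b = (function() {
        // created once, when the module is evaluated, not on every call
        function b1(n) { return n + 2; }
        function b2(n) { return 2 - n; }

        return function() {
            var k = 5;
            for (var i = 0; i < 100000000; i++) {
                k = b1(k) + b2(k);
            }
            return k;
        };
    }());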

0

Source: https://habr.com/ru/post/949533/
