Is x >= 0 more efficient than x > -1?

For comparisons in C++ on an int, is x >= 0 more efficient than x > -1?

+6
6 answers

short answer: no.

a longer answer, to give some insight: it depends entirely on your compiler, although I am pretty sure that every sensible compiler generates identical code for the two expressions.

code example:

 int func_ge0(int a)  { return a >= 0; }
 int func_gtm1(int a) { return a > -1; }

and then compile and compare the resulting assembler code:

  % gcc -S -O2 -fomit-frame-pointer foo.cc

gives the following:

  _Z8func_ge0i:
 .LFB0:
     .cfi_startproc
     .cfi_personality 0x0, __gxx_personality_v0
     movl    4(%esp), %eax
     notl    %eax
     shrl    $31, %eax
     ret
     .cfi_endproc

versus

  _Z9func_gtm1i:
 .LFB1:
     .cfi_startproc
     .cfi_personality 0x0, __gxx_personality_v0
     movl    4(%esp), %eax
     notl    %eax
     shrl    $31, %eax
     ret
     .cfi_endproc

(compiler: g++-4.4)

conclusion: do not try to outwit the compiler; focus on algorithms and data structures, measure and profile the real bottlenecks, and if in doubt, check the compiler's output.

+19

You can look at the resulting assembly code, which may differ from architecture to architecture, but I would expect the resulting code to take exactly the same number of cycles.

And, as mentioned in the comments, it is better to write whatever is most readable, and to optimize only when you have real, measured bottlenecks that you can pinpoint with a profiler.

BTW: To be precise, x > -1 can cause problems if x is unsigned . In that case the -1 is implicitly converted to unsigned (although you should get a warning about this), which leads to an incorrect result.
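
A minimal sketch of that pitfall (my illustration, not from the original answer; compile with warnings enabled, e.g. -Wall, and most compilers will flag both comparisons):

 #include <iostream>

 int main()
 {
     unsigned x = 42;
     // -1 is implicitly converted to unsigned here, i.e. to the largest unsigned value
     // (4294967295 when unsigned is 32 bits), so this prints 0 rather than the intended 1.
     std::cout << (x > -1) << "\n";
     // For an unsigned x this comparison is trivially always true, so it prints 1.
     std::cout << (x >= 0) << "\n";
 }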

+7

The last time I answered a question like this, I just wrote "measure" and padded it with periods until SO would accept the answer.

That answer was downvoted 3 times within a few minutes and was then deleted (along with at least one other answer to the question) by an SO moderator.

However, there is no alternative to measurement.

So this is the only possible answer.

And to elaborate on that in enough detail that the answer is not just downvoted and deleted again, you need to keep in mind that you measure only what you measure: one set of measurements will not necessarily tell you anything in general, only a specific result. Of course this may sound patronizing, stating such obvious things. So, well, so be it: just measure.

Or perhaps I should mention that most processors have special instructions for comparing against zero, and that even so, this does not let anyone conclude anything about the performance of your code snippets?

Well, I think I will stop there. Remember: measure. And don't optimize prematurely!



EDIT: Amended to take into account the points raised in @MooingDuck's comment.

Question:

For comparisons in C++ on an int, is x >= 0 more efficient than x > -1 ?

What is wrong with the question

Donald Knuth, author of the classic three-volume work, The Art of Computer Programming, once wrote [1],

“We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.”

How efficient x >= 0 is compared to x > -1 most often simply does not matter. That is, it is most likely the wrong thing to focus on.

How clearly the code expresses what you want to say is far more important. Your time, and the time of others maintaining this code, is generally much more valuable than the program's running time. Focus on how well the code communicates to other programmers, i.e. focus on clarity.

Why the focus of the question is wrong

Clarity affects the likelihood of correctness. Any code can be made arbitrarily fast if it does not have to be correct. Correctness therefore matters, and that means clarity matters a great deal, much more than shaving nanoseconds off the …

And the two expressions are not equivalent with respect to clarity, nor with respect to their chance of being correct.

If x is a signed integer, then x >= 0 means exactly the same as x > -1 . But if x is an unsigned integer, e.g. of type unsigned , then x > -1 means x > static_cast<unsigned>(-1) (via implicit conversion), which in turn means x > std::numeric_limits<unsigned>::max() . That is most probably not what the programmer intended to express!
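
To spell that out in code (a small sketch of mine, not part of the original answer; it assumes C++11 for static_assert):

 #include <limits>

 // For unsigned x, the literal -1 in (x > -1) is converted to the largest unsigned value:
 static_assert( static_cast<unsigned>(-1) == std::numeric_limits<unsigned>::max(),
                "-1 converts to the maximum unsigned value" );
 // Hence (x > -1) is false for every unsigned x, while (x >= 0) is trivially true for every unsigned x.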

Another reason why the focus (on micro-efficiency, when it should be on clarity) is wrong is that the main impact on efficiency does not usually come from the timings of individual operations (except, in some cases, dynamic allocation and the even slower disk and network operations), but from algorithmic efficiency. For example, writing …

 string s = ""; for( int i = 0; i < n; ++i ) { s = s + "-"; } 

is quite inefficient, because it uses time proportional to the square of n , O(n²), quadratic time: each s = s + "-" builds a new string by copying everything accumulated so far.

But writing instead …

 string s = ""; for( int i = 0; i < n; ++i ) { s += "-"; } 

reduces the time to being proportional to n , O(n), linear time.

Focusing on the timings of individual operations, one might start thinking about writing '-' instead of "-" and similar silly details. Focusing on clarity instead, one might notice that this can be expressed more clearly than with a loop, for instance by using the appropriate string constructor:

 string s( n, '-' ); 

Wow!

Finally, a third reason not to sweat the small details is that it is generally a very small part of the code that accounts for a disproportionately large share of the execution time. And identifying that part (or those parts) is not easy to do just by analyzing the code. Measurements are needed, and this kind of "where is the time being spent" measurement is called profiling .
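
One common way to do that with GCC on a Unix-like system (my sketch, not part of the original answer; the foo.cc file name is just the example used earlier) is the -pg instrumentation plus gprof:

   % g++ -O2 -pg foo.cc -o foo     # build with profiling instrumentation
   % ./foo                         # run the program; this writes gmon.out
   % gprof foo gmon.out            # report where the time was actually spent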

How to find the answer to the question

Twenty or thirty years ago, you could get a reasonable idea of ​​the effectiveness of individual operations by simply looking at the generated machine code.

For example, you can inspect the machine code by running the program in a debugger, or you can use the appropriate option to ask the compiler to generate an assembly listing. Note for g++: the -masm=intel option is handy for telling the compiler not to generate assembly in the rather ungrokkable AT&T syntax, but in Intel syntax instead. For example, Microsoft's assembler uses extended Intel syntax.
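
For example, continuing with the foo.cc file from the first answer, a command along these lines produces an Intel-syntax listing in foo.s:

   % g++ -S -O2 -masm=intel foo.cc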

Today's processors are smarter. They can execute instructions out of order, and even before their results are needed at the "current" point of execution. The compiler may be able to predict that (by incorporating knowledge gained from measurements), but a human has little chance.

The only recourse for an ordinary programmer is to measure .

Measurement, measurement, measurement!

And in general, that means executing the thing to be measured a million times, and dividing by a million.

Otherwise start-up and tear-down times dominate, and the result is garbage.
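
A minimal measurement sketch along those lines (my illustration, not part of the original answer; the function names repeat the example from the first answer, and the volatile sink is just a crude way to keep the loop from being optimized away entirely):

 #include <chrono>
 #include <iostream>

 int func_ge0(int a)  { return a >= 0; }
 int func_gtm1(int a) { return a > -1; }

 template< class Func >
 double nanoseconds_per_call( Func f )
 {
     int const n = 1000000;
     volatile int sink = 0;          // crude guard against the whole loop being optimized away
     auto const start = std::chrono::steady_clock::now();
     for( int i = -n/2; i < n/2; ++i ) { sink = sink + f( i ); }
     auto const elapsed = std::chrono::steady_clock::now() - start;
     return std::chrono::duration<double, std::nano>( elapsed ).count() / n;
 }

 int main()
 {
     std::cout << "x >= 0 : " << nanoseconds_per_call( func_ge0 )  << " ns per call\n";
     std::cout << "x > -1 : " << nanoseconds_per_call( func_gtm1 ) << " ns per call\n";
 }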

Of course, if the generated machine code is identical, then measuring will not tell you anything useful about the relative difference. It can then only indicate something about how large the measurement error is, because you know the difference should be zero.

Why measurement is the right approach

Suppose that theoretical considerations in an SO answer indicate that x > -1 will be slower than x >= 0 .

The compiler can defeat any such theoretical consideration by generating horrible code for that x >= 0 , perhaps because of a contextual "optimization" opportunity that it then (unfortunately!) recognizes.

The processor can likewise make a mess of any such prediction.

Thus, in any case, you have to measure.

Which means that the theoretical consideration gained you nothing: you end up doing the same thing anyway, namely measuring.

Why this detailed answer, although apparently useful, IMHO really is not

Personally, I would prefer the single word "measure" as the answer.

Because that is what it all comes down to.

Everything else is something the reader not only can figure out on their own, but will have to figure out the details of anyway; so trying to describe it here is really just verbiage.

References:

[1] Knuth, Donald. Structured Programming with go to Statements, ACM Computing Surveys, Vol. 6, No. 4, December 1974, p. 268.

+6

Your compiler can decide how to implement the two expressions (which assembly instructions to use), and because of that there is no difference. One compiler could implement x > -1 as x >= 0 , and another could implement x >= 0 as x > -1 . If there were any difference (which is unlikely), your compiler would pick the better one.

+3

They should be equivalent. Both will be translated by the compiler into a single assembly instruction (neglecting that x may first need to be loaded into a register). Any modern processor has both a "greater than" and a "greater than or equal" comparison, and since you are comparing against a constant, both take the same amount of time.

Do not worry about the smallest details; find the big performance problems (for example, algorithm design) and attack those. Also take a look at Amdahl's law.

+2

I doubt there is any measurable difference. The compiler should emit code using jump instructions such as JAE (jump if above or equal) or JA (jump if above), and these instructions most likely take the same number of cycles.

Ultimately, it does not matter. Just use whichever is clearer to someone reading your code.

+1

Source: https://habr.com/ru/post/902682/

