That's a pretty poor understanding of what undefined behavior actually means.
In C, C++, and related languages such as Objective-C, there are four kinds of behavior. There is behavior defined by the language standard. There is implementation-defined behavior, where the standard explicitly states that the implementation must define and document the behavior. There is unspecified behavior, where the standard says that one of several possible behaviors may occur. And there is undefined behavior, where the standard says nothing about the result at all. And because the standard says nothing about the result, anything can happen when behavior is undefined.
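To make the four categories concrete, here is a small C program of my own (not from the original answer) with one instance of each:

    #include <limits.h>
    #include <stdio.h>

    static int f(void) { puts("f"); return 1; }
    static int g(void) { puts("g"); return 2; }

    int main(void) {
        /* Defined behavior: the standard fully specifies the result. */
        int defined = 1 + 2;              /* always 3 */

        /* Implementation-defined behavior: the implementation must
           choose a behavior and document it. */
        int impl = (int)sizeof(long);     /* e.g. 4 or 8, but fixed and documented */

        /* Unspecified behavior: one of several allowed outcomes, with no
           documentation required. "f" and "g" may print in either order,
           because the evaluation order of the operands of + is unspecified. */
        int unspec = f() + g();

        /* Undefined behavior: the standard says nothing at all. Left
           commented out so this program itself stays well-defined. */
        /* int undef = INT_MAX + 1; */    /* signed overflow */

        printf("%d %d %d\n", defined, impl, unspec);
        return 0;
    }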
Some people suggest that "undefined behavior" means "something bad happens." That's not right. It means "anything can happen," which includes "something bad can happen," not "something bad must happen." In practice it often means: nothing bad happens while you test your program, but as soon as it ships to a customer, all hell breaks loose. And since anything can happen, the compiler is actually allowed to assume that your code contains no undefined behavior: either that assumption is true, or it is false, in which case anything can happen anyway, so whatever follows from the compiler's wrong assumption is still permitted.
Someone claimed that when p points to an array of three elements and p + 4 is computed, nothing bad will happen. Wrong. Enter your optimizing compiler. Say this is your code:
    int f(int x) {
        int a[3], b[4];
        int *p = (x == 0 ? &a[0] : &b[0]);
        p + 4;  /* computed and discarded; this is where the undefined behavior lurks */
        return x == 0 ? 0 : 1000000 / x;
    }
Evaluating p + 4 is undefined behavior if p points to a[0], but not if it points to b[0]. Therefore, the compiler is allowed to assume that p points to b[0]. Therefore, the compiler is allowed to assume that x != 0, because x == 0 would lead to undefined behavior. Therefore, the compiler is allowed to remove the x == 0 check in the return statement and simply return 1000000 / x. Which means your program crashes with a division by zero when you call f(0), instead of returning 0.
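Put that chain of reasoning together and the compiler may effectively compile f as if you had written the following. This is a sketch of the net effect under those assumptions, not the literal output of any particular compiler:

    int f(int x) {
        /* the x == 0 branch has been reasoned away */
        return 1000000 / x;  /* f(0) divides by zero and typically crashes */
    }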
Another claim was that if you increment a null pointer and then decrement it again, the result is a null pointer once more. Wrong. Leaving aside the possibility that incrementing a null pointer might simply crash on some hardware, consider this: since incrementing a null pointer is undefined behavior, a compiler may check whether the pointer is null and increment it only when it is not, so that p + 1 is again a null pointer. Ordinarily it would do the same for the decrement, but, being a clever compiler, it notices that p + 1 yielding a null pointer would itself have required undefined behavior, so it may assume p + 1 is not a null pointer, and the null check on the decrement can be omitted. The result: (p + 1) - 1 is not a null pointer if p was a null pointer.
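To make that scenario concrete, here is a sketch of what such a hypothetical compiler could emit for the two operations. It is modeled with raw integer addresses (uintptr_t) so the demonstration itself stays free of undefined behavior; the function names and the saturating null check are my illustration, not documented behavior of any real compiler:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A pointer viewed as a raw address. */
    typedef uintptr_t addr;

    /* Increment: the hypothetical compiler keeps a null check and
       "saturates", so incrementing a null pointer yields null again. */
    static addr inc(addr p) {
        return p == 0 ? 0 : p + sizeof(int);
    }

    /* Decrement: the null check is omitted, because the compiler has
       concluded that a null operand here would have required undefined
       behavior earlier. */
    static addr dec(addr q) {
        return q - sizeof(int);  /* wraps below zero: unsigned arithmetic */
    }

    int main(void) {
        addr p = 0;                    /* a null pointer */
        addr r = dec(inc(p));          /* (p + 1) - 1 */
        printf("%#" PRIxPTR "\n", r);  /* nonzero: not a null pointer */
        return 0;
    }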