The treatment of assignments of out-of-range integers to smaller signed types is somewhat strange: the standard explicitly recognizes that some implementations may trap, but - uniquely - it requires that any such trap follow the rules for signals. The decision to impose that requirement here, but not elsewhere, is somewhat curious, because in many cases it will prevent what would otherwise be a simple optimization - replacing a variable whose type is narrower than int, and whose address is never taken, with an int.
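As a hedged sketch of the kind of optimization at issue (the function sum_bytes and its details are hypothetical, not from the original answer):

    /* 'acc' is narrower than int and its address is never taken, so a
       compiler would like to keep it in a full-width register as an int.
       But each '+=' stores an int result back into a short; on an
       implementation that traps on such out-of-range stores, the standard
       requires the trap to follow the rules for signals, so silently
       widening 'acc' to int is not automatically a valid as-if rewrite. */
    int sum_bytes(const unsigned char *p, int len)
    {
        short acc = 0;
        for (int i = 0; i < len; i++)
            acc += p[i];
        return acc;
    }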
However, for some reason, the authors of the standard went out of their way to forbid this optimization. [Note: if I had been in charge of the standard, I would have specified that an explicit cast to a narrower signed integer type yields a value which, when converted to the unsigned type of the same size, gives the same result as converting the original value directly, for every value where such a result exists, but that storing an oversized value to an lvalue without a cast would not be so constrained; I didn't write the standard, though.]
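A minimal sketch of what that proposed cast rule would mean on a two's-complement implementation (the values and function here are illustrative assumptions, not from the standard or the original answer):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t big = 0x12345678;

        /* Under the proposed rule, the explicit cast must produce a value
           congruent to the original modulo 2^16, so converting either one
           to uint16_t yields the same bits: 0x5678. */
        int16_t narrowed = (int16_t)big;
        printf("%#x %#x\n", (unsigned)(uint16_t)narrowed,
                            (unsigned)(uint16_t)big);   /* 0x5678 0x5678 */

        /* A plain store without a cast, e.g. 'int16_t s = big;', would
           deliberately *not* be constrained this way under the proposal. */
        return 0;
    }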
The ironic thing, in fact, is that given:
    uint64_t signed_pow(int32_t n, uint32_t p)
    {
        uint64_t result = 0;
        while (p--) { n *= n; result += n; }
        return result;
    }

    uint64_t unsigned_pow(uint32_t n, uint32_t p)
    {
        uint64_t result = 0;
        while (p--) { n *= n; result += n; }
        return result;
    }
On a platform where int is 32 bits, the latter has defined semantics for all values of n and p while the former does not, but on a platform where int is 64 bits the opposite would be true. A compiler for a typical platform with 64-bit int that did not want to spend code on any other particular behavior would be required by the standard to mask and sign-extend the signed n after each multiplication, but with some unsigned values the compiler could do whatever it liked, including going back in time and pretending that no implementation had ever promised to handle half-int-sized unsigned multiplications in accordance with modular arithmetic.
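The hazard in unsigned_pow comes from integer promotion; a minimal sketch of the failing case, assuming a hypothetical implementation where int is 64 bits:

    #include <stdint.h>

    /* If int is 64 bits, uint32_t promotes to *signed* int, so the
       multiplication below is performed on signed 64-bit values. With
       n = 0xFFFFFFFFu the mathematical product is nearly 2^64, which
       overflows the signed intermediate - undefined behavior - even
       though the store back into uint32_t would itself be defined. */
    uint32_t square_u32(uint32_t n)
    {
        return n * n;
    }

    /* A conventional defense is to force unsigned arithmetic, e.g.
       'return (uint32_t)(1u * n * n);', so the operands convert to
       unsigned int and the multiplication wraps modulo 2^64. */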