Often a programmer can beat the performance of the remainder operator when the programmer knows things about the operands that the compiler doesn't. For example, if the base is likely to be a power of 2, but is not guaranteed to be, one could use something like:
/* Reduce with a bitwise AND when y is a power of two (y & (y-1) == 0); otherwise fall back to %. */
unsigned mod(unsigned int x, unsigned int y) { return y & (y-1) ? x % y : x & (y-1); }
If the compiler expands the function in-line and the base is a constant power of 2, the compiler will replace the remainder operator with a bitwise AND, which can be a major improvement. In cases where the base is not a constant power of two, the generated code will have to do some computation before selecting whether to use the remainder operator, but in cases where the base turns out to be a power of two, the savings from the bitwise AND may exceed the cost of the conditional logic.
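As a hypothetical illustration of the constant-base case (the caller name hashIndex is made up): if the compiler inlines mod() and the base is the constant 1024, the test y & (y-1) folds to 0 at compile time, so the whole call should collapse to a single AND.

/* Hypothetical caller: with y fixed at 1024, a compiler that inlines
   mod() can fold the branch away and emit just h & 1023. */
unsigned hashIndex(unsigned h) { return mod(h, 1024u); }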
Another scenario in which a custom mod function can help is when the base is a fixed constant for which the compiler has not made provisions for computing the remainder. For example, if you want to compute x % 65521 on a platform that can perform fast integer shifts and multiplications, you may observe that computing x -= (x >> 16)*65521; will cause x to become much smaller, but will not affect the value of x % 65521 (it only subtracts a multiple of 65521). Performing the operation a second time will reduce x to the range 0..65745, small enough that a single conditional subtraction will yield the correct remainder.
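Put together, a minimal sketch of that reduction, assuming a 32-bit unsigned int (the name mod65521 is made up):

/* Reduce a 32-bit value modulo 65521 using shifts, multiplies, and one
   conditional subtraction; every subtraction removes a multiple of 65521,
   so x % 65521 is preserved at each step. */
unsigned int mod65521(unsigned int x)
{
    x -= (x >> 16) * 65521u;          /* x is now at most 1048560 */
    x -= (x >> 16) * 65521u;          /* x is now in 0..65745 */
    if (x >= 65521u) x -= 65521u;     /* final conditional subtraction */
    return x;
}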
Some compilers may know how to use such techniques to handle the % operator efficiently with a constant base, but for those that don't, the approach can be a useful optimization, especially when working with numbers larger than a machine word [note that 65521 is 65536-15, so on a 16-bit machine you could evaluate x % 65521 by iterating x = (x & 65535) + 15*(x >> 16). Not as readable as the form that subtracts 65521 * (x >> 16), but it's easy to see how it could be handled efficiently on a 16-bit machine].
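A sketch of that 16-bit-friendly form, assuming a 16-bit int and a 32-bit unsigned long (the name mod65521_16 is made up); on such a machine the mask and the shift just pick out the low and high words, so each pass costs one small multiply and one add:

/* Each pass replaces x with (low word) + 15*(high word), which preserves
   x % 65521 because 65536 is congruent to 15 modulo 65521. */
unsigned int mod65521_16(unsigned long x)
{
    x = (x & 65535ul) + 15ul * (x >> 16);  /* x is now at most 1048560 */
    x = (x & 65535ul) + 15ul * (x >> 16);  /* x is now in 0..65745 */
    if (x >= 65521ul) x -= 65521ul;
    return (unsigned int)x;
}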
supercat May 2 '17 at 23:23