If I understand the comments in "/usr/include/fenv.h" correctly,
    #include <fenv.h>

    fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV);
should do what you want.
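If you only want this behavior around a hot section rather than for the whole program, the standard fegetenv/fesetenv pair lets you save and restore the environment. A minimal sketch, assuming an Intel Mac whose <fenv.h> defines the macro; hot_loop is a hypothetical placeholder for your own computation:

    #include <fenv.h>

    void hot_loop(void);  /* hypothetical: your denormal-heavy computation */

    void run_flushed(void) {
        fenv_t saved;
        fegetenv(&saved);                          /* save the current environment */
        fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV);  /* enable DAZ + FZ */
        hot_loop();                                /* runs with denormals flushed */
        fesetenv(&saved);                          /* restore on the way out */
    }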
FE_DFL_DISABLE_SSE_DENORMS_ENV
A pointer to a fenv_t object with the default floating-point state modified
to set the DAZ and FZ bits in the SSE status / control register. When using
this environment, denormals encountered by SSE based calculation (which
normally should be all single and double precision scalar floating point
calculations, and all SSE / SSE2 / SSE3 computation) will be treated as zero.
Calculation results that are denormals will also be truncated to zero.
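A small sketch that makes the quoted behavior visible (again assuming an Intel Mac's <fenv.h>; 1e-38f is below FLT_MIN, so it is already a denormal):

    #include <fenv.h>
    #include <stdio.h>

    int main(void) {
        volatile float tiny = 1e-38f;               /* below FLT_MIN: a denormal */
        printf("default env:   %g\n", tiny * 0.5f); /* prints roughly 5e-39 */

        fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV);   /* set DAZ + FZ */
        printf("denormals off: %g\n", tiny * 0.5f); /* prints 0: the denormal input
                                                       is treated as zero */
        fesetenv(FE_DFL_ENV);                       /* back to the default environment */
        return 0;
    }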
Setting this environment reduced the runtime of the benchmark in "Why does changing 0.1f to 0 slow down performance by 10x?" (the question linked by @Mysticial in his comment) from 27 seconds to 0.3 seconds (MacBook Pro, 2.5 GHz Intel Core 2 Duo).
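For reference, here is a hedged timing sketch (not the benchmark from the linked question): a loop whose value stays in the denormal range, so on older Intel hardware every multiply takes a microcode assist unless DAZ/FZ are set. Run it with any command-line argument to enable flushing and compare the two times:

    #include <fenv.h>
    #include <stdio.h>
    #include <time.h>

    int main(int argc, char **argv) {
        if (argc > 1)                              /* any argument: flush denormals */
            fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV);

        volatile float y = 1e-38f;                 /* starts, and stays, denormal */
        clock_t t0 = clock();
        for (long i = 0; i < 50000000L; ++i) {
            y *= 0.5f;                             /* each multiply touches a denormal */
            y *= 2.0f;
        }
        printf("y = %g, %.2f s\n", (double)y,
               (double)(clock() - t0) / CLOCKS_PER_SEC);
        return 0;
    }

With DAZ set, y is treated as zero on the first multiply, so the loop degenerates to cheap multiplications by zero and finishes quickly; with the default environment it grinds through denormal arithmetic the whole way.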