I can't believe I can't figure this out, but what can I say, I'm stuck. I'm simply trying to write numbers in standard decimal format (i.e. not in scientific notation).
I've read countless examples that use setprecision(...) and fixed, but the problem is that the precision of the numbers isn't known at compile time, and plugging a conservative estimate into setprecision(...) leaves heaps of superfluous zeros all over the place.
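This is roughly what I've been trying (a minimal sketch; the precision of 10 is just a conservative guess, not something I actually know up front):

```cpp
#include <iomanip>
#include <iostream>

int main() {
    double tau = 6.2831;
    // A conservative estimate like 10 keeps the small values intact...
    std::cout << std::fixed << std::setprecision(10);
    std::cout << tau * 0.000001 << '\n'; // 0.0000062831
    // ...but everything larger gets padded with superfluous zeros.
    std::cout << tau << '\n';            // 6.2831000000
    std::cout << tau * 1000 << '\n';     // 6283.1000000000
}
```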
Here is an example of what I need:
let tau = 6.2831

tau * 0.000001 -> 0.0000062831
tau * 0.001    -> 0.0062831
tau            -> 6.2831
tau * 1000     -> 6283.1
tau * 1000000  -> 6283100
At the moment I get:
tau * 0.000001 -> 6.2831e-006
tau * 0.001    -> 0.0062831
tau            -> 6.2831
tau * 1000     -> 6283.1
tau * 1000000  -> 6.2831e+006
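That output comes from nothing more than plain stream insertion (a minimal sketch of what I'm doing now):

```cpp
#include <iostream>

int main() {
    double tau = 6.2831;
    // Default stream formatting falls back to scientific notation
    // once the value gets large or small enough.
    std::cout << tau * 0.000001 << '\n'; // 6.2831e-06 (e-006 with some compilers)
    std::cout << tau * 1000000 << '\n';  // 6.2831e+06
}
```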
The only thing I can think of is to somehow extract the exponent of the double, then, if the exponent is positive, "fix" the precision at zero, and otherwise set the precision to -1 * exp; but that seems like an awfully convoluted way of just "turning off" scientific notation. Does anyone know a better way?
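For what it's worth, here is a rough sketch of that idea. The name print_plain and the default of 5 significant digits are just made up for illustration, since the real number of significant digits would have to come from somewhere at run time:

```cpp
#include <algorithm>
#include <cmath>
#include <iomanip>
#include <iostream>

// Sketch of the workaround described above: pull the decimal exponent out of
// the value, then derive a 'fixed' precision from it so scientific notation
// never kicks in. sig_digits is a placeholder for however many significant
// digits the value is known to carry (5 matches the tau example).
void print_plain(double value, int sig_digits = 5) {
    if (value == 0.0) {
        std::cout << 0 << '\n';
        return;
    }
    int exp = static_cast<int>(std::floor(std::log10(std::fabs(value))));
    int precision = std::max(sig_digits - 1 - exp, 0);
    std::cout << std::fixed << std::setprecision(precision) << value << '\n';
}

int main() {
    double tau = 6.2831;
    print_plain(tau * 0.000001); // 0.0000062831
    print_plain(tau);            // 6.2831
    print_plain(tau * 1000000);  // 6283100
}
```

It reproduces the desired output for the tau example above, but it still feels like fighting the stream rather than using it.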