Let's say I write a unit test for a function that returns a floating point number. On my machine I can do it with the full precision, as such:
>>> import unittest
>>> def div(x,y): return x/float(y)
...
>>>
>>> class Testdiv(unittest.TestCase):
...     def testdiv(self):
...         assert div(1,9) == 0.1111111111111111
...
>>> unittest.main()
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
Will the full floating-point precision be the same across OSes / distros / machines?
I could round the result and write the unit test as such:
>>> class Testdiv(unittest.TestCase):
...     def testdiv(self):
...         assert round(div(1,9),4) == 0.1111
...
>>>
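The same rounding idea can also be written with unittest's built-in assertAlmostEqual, which rounds the difference of the two values to a number of decimal places (7 by default) and checks that it is zero. A minimal sketch; the expected value and the places argument here are just illustrative:

import unittest

def div(x, y):  # same function as above
    return x / float(y)

class Testdiv(unittest.TestCase):
    def testdiv(self):
        # rounds the difference (not the operands) to `places` decimal
        # places before comparing it to zero
        self.assertAlmostEqual(div(1, 9), 0.1111111, places=7)

if __name__ == '__main__':
    unittest.main()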
I could also compare log(output) instead, but to maintain a fixed decimal precision I would still need to round or truncate.
But in what other way should one Pythonically handle unit testing of floating point output?
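For context, this is the kind of alternative I am wondering about: comparing with a relative tolerance rather than a fixed number of decimal places. A minimal sketch, assuming Python 3.5+ for math.isclose (the tolerance is arbitrary):

import math
import unittest

def div(x, y):
    return x / float(y)

class TestDivIsClose(unittest.TestCase):
    def testdiv(self):
        # relative tolerance instead of a fixed decimal precision;
        # rel_tol=1e-9 is math.isclose's default
        self.assertTrue(math.isclose(div(1, 9), 1.0 / 9.0, rel_tol=1e-9))

if __name__ == '__main__':
    unittest.main()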