Why is complex floating point division weird with NumPy?

Consider this code:

import numpy
numpy.seterr(under='warn')  # turn floating point underflow into a warning
x1 = 1 + 1j / (1 << 533)    # 1 plus a tiny imaginary part, about 3.56e-161
x2 = 1 - 1j / (1 << 533)
y1 = x1 * 1.1
y2 = x2 * 1.1
z1 = x1 / 1.1
z2 = x2 / 1.1
print(numpy.divide(1, x1))  #              1-3.55641399918e-161j  # OK
print(numpy.divide(1, x2))  #              1+3.55641399918e-161j  # OK
print(numpy.divide(1, y1))  # 0.909090909091-3.23310363561e-161j  # underflow
print(numpy.divide(1, y2))  # 0.909090909091+3.23310363561e-161j  # underflow
print(numpy.divide(1, z1))  #            1.1-3.91205539909e-161j  # underflow
print(numpy.divide(1, z2))  #            1.1+3.91205539909e-161j  # underflow

The underflow warnings make no sense to me, no matter how I look at it. As Wikipedia says,

Underflow is a condition in a computer program where the result of a calculation is a number of smaller absolute value than the computer can actually store in memory on its CPU.

But the computer is obviously able to store numbers in the general vicinity of the values in question, so that definition does not match the behavior I am seeing here.
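
Just to check myself, the magnitudes involved are nowhere near the smallest value a double can hold (this is only a sanity check with numpy.finfo, not part of the example above):

import numpy

tiny = 1.0 / (1 << 533)                # the imaginary part used above
print(tiny)                            # ~3.5564e-161
print(numpy.finfo(float).tiny)         # ~2.2251e-308, smallest positive normal double
print(tiny > numpy.finfo(float).tiny)  # True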

Can someone explain exactly why some of these divisions trigger underflow warnings while others do not?
Is this the correct behavior, or is it a bug?
