Yes! It really is more numerically stable.
For the case you're looking at, the numbers are [0.0, 0.1, ..., 0.9]. Note that with round-ties-away-from-zero, only four of these numbers get rounded down (0.1 through 0.4), five get rounded up, and one (0.0) is unchanged by the rounding operation; and of course that pattern repeats for 1.0 through 1.9, 2.0 through 2.9, and so on. So on average, more values are rounded away from zero than towards it. With round-ties-to-even, on the other hand, we get:

- five values in [0.0, 0.9] rounding down and four rounding up
- four values in [1.0, 1.9] rounding down and five rounding up

and so on. On average, the same number of values are rounded away from zero as towards it. More importantly, the expected error introduced by the rounding is (under suitable assumptions on the distribution of the inputs) closer to zero.
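If you want to verify that tally mechanically, here's a quick side check of my own (not part of the original demonstration) using the decimal module, whose ROUND_HALF_EVEN and ROUND_HALF_UP modes implement exactly these two tie-breaking rules for positive values:

    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

    values = [Decimal(n) / 10 for n in range(20)]   # 0.0, 0.1, ..., 1.9
    for mode in (ROUND_HALF_EVEN, ROUND_HALF_UP):
        rounded = [v.quantize(Decimal(1), rounding=mode) for v in values]
        down = sum(r < v for r, v in zip(rounded, values))
        up = sum(r > v for r, v in zip(rounded, values))
        print(mode, down, "down,", up, "up")
    # expected tallies: ROUND_HALF_EVEN 9 down, 9 up; ROUND_HALF_UP 8 down, 10 up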
Here's a quick demonstration using Python. To avoid difficulties due to the differences between Python 2 and Python 3 in the built-in round function, here are two version-agnostic rounding functions to work with:

    def round_ties_to_even(x):
        """
        Round a float x to the nearest integer, rounding ties to even.
        """
        if x < 0:
            return -round_ties_to_even(-x)  # use symmetry
        int_part, frac_part = divmod(x, 1)
        return int(int_part) + (
            frac_part > 0.5
            or (frac_part == 0.5 and int_part % 2.0 == 1.0))

    def round_ties_away_from_zero(x):
        """
        Round a float x to the nearest integer, rounding ties away from zero.
        """
        if x < 0:
            return -round_ties_away_from_zero(-x)  # use symmetry
        int_part, frac_part = divmod(x, 1)
        return int(int_part) + (frac_part >= 0.5)
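A quick spot check of the tie cases with the two functions just defined (my addition, not from the original answer):

    >>> [round_ties_to_even(x) for x in [0.5, 1.5, 2.5, 3.5]]
    [0, 2, 2, 4]
    >>> [round_ties_away_from_zero(x) for x in [0.5, 1.5, 2.5, 3.5]]
    [1, 2, 3, 4]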
Now let's look at the average error introduced by applying these two functions to decimal values with a single digit after the decimal point, in the range [50.0, 100.0]:

    >>> test_values = [n / 10.0 for n in range(500, 1001)]
    >>> errors_even = [round_ties_to_even(value) - value for value in test_values]
    >>> errors_away = [round_ties_away_from_zero(value) - value for value in test_values]
And we can use the recently added statistics module from the standard library to compute the mean and standard deviation of those errors:

    >>> import statistics
    >>> statistics.mean(errors_even), statistics.stdev(errors_even)
    (0.0, 0.2915475947422656)
    >>> statistics.mean(errors_away), statistics.stdev(errors_away)
    (0.0499001996007984, 0.28723681870533313)
The key point here is that errors_even has zero mean: the average error is zero. But errors_away has a positive mean: the average error is biased away from zero. The mean of roughly 0.05 is just what you'd expect: about one value in ten in the sample is a tie, and under ties-away-from-zero each of those ties picks up an error of +0.5, while the errors on the other values cancel out on average.
A more realistic example
Here's a semi-realistic example showing the bias from round-ties-away-from-zero appearing in a numerical algorithm. We're going to compute the sum of a list of floating-point numbers using the pairwise summation algorithm. This algorithm breaks the sum to be computed into two roughly equal parts, recursively sums the two parts, and then adds the results. It is substantially more accurate than a naive sum, though usually not as good as more sophisticated algorithms such as Kahan summation. It's the algorithm used by NumPy's sum function. Here's a simple Python implementation.

    import operator

    def pairwise_sum(xs, i, j, add=operator.add):
        """
        Return the sum of floats xs[i:j] (0 <= i <= j <= len(xs)),
        using pairwise summation.
        """
        count = j - i
        if count >= 2:
            k = (i + j) // 2
            return add(pairwise_sum(xs, i, k, add),
                       pairwise_sum(xs, k, j, add))
        elif count == 1:
            return xs[i]
        else:  # count == 0
            return 0.0
We've equipped the function above with an add parameter, representing the operation to be used for addition. By default, it uses Python's normal addition, which on a typical machine resolves to the standard IEEE 754 addition with the round-ties-to-even rounding mode.
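As a quick sanity check of the accuracy claim above (my own side example, not part of the original answer), we can sum a million copies of 0.1 and compare both a naive left-to-right sum and the pairwise sum against math.fsum, which returns the correctly rounded sum of the whole list. On a typical IEEE 754 machine, the pairwise error typically comes out several orders of magnitude smaller:

    import math

    xs = [0.1] * 10**6
    naive = sum(xs)                          # plain left-to-right summation
    pairwise = pairwise_sum(xs, 0, len(xs))  # pairwise summation, ties to even
    exact = math.fsum(xs)                    # correctly rounded sum of the list
    print(abs(naive - exact), abs(pairwise - exact))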
We want to look at the expected error from the pairwise_sum function, using both standard addition and a version of addition that rounds ties away from zero. Our first problem is that we don't have an easy, portable way to change the hardware's rounding mode from within Python, and a software implementation of binary floating-point would be large and slow. Fortunately, there's a trick that lets us get round-ties-away-from-zero while still using the hardware floating-point. For the first part of that trick, we can use Knuth's "2Sum" algorithm to add two floats and obtain the correctly rounded sum together with the exact error in that sum:

    def exact_add(a, b):
        """
        Add floats a and b, giving a correctly rounded sum and exact error.

        Mathematically, a + b is exactly equal to sum + error.
        """
        sum = a + b
        bb = sum - a
        error = (a - (sum - bb)) + (b - bb)
        return sum, error
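To get a feel for what exact_add reports, here's a tiny check of my own (not from the original answer) using the familiar 0.1 + 0.2 case, where the hardware sum overshoots the exact sum by 2**-55:

    >>> s, e = exact_add(0.1, 0.2)
    >>> s
    0.30000000000000004
    >>> e == -2.0**-55
    True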
With this in hand, we can easily use the error term to determine whether the exact sum was a tie. We have a tie if and only if error is nonzero and sum + 2*error is exactly representable, and in that case sum and sum + 2*error are the two floats nearest that tie. Using this idea, here's a function that adds two numbers and gives a correctly rounded result, but rounds ties away from zero.

    def add_ties_away(a, b):
        """
        Return the sum of a and b. Ties are rounded away from zero.
        """
        sum, error = exact_add(a, b)
        sum2, error2 = exact_add(sum, 2.0 * error)
        if error2 or not error:
            # Not a tie: the correctly rounded sum is already the answer.
            return sum
        else:
            # Tie: choose whichever of sum and sum2 is larger in magnitude.
            return max([sum, sum2], key=abs)
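Here's a small check of my own (not part of the original answer) on an input pair whose exact sum is a halfway case, so the two tie-breaking rules give different answers:

    >>> a, b = 1.0, 2.0**-53           # exact sum lies halfway between two floats
    >>> a + b                          # hardware addition: ties to even
    1.0
    >>> add_ties_away(a, b)            # ties away from zero
    1.0000000000000002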
Now we can compare the results. sample_sum_errors is a function that generates a list of floats in the range [1, 2], adds them using both normal round-ties-to-even addition and our custom round-ties-away-from-zero version, compares both against the exact sum, and returns the errors for the two versions, measured in units in the last place (ulps).

    import fractions
    import random

    def sample_sum_errors(sample_size=1024):
        """
        Generate `sample_size` floats in the range [1.0, 2.0], sum
        using both addition methods, and return the two errors in ulps.
        """
        xs = [random.uniform(1.0, 2.0) for _ in range(sample_size)]
        to_even_sum = pairwise_sum(xs, 0, len(xs))
        to_away_sum = pairwise_sum(xs, 0, len(xs), add=add_ties_away)

        # Compute the exact sum as a Fraction, and the errors in ulps.
        # The true sum lies in [1024.0, 2048.0], where 1 ulp is 2**-42.
        exact_sum = sum(map(fractions.Fraction, xs))
        ulp = 2.0 ** -42
        to_even_error = float(fractions.Fraction(to_even_sum) - exact_sum) / ulp
        to_away_error = float(fractions.Fraction(to_away_sum) - exact_sum) / ulp
        return to_even_error, to_away_error
Here is an example:

    >>> sample_sum_errors()
    (1.6015625, 9.6015625)
So we have an error of 1.6 ulps using standard addition, and an error of 9.6 ulps when rounding ties away from zero. It certainly looks as though the ties-away-from-zero method is worse, but a single run isn't particularly convincing. Let's do this 10,000 times, with a different random sample each time, and plot the errors we get. Here's the code:

    import statistics
    import numpy as np
    import matplotlib.pyplot as plt

    def show_error_distributions():
        errors = [sample_sum_errors() for _ in range(10000)]
        to_even_errors, to_away_errors = zip(*errors)
        print("Errors from ties-to-even: "
              "mean {:.2f} ulps, stdev {:.2f} ulps".format(
                  statistics.mean(to_even_errors),
                  statistics.stdev(to_even_errors)))
        print("Errors from ties-away-from-zero: "
              "mean {:.2f} ulps, stdev {:.2f} ulps".format(
                  statistics.mean(to_away_errors),
                  statistics.stdev(to_away_errors)))

        ax1 = plt.subplot(2, 1, 1)
        plt.hist(to_even_errors, bins=np.arange(-7, 17, 0.5))
        ax2 = plt.subplot(2, 1, 2)
        plt.hist(to_away_errors, bins=np.arange(-7, 17, 0.5))
        ax1.set_title("Errors from ties-to-even (ulps)")
        ax2.set_title("Errors from ties-away-from-zero (ulps)")
        ax1.xaxis.set_visible(False)
        plt.show()
When I run the above function on my machine, I see:

    Errors from ties-to-even: mean 0.00 ulps, stdev 1.81 ulps
    Errors from ties-away-from-zero: mean 9.76 ulps, stdev 1.40 ulps
and I get the following graph:

I was planning to go one step further and run a statistical test for bias on the two samples, but the bias from the ties-away-from-zero method is so pronounced that that looks unnecessary. Interestingly, while the ties-away-from-zero method gives poorer results, it does give a smaller spread of errors.