It's not a problem.
First, note that 0 ≤ a < 1, so errors tend to die out on average rather than accumulate: each update multiplies the stored average, and therefore any error already in it, by a, so an error introduced n steps ago has shrunk by a factor of aⁿ. Incoming new data crowds out the old errors.
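As an illustration, here is a minimal sketch, assuming the update in question is something like average = a*average + (1-a)*x (an exponential moving average); the smoothing factor and values are made up. It injects an error into one copy of the average and shows it shrinking, because every step multiplies the old average, and hence the old error, by a < 1:

```c
#include <stdio.h>

int main(void)
{
    double a = 0.9;               /* assumed smoothing factor, 0 <= a < 1 */
    double avg = 1.0;             /* running average */
    double avg_bad = avg + 1e-6;  /* same average with an injected error */

    for (int i = 0; i < 50; i++) {
        double x = 1.0;           /* constant input, to isolate the effect */
        avg     = a * avg     + (1 - a) * x;
        avg_bad = a * avg_bad + (1 - a) * x;
    }

    /* The injected error has decayed by roughly a^50 (about 0.005). */
    printf("remaining error: %g\n", avg_bad - avg);
    return 0;
}
```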
Subtracting floating-point numbers of similar magnitude (and the same sign) does not lose absolute accuracy. (You wrote “precision”, but precision is the fineness with which values are represented, for example the width of the double type, and that does not change with subtraction.) Subtracting numbers of similar magnitude can increase the relative error: since the result is smaller, the error is larger relative to it. However, the relative error of an intermediate value is not a concern here.
In fact, the subtraction of two numbers, each of which is at least half the other, has no error at all: the exact mathematical result is exactly representable (Sterbenz's lemma).
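A small sketch of Sterbenz's lemma in action, with made-up values; any two doubles within a factor of two of each other would do:

```c
#include <stdio.h>

int main(void)
{
    /* x and y are within a factor of two of each other, so by
       Sterbenz's lemma the subtraction y - x is exact.  Adding x
       back therefore recovers y with no rounding error at all. */
    double x = 0.7;      /* illustrative values */
    double y = 1.3;      /* y/2 <= x and y <= 2x, so the lemma applies */
    double d = y - x;    /* exact */
    printf("round trip exact? %d\n", d + x == y);   /* prints 1 */
    return 0;
}
```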
Thus, the subtraction in the latter sequence of operations is likely to be exact or to have low error, depending on how much the values fluctuate. The multiplication and addition then incur the usual rounding errors, and they are not particularly worrisome unless the values mix positive and negative signs, which can lead to large relative errors when the average approaches zero. If a fused multiply-add is available (see fma in <tgmath.h>), you can eliminate the error from the multiplication.
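A hedged sketch of one step of the latter sequence written with fma; the function name and sample data are made up. fma computes (1-a)*(x - average) + average with a single rounding, so only the subtraction and that one rounding remain:

```c
#include <stdio.h>
#include <tgmath.h>

/* One step of the latter sequence, average += (1-a)*(x - average),
   with the multiply and the add fused into a single rounding via fma. */
static double ema_step(double average, double x, double a)
{
    return fma(1 - a, x - average, average);
}

int main(void)
{
    double avg = 0.0;
    const double samples[] = { 1.0, 1.5, 0.9, 1.2 };  /* made-up data */
    for (int i = 0; i < 4; i++)
        avg = ema_step(avg, samples[i], 0.9);
    printf("%.17g\n", avg);
    return 0;
}
```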
In the former sequence of operations, the evaluation of 1 − a is exact if a is at least ½. That leaves two multiplications and one addition, which will tend to have slightly more error than the latter sequence, but probably not enough to notice. As before, old errors fade away.
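And a corresponding sketch of the former sequence, again with made-up values, showing that for a ≥ ½ the computation of 1 − a introduces no error (Sterbenz's lemma again, applied to 1 and a):

```c
#include <stdio.h>

int main(void)
{
    double a = 0.9;        /* assumed smoothing factor, at least 1/2 */
    double b = 1 - a;      /* exact: 1/2 <= a <= 1, so Sterbenz's lemma applies */
    double average = 2.0;  /* illustrative values */
    double x = 3.5;

    /* Former sequence: only the two multiplications and the addition
       are left to contribute rounding error. */
    average = a * average + b * x;

    printf("1 - a exact? %d  average = %.17g\n", a + b == 1.0, average);
    return 0;
}
```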