Note: the best answer so far is simply to use numpy.logaddexp(logA, logB).
Why exactly do you compare with log(0)? That is equal to -numpy.inf, in which case the formula becomes log(1 + math.exp(-inf - logB)) + logB, which simplifies to logB anyway. Evaluating log(0) also emits a warning every time, which is very slow.
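For illustration, a minimal sketch (with my own example values, not from your code) showing that the -inf case already falls out of the general formula, so no special case is needed:

    import numpy

    logA = -numpy.inf           # i.e. A == 0
    logB = numpy.log(3.0)

    # exp(-inf - logB) is 0, so the general formula already reduces to logB:
    print(numpy.log(1 + numpy.exp(logA - logB)) + logB)   # == logB
    # logaddexp handles the same case directly, again without a special branch:
    print(numpy.logaddexp(logA, logB))                     # == logB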
This one-liner is what I could come up with; you would really have to measure whether it is actually faster, though. It uses only one "complex" function call instead of the two you are using, and the recursion is gone, or rather it is still there but hidden (and probably optimized) in fabs / maximum.
    def log_add(logA, logB):
        return numpy.logaddexp(0, -numpy.fabs(logB - logA)) + numpy.maximum(logA, logB)
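A quick sanity check (with my own test values) that this matches the direct computation:

    import numpy

    A, B = 0.3, 0.7
    print(log_add(numpy.log(A), numpy.log(B)))   # ~ 0.0 == log(1.0)
    print(numpy.log(A + B))                      # same value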
Edit:
I did a quick timeit() with the following results (a sketch of such a harness is below the list):
- Your original version took about 120 s.
- My version took about 30 s.
- Removing the comparison with log(0) from your version brought it down to about 20 s.
- Keeping logaddexp in my code but using your recursive if brought it down to about 18 s.
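For reference, this is roughly the kind of timeit harness I mean; the test values and repeat count here are my own choice for illustration, not the exact benchmark I ran:

    import timeit
    import numpy

    logA, logB = numpy.log(0.3), numpy.log(0.7)
    # Assumes log_add from above is defined in the same script:
    print(timeit.timeit("log_add(logA, logB)", globals=globals(), number=1_000_000))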
Updated code; you could also replace the recursive call with the modified formula inlined, but that made my timings slightly worse:
    def log_add2(logA, logB):
        if logA < logB:
            return log_add2(logB, logA)
        return numpy.logaddexp(0, logB - logA) + logA
Edit 2:
As noted in the comments, you can simply use numpy.logaddexp(logA, logB), which computes log(exp(logA) + exp(logB)), which is of course log(A + B). I timed it (on the same machine as above) and it came in at about 10 s. So we are down to roughly 1/12 of the original time, not bad ;).
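As a quick check of that identity (example values are mine), and a reminder that it also works elementwise on arrays:

    import numpy

    A, B = 2.0, 5.0
    print(numpy.logaddexp(numpy.log(A), numpy.log(B)))  # == numpy.log(7.0)
    print(numpy.log(A + B))                             # same value

    # It also works elementwise, e.g. on vectors of log-probabilities:
    logp = numpy.log(numpy.array([0.1, 0.2, 0.3]))
    logq = numpy.log(numpy.array([0.4, 0.2, 0.1]))
    print(numpy.logaddexp(logp, logq))                  # log of [0.5, 0.4, 0.4]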