Does atan() provide any computational advantage over pnorm() in R?

This article describes an analytical approximation of the normal CDF:

Φ(x) ≈ 1 / (1 + exp(−358x/23 + 111·arctan(37x/294)))

The approximation uses an arctangent function, which is itself evaluated by a numerical approximation. I found some discussion of how the arctangent is typically computed and it seems rather involved. By comparison, the source code of pnorm() in R looks fairly straightforward, although it may not be as efficient.

Is there any computational advantage to using atan() instead of pnorm() in R, especially with big data and a large parameter space, when a number of other numerical calculations already rely on the normal PDF?

Thanks!

1 answer

I tried this out of curiosity.

Define a function first

# Arctangent-based approximation of the standard normal CDF
PNORM <- function(x) { 1 / (exp(-358/23 * x + 111 * atan(37 * x / 294)) + 1) }
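As a quick sanity check (not part of the original answer), one can compare the approximation with pnorm() at a few points; both give exactly 0.5 at x = 0:

# Compare the approximation with pnorm() at a few reference points
check <- c(-2, -1, 0, 1, 2)
cbind(x = check, pnorm = pnorm(check), PNORM = PNORM(check))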

Then consider the differences in the range [-4, 4]

x <- seq(-4, 4, .01)
plot(x, pnorm(x)-PNORM(x), type="l", lwd=3, ylab="Difference")

leading to this graph

[plot of pnorm(x) − PNORM(x) over [−4, 4]]
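To put a number on the gap shown in the plot, one could also compute the maximum absolute difference over the same grid (a small addition, not in the original answer):

# Largest absolute difference between pnorm() and the approximation on this grid
max(abs(pnorm(x) - PNORM(x)))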

So the difference is small, but perhaps not small enough to ignore in some applications. YMMV. Looking at the computation time, the two are roughly comparable, with the approximation being slightly faster:

> microbenchmark::microbenchmark(pnorm(x), PNORM(x))
Unit: microseconds
     expr    min      lq     mean  median      uq    max neval cld
 pnorm(x) 34.703 34.8785 36.54254 35.1820 38.3150 47.786   100   b
 PNORM(x) 24.293 24.4625 27.07660 24.8875 28.9035 59.216   100  a 
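Since the question mentions big data, it may also be worth repeating the comparison on a much longer vector. This is only a sketch; the vector length is illustrative and the timings will vary by machine:

# Repeat the timing comparison on a larger input vector
xx <- rnorm(1e6)
microbenchmark::microbenchmark(pnorm(xx), PNORM(xx), times = 20)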

Source: https://habr.com/ru/post/1683488/

