Is a call to x in range( a ) slow? ( Beware, there is a RISK hidden in py2 when using range() ... )
     23 [us] spent in [py2] to process ( x in range( 10E+0000 ) )
      4 [us] spent in [py2] to process ( x in range( 10E+0001 ) )
      3 [us] spent in [py2] to process ( x in range( 10E+0002 ) )
     37 [us] spent in [py2] to process ( x in range( 10E+0003 ) )
    404 [us] spent in [py2] to process ( x in range( 10E+0004 ) )
   4433 [us] spent in [py2] to process ( x in range( 10E+0005 ) )
  45972 [us] spent in [py2] to process ( x in range( 10E+0006 ) )
 490026 [us] spent in [py2] to process ( x in range( 10E+0007 ) )
2735056 [us] spent in [py2] to process ( x in range( 10E+0008 ) )
MemoryError
The x in range( a ) formulation is not only slow in the [TIME]-domain - having, at best, O( log N ) if it were made any smarter than a plain sequential search over the enumerated domain of values - but in py2 the native range() always adds composite O( N ) costs on top of that, in both the [TIME]-domain ( the list-assembly time ) and the [SPACE]-domain ( allocating the storage + spending yet more time on moving all that data through the memory hierarchy ) of such an in-memory range-representation.
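As a minimal sketch of the [SPACE]-domain part of that cost ( my own illustration, assuming a 64-bit CPython 2; numbers will vary and the int objects themselves are not even counted ), compare what the materialised range()-list occupies against the plain comparison, which allocates nothing:

# py2-only sketch: range( a ) materialises a full list in RAM,
# whereas the condition 0 <= x < a allocates nothing extra at all
import sys

a = 10**7
aListRepresentation = range( a )               # py2: builds a 10-million-slot list

print( sys.getsizeof( aListRepresentation ) )  # ~ 8 [B] per slot -> tens of [MB], list container only
print( sys.getsizeof( a ) )                    # one int object, a few tens of [B]
print( 0 <= ( a - 2 ) < a )                    # True, with zero additional allocation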
Let me test the safe, O( 1 )-scaling approach ( + always do the test, never guess ):
>>> from zmq import Stopwatch
>>> aClk = Stopwatch()
>>> a = 123456789; x = 123456; aClk.start(); _ = ( 0 <= x < a ); aClk.stop()
4L
>>> a = 123456789; x = 123456; aClk.start(); _ = ( 0 <= x < a ); aClk.stop()
3L
It takes about 3 ~ 4 [us] to evaluate the condition-based formulation, which has an O( 1 ) scaling, invariant to x.
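If the intent ought to stay readable at many call-sites, the comparison can be wrapped ( the helper name below is hypothetical, mine, not from the original post ), at the price of one extra function call:

def isInBounds( x, a, lo = 0 ):
    # O( 1 ) bounds-test: no range() object gets materialised or iterated
    return lo <= x < a

print( isInBounds( 123456, 123456789 ) )    # True
print( isInBounds(     -1, 123456789 ) )    # False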
Then do the same test using the x in range( a ) formulation:
>>> a = 123456789; x = 123456; aClk.start(); _ = ( x in range( a ) );aClk.stop()
and your machine will all but grind to a halt in memory-bandwidth-bound CPU-starvation ( not to mention the unpleasant onset of swapping, which shifts the costs from a few ~ 100 [ns] of RAM access several orders of magnitude higher, into the realm of disk-I/O data-flows ).
No, no, no. Never test x against a bounded interval via x in range( a ) in py2.
Ideas about building some other, class-based evaluator that still approaches the problem via an enumeration ( a set ) can never beat the 3 ~ 4 [us] benchmark above ( unless some extraterrestrial magic gets used that is beyond my understanding of the causal laws of classical and quantum physics ).
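A quick sketch of why ( my own illustration, again using zmq.Stopwatch; absolute values will differ per machine ): even if the set-lookup itself were fast, the set has to be built first, and that one-off build step is already O( N ) in both [TIME] and [SPACE]:

from zmq import Stopwatch

aClk = Stopwatch()
a    = 10**7
x    = a - 2

aClk.start(); aSet = set( range( a ) ); print( aClk.stop() )  # O( N ) one-off build cost [us]
aClk.start(); _    = ( x in aSet );     print( aClk.stop() )  # fast lookup, but the build was already paid
aClk.start(); _    = ( 0 <= x < a );    print( aClk.stop() )  # a few [us], with nothing pre-built at all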
Python 3 changed the way the range()-constructor works, yet that was not the main merit of the original post:
 3 [us] spent in [py3] to process ( x in range( 10E+0000 ) )
 2 [us] spent in [py3] to process ( x in range( 10E+0001 ) )
 1 [us] spent in [py3] to process ( x in range( 10E+0002 ) )
 2 [us] spent in [py3] to process ( x in range( 10E+0003 ) )
 1 [us] spent in [py3] to process ( x in range( 10E+0004 ) )
 1 [us] spent in [py3] to process ( x in range( 10E+0005 ) )
 1 [us] spent in [py3] to process ( x in range( 10E+0006 ) )
 1 [us] spent in [py3] to process ( x in range( 10E+0007 ) )
 1 [us] spent in [py3] to process ( x in range( 10E+0008 ) )
 1 [us] spent in [py3] to process ( x in range( 10E+0009 ) )
 2 [us] spent in [py3] to process ( x in range( 10E+0010 ) )
 1 [us] spent in [py3] to process ( x in range( 10E+0011 ) )
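The reason for the flat ~ 1 [us] profile above is that the CPython 3 range-object answers an in-test arithmetically for exact int operands, so no sequence ever gets walked ( a sketch of mine, not part of the original measurements ):

# py3-only sketch: even an astronomically wide range answers immediately for an int x
a = 10**18

print( ( a - 1 ) in range( a ) )   # True, decided arithmetically, ~ O( 1 )
print( 0 <= ( a - 1 ) < a )        # the very same decision, also ~ O( 1 )
# beware: a non-int x, e.g. ( a - 1.0 ), falls back to a sequential scan - avoid it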
In Python 2, neither range() nor xrange() escapes the O( N ) scaling trap; the xrange()-generator merely appears to be about 2x less slow:
>>> from zmq import Stopwatch
>>> aClk = Stopwatch()
>>> for expo in xrange( 8 ):
...     a = int( 10**expo ); x = a - 2; aClk.start(); _ = ( x in range( a ) ); aClk.stop()
...
3L
8L
5L
40L
337L
3787L
40466L
401572L
>>> for expo in xrange( 8 ):
...     a = int( 10**expo ); x = a - 2; aClk.start(); _ = ( x in xrange( a ) ); aClk.stop()
...
3L
10L
7L
77L
271L
2772L
28338L
280464L
The boundary-comparison syntax keeps an O( 1 ) constant time of ~ < 1 [us], as shown above, so the same loop was re-run with it to set the baseline for cross-comparison:
>>> for expo in xrange( 8 ):
...     a = int( 10**expo ); x = a - 2; aClk.start(); _ = ( 0 <= x < a ); aClk.stop()
...
2L
0L
1L
0L
0L
1L
0L
1L
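For readers without pyzmq installed, a cross-check sketch of mine using only the standard library's timeit ( py3-only because of the globals= parameter; not the tool used for the measurements above ):

from timeit import timeit

for expo in range( 8 ):
    a = 10**expo
    x = a - 2
    t_cmp = timeit( "0 <= x < a",      globals = { "x": x, "a": a }, number = 1000 )
    t_rng = timeit( "x in range( a )", globals = { "x": x, "a": a }, number = 1000 )
    print( "a = 10**%d : ( 0 <= x < a ) ~ %6.2f [us/call] | ( x in range( a ) ) ~ %6.2f [us/call]"
           % ( expo, t_cmp / 1000 * 1E6, t_rng / 1000 * 1E6 ) )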