Similar to `std::vector`, the CPython list already over-allocates: it reserves more slots than it currently needs and grows the allocation geometrically, which makes `append` amortized O(1). Therefore, I would leave it as is until profiling proves that the reallocations are indeed a bottleneck.
edit: You mentioned in the comments that you have already done profiling. In that case, pre-allocating with `[None]*n` is a reasonable experiment to find out whether the reallocations are the bottleneck.
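A minimal sketch of that experiment, comparing `append`-based growth against filling a pre-allocated `[None]*n` list (the function names and `n` are illustrative, not from the original question):

```python
import timeit

def build_append(n):
    # Grow the list with append; CPython over-allocates,
    # so this is amortized O(1) per element
    out = []
    for i in range(n):
        out.append(i)
    return out

def build_prealloc(n):
    # Pre-allocate all slots up front, then assign by index,
    # so no reallocation happens during the loop
    out = [None] * n
    for i in range(n):
        out[i] = i
    return out

n = 100_000
# Both strategies produce the same list
assert build_append(n) == build_prealloc(n)

# Time both to see whether reallocations matter in your workload
print("append:  ", timeit.timeit(lambda: build_append(n), number=10))
print("prealloc:", timeit.timeit(lambda: build_prealloc(n), number=10))
```

On most CPython builds the difference is small, which is exactly why profiling first is worthwhile.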
If your array is numeric, I would recommend taking a look at NumPy.
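A minimal illustration of the NumPy route (assuming `numpy` is installed): a NumPy array is one contiguous buffer of machine-level numbers allocated up front, so there is no Python-level list resizing at all.

```python
import numpy as np

n = 1_000_000
# np.zeros allocates the whole buffer of C doubles in one step
a = np.zeros(n)
# Vectorized fill: no per-element Python loop
a[:] = np.arange(n)
# Reductions such as sum() also run in C
print(a.sum())
```

Beyond avoiding reallocations, the vectorized operations are usually a far bigger win than any list pre-allocation trick.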