Stack versus dynamic allocation

For a multidimensional array where the second dimension has known sizes (albeit different for each first-dimension index), would it be faster to hard-code these arrays:

 int* number[1000];
 
 int secondDim[1];
 number[0] = secondDim;
 
 int secondDimTwo[2];
 number[1] = secondDimTwo;

and so on, 1000 times (I know, I know)

or to dynamically allocate every second dimension:

 for (int i = 0; i < 1000; i++)
     number[i] = new int[i + 1];

Just trying to get my head around the concept.

+4
4 answers

Generally, you can assume that stack allocation will be faster. Just remember that the stack has limited capacity, and overusing it with large arrays can lead to... a stack overflow!
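To give a sense of scale (a sketch; the stack-limit figures are rough platform defaults, not guarantees): the triangular layout from the question needs sum(1..1000) = 500500 ints, which is roughly 2 MB with 4-byte ints, comparable to the default stack limit on some platforms (commonly 1-8 MB), so a single local array of that size is risky:

```cpp
#include <cstddef>
#include <vector>

// Total element count of a triangular array with 1000 rows,
// where row i holds i+1 ints: sum(1..1000) = 500500.
constexpr std::size_t kRows = 1000;
constexpr std::size_t kTotalInts = kRows * (kRows + 1) / 2;

// Heap-backed storage sidesteps the stack-size limit entirely.
std::vector<int> makeTriangularBuffer() {
    return std::vector<int>(kTotalInts);
}
```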

In any case, your question is valid, but the solutions I have seen so far are quite limited. The point is that you can easily create a utility type that acts as a triangular matrix, and then leave it up to the specific use case whether to keep it on the stack or on the heap. Consider:

 #include <cstddef> // for size_t
 
 namespace meta {
     template <size_t N> struct sum {
         static const int value = (N + 1) * N / 2;
     };
 }
 
 template <size_t Size>
 struct MultiArray {
     // actual buffer
     int numbers[ meta::sum<Size>::value ];
 
     // run-time indexing
     int* getArray(size_t dimensions) {
         // get sum of (dimensions-1)
         size_t index = (dimensions * (dimensions - 1)) >> 1;
         return &numbers[index];
     }
 
     // compile-time indexing
     template <size_t dimensions>
     int* getArray() {
         size_t index = meta::sum<dimensions - 1>::value;
         return &numbers[index];
     }
 
     int* operator[](size_t index) {
         return getArray(index);
     }
 };

Now you decide where to store it.

 MultiArray<1000> storedOnStack;
 MultiArray<1000>* storedOnHeap = new MultiArray<1000>();

You have accessors to get to the internal arrays:

 int* runTimeResolvedArray = storedOnStack.getArray(10);
 int* compileTimeResolvedArray = storedOnStack.getArray<10>();
 int* runTimeResolvedArray2 = storedOnStack[10];
 storedOnStack[10][0] = 666;

Hope this helps!

EDIT: I should also say that I don't like the term "stack allocation". It is misleading. A stack allocation is simply an adjustment of the stack pointer register. So if you "allocate" 100 bytes on the stack, the pointer is just moved by 100 bytes. But if you allocate 100 bytes on the heap, it gets complicated: the allocator has to find a suitable free block of memory, update its allocation map, and so on.

If it is a one-time allocation, go ahead and do it on the heap; the overhead of the dynamic allocation will not be noticeable. But if you do it many times per second, prefer stack allocation. In addition, stack arrays can potentially be faster to access, since the contents of the stack are likely to be in the cache. But obviously, really huge arrays will not fit in the cache. So the answer is: profile.
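One way to start profiling this is a micro-timing sketch like the one below (illustrative only; the function name is made up, real numbers depend on the allocator, compiler flags, and cache, and the stack loop may be partially optimized away):

```cpp
#include <chrono>
#include <utility>

// Returns (heapNs, stackNs): nanoseconds spent doing 'iters' heap
// allocations versus 'iters' uses of a local stack buffer.
std::pair<long long, long long> timeHeapVsStack(int iters) {
    using clock = std::chrono::steady_clock;
    volatile int sink = 0;                 // keep the work observable

    auto t0 = clock::now();
    for (int i = 0; i < iters; ++i) {
        int* p = new int[100];             // allocator bookkeeping each time
        p[0] = i;
        sink += p[0];
        delete[] p;
    }
    auto heapNs = std::chrono::duration_cast<std::chrono::nanoseconds>(
        clock::now() - t0).count();

    auto t1 = clock::now();
    for (int i = 0; i < iters; ++i) {
        int buf[100];                      // just a stack-pointer bump
        buf[0] = i;
        sink += buf[0];
    }
    auto stackNs = std::chrono::duration_cast<std::chrono::nanoseconds>(
        clock::now() - t1).count();

    return { heapNs, stackNs };
}
```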

+4
source

If you already know the size of the array that you need, you should use an array on the stack.
Note: dynamic allocations are usually more expensive than stack allocations.
Whether the performance difference is significant in your case can only be determined by profiling.

Also, avoid dynamic allocations, as they are more error-prone if you are not using some kind of RAII-based resource management.
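The RAII point can be sketched with standard library types (the function name here is illustrative):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Both objects free their memory automatically when the function
// returns, even if an exception is thrown mid-way.
std::size_t raiiSketch() {
    std::vector<int> row(100, 1);                  // owns its heap buffer
    std::unique_ptr<int[]> raw(new int[100]());    // owns a raw new[] block
    raw[0] = row[0];
    return row.size();   // no delete/delete[] needed anywhere
}
```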

+1

It is best to allocate the entire array at once so that all the data is contiguous (better caching).

 int* arr;
 arr = malloc(sizeof(int) * sum(1, n));

(you must define the sum function yourself)

Then just allocate the 2D array (the first dimension) and set its pointers to the correct locations in arr.
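A minimal sketch of that idea (the sum helper is spelled out here, and the struct name is illustrative; std::vector stands in for the malloc call so cleanup is automatic):

```cpp
#include <cstddef>
#include <vector>

// sum(1..n) = n*(n+1)/2 -- total element count of the triangular array
std::size_t sum1toN(std::size_t n) { return n * (n + 1) / 2; }

// One contiguous block for all rows; 'number' holds a pointer to the
// start of each row (row i has i+1 ints), so number[i][j] works as before.
struct Triangular {
    std::vector<int> arr;
    std::vector<int*> number;
    explicit Triangular(std::size_t n) : arr(sum1toN(n)), number(n) {
        std::size_t offset = 0;
        for (std::size_t i = 0; i < n; ++i) {
            number[i] = &arr[offset];   // row i starts here
            offset += i + 1;            // row i holds i+1 ints
        }
    }
};
```

Usage: `Triangular t(1000); t.number[999][999] = 42;` writes the last element of the last row, which lands at the very end of the contiguous block.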

+1

Until you have actually profiled and know that dynamic allocation is the problem:

 std::vector<std::vector<int> > number;
 for ( size_t i = 0; i < 1000; ++ i ) {
     number.push_back( std::vector<int>( i + 1 ) );
 }

or (perhaps a little faster, especially if your compiler doesn't have move semantics yet):

 std::vector<std::vector<int> > number( 1000 );
 for ( size_t i = 0; i != number.size(); ++ i ) {
     number[i].resize( i + 1 );
 }

If this causes performance problems, or maybe just for better encapsulation, you can write a simple class that represents the data structure:

 class TriangularMatrix
 {
     std::vector<int> myData;
     size_t mySize;
 
     size_t rowIndex( size_t i ) const
     {
         return (i * (i + 1)) / 2;
     }
 
 public:
     TriangularMatrix( size_t size )
         : myData( rowIndex( size ) )
         , mySize( size )
     {
     }
 
     int* operator[]( size_t i )              // Returns row
     {
         assert( i < mySize );
         return &myData[rowIndex( i )];
     }
 
     int const* operator[]( size_t i ) const  // Returns row
     {
         assert( i < mySize );
         return &myData[rowIndex( i )];
     }
 
     int& operator()( size_t i, size_t j )    // indexing
     {
         assert( j <= i );
         return operator[]( i )[ j ];
     }
 
     int const& operator()( size_t i, size_t j ) const  // indexing
     {
         assert( j <= i );
         return operator[]( i )[ j ];
     }
 };

In fact, I think something like this will be much cleaner and more readable.

+1

Source: https://habr.com/ru/post/1395733/
