Typically, you can assume that stack allocation will be faster. Just remember that the stack has limited capacity, and overusing it for large arrays can lead to the aptly named ... stack overflow!
In any case, your question is valid, but the solutions I have seen so far are quite limited. The fact is that you can easily create a utility type that acts as a triangular matrix, and then it is up to the specific use case whether you store it on the stack or on the heap. Note:
    #include <cstddef>

    namespace meta {
        // compile-time sum 1 + 2 + ... + N: the number of elements in a
        // triangular matrix with N rows
        template <size_t N>
        struct sum {
            static const int value = (N + 1) * N / 2;
        };
    }

    template <size_t Size>
    struct MultiArray {
        // actual buffer
        int numbers[ meta::sum<Size>::value ];

        // run-time indexing
        int* getArray(size_t dimensions) {
            // sum of 1..(dimensions-1) = offset of row 'dimensions'
            size_t index = (dimensions * (dimensions - 1)) >> 1;
            return &numbers[index];
        }

        // compile-time indexing
        template <size_t dimensions>
        int* getArray() {
            size_t index = meta::sum<dimensions - 1>::value;
            return &numbers[index];
        }

        int* operator[](size_t index) {
            return getArray(index);
        }
    };
Now you decide where to store it.
    MultiArray<1000> storedOnStack;
    MultiArray<1000>* storedOnHeap = new MultiArray<1000>();
You have accessors to get to the internal arrays:
    int* runTimeResolvedArray = storedOnStack.getArray(10);
    int* compileTimeResolvedArray = storedOnStack.getArray<10>();
    int* runTimeResolvedArray2 = storedOnStack[10];
    storedOnStack[10][0] = 666;
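To make the indexing concrete (this small example is my own sketch, not part of the original code, and the asserts are only there to illustrate the layout): with Size = 4 the flat buffer holds meta::sum<4>::value = 10 ints, rows are 1-based, and row d (which has d elements) starts at offset d*(d-1)/2:

    #include <cassert>

    int main() {
        MultiArray<4> m; // 4 * (4 + 1) / 2 = 10 ints in one flat buffer

        // row d starts at offset d * (d - 1) / 2 and holds d elements
        assert(m.getArray(1) == &m.numbers[0]); // row 1: offset 0
        assert(m.getArray(2) == &m.numbers[1]); // row 2: offset 1
        assert(m.getArray(3) == &m.numbers[3]); // row 3: offset 3
        assert(m.getArray(4) == &m.numbers[6]); // row 4: offset 6

        m[3][2] = 42;                // last element of row 3 ...
        assert(m.numbers[5] == 42);  // ... lands at flat offset 3 + 2 = 5
        return 0;
    }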
Hope this helps!
EDIT: I also have to say that I don't like the term "stack allocation"; it is misleading. Stack allocation is simply a bump of the stack pointer register: if you "allocate" 100 bytes on the stack, the pointer just moves by 100 bytes. But if you allocate 100 bytes on the heap, things get more complicated: the allocator has to find a suitable free block, update its bookkeeping, and so on.
If this is a one-time allocation, go ahead and do it on the heap; the overhead of dynamic allocation will not be noticeable. But if you do it many times per second, prefer the stack. In addition, stack arrays can potentially be faster to access, since the top of the stack is likely to be in the cache. Obviously, really huge arrays will not fit in the cache anyway. So the answer is: profile.
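As a rough starting point for that profiling (this harness is my own sketch, not part of the original answer; the size of 100 and the iteration count are arbitrary assumptions, chosen so the object easily fits on a default-sized stack):

    #include <chrono>
    #include <cstdio>

    int main() {
        const int iterations = 100000;
        long long sink = 0; // keeps the compiler from optimizing the loops away

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < iterations; ++i) {
            MultiArray<100> onStack;   // "allocation" is just a stack-pointer bump
            onStack[10][0] = i;
            sink += onStack[10][0];
        }
        auto t1 = std::chrono::steady_clock::now();

        for (int i = 0; i < iterations; ++i) {
            MultiArray<100>* onHeap = new MultiArray<100>; // goes through the allocator
            (*onHeap)[10][0] = i;
            sink += (*onHeap)[10][0];
            delete onHeap;
        }
        auto t2 = std::chrono::steady_clock::now();

        using std::chrono::duration_cast;
        using std::chrono::microseconds;
        std::printf("stack: %lld us, heap: %lld us (sink=%lld)\n",
                    (long long)duration_cast<microseconds>(t1 - t0).count(),
                    (long long)duration_cast<microseconds>(t2 - t1).count(),
                    sink);
        return 0;
    }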