There is a type std::size_t. It can be used to describe the size of an object, since it is guaranteed to be able to express the maximum size of any object (as it is written here). But what exactly does that mean? After all, we don't have any objects in memory yet. Does it mean that this type can store an integer representing the largest amount of memory we could theoretically use?
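(As a side note, sizeof itself yields a std::size_t, so a tiny example like this at least shows where the type comes from; the choice of long double is arbitrary:)

#include <cstddef>
#include <iostream>

int main()
{
    // sizeof yields a std::size_t, so by definition the type can hold
    // the size in bytes of any single object.
    std::size_t s = sizeof(long double);
    std::cout << s << '\n';
}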
If I try to write something like
size_t maxSize = std::numeric_limits<std::size_t>::max();
new char[maxSize];
I get an error because the total size of the array is limited to 0x7fffffff. Why? Moreover, if I pass an expression equal to maxSize at run time, std::bad_array_new_length is thrown. If I pass an expression smaller than maxSize but still larger than 0x7fffffff, std::bad_alloc is thrown instead. I believe std::bad_alloc is thrown due to a lack of memory, not because the size is greater than 0x7fffffff. Why does this happen? It would seem natural to throw a special exception whenever the size of the memory we allocate is greater than 0x7fffffff (which is the maximum value for a constant passed to new[] at compile time), so why is std::bad_array_new_length only thrown when I pass maxSize? Is that a special case?
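To show what I mean, here is a small test I use to see which exception comes out for different sizes (tryAllocate is just a helper name I made up, and the exact thresholds and exceptions are implementation-specific; the comments describe what I observe on my setup):

#include <cstddef>
#include <iostream>
#include <limits>
#include <new>

void tryAllocate(std::size_t n) // helper name is mine, for illustration only
{
    try {
        char* p = new char[n];
        std::cout << n << ": allocation succeeded\n";
        delete[] p;
    } catch (const std::bad_array_new_length&) {
        // the requested length itself is invalid
        std::cout << n << ": std::bad_array_new_length\n";
    } catch (const std::bad_alloc&) {
        // the length is representable, but the allocator could not satisfy it
        std::cout << n << ": std::bad_alloc\n";
    }
}

int main()
{
    std::size_t maxSize = std::numeric_limits<std::size_t>::max();
    tryAllocate(100);         // normally succeeds
    tryAllocate(maxSize);     // in my tests: bad_array_new_length
    tryAllocate(maxSize / 2); // in my tests: bad_alloc
}

(Note that std::bad_array_new_length derives from std::bad_alloc, so it has to be caught first.)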
By the way, if I pass maxSize to the vector constructor as follows:
vector<char> vec(maxSize);
std::bad_alloc is thrown, not std::bad_array_new_length. Does this mean that vector uses a different allocator?
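For context, here is roughly how I probe it (whether std::length_error or std::bad_alloc comes out of the constructor seems to depend on the implementation; on my setup it was bad_alloc):

#include <cstddef>
#include <iostream>
#include <limits>
#include <new>
#include <stdexcept>
#include <vector>

int main()
{
    std::size_t maxSize = std::numeric_limits<std::size_t>::max();

    // The vector reports its own limit, which is typically below
    // numeric_limits<size_t>::max().
    std::cout << "max_size: " << std::vector<char>().max_size() << '\n';

    try {
        std::vector<char> vec(maxSize);
    } catch (const std::length_error& e) {
        std::cout << "length_error: " << e.what() << '\n';
    } catch (const std::bad_alloc& e) {
        std::cout << "bad_alloc: " << e.what() << '\n';
    }
}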
I am trying to write an array implementation myself. Using unsigned int to store the size, capacity, and indexes seems like a bad approach. So is it a good idea to define an alias like this:
typedef std::size_t size_type;
and use size_type instead of unsigned int?
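To make the question concrete, this is roughly the shape of what I am writing (MyArray and the member names are just placeholders): the container exposes a size_type alias, mirroring how the standard containers do it.

#include <cstddef>

template <typename T>
class MyArray {
public:
    typedef std::size_t size_type;

    MyArray() : data_(nullptr), size_(0), capacity_(0) {}
    ~MyArray() { delete[] data_; }

    size_type size() const { return size_; }
    size_type capacity() const { return capacity_; }

    T& operator[](size_type i) { return data_[i]; }
    const T& operator[](size_type i) const { return data_[i]; }

private:
    T* data_;
    size_type size_;     // element count, not bytes
    size_type capacity_; // allocated element count
};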