std::size_t and memory allocation

There is a type std::size_t. It can be used to describe the size of an object, since it is guaranteed to be able to express the maximum size of any object (as documented here). But what does this mean? After all, we store objects in memory. Does this mean that this type can store an integer representing the largest amount of memory that we can theoretically use?
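As a side note, you can query these limits directly; a minimal sketch (the printed values are platform-specific):

    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main() {
        // The largest value std::size_t can hold (same as SIZE_MAX).
        std::cout << std::numeric_limits<std::size_t>::max() << '\n';
        // PTRDIFF_MAX is usually a tighter practical bound on a single
        // object's size, since pointer subtraction must not overflow.
        std::cout << PTRDIFF_MAX << '\n';
    }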

If I try to write something like

    #include <limits>

    size_t maxSize = std::numeric_limits<std::size_t>::max();
    new char[maxSize];

I get an error because the total size of the array is limited to 0x7fffffff. Why? Moreover, if I pass an expression equal to maxSize, std::bad_array_new_length is thrown. If I pass an expression smaller than maxSize but still larger than 0x7fffffff, std::bad_alloc is thrown. I believe that std::bad_alloc is thrown due to a lack of memory, not because the size is greater than 0x7fffffff. Why does this happen? It seems natural to throw a special exception whenever the size of the memory we allocate is greater than 0x7fffffff (which is the maximum value for a constant passed to new[] at compile time), so why is std::bad_array_new_length only thrown when I pass maxSize? Is this a special case?
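One way to probe this behaviour at run time is to catch the two exceptions separately; a minimal sketch (the exact threshold at which each exception appears is implementation-specific). Since std::bad_array_new_length derives from std::bad_alloc, it must be listed first:

    #include <cstddef>
    #include <iostream>
    #include <limits>
    #include <new>

    static void tryAllocate(std::size_t n) {
        try {
            delete[] new char[n];
            std::cout << n << ": ok\n";
        } catch (const std::bad_array_new_length&) {
            std::cout << n << ": bad_array_new_length\n";
        } catch (const std::bad_alloc&) {
            std::cout << n << ": bad_alloc\n";
        }
    }

    int main() {
        tryAllocate(std::numeric_limits<std::size_t>::max()); // over any limit
        tryAllocate(static_cast<std::size_t>(1) << 31);       // 0x80000000
        tryAllocate(1024);                                    // should succeed
    }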

By the way, if I pass maxSize to the vector constructor as follows:

 vector<char> vec(maxSize); 

std::bad_alloc is thrown, not std::bad_array_new_length. Does this mean that vector uses a different allocator?
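Not necessarily a different allocator: std::vector checks the request against its max_size() before allocating, and implementations typically report an oversized request with std::length_error rather than std::bad_array_new_length, while a genuine out-of-memory condition still surfaces as std::bad_alloc. A sketch to see what your standard library does (which exception you observe depends on the implementation and the request size):

    #include <cstddef>
    #include <iostream>
    #include <limits>
    #include <new>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::size_t maxSize = std::numeric_limits<std::size_t>::max();
        std::cout << "max_size: " << std::vector<char>().max_size() << '\n';
        try {
            std::vector<char> vec(maxSize);
        } catch (const std::length_error& e) {
            std::cout << "length_error: " << e.what() << '\n';
        } catch (const std::bad_alloc&) {
            std::cout << "bad_alloc\n";
        }
    }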

I am trying to implement an array class myself. Using unsigned int to store the size, capacity, and indexes seems like a bad approach. So is it a good idea to define an alias as follows:

 typedef std::size_t size_type; 

and use size_type instead of unsigned int?
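Yes, that is the conventional approach, and it mirrors what the standard containers do (std::vector<T>::size_type is std::size_t with the default allocator). A minimal sketch of such a class, with illustrative names:

    #include <cstddef>
    #include <memory>

    template <typename T>
    class Array {
    public:
        typedef std::size_t size_type;   // same convention as std::vector

        explicit Array(size_type n) : size_(n), data_(new T[n]()) {}

        size_type size() const { return size_; }
        T&       operator[](size_type i)       { return data_[i]; }
        const T& operator[](size_type i) const { return data_[i]; }

    private:
        size_type size_;
        std::unique_ptr<T[]> data_;
    };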

1 answer

The answer lies in the process of creating an object with dynamic storage duration.

In short, when a program executes a new expression such as new char[size]:

  • It checks that s = size*sizeof(char) + x is a valid size (the limit is implementation-defined; here it is 0x7fffffff, and it depends on the ABI; x = 0 on most platforms if you create an array of a trivially destructible type). If the size is not valid, it throws std::bad_array_new_length; otherwise

  • It calls the allocation function ::operator new[](s). The first parameter of this function has type std::size_t, so std::size_t must be large enough to express the size of any object (and an array is an object).

  • The allocation function asks the system to reserve a storage region of size s. If the system manages to reserve that space, the function returns a pointer to the beginning of the region. Otherwise, it calls the currently installed new-handler and retries the allocation; if that also fails, it throws std::bad_alloc.

  • If allocation succeeds, the new expression default-initializes (a no-op for char) the size elements in the allocated storage. It may also store the number of elements in that storage, which is the reason for the extra x bytes. (This count is used when executing the delete[] expression, to know how many destructors must be called; if the destructor is trivial, it is unnecessary.) The sketch after this list makes the extra bytes visible.
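To see the extra x bytes, one can replace the array allocation function and print the size the new[] expression actually requests. This is a sketch; whether a cookie is added, and its size, are ABI-specific (the Itanium C++ ABI, for example, stores the element count just before the array):

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Replacement allocation function: report the size requested by new[].
    void* operator new[](std::size_t s) {
        std::printf("operator new[] asked for %zu bytes\n", s);
        if (void* p = std::malloc(s)) return p;
        throw std::bad_alloc();
    }
    // Matching deallocation function, so delete[] frees what malloc gave us.
    void operator delete[](void* p) noexcept { std::free(p); }

    struct NonTrivial { char c; ~NonTrivial() {} };

    int main() {
        delete[] new char[10];        // trivially destructible: x = 0, asks for 10
        delete[] new NonTrivial[10];  // may ask for 10 + x (the array cookie)
    }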

You will find all the details in the C++ standard (§6.7.4 [basic.stc.dynamic], §8.3 [expr.new], §8.4 [expr.delete], §21.6 [support.dynamic]).

As for the last question: you can also use a signed type for indexes and sizes. Even though a size or index should never be negative, the standard requires unsigned arithmetic to be modular, which inhibits some serious optimizations. Moreover, unsigned integer arithmetic and comparisons are a frequent source of bugs. std::size_t is unsigned for historical and compatibility reasons: it was chosen unsigned back when machines were short on bits (16 or fewer!).
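Two classic examples of the pitfalls mentioned above; both compile cleanly and are easy to write by accident:

    #include <cstddef>
    #include <iostream>

    int main() {
        // Modular arithmetic: subtraction silently wraps around.
        std::size_t a = 2, b = 3;
        std::cout << a - b << '\n';   // prints SIZE_MAX, not -1

        // Mixed signed/unsigned comparison: -1 converts to a huge value,
        // so the branch is never taken.
        int i = -1;
        if (i < sizeof(int))
            std::cout << "never printed\n";

        // A reverse loop over an unsigned index never terminates:
        // for (std::size_t j = n - 1; j >= 0; --j) ...  // j >= 0 is always true
    }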


Source: https://habr.com/ru/post/1271696/

