C++ dynamically allocated memory

I do not quite understand the meaning of dynamically allocated memory, and I hope that you guys can make everything more understandable to me.

First of all, every time we allocate memory, we just get a pointer to that memory.

int * dynInt = new int; 

So what is the difference between what I did above and:

 int someInt;
 int* dynInt = &someInt; 

As I understand it, in both cases memory is allocated for an int, and we get a pointer to that memory.

So what is the difference between the two? When is one method preferable to the other?

Next, why should I free the memory with

 delete dynInt; 

in the first case, but not in the second?

My hunch:

  • When dynamically allocating memory for an object, the object is not initialized, whereas if you do something like the second case, the object is initialized. If that is the only difference, is there any motivation besides dynamic memory allocation being faster?

  • The reason we do not need delete in the second case is that initializing the object sets up some kind of automatic destruction procedure.

These are only speculations; I would be glad if someone corrected and clarified them for me.

+6
8 answers

The difference lies in the duration of storage.

  • Objects with automatic storage duration are your "normal" objects: they are automatically destroyed at the end of the block in which they are defined.

    Create them as int someInt;

    You may have heard of them as "stack objects," although I object to this terminology.

  • Objects with dynamic storage duration have a kind of "manual" lifetime: you create them with the new keyword and must destroy them yourself with delete.

    You may have heard of them as "heap objects," although I also object to this.

The use of pointers is not strictly tied to either. You can have a pointer to an object with automatic storage duration (your second example), and you can have a pointer to an object with dynamic storage duration (your first example).

But it is rare that you need a pointer to an automatic object, because:

  • you do not have it "by default";
  • the object will not live very long, so there is not much you can do with such a pointer.

In contrast, dynamic objects are almost always accessed through pointers, simply because the syntax all but enforces it: new returns a pointer for you to use, you must pass a pointer to delete, and (references aside) there is really no other way to access the object. It lives "out there" in a cloud of dynamicness, not sitting in any local scope.

Because of this, the use of pointers is sometimes conflated with the use of dynamic storage, but in fact the former is not causally tied to the latter.
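To make the lifetime difference concrete, here is a minimal sketch (the helper name make_dynamic is made up for illustration): a pointer obtained from new stays valid after the function returns, while a pointer to an automatic local would dangle.

```cpp
#include <cassert>

// Hypothetical helper: returns a pointer to an object with dynamic
// storage duration, which outlives the function call.
int* make_dynamic() {
    return new int(42);
}

// Returning the address of an automatic object would be a bug:
// int* make_automatic() {
//     int local = 7;
//     return &local;   // dangling: `local` is destroyed on return
// }
```

The caller of make_dynamic is responsible for eventually calling delete on the returned pointer.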

+15

An object created as follows:

 int foo; 

has automatic storage duration: the object lives until the variable foo goes out of scope. This means that in your second example, dynInt becomes an invalid (dangling) pointer when someInt goes out of scope (for example, at the end of a function).

An object created as follows:

 int* foo = new int; 

has dynamic storage duration: the object lives until you explicitly call delete on it.

Initialization of objects is an orthogonal concept; it is not directly related to the storage duration you use. See here for more information on initialization.
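A small sketch of that orthogonality (function names are made up for illustration): both dynamic and automatic objects can be value-initialized or left uninitialized; the storage duration does not decide this.

```cpp
// Dynamic storage, value-initialized: the parentheses guarantee 0.
int* value_initialized_dynamic() {
    return new int();   // caller must delete the result
}

// Automatic storage, value-initialized: the braces guarantee 0.
int value_initialized_automatic() {
    int c{};
    return c;
}

// By contrast, `new int` (no parentheses) and a plain local `int c;`
// are both default-initialized: their values are indeterminate.
```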

+13

For a single integer, dynamic allocation only makes sense if you need the value to survive something, for example returning from a function. If you declared someInt as you did, it would be invalidated as soon as it went out of scope.

However, dynamic allocation is more typically used when the amount of memory is not known in advance. There are many things your program cannot know before allocation because they depend on input. For example, suppose your program needs to read an image file. How big is that file? We could store it in an array like this:

 unsigned char data[1000000]; 

But this will only work if the image is at most 1,000,000 bytes, and it is wasteful for smaller images. Instead, we can allocate the memory dynamically:

 unsigned char* data = new unsigned char[file_size]; 

Here file_size is determined at runtime. You could not know this value at compile time.
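A minimal sketch of that idea (allocate_buffer is a made-up name; file_size stands in for a value obtained from the file system):

```cpp
#include <cstddef>

// Allocate a buffer whose size is only known at run time.
unsigned char* allocate_buffer(std::size_t file_size) {
    return new unsigned char[file_size];  // caller must delete[] it
}
```

Note that arrays allocated with new[] must be freed with delete[], not plain delete. In modern C++ you would usually write std::vector<unsigned char> data(file_size); instead and let the vector manage the memory.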

+4
  • When your program starts, it receives an initial piece of memory. This memory is called the stack. Its size is fixed and typically small, on the order of a few megabytes.

  • Your program can ask the OS for additional memory. This is called dynamic memory allocation. It allocates memory from the free store (C++ terminology) or the heap (C terminology). You can request as much memory as the system is willing to provide (potentially several gigabytes).

The syntax for allocating a variable on the stack is as follows:

 {
     int a; // allocated on the stack
 } // automatic cleanup on scope exit 

The syntax for allocating a variable using memory from free storage is as follows:

 int* a = new int; // ask the OS for memory to store an int
 delete a; // the user is responsible for deleting the object 


To answer your questions:

When one method is preferable to another.

  • Stack allocation is preferred.
  • Dynamic allocation is required when you need to store a polymorphic object through its base type.
  • Always use a smart pointer to automate the deletion:
    • C++03: boost::scoped_ptr , boost::shared_ptr or std::auto_ptr .
    • C++11: std::unique_ptr or std::shared_ptr .

For instance:

 // stack allocation (safe)
 Circle c;

 // heap allocation (unsafe)
 Shape* shape = new Circle;
 delete shape;

 // heap allocation with a smart pointer (safe)
 std::unique_ptr<Shape> shape2(new Circle); 

Next, why do I need to free memory in the first case, but not in the second?

As mentioned above, stack-allocated variables are automatically freed when they go out of scope. Note that you must not call delete on a stack-allocated variable; doing so will almost certainly crash your application.
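A fuller, compilable version of the "safe" smart-pointer case above, assuming a hypothetical Shape/Circle hierarchy like the one the example names:

```cpp
#include <memory>

// Hypothetical polymorphic hierarchy, as assumed in the example above.
struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() = default;  // virtual destructor: safe delete via base pointer
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.141592653589793 * r * r; }
};

// The unique_ptr deletes the Circle automatically, even on early
// return or exception, so no manual delete is needed.
std::unique_ptr<Shape> make_circle(double r) {
    return std::make_unique<Circle>(r);  // std::make_unique is C++14
}
```

The virtual destructor in the base class matters: without it, destroying a Circle through a Shape* (which is what the unique_ptr<Shape> does) would be undefined behavior.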

+4

Whenever you use new in C++, memory is allocated through malloc, which ultimately uses the sbrk (or a similar) system call. Therefore no one except the OS knows the requested size, so you need to use delete (which calls free, which goes back to sbrk) to return the memory to the system. Otherwise you get a memory leak.

Now, in your second case, the compiler knows the size of the allocated memory: in your case, the size of one int. Taking a pointer to that int changes nothing about what memory is needed. In other words, the compiler itself can take care of freeing the memory. In the first case, with new, that is not possible.

In addition to this: new (and likewise malloc) does not necessarily allocate exactly the requested size, which makes things a little more complicated.

Edit

Two more common terms: the second case is also known as static memory allocation (handled by the compiler), while the first case is dynamic memory allocation (handled by the runtime system).

+2

Learn more about dynamic memory allocation and garbage collection.

You really need to read a good C or C++ programming book.

Explaining this in detail would take a long time.

The heap is the memory in which dynamic allocation happens (with new in C++ or malloc in C). There are system calls involved in growing and shrinking the heap. On Linux, they are mmap and munmap (used to implement malloc and new, etc.).

You can call an allocation primitive many times. So you can put int *p = new int; inside a loop and get a fresh location on every iteration!

Remember to free the memory (with delete in C++ or free in C). Otherwise you will get a memory leak, a nasty kind of bug. On Linux, valgrind helps catch them.
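The loop point above can be sketched as follows (the function name is made up for illustration); every trip through the loop allocates a distinct int, and each one is freed at the end:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Count how many distinct addresses n calls to `new int` produce.
std::size_t count_distinct_allocations(int n) {
    std::vector<int*> ptrs;
    std::set<const int*> distinct;
    for (int i = 0; i < n; ++i) {
        int* p = new int(i);       // fresh allocation on every iteration
        ptrs.push_back(p);
        distinct.insert(p);
    }
    std::size_t count = distinct.size();
    for (int* p : ptrs) delete p;  // free every allocation: no leak
    return count;
}
```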

+2

What happens if your program should let the user store any number of integers? Then you need to decide at runtime, based on user input, how many ints to allocate, so the allocation has to be done dynamically.
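A sketch of that situation (the function name is made up, and std::iota stands in for reading user input): the element count n is only known at run time, and std::vector allocates the storage dynamically and frees it automatically.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Build a sequence of n ints, where n is decided at run time.
std::vector<int> make_sequence(std::size_t n) {
    std::vector<int> values(n);                  // dynamically allocates n ints
    std::iota(values.begin(), values.end(), 1);  // fills 1, 2, ..., n
    return values;
}
```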

+1

In a nutshell, the lifetime of a dynamically allocated object is controlled by you, not by the language. This lets it live exactly as long as required (rather than until the end of a scope), possibly determined by a condition that can only be computed at run time.

In addition, dynamic memory is usually much more "scalable": you can allocate more and/or larger objects than with stack-based allocation.

An allocation essentially "marks" a piece of memory so that no other object can be allocated in the same space. Deallocation "unmarks" that piece so it can be reused by subsequent allocations. If you fail to free memory after it is no longer needed, you get a condition known as a "memory leak": your program holds on to memory it no longer needs, which can eventually make new allocations fail (for lack of free memory) and generally puts an unnecessary load on the system.
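A tiny sketch of the leak versus its fix (function names are made up for illustration): the first function loses its pointer on return, so the memory stays "marked" forever; the second uses a smart pointer so the memory is "unmarked" automatically.

```cpp
#include <memory>

// Leaky version: `p` is lost on return, so the allocation is never freed.
int read_and_leak() {
    int* p = new int(5);
    return *p;            // memory leak: nothing deletes p
}

// RAII version: the unique_ptr frees the memory when it goes out of scope.
int read_and_free() {
    auto p = std::make_unique<int>(5);  // C++14
    return *p;
}
```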

+1

Source: https://habr.com/ru/post/903761/

