Why would I ever use raw pointers instead of always using shared_ptr and unique_ptr?

I have a background in C# and Objective-C, so reference counting and garbage collection are things I'm (still) attached to. As I started learning C++ in more detail, I can't stop wondering why I would ever use raw pointers, which are unmanaged, rather than the alternatives.

shared_ptr provides a great way to hold references without losing them or having to delete them manually. I've seen practical uses of raw pointers, but they just seem bad.

Can anyone make a case for raw pointers over these alternatives?

+4
5 answers

As already mentioned, in C++ you have to think about ownership. That said, the networked 3D multiplayer FPS I'm working on right now has an official rule: "No new or delete." It uses only shared and unique pointers to express ownership, and raw pointers obtained from them (via .get()) wherever we need to interact with a C API. The performance hit is not noticeable. I use this as an example to show that the performance cost is minor, since games and simulations usually have the most stringent performance requirements.
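
For illustration, here is a minimal sketch of what that rule can look like, assuming a made-up Mesh type and a hypothetical C function render_mesh standing in for the real engine API:

```cpp
#include <memory>
#include <vector>

struct Mesh { /* vertex data, etc. */ };

// Stand-in for an external C function that knows nothing about smart pointers.
extern "C" void render_mesh(const Mesh* mesh) { (void)mesh; /* stub for the sketch */ }

int main() {
    // Ownership is expressed with unique_ptr; no raw new or delete anywhere.
    std::vector<std::unique_ptr<Mesh>> scene;
    scene.push_back(std::make_unique<Mesh>());

    for (const auto& mesh : scene) {
        // A non-owning raw pointer obtained with .get() is handed to the C API.
        render_mesh(mesh.get());
    }
}   // every Mesh is destroyed automatically when the vector goes out of scope
```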

It has also significantly reduced the time spent debugging and fixing memory leaks. A theoretically well-designed application would never run into these problems. In real life, with deadlines, legacy systems, or poorly designed existing game engines, they are inevitable in large projects like games... unless you use smart pointers. If you need to allocate dynamically, don't have the time to redesign or rewrite the architecture or to debug resource-management problems, and want to get the thing off the ground as quickly as possible, smart pointers are the way to go, and they don't incur a noticeable runtime cost even in large games.

+5

Of course, you are advised to use shared_ptr and unique_ptr when the pointer owns something. If you only need an observer, a raw pointer is perfectly fine (the pointer is not responsible for the lifetime of whatever it points to).
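
A small sketch of that split (Texture and describe are invented names): the unique_ptr owns the object, while the function that merely inspects it takes a plain non-owning raw pointer:

```cpp
#include <iostream>
#include <memory>
#include <string>

struct Texture {
    std::string name;
};

// Observer: takes a raw pointer because it neither owns the texture nor
// extends its lifetime; it only reads it for the duration of the call.
void describe(const Texture* t) {
    if (t) std::cout << "texture: " << t->name << '\n';
}

int main() {
    auto tex = std::make_unique<Texture>();   // owning pointer
    tex->name = "bricks";
    describe(tex.get());                      // non-owning observer
}   // tex deletes the Texture here; the observer never has to care
```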

Basically, std::unique_ptr has no overhead, and std::shared_ptr has some because it does the reference counting for you, but that is rarely where you need to save on runtime.

Also, there is no need for smart pointers if you can guarantee a lifetime/ownership hierarchy by design: say, a parent node in a tree that outlives its children, although this again comes down to whether the pointer really owns anything.
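
A sketch of that tree example, under the assumption that a parent always outlives its children: the children are owned through unique_ptr, while the back-pointer to the parent stays raw because it owns nothing:

```cpp
#include <memory>
#include <vector>

// Children are owned by their parent through unique_ptr; the back-pointer to
// the parent is raw because the parent outlives its children by design, so
// there is no ownership left to manage there.
struct Node {
    Node* parent = nullptr;                       // non-owning, lifetime guaranteed
    std::vector<std::unique_ptr<Node>> children;  // owning

    Node& add_child() {
        children.push_back(std::make_unique<Node>());
        children.back()->parent = this;
        return *children.back();
    }
};

int main() {
    Node root;
    Node& child = root.add_child();
    (void)child;   // child.parent points back at root for as long as child exists
}   // destroying root destroys the whole subtree; no delete anywhere
```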

+10

The question is rather the reverse: why should you use smart pointers? Unlike C# (and, I believe, Objective-C), C++ makes extensive use of value semantics, which means that unless an object has an application-determined lifetime of its own (in which case none of the smart pointers is appropriate anyway), you will normally use value semantics and no dynamic allocation at all. There are exceptions, but if you make a point of thinking in terms of value semantics (i.e., objects that behave like an int), defining the appropriate copy constructors and assignment operators where necessary, and not allocating dynamically unless the object has a distinct lifetime, you will rarely need to do anything; everything just takes care of itself. Smart pointers are very much the exception in most well-written C++.
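
A hedged illustration of that point, with invented Point and Polygon types: written with value semantics, they behave "like an int", and neither raw nor smart pointers appear anywhere:

```cpp
#include <string>
#include <vector>

// Value semantics: no pointers, no dynamic allocation in user code, and the
// compiler-generated copy, move, and destructor already do the right thing.
struct Point { double x = 0, y = 0; };

struct Polygon {
    std::vector<Point> points;   // the vector manages its own memory
    std::string name;
};

int main() {
    Polygon a;
    a.name = "triangle";
    a.points = {{0, 0}, {1, 0}, {0, 1}};

    Polygon b = a;   // copied by value, like copying an int
    b.name = "copy";
}   // both polygons clean themselves up; no delete, no smart pointer in sight
```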

+5

Sometimes you need to interact with C APIs, in which case you will have to use raw pointers for at least those parts of the code.
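
As a sketch of both options, using the standard C stdio functions (the file name is arbitrary): you can hold the FILE* as a plain raw pointer, or keep RAII by pairing it with a unique_ptr and a custom deleter, reaching for .get() only at the C boundary:

```cpp
#include <cstdio>
#include <memory>

// Deleter so a unique_ptr can own a FILE* obtained from the C API.
struct FileCloser {
    void operator()(std::FILE* f) const {
        if (f) std::fclose(f);
    }
};

int main() {
    // Option 1: use the raw pointer directly and close it yourself.
    if (std::FILE* raw = std::fopen("example.txt", "r")) {
        std::fclose(raw);
    }

    // Option 2: wrap the raw pointer in a unique_ptr with a custom deleter;
    // .get() hands the raw pointer back to the C functions as needed.
    std::unique_ptr<std::FILE, FileCloser> file(std::fopen("example.txt", "r"));
    if (file) {
        char buf[64];
        std::fgets(buf, sizeof buf, file.get());
    }
}   // the deleter calls fclose automatically if the file was opened
```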

+2

In embedded systems, pointers are often used to access hardware registers or memory-mapped peripherals at specific, fixed addresses.

Since those hardware registers already exist, there is nothing to allocate dynamically. If you drop the pointer or reassign it, there is no memory leak.

The same goes for function pointers. Functions are not dynamically allocated; they have fixed addresses. Reassigning a function pointer or discarding it will not cause a memory leak.
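
A rough sketch of both cases; the register address and handler function here are purely hypothetical, and the register read is shown as a comment because it is only valid on hardware that actually maps a register at that address:

```cpp
#include <cstdint>

// Hypothetical status-register address, for illustration only; the real value
// comes from the chip's datasheet and is only meaningful on that hardware.
constexpr std::uintptr_t kStatusRegAddr = 0x40021000u;

int on_timer(int ticks) { return ticks + 1; }   // example handler

int main() {
    // volatile: the hardware can change the value behind the compiler's back.
    volatile std::uint32_t* status_reg =
        reinterpret_cast<volatile std::uint32_t*>(kStatusRegAddr);
    (void)status_reg;
    // On the target device you would read it directly, e.g.:
    //   bool ready = (*status_reg & 0x1u) != 0;
    // Nothing was allocated, so nothing ever needs to be freed; there is no
    // ownership for a smart pointer to manage.

    // Function pointers behave the same way: functions live at fixed
    // addresses, so reassigning or dropping the pointer leaks nothing.
    int (*handler)(int) = &on_timer;
    handler = nullptr;
    (void)handler;
}
```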

+1

Source: https://habr.com/ru/post/975935/
