What is the reason for the two-stage destruction of objects?

In the world of game development, I often see classes with separate initialize() and uninitialize() (or shutdown()) methods. This includes not only tutorials, but also long-standing, major real-world projects such as some modern game engines. I recently saw a class in CryEngine 3 that not only has a Shutdown() method, but goes as far as calling this->~Foo() from inside it, which, based on everything I know about C++, can hardly be considered good design.

While I can see some of the benefits of two-stage initialization, and there is plenty of discussion about it, I cannot understand the reasons behind two-stage destruction. Why not use the default tool C++ provides, the destructor, instead of having a separate shutdown() method and leaving the destructor empty? Why not go even further and, using modern C++, put all the resources held by the object into smart pointers, so we don't have to worry about releasing them manually?
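To make the alternative concrete, here is roughly what I have in mind (Renderer, Texture, and VertexBuffer are just names I made up for the example):

#include <memory>

struct Texture { /* some GPU resource */ };
struct VertexBuffer { /* another resource */ };

class Renderer
{
public:
    // Acquire everything during construction; if a later member throws,
    // the already-built members are released automatically.
    Renderer()
        : m_texture(std::make_unique<Texture>()),
          m_vertices(std::make_unique<VertexBuffer>())
    {
    }

    // No Shutdown() needed: the implicitly generated dtor releases the
    // members in reverse declaration order.

private:
    std::unique_ptr<Texture>      m_texture;
    std::unique_ptr<VertexBuffer> m_vertices;
};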

Is two-stage destruction an outdated design based on principles that no longer apply, or are there good reasons to prefer it over the standard ways of controlling object lifetime?

+1
3 answers

If you don't want to read further, the bottom line is this: you need exceptions to report errors from ctors, and in game development exceptions are considered bad.

As Trevor and others hinted, there are a number of reasons for this practice. You brought up a concrete example here, so let me address it.

The tutorial centers on a class called GraphicsClass (a name that does not exactly inspire confidence), which contains the following definitions:

class GraphicsClass
{
public:
    GraphicsClass();
    ~GraphicsClass();
    bool Initialize(int, int, HWND);
    void Shutdown();

private:
    D3DClass* m_D3D;   // raw owning pointer, used by Initialize() below
};

So it has a ctor, a dtor, and Initialize/Shutdown. Why not fold the latter pair into the former? The implementation gives a few hints:

bool GraphicsClass::Initialize(int screenWidth, int screenHeight, HWND hwnd)
{
    bool result;

    // Create the Direct3D object.
    m_D3D = new D3DClass;
    if(!m_D3D)
    {
        return false;
    }

    // Initialize the Direct3D object.
    result = m_D3D->Initialize(screenWidth, screenHeight, VSYNC_ENABLED, hwnd, FULL_SCREEN, SCREEN_DEPTH, SCREEN_NEAR);
    if(!result)
    {
        MessageBox(hwnd, L"Could not initialize Direct3D", L"Error", MB_OK);
        return false;
    }

    return true;
}

Notice two things. First, the result of new D3DClass is checked against null, which with plain new can never happen (on failure it throws bad_alloc instead)*. Second, D3DClass::Initialize() itself can fail, and it reports that failure by returning false and popping up a message box. A constructor can do neither of these things: it has no return value, so the only channel it has for reporting failure is to throw an exception.

That, then, is the real question: why not throw? Exceptions have a terrible reputation in C++ game development. Many studios compile with exceptions disabled outright, and even where they are enabled, coding standards often ban them for performance and code-size reasons. The dtor side is symmetric: a destructor must never throw, so any teardown that can fail, or that must happen at a precise moment, cannot live there either. Take exceptions away and the ctor/dtor pair has no way to report or handle errors, which is exactly what Initialize()/Shutdown() provide in these C++ codebases.
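To make the contrast concrete, here is a minimal sketch of the two styles (Device, AcquireHardware, and ReleaseHardware are invented names, not the tutorial's):

#include <stdexcept>

// Exception style: construction either fully succeeds or throws, so a
// half-initialized object can never exist. Requires exceptions.
class Device
{
public:
    Device()
    {
        if (!AcquireHardware())
            throw std::runtime_error("Could not initialize Direct3D");
    }
    ~Device() { ReleaseHardware(); }

private:
    bool AcquireHardware() { return true; }  // stand-in for real D3D setup
    void ReleaseHardware() {}                // stand-in for real teardown
};

// Two-stage style: the ctor is trivial; failure becomes a return code
// and teardown becomes an ordinary, callable method. No exceptions needed.
class DeviceTwoStage
{
public:
    bool Initialize() { return AcquireHardware(); }  // false on failure
    void Shutdown()   { ReleaseHardware(); }

private:
    bool AcquireHardware() { return true; }
    void ReleaseHardware() {}
};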

There are other contributing reasons; for instance, habits carried over from C (which has no ctors/dtors at all), or the need to control exactly when subsystem A comes up or goes down relative to subsystem B (see the sketch below). But in my experience, reason #1 is the one above: without exceptions, construction and destruction cannot report failure, so they get split in two.
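A sketch of that ordering point (RenderSystem and AudioSystem are invented subsystems):

struct RenderSystem { bool Initialize() { return true; } void Shutdown() {} };
struct AudioSystem  { bool Initialize() { return true; } void Shutdown() {} };

bool StartEngine(RenderSystem& render, AudioSystem& audio)
{
    if (!render.Initialize())
        return false;
    if (!audio.Initialize())
    {
        render.Shutdown();  // unwind only the part that already succeeded
        return false;
    }
    return true;
}

void StopEngine(RenderSystem& render, AudioSystem& audio)
{
    // Tear down in exactly the reverse order of initialization, at a
    // moment the caller chooses rather than whenever dtors happen to run.
    audio.Shutdown();
    render.Shutdown();
}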

So while modern C++ would steer you toward RAII and smart pointers, the "two-stage" pattern persists in game code mostly because exceptions are off the table there, not because anyone finds empty destructors elegant.

* That check is actually a bug! Plain new D3DClass never returns null; if allocation fails it throws bad_alloc. Only the non-throwing form, new (std::nothrow) D3DClass, returns null on failure. The null check is therefore dead code, which neatly illustrates how muddled error handling gets when you write C++ as if exceptions did not exist.
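For reference, a minimal sketch of the difference between the two forms (D3DClass reduced to an empty stand-in):

#include <new>

struct D3DClass {};  // empty stand-in for the tutorial's class

void Demo()
{
    // Plain new: never yields null; throws std::bad_alloc on failure,
    // so a null check after it is dead code.
    D3DClass* a = new D3DClass;

    // Non-throwing new: yields null on failure instead of throwing,
    // so here the null check is meaningful.
    D3DClass* b = new (std::nothrow) D3DClass;
    if (!b)
    {
        // handle the failure via a return code, message box, etc.
    }

    delete a;
    delete b;
}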

+3

(A caveat: this is speculation, as I haven't worked on the codebases in question.) With plain C++ lifetimes, releasing resources is tied to delete or to scope exit: cleanup happens exactly when the object dies, no sooner and no later, at a time chosen by the language rather than by you.

Often, though, you want to free heavyweight resources at a well-defined point while the object itself is still alive, or tear systems down in an order that does not match the order in which objects happen to be destroyed; an explicit "Shutdown()" method gives you that control. (It also makes "destruction" an explicit, visible step. The caller sees in the API exactly when the expensive teardown happens, instead of it being an invisible side effect.)
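For example (Level and its members are invented for illustration), an explicit Shutdown() lets you free a heavyweight payload while the object itself lives on:

#include <memory>

class Level
{
public:
    bool Initialize()
    {
        m_geometry = std::make_unique<int[]>(1000000);  // expensive payload
        return true;
    }

    // Release the expensive payload right now; the Level object itself
    // stays alive (scores, bookkeeping) and may be deleted much later.
    void Shutdown()
    {
        m_geometry.reset();
    }

private:
    std::unique_ptr<int[]> m_geometry;
};

Resource release and object destruction become two separate events that the caller can schedule independently.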

+2

Another problem with relying on the destructor (DTOR) alone can be seen in an example like this:

// Passing by value to modify some things without affecting the original instances
void somefunc(Foo f, Bar b)
{
    // Behind the scenes (during copy construction), the original instances
    // allocated resources and now share some of them with these copies

    // Modify and test things here
    // ...

    // Implicit call to the DTOR in 9 .. 8 .. 7 ...
    // The DTOR is called here, implicitly, before exiting the scope
    // (I may not have actually wanted to free the shared resources)
}

It would be better to have a separate function that I can call when I want (although I'm not sure whether this reasoning applies to the particular game company whose code you referred to in your question). Having a separate function that actually frees the resources gives you more flexibility and control over how and when those resources are released.
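To spell the hazard out, here is an invented Foo whose compiler-generated copy is deliberately left broken to show the problem:

#include <cstdlib>

// The implicit copy ctor copies the raw pointer, so a by-value copy
// shares the buffer with the original.
struct Foo
{
    int* data;

    Foo() : data(static_cast<int*>(std::malloc(sizeof(int)))) {}
    ~Foo() { std::free(data); }  // runs for the copy AND the original
};

void somefunc(Foo f)
{
    // modify and test f here...
}   // f's implicit dtor frees the shared buffer right here

int main()
{
    Foo original;
    somefunc(original);  // pass by value: shallow copy shares 'data'
    // original.data now dangles, and original's own dtor will free it
    // a second time: a double free. A separate, explicitly-called
    // release function would let us decide when (and whether) to free.
    return 0;
}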

0

Source: https://habr.com/ru/post/1524891/