I am setting up a DX12 application that does nothing but clear the back buffer each frame.
It is really barebones: no PSO, no root signature, nothing. The only notable feature is that it waits on the swap chain's waitable object after Present() before starting a new frame (the MSDN waitable swap chain); I set the maximum frame latency to 1 and use only 2 buffers.
The first frame works fine, but the loop immediately starts drawing a second frame, and, of course, the command allocator complains that it is being reset while its commands are still running on the GPU.
I could, of course, set up a fence to wait for the GPU to finish before moving on to a new frame, but I thought that was exactly the job of the swap chain's waitable object.
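For reference, the fence-based alternative I have in mind would look roughly like this (just a sketch; m_fence, m_fence_value and m_fence_event would be created during initialization and are not part of my code below):

// Signal the queue, then block the CPU until the GPU reaches the signal.
const UINT64 fence_to_wait_for = ++m_fence_value;
tools::throw_if_failed(m_command_queue->Signal(m_fence.Get(), fence_to_wait_for));
if (m_fence->GetCompletedValue() < fence_to_wait_for)
{
    tools::throw_if_failed(m_fence->SetEventOnCompletion(fence_to_wait_for, m_fence_event));
    WaitForSingleObject(m_fence_event, INFINITE);
}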
Here is the render procedure:
tools::throw_if_failed(m_command_allocator->Reset());
tools::throw_if_failed(m_command_list->Reset(m_command_allocator.Get(), nullptr));
const auto to_render_target = CD3DX12_RESOURCE_BARRIER::Transition(
    m_render_targets[m_frame_index].Get(),
    D3D12_RESOURCE_STATE_PRESENT, D3D12_RESOURCE_STATE_RENDER_TARGET);
m_command_list->ResourceBarrier(1, &to_render_target);
m_command_list->RSSetViewports(1, &m_screen_viewport);
m_command_list->RSSetScissorRects(1, &m_scissor_rect);
const D3D12_CPU_DESCRIPTOR_HANDLE rtv_handle = get_rtv_handle();
m_command_list->ClearRenderTargetView(rtv_handle,
DirectX::Colors::BlueViolet, 0, nullptr);
m_command_list->OMSetRenderTargets(1, &rtv_handle, TRUE, nullptr);
const auto to_present = CD3DX12_RESOURCE_BARRIER::Transition(
    m_render_targets[m_frame_index].Get(),
    D3D12_RESOURCE_STATE_RENDER_TARGET, D3D12_RESOURCE_STATE_PRESENT);
m_command_list->ResourceBarrier(1, &to_present);
tools::throw_if_failed(m_command_list->Close());
ID3D12CommandList* ppCommandLists[] = { m_command_list.Get() };
m_command_queue->ExecuteCommandLists(_countof(ppCommandLists),
ppCommandLists);
tools::throw_if_failed(m_swap_chain->Present(1, 0));
m_frame_index = m_swap_chain->GetCurrentBackBufferIndex();
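For completeness, the only workaround I can think of besides a fence is one command allocator per back buffer, along these lines (a sketch assuming a hypothetical m_command_allocators array sized to s_frame_count, which my current code does not have):

// Hypothetical variant: reset only the allocator that belongs to the
// back buffer we are about to render into.
auto& allocator = m_command_allocators[m_frame_index];
tools::throw_if_failed(allocator->Reset());
tools::throw_if_failed(m_command_list->Reset(allocator.Get(), nullptr));

But even then I am not sure the waitable object alone guarantees the GPU is done with that allocator's commands.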
I drive this procedure with the waitable object that I got from the swap chain:
while (WAIT_OBJECT_0 == WaitForSingleObjectEx(waitable_renderer, INFINITE, TRUE) && m_alive)
{
    m_graphics.render();
}
and I initialized the swap chain with the waitable-object flag:
DXGI_SWAP_CHAIN_DESC1 swap_chain_desc = {};
swap_chain_desc.BufferCount = s_frame_count;
swap_chain_desc.Width = window_width;
swap_chain_desc.Height = window_height;
swap_chain_desc.Format = m_back_buffer_format;
swap_chain_desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swap_chain_desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
swap_chain_desc.SampleDesc.Count = 1;
swap_chain_desc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;
ComPtr<IDXGISwapChain1> swap_chain;
tools::throw_if_failed(
factory->CreateSwapChainForHwnd(m_command_queue.Get(), window_handle, &swap_chain_desc, nullptr, nullptr, &swap_chain));
I call SetMaximumFrameLatency right after creating the swap chain:
ComPtr<IDXGISwapChain2> swap_chain2;
tools::throw_if_failed(m_swap_chain.As(&swap_chain2));
tools::throw_if_failed(swap_chain2->SetMaximumFrameLatency(1));
m_waitable_renderer = swap_chain2->GetFrameLatencyWaitableObject();
And here is the matching ResizeBuffers call, which keeps the waitable flag:
tools::throw_if_failed(
m_swap_chain->ResizeBuffers(s_frame_count, window_width, window_height, m_back_buffer_format, DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT));
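(In case it matters: after resizing I also re-query the back buffer index, since ResizeBuffers restarts the swap chain at buffer 0.)

// ResizeBuffers resets the swap chain back to buffer 0,
// so the cached index has to be refreshed.
m_frame_index = m_swap_chain->GetCurrentBackBufferIndex();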
My question is: am I setting something up wrong, or is this simply how the waitable swap chain works (i.e. do you still need to synchronize with the GPU via fences in addition to waiting on the swap chain's waitable object)?
EDIT: added the SetMaximumFrameLatency call + C++ syntax highlighting.