We have a dual-screen DirectX application that used to run at a solid 60 FPS (vsync-locked to the monitor refresh) on an NVIDIA 8400GS (256 MB). After swapping in the 512 MB version of the card, the frame rate struggles to exceed 40 FPS. (It only reaches even that maximum because we use triple buffering.) Both cards are from the same manufacturer (PNY). Everything else is identical: this is a Windows XP Embedded application, and we started from a fresh image for each card. The driver version is 169.21.
The application is entirely 2D, i.e. just a bunch of textured quads and lots of pre-generated graphics (hence the need to push data into the card's memory). We also have compressed animations that the CPU decodes on the fly; this involves texture locking. The locks take forever, so I also tried keeping a separate system-memory texture for the CPU to update, and then copying it into the render texture with the device's UpdateTexture method. No overall difference in performance.
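For reference, here is a minimal sketch of the two-texture update path I described, under the assumption of a D3D9 device and an A8R8G8B8 format; `DecodeAnimationFrame` is a stand-in for our decoder, and error handling is trimmed:

```cpp
// Staging texture in system memory (CPU-writable), plus a default-pool
// texture that is actually used for rendering.
IDirect3DTexture9 *pStaging = NULL;
IDirect3DTexture9 *pVideo   = NULL;

pDevice->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8,
                       D3DPOOL_SYSTEMMEM, &pStaging, NULL);
pDevice->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8,
                       D3DPOOL_DEFAULT, &pVideo, NULL);

// Per frame: lock only the system-memory texture (which should not
// stall the GPU), decode into it, then let the driver schedule the
// upload via UpdateTexture.
D3DLOCKED_RECT lr;
pStaging->LockRect(0, &lr, NULL, 0);
DecodeAnimationFrame((BYTE *)lr.pBits, lr.Pitch);  // hypothetical decoder
pStaging->UnlockRect(0);
pDevice->UpdateTexture(pStaging, pVideo);
```

Our real code follows this shape; I mention it mainly to show that the locks hitting the slow path are on the system-memory side, not on the render texture.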
I've read every FAQ I can find on the Internet about DirectX performance, but this is still the first DirectX project I've worked on, so any arcane bits of knowledge you have would be helpful. :)
One more thing while I'm on the subject: when calling Present() on the swap chains, DirectX seems to wait for the present to complete, regardless of the fact that I use D3DPRESENT_DONOTWAIT both in the present parameters (PresentationInterval) and in the flags of the call itself. Since this is a dual-screen application, this is a problem because the two monitors do not present in parallel, and I'm working around it by issuing the Present calls through a thread pool. What could be the underlying cause of this?
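To make the symptom concrete, this is roughly what I expected the non-blocking path to look like (a sketch, assuming the swap chains were created with D3DPRESENT_DONOTWAIT in D3DPRESENT_PARAMETERS::PresentationInterval; `pSwapChain` and `pendingPresent` are illustrative names):

```cpp
// Present both swap chains without blocking on either monitor's vsync.
// If the driver honors D3DPRESENT_DONOTWAIT, a busy chain should return
// D3DERR_WASSTILLDRAWING instead of stalling the calling thread.
for (int i = 0; i < 2; ++i) {
    HRESULT hr = pSwapChain[i]->Present(NULL, NULL, NULL, NULL,
                                        D3DPRESENT_DONOTWAIT);
    if (hr == D3DERR_WASSTILLDRAWING) {
        // Could not queue the present without waiting; mark this
        // chain and retry it later rather than blocking here.
        pendingPresent[i] = true;
    }
}
```

In practice the calls block anyway, which is why I resorted to the thread pool workaround.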