If virtual function calls become a performance problem, there is a compile-time technique that removes them with a small amount of preprocessor logic, letting the compiler's optimizer do the rest. One possible implementation is as follows:
Declare a base renderer class with pure virtual functions:

```cpp
class RendererBase
{
public:
    virtual bool Draw() = 0;
};
```
Declare a specific implementation:

```cpp
#include <d3d11.h>

class RendererDX11 : public RendererBase
{
public:
    bool Draw() override;

private:
    // D3D11-specific data
};
```
Create a RendererTypes.h header that forward-declares the renderer type you want to use, selected with some preprocessor:

```cpp
#ifdef DX11_RENDERER
class RendererDX11;
typedef RendererDX11 Renderer;
#else
class RendererOGL;
typedef RendererOGL Renderer;
#endif
```
Also create a Renderer.h header that includes the appropriate headers for your renderer:

```cpp
#ifdef DX11_RENDERER
    #include "RendererDX11.h"
#else
    #include "RendererOGL.h"
#endif
```
Now, wherever you use the renderer, refer to it as the Renderer type; include RendererTypes.h in your header files and Renderer.h in your .cpp files.
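Here is a minimal single-file sketch of how game code ends up using only the Renderer typedef. The class names come from the text above, but the RendererOGL body is a stub and RenderFrame is a hypothetical game-side function, included purely for illustration:

```cpp
#include <cstdio>

class RendererBase
{
public:
    virtual bool Draw() = 0;
    virtual ~RendererBase() = default;
};

// Stand-in for what RendererOGL.h would declare; the body is a stub.
class RendererOGL : public RendererBase
{
public:
    bool Draw() override
    {
        std::puts("OpenGL draw");
        return true;
    }
};

// Stand-in for RendererTypes.h: the concrete type is chosen at compile time.
#ifdef DX11_RENDERER
typedef RendererDX11 Renderer;
#else
typedef RendererOGL Renderer;
#endif

// Hypothetical game code: it only ever names the Renderer typedef, so the
// concrete type is known at compile time and the compiler can devirtualize
// the Draw() call in optimized builds.
bool RenderFrame(Renderer& renderer)
{
    return renderer.Draw();
}
```

Because RenderFrame takes a Renderer rather than a RendererBase pointer, no virtual dispatch is required at the call site.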
Each rendering implementation should live in a separate project. Then create build configurations that pair each project with the matching preprocessor define, using whatever build system or IDE you prefer. For example, a Linux configuration should not include any DirectX code.
In debug builds the virtual calls may still happen, but in release builds they are optimized away, because you never actually call through the base class interface; it exists only to enforce a common signature on your renderer classes at compile time.
Although this method requires a little preprocessor, it is minimal and does not hurt the readability of your code, since it is isolated to a couple of typedefs and includes. The only drawback is that you cannot switch rendering implementations at runtime, since each implementation is built into a separate executable. In practice, however, there is rarely a need to switch renderers at runtime.