Having been through this myself, I'll add my two cents.
Having a dedicated card for computation is useful, but it is definitely not necessary.
I have used a development workstation with a single high-end GPU for both display and computation, workstations with multiple GPUs, and headless compute servers.
My experience is that computing on the display GPU works fine as long as the display load is typical of software development. In a Linux setup with a couple of monitors, web browsers, text editors, and so on, the display uses about 200 MB of a 6 GB card, so only about 3% overhead. You may notice the display stutter slightly while a web page is re-rendering or something like that, but the bandwidth requirements of the display are tiny.
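If you want to check this on your own machine, a quick way is to query nvidia-smi before launching any compute job; here is a minimal Python sketch (assuming nvidia-smi is on your PATH) that just parses its CSV output:

```python
# Sketch: report how much of each card's memory is already in use
# (e.g. by the display) before any compute job starts.
import subprocess

out = subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    text=True)

for i, line in enumerate(out.strip().splitlines()):
    used_mb, total_mb = (int(x) for x in line.split(","))
    print(f"GPU {i}: {used_mb} MiB / {total_mb} MiB "
          f"({100.0 * used_mb / total_mb:.1f}% in use)")
```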
One technical issue worth noting for completeness is that the NVIDIA driver, the GPU firmware, or the OS may enforce a timeout on kernel completion for the GPU that drives the display (run NVIDIA's 'deviceQueryDrv' sample to see the 'run time limit on kernels' driver property). In my experience (on Linux) this has never been a problem for machine learning, since the timeout is a few seconds, and even with custom kernels, cross-multiprocessor synchronization limits how much work you can pack into a single kernel launch. I would expect typical runs of the pre-built ops in TensorFlow to be two or more orders of magnitude below this limit.
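If you don't want to build the CUDA samples just to see that flag, the same device attribute can be read programmatically; a small sketch, assuming PyCUDA is installed:

```python
# Sketch: check whether the run-time limit on kernels applies to each GPU
# (the same flag deviceQueryDrv reports).
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    limited = dev.get_attribute(cuda.device_attribute.KERNEL_EXEC_TIMEOUT)
    print(f"{dev.name()}: run time limit on kernels = "
          f"{'Yes' if limited else 'No'}")
```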
However, there are several big advantages to having multiple compute-capable cards in a workstation (regardless of whether one of them drives the display). There is of course the potential for more throughput (if your software can use it), but the main advantage in my experience is being able to run lengthy experiments while developing new ones at the same time, as in the sketch below.
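For example, in TensorFlow you can pin work to a particular card explicitly, so a long training run and interactive development don't compete for the same device. This is only a sketch; the device indices depend on your machine:

```python
# Sketch: pin work to a specific card so a long-running job on GPU:0
# can coexist with interactive development on GPU:1.
import tensorflow as tf

# Don't grab all memory on every card up front, so the two uses
# don't starve each other.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

with tf.device("/GPU:1"):
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))
    c = tf.matmul(a, b)  # this op runs on the second card
```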
You can of course start with one card and add another later, but make sure your motherboard has room for it and your power supply can handle the load. If you do decide on two cards, with one being a low-end card just for display, I would specifically advise against making the low-end card a CUDA-capable one, so that it does not get selected by default for computation.
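If you do end up with a CUDA-capable low-end card anyway, you can keep it out of the way by restricting which devices CUDA exposes. A sketch (the device index "1" is just an example; check nvidia-smi for your own numbering):

```python
# Sketch: hide the low-end display card from CUDA entirely so it can
# never be picked up by default.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # set before importing TensorFlow

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))  # only the compute card is visible
```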
Hope this helps.