These questions may seem rather esoteric to most, but I'd like to know more about this.
First:
I wonder how long it takes an FPGA to reconfigure itself, from the moment the currently loaded circuit stops operating to the moment the new one is up and running.
I know that place-and-route is an expensive process, but that is because the P&R tools must decide where to place the components and how to route between them. Assume P&R has already been done and all that remains is actually reconfiguring the FPGA: is that a slow process in itself? Can it be done hundreds or thousands of times per second?
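My own rough math, to make the question concrete (every figure below is an assumption for illustration, not taken from any datasheet): if a full configuration bitstream is tens of megabits and is loaded over a parallel configuration port clocked at around 100 MHz, full reconfiguration lands in the millisecond range, which would cap it at on the order of a hundred times per second.

```python
# Back-of-envelope estimate of full FPGA reconfiguration time.
# All figures are illustrative assumptions, not vendor specifications.

bitstream_bits = 30e6    # assumed full bitstream size: 30 Mbit
port_width_bits = 32     # assumed configuration port width: 32-bit parallel
config_clock_hz = 100e6  # assumed configuration clock: 100 MHz

words = bitstream_bits / port_width_bits      # words to transfer
seconds = words / config_clock_hz             # one word per clock
print(f"full reconfiguration ~= {seconds * 1e3:.2f} ms")     # ~9.38 ms
print(f"max reconfigurations per second ~= {1 / seconds:.0f}")  # ~107
```

So under these assumptions "hundreds of times per second" is already at the edge of what the raw bitstream transfer allows, before counting any shutdown/startup overhead.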
This possibility has several implications that interest me. To name two: it might let an FPGA serve several simultaneous "clients" (much as a GPU renders for several different programs at once), or provide extremely finely tuned circuits for long number-crunching jobs with well-defined but numerous stages of highly asynchronous processing (I'm thinking of complex Haskell programs).
Second:
The flip side I'd like to ask about is whether an FPGA can be partially reconfigured on the fly, while the loaded circuit keeps running, provided the parts being reconfigured are idle, of course.
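Extending the same back-of-envelope reasoning to the partial case (again, all figures are my own assumptions): if the partial bitstream for one small region is only a megabit or so, swapping just that region should be far faster than a full reconfiguration, plausibly thousands of times per second.

```python
# Same transfer-time estimate, applied to partial reconfiguration of
# one small region. Illustrative assumptions only: the partial
# bitstream is taken to be a small fraction of a full bitstream.

partial_bits = 1e6       # assumed partial bitstream for one region: 1 Mbit
port_width_bits = 32     # assumed 32-bit parallel configuration port
config_clock_hz = 100e6  # assumed 100 MHz configuration clock

seconds = (partial_bits / port_width_bits) / config_clock_hz
print(f"partial reconfiguration ~= {seconds * 1e6:.1f} us")   # ~312.5 us
print(f"max region swaps per second ~= {1 / seconds:.0f}")    # ~3200
```

If that holds, the per-region swap rate is limited mainly by how small the reconfigurable regions can be made, which is part of what I'm asking.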
Some interesting possibilities would follow from this, for example real-time reconfigurable buses, hardware emulation of neural networks, and so on.
Are such things being widely researched today? And how far might that research go in the future?