In cluster computing, especially in large-scale deployments, there are cases where work distributed across many systems (and many processor cores) must complete within a fairly predictable time frame. The operating system and the rest of the software stack can introduce variability into the execution of these pieces of work. This variability is commonly called "OS jitter".
The interrupt delay, as you said, is the time between the interrupt signal being raised and entry into the interrupt handler.
The two concepts are orthogonal. In practice, however, more interrupts typically mean more OS jitter.
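
If you want to actually observe OS jitter on a node, a common approach is to time many repetitions of a fixed quantum of work and look at the spread of the measurements: the slow outliers are iterations during which the OS stole time (interrupt handlers, kernel threads, preemption). Below is a minimal sketch of that idea in C; the `SAMPLES` and `WORK_ITERS` values are arbitrary placeholders I chose for illustration, not anything from the original answer.

```c
/* Minimal sketch: measure the spread in the time taken by a fixed
 * quantum of work.  The difference between the fastest and slowest
 * iteration approximates the jitter the OS adds to this workload. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define SAMPLES    10000    /* how many timed iterations (placeholder) */
#define WORK_ITERS 100000   /* size of the fixed work quantum (placeholder) */

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    volatile uint64_t sink = 0;
    uint64_t min = UINT64_MAX, max = 0;

    for (int s = 0; s < SAMPLES; s++) {
        uint64_t start = now_ns();
        for (int i = 0; i < WORK_ITERS; i++)
            sink += (uint64_t)i;          /* fixed quantum of work */
        uint64_t elapsed = now_ns() - start;
        if (elapsed < min) min = elapsed;
        if (elapsed > max) max = elapsed;
    }

    printf("min %llu ns, max %llu ns, spread %llu ns\n",
           (unsigned long long)min, (unsigned long long)max,
           (unsigned long long)(max - min));
    return 0;
}
```

Running this pinned to one core (e.g. with `taskset`) on an idle machine versus a busy one makes the effect visible: the minimum stays roughly constant, while the maximum grows with the amount of OS activity interrupting the loop.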