Measuring context-switching overhead on a GPU

There are many ways to measure CPU context-switching costs, but there seem to be few resources on measuring the cost of a GPU context switch. CPU context switching and GPU context switching are different.

GPU scheduling is based on warp scheduling. To calculate the GPU context-switching overhead, I would need to know the warp execution time with context switching and the warp time without context switching, then subtract the two to get the overhead.
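
One way to get a number like that is to sample per-warp execution time from inside the kernel with the clock64() cycle counter. The sketch below is only a rough illustration of that kind of measurement; the kernel and its arithmetic workload are arbitrary placeholders.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Minimal sketch: each warp records its start/end SM cycle counter with
    // clock64() and writes the elapsed cycles out, one value per warp.
    __global__ void timeWarps(long long *elapsed, float *data, int iters)
    {
        int tid    = blockIdx.x * blockDim.x + threadIdx.x;
        int warpId = tid / warpSize;

        long long start = clock64();          // cycle counter at warp start

        float v = data[tid];
        for (int i = 0; i < iters; ++i)       // placeholder workload
            v = v * 1.000001f + 0.5f;
        data[tid] = v;

        long long stop = clock64();

        if (threadIdx.x % warpSize == 0)      // one lane reports per warp
            elapsed[warpId] = stop - start;
    }

    int main()
    {
        const int threads = 256, blocks = 4, n = threads * blocks;
        const int warps = n / 32;

        float *data;        cudaMalloc(&data, n * sizeof(float));
        long long *elapsed; cudaMalloc(&elapsed, warps * sizeof(long long));
        cudaMemset(data, 0, n * sizeof(float));

        timeWarps<<<blocks, threads>>>(elapsed, data, 10000);
        cudaDeviceSynchronize();

        long long host[warps];
        cudaMemcpy(host, elapsed, warps * sizeof(long long), cudaMemcpyDeviceToHost);
        for (int w = 0; w < warps; ++w)
            printf("warp %d: %lld cycles\n", w, host[w]);

        cudaFree(data); cudaFree(elapsed);
        return 0;
    }

Comparing the per-warp cycle counts from a launch with a single resident warp against a launch with many resident warps is one crude way to see how much extra wall-clock time a warp accrues while the hardware is issuing other warps.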

But I am confused about how to measure warp time with context switching. Does anyone have ideas on how to measure it?

1 answer

I don’t think it really makes sense to talk about “overhead” when switching contexts on a GPU.

On a CPU, context switching is performed in software, by a function in the kernel called the scheduler. The scheduler is ordinary code, a sequence of machine instructions that the processor must execute, and the time spent running the scheduler is time not spent on “useful” work.
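
For comparison, the CPU-side cost is often estimated with a pipe ping-pong between two processes, since each round trip forces the kernel scheduler to switch between them. This is only a rough sketch of that technique: plain host code, Linux/POSIX assumed, with an arbitrary iteration count.

    #include <cstdio>
    #include <unistd.h>
    #include <time.h>
    #include <sys/wait.h>

    // Rough host-side sketch (Linux/POSIX): two processes ping-pong one byte
    // over a pair of pipes. Each round trip costs roughly two context switches
    // (pin both processes to one core, e.g. with taskset, for a cleaner number).
    int main()
    {
        int a[2], b[2];
        pipe(a); pipe(b);
        const long iters = 100000;
        char byte = 'x';

        if (fork() == 0) {                     // child: echo everything back
            for (long i = 0; i < iters; ++i) {
                read(a[0], &byte, 1);
                write(b[1], &byte, 1);
            }
            _exit(0);
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; ++i) {     // parent: send and wait for echo
            write(a[1], &byte, 1);
            read(b[0], &byte, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        wait(NULL);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.0f ns per switch (incl. pipe overhead)\n", ns / (2.0 * iters));
        return 0;
    }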

On a GPU, by contrast, switching between warps is done by the hardware warp scheduler, which can pick a different ready warp to issue on essentially every cycle. No software “scheduler” code has to run, so there is no overhead in the CPU sense to measure; this is sometimes described as zero-overhead thread switching.
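
One way to observe this is to keep the work per warp fixed and grow the number of resident warps: because the hardware simply issues from another ready warp whenever one stalls, total kernel time grows far more slowly than the total amount of work, up to the point where the SM saturates. The sketch below is a rough illustration with arbitrary sizes, timed with CUDA events.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread walks a private chain of dependent global loads, so every
    // warp spends most of its time stalled; with more resident warps the SM's
    // hardware scheduler switches to another ready warp on each stall.
    __global__ void chase(int *next, int steps, int *sink)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        int idx = tid;
        for (int i = 0; i < steps; ++i)
            idx = next[idx];                  // dependent load: exposes latency
        sink[tid] = idx;                      // keeps the loop from being optimized out
    }

    static float runOnce(int threads, int *next, int *sink, int steps)
    {
        cudaEvent_t beg, end;
        cudaEventCreate(&beg); cudaEventCreate(&end);
        cudaEventRecord(beg);
        chase<<<1, threads>>>(next, steps, sink);   // one block = one SM
        cudaEventRecord(end);
        cudaEventSynchronize(end);
        float ms = 0.f;
        cudaEventElapsedTime(&ms, beg, end);
        cudaEventDestroy(beg); cudaEventDestroy(end);
        return ms;
    }

    int main()
    {
        const int maxThreads = 1024, steps = 100000;
        int *next, *sink;
        cudaMalloc(&next, maxThreads * sizeof(int));
        cudaMalloc(&sink, maxThreads * sizeof(int));

        // Each thread's chain just points back at its own slot; real
        // pointer-chasing benchmarks shuffle the chain to defeat caching.
        int host[maxThreads];
        for (int i = 0; i < maxThreads; ++i) host[i] = i;
        cudaMemcpy(next, host, sizeof(host), cudaMemcpyHostToDevice);

        chase<<<1, 32>>>(next, 1, sink);      // warm-up launch
        cudaDeviceSynchronize();

        for (int warps = 1; warps <= 32; warps *= 2) {
            float ms = runOnce(warps * 32, next, sink, steps);
            printf("%2d resident warps, %2dx the work: %.3f ms\n", warps, warps, ms);
        }
        return 0;
    }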

See also the related discussion on SuperUser.


Source: https://habr.com/ru/post/1544836/

