Linux - The difference between migrations and switches?

Looking at the scheduling statistics in /proc/<PID>/sched, you can get output like this:

    [ horro@system ~]$ cat /proc/1/sched
    systemd (1, #threads: 1)
    -------------------------------------------------------------------
    se.exec_start                  :    2499611106.982616
    se.vruntime                    :           7952.917943
    se.sum_exec_runtime            :          58651.279127
    se.nr_migrations               :                 53355
    nr_switches                    :                169561
    nr_voluntary_switches          :                168185
    nr_involuntary_switches        :                  1376
    se.load.weight                 :               1048576
    se.avg.load_sum                :                343837
    se.avg.util_sum                :                338827
    se.avg.load_avg                :                     7
    se.avg.util_avg                :                     7
    se.avg.last_update_time        :      2499611106982616
    policy                         :                     0
    prio                           :                   120
    clock-delta                    :                   180
    mm->numa_scan_seq              :                     1
    numa_pages_migrated            :                   296
    numa_preferred_nid             :                     0
    total_numa_faults              :                    34
    current_node=0, numa_group_id=0
    numa_faults node=0 task_private=0 task_shared=23 group_private=0 group_shared=0
    numa_faults node=1 task_private=0 task_shared=0 group_private=0 group_shared=0
    numa_faults node=2 task_private=0 task_shared=0 group_private=0 group_shared=0
    numa_faults node=3 task_private=0 task_shared=11 group_private=0 group_shared=0
    numa_faults node=4 task_private=0 task_shared=0 group_private=0 group_shared=0
    numa_faults node=5 task_private=0 task_shared=0 group_private=0 group_shared=0
    numa_faults node=6 task_private=0 task_shared=0 group_private=0 group_shared=0
    numa_faults node=7 task_private=0 task_shared=0 group_private=0 group_shared=0

I tried to find out what the difference between migrations and switches is, and found some answers here and here. Summarizing those answers:

  • nr_switches : the total number of context switches.
  • nr_voluntary_switches : the number of voluntary switches, i.e. the thread blocked and another thread was picked to run.
  • nr_involuntary_switches : the number of times the scheduler preempted the thread because another runnable thread was waiting for the CPU (a small demo of both counters follows this list).
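
You can watch both of these counters from user space with getrusage(2), whose ru_nvcsw and ru_nivcsw fields report voluntary and involuntary context switches. Here is a minimal sketch for illustration (exact counts depend on system load):

    /* Minimal sketch: observe voluntary vs. involuntary context switches
     * with getrusage(2). Build with: gcc -o switches switches.c */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    static void report(const char *label)
    {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0)
            printf("%-8s voluntary=%ld involuntary=%ld\n",
                   label, ru.ru_nvcsw, ru.ru_nivcsw);
    }

    int main(void)
    {
        report("start");

        /* Blocking in usleep() gives up the CPU, so each iteration is
         * typically counted as a voluntary switch. */
        for (int i = 0; i < 100; i++)
            usleep(1000);
        report("sleep");

        /* A busy loop never yields; any switches here happen because the
         * scheduler preempts us (involuntary), visible mostly when the
         * CPU is contended. */
        for (volatile unsigned long i = 0; i < 300000000UL; i++)
            ;
        report("spin");

        return 0;
    }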

So what are migrations? Are these two concepts related at all, and if so, how do migrations relate to switches inside the kernel?

1 answer

A migration is when a thread, usually after a context switch, gets scheduled on a different CPU than the one it previously ran on.

EDIT 1:

Read more about process migration on Wikipedia: https://en.wikipedia.org/wiki/Process_migration

Here is the kernel code incrementing the counter: https://github.com/torvalds/linux/blob/master/kernel/sched/core.c#L1175

    if (task_cpu(p) != new_cpu) {
        ...
        p->se.nr_migrations++;

EDIT 2:

A thread can be moved to another CPU in the following cases:

  • During exec()
  • During fork()
  • When the thread is woken up.
  • When the thread's affinity mask has changed (see the sketch below).
  • When the current CPU goes offline.
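
The affinity-mask case is easy to reproduce from user space: shrinking the allowed mask to a different CPU forces the scheduler to move the thread, which you can observe with sched_getcpu(). A minimal sketch, assuming CPUs 0 and 1 are both online:

    /* Minimal sketch: force a migration by switching this thread's
     * affinity mask between two CPUs (assumes CPUs 0 and 1 are online).
     * Build with: gcc -o force_migration force_migration.c */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static void run_on(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);        /* allow exactly one CPU */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");   /* e.g. the CPU is offline */
            return;
        }
        printf("asked for CPU %d, now on CPU %d\n", cpu, sched_getcpu());
    }

    int main(void)
    {
        run_on(0);
        run_on(1);   /* changing the allowed mask forces a migration */
        return 0;
    }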

For more information, check out the set_task_cpu(), move_queued_task() and migrate_tasks() functions in the same source file: https://github.com/torvalds/linux/blob/master/kernel/sched/core.c

The placement policies are implemented in select_task_rq(), and they depend on the scheduler class you are using. The generic version of the policy:

    if (p->nr_cpus_allowed > 1)
        cpu = p->sched_class->select_task_rq(p, cpu, sd_flags, wake_flags);
    else
        cpu = cpumask_any(&p->cpus_allowed);

Source: https://github.com/torvalds/linux/blob/master/kernel/sched/core.c#L1534

So, to avoid migrations, set the CPU affinity mask for your threads using the sched_setaffinity(2) system call or the corresponding non-portable pthreads extension, pthread_setaffinity_np(3).
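
For instance, here is a minimal sketch that pins the calling thread to CPU 0 (an arbitrary choice for illustration); once the call succeeds, se.nr_migrations for this thread should stop growing:

    /* Minimal sketch: pin the calling thread to a single CPU so the
     * scheduler can no longer migrate it.
     * Build with: gcc -pthread -o pin pin.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);          /* allow only CPU 0 */

        /* Non-portable GNU extension; sched_setaffinity(2) is the
         * equivalent raw system call. */
        int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (err != 0) {
            fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
            return 1;
        }

        printf("pinned; running on CPU %d\n", sched_getcpu());
        return 0;
    }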

Here is the definition of select_task_rq_fair(), the implementation for the Completely Fair Scheduler (CFS): https://github.com/torvalds/linux/blob/master/kernel/sched/fair.c#L5860

The logic is quite involved, but in essence it either picks an idle sibling CPU (one that shares a cache with the previous CPU) or searches for the least loaded CPU the task is allowed to run on.

Hope this answers your question.
