What does the hyperspace warning in OProfile mean?

When using the OProfile statistical profiler to generate a callgraph profile for my C application, the output includes the following warning several times. The warning is rather cryptic to me:

warning: dropping hyperspace sample at offset 1af9 >= 2be8 for binary /home/myuser/mybinary 

I use OProfile in a virtualized Xen environment, for example:

 modprobe oprofile timer=1
 opcontrol --no-vmlinux
 opcontrol --start
 (wait for profiling data to accumulate)
 opcontrol --stop
 opreport --session-dir=/var/lib/oprofile --exclude-dependent --demangle=smart \
     --symbols /home/myuser/mybinary --callgraph

Full output of the last command:

 Overflow stats not available
 CPU: CPU with timer interrupt, speed 0 MHz (estimated)
 Profiling through timer interrupt
 warning: dropping hyperspace sample at offset 84d0 >= 79a0 for binary /home/myuser/mybinary
 warning: dropping hyperspace sample at offset 7ac0 >= 79a0 for binary /home/myuser/mybinary
 warning: dropping hyperspace sample at offset 7d90 >= 79a0 for binary /home/myuser/mybinary
 warning: dropping hyperspace sample at offset 7ac0 >= 79a0 for binary /home/myuser/mybinary
 warning: dropping hyperspace sample at offset 7d90 >= 79a0 for binary /home/myuser/mybinary
 warning: dropping hyperspace sample at offset 8210 >= 79a0 for binary /home/myuser/mybinary
 samples  %        symbol name
 -------------------------------------------------------------------------------

After that, it prints plausible-looking call graph data.

What does the hyperspace warning mean? What causes it? Does it affect the profiling results? How can I fix it?

1 answer

Maynard Johnson explains this warning in a message on the mailing list:

There have been cases where the samples recorded by the oprofile kernel driver appear to be attributed to the wrong binary, especially when the sampling rate is very high or when callgraph profiling is being done (since callgraph profiling, like a high sampling rate, also leads to very high overhead in the oprofile kernel driver and to overflows of its internal sample buffers). I suspect that is what you are running into. Unfortunately, this is a very insidious bug, and no one has been able to find the root cause yet. The kernel driver reports the overflows of its internal buffers, and the oprofiled log records them. Helpfully, starting with oprofile 0.9.5, opreport will also print a warning when it detects non-zero overflow counts and suggest lowering the sampling rate.
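
For what it's worth, those internal buffer overflow counters can usually be inspected directly through oprofilefs while a session is running. A minimal sketch, assuming oprofilefs is mounted at the usual /dev/oprofile and that your kernel exposes the stats files under the names below (both assumptions may vary by kernel version):

 # Global count of events dropped because the event buffer overflowed.
 cat /dev/oprofile/stats/event_lost_overflow
 # Per-CPU counts of samples dropped from the per-CPU buffers.
 for f in /dev/oprofile/stats/cpu*/sample_lost_overflow; do
     echo "$f: $(cat "$f")"
 done

Non-zero, growing values during a callgraph run would be consistent with the buffer-overflow explanation above.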

I suggest looking in your /var/lib/oprofile/samples/oprofiled.log for the overflow statistics of the profile run above (log entries are timestamped). If you see only a few overflows, or very small percentages (say, less than 3%), you can probably just ignore the anomalies. In general, to avoid or limit this sort of thing, you should profile at the lowest practical sampling rate, especially when doing callgraph profiling. So what do I mean by "practical"? Well, as with any sample-based profiler, oprofile is statistical in nature, and the more data you have, the more confidence you can place in it. So for 100% certainty you would (theoretically) have to profile with a count value of "1". Not very practical, since your machine would appear to lock up because most of the work it does would be recording samples. For profiling cycle events, you can probably use a count of a few million or so (on today's processors) and still be fairly confident about the data. For other events, it really depends on how frequently they occur.
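
Turning that advice into commands, a hedged sketch: the grep pattern depends on the exact log wording of your oprofile version, the CPU_CLK_UNHALTED event name is typical for x86 but not universal, and with timer-mode profiling (timer=1, as in the question) the sampling interval is fixed by the timer interrupt, so the --event knob only helps once hardware counters are available:

 # 1. Check the daemon log for the overflow statistics of the last run
 #    (entries are timestamped; adjust the pattern to your version).
 grep -i -B 2 -A 6 overflow /var/lib/oprofile/samples/oprofiled.log | tail -n 30

 # 2. Re-profile at a lower sampling rate: a larger count means fewer
 #    samples per second. 6000000 cycles is the "few million" ballpark
 #    suggested above; the event name is an assumption.
 opcontrol --shutdown
 opcontrol --no-vmlinux --event=CPU_CLK_UNHALTED:6000000
 opcontrol --start
 # ... run the workload ...
 opcontrol --stop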
