How to run microbenchmarks on multi-core processors

I am looking for ways to run microbenchmarks on multi-core processors.

Context:

Around the time that desktop processors introduced out-of-order execution, which had a big impact on performance, they also introduced, perhaps not coincidentally, special instructions for obtaining very accurate timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings more accurate than could ever be had with a system call, allowing programmers to microbenchmark to their hearts' content, for better or worse.
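For concreteness, here is a minimal sketch of what reading the time-stamp counter looks like, assuming a GCC or Clang toolchain on x86-64, where the __rdtsc intrinsic comes from <x86intrin.h> (MSVC provides the same intrinsic in <intrin.h>):

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

int main(void) {
    uint64_t start = __rdtsc();   // cycle count before the code under test
    // ... code you want to time ...
    uint64_t end = __rdtsc();     // cycle count after
    printf("elapsed cycles: %llu\n", (unsigned long long)(end - start));
    return 0;
}

The problem described next is about what those two readings mean once the thread can migrate between cores in between them.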

On a still more modern processor with several cores, some of which go to sleep from time to time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been asleep when the alternative solutions were explained.

Question:

Some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is on any operating system, please let us know in an answer.
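As one example of the kind of call being asked about (an illustration, not necessarily the right choice for cycle-accurate work): on Linux, clock_gettime with CLOCK_MONOTONIC is maintained by the kernel to stay consistent across cores, so the program never touches rdtsc directly:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    // ... code you want to time ...
    clock_gettime(CLOCK_MONOTONIC, &end);
    int64_t ns = (int64_t)(end.tv_sec - start.tv_sec) * 1000000000LL
               + (end.tv_nsec - start.tv_nsec);
    printf("elapsed: %" PRId64 " ns\n", ns);
    return 0;
}

(On older glibc you may need to link with -lrt for clock_gettime.)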

Some systems may allow you to disable cores, leaving only one running. I know Mac OS X Leopard does this when the right preference pane is installed from the Developer Tools. Do you think this makes rdtsc safe to use again?
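A related approach that does not require disabling cores system-wide is to pin the benchmarking thread to a single core, so every rdtsc reading comes from the same counter. A minimal Linux sketch using sched_setaffinity; the choice of CPU 0 is arbitrary:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                                   // allow only CPU 0
    if (sched_setaffinity(0, sizeof(set), &set) != 0) { // 0 = calling thread
        perror("sched_setaffinity");
        return 1;
    }
    // ... rdtsc-based timing here reads a single core's counter ...
    return 0;
}

This keeps successive readings consistent with each other, though it does not remove interference from other threads scheduled on the same core.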

More context:

Please assume I know what I'm doing in trying to make a microbenchmark. If you think that an optimization whose effect cannot be measured by timing the entire application is not worth making, I agree with you, but

  • I cannot time the whole application until the alternative data structure is finished, which will take a long time; in fact, if the microbenchmark is not promising, I would like to decide to abandon the implementation now;

  • I need numbers for a publication whose deadline I do not control.

2 answers

On OSX (ARM, Intel, and PowerPC) you want to use mach_absolute_time( ):

#include <mach/mach_time.h>
#include <stdint.h>

// Utility function for getting timings in nanoseconds.
double machTimeUnitsToNanoseconds(uint64_t mtu) {
    static double mtusPerNanosecond = 0.0;
    if (0.0 == mtusPerNanosecond) {
        mach_timebase_info_data_t info;
        if (mach_timebase_info(&info)) {
            // Handle an error gracefully here, whatever that means to you.
            // If you do get an error, something is seriously wrong, so
            // I generally just report it and exit( ).
        }
        mtusPerNanosecond = (double)info.numer / info.denom;
    }
    return mtu * mtusPerNanosecond;
}

// In your code:
uint64_t startTime = mach_absolute_time( );
// Stuff that you want to time.
uint64_t endTime = mach_absolute_time( );

double elapsedNanoseconds = machTimeUnitsToNanoseconds(endTime - startTime);

Note that there is no need to restrict the benchmark to one core for this. The operating system handles the fix-up required behind the scenes for mach_absolute_time( ) to give meaningful results in a multi-core (and multi-socket) environment.


The kernel returns correctly synchronized values for rdtsc. If you have a multi-socket machine, you may have to pin the process to one socket. That is not the real problem, though.

The main problem is that the scheduler makes the data unreliable. There is a performance API for Linux kernels >= 2.6.31, but I have not looked at it. Windows Vista and later do an excellent job here; use QueryThreadCycleTime and QueryProcessCycleTime.
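For the Windows calls named above, a minimal sketch might look like this (Vista or later; QueryThreadCycleTime counts only cycles charged to the calling thread, which is exactly the scheduler-independence being described):

#include <windows.h>
#include <stdio.h>

int main(void) {
    ULONG64 start = 0, end = 0;
    QueryThreadCycleTime(GetCurrentThread(), &start);
    // ... code you want to time ...
    QueryThreadCycleTime(GetCurrentThread(), &end);
    printf("cycles charged to this thread: %llu\n",
           (unsigned long long)(end - start));
    return 0;
}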

I'm not sure about OSX, but AFAIK mach_absolute_time does not adjust for time spent descheduled.


Source: https://habr.com/ru/post/1309193/

