Recent Intel chips (Ivy Bridge and up) have instructions for generating (pseudo) random bits. RDSEED outputs "true" random bits generated from entropy gathered by an on-chip sensor. RDRAND outputs bits generated by a pseudo-random number generator that is seeded by a true random number generator. According to Intel's documentation, RDSEED runs slower, since collecting entropy is expensive. Thus, RDRAND is offered as a cheaper alternative, and its output is quite safe for most cryptographic applications. (This is similar to /dev/random compared to /dev/urandom on Unix systems.)
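For reference, here is a minimal sketch of how the two intrinsics are typically called (both return 1 on success and 0 when no value was available, so the usual pattern is to retry until they succeed). This snippet is illustrative only and assumes a compiler with -mrdrnd and -mrdseed support:

#include <stdio.h>
#include <x86intrin.h>

int main(void) {
    unsigned long long r, s;

    /* Retry until RDRAND reports success (it rarely fails). */
    while (!_rdrand64_step(&r))
        ;

    /* Retry until RDSEED reports success (it can fail more often,
       since the entropy source may not have fresh bits ready). */
    while (!_rdseed64_step(&s))
        ;

    printf("rdrand: %llu\nrdseed: %llu\n", r, s);
    return 0;
}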
I was interested in the performance difference between the two instructions, so I wrote code to compare them. To my surprise, there appears to be practically no difference in performance. Can someone give an explanation? The code and system information are below.
Benchmark
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>
#define BUFSIZE (1<<24)
int main() {
    unsigned int ok, i;  /* ok holds the success flag returned by the intrinsics (not checked here) */
    unsigned long long *rand = malloc(BUFSIZE * sizeof(unsigned long long)),
                       *seed = malloc(BUFSIZE * sizeof(unsigned long long));
    clock_t start, end, bm;

    /* Time BUFSIZE calls to RDRAND. */
    start = clock();
    for (i = 0; i < BUFSIZE; i++) {
        ok = _rdrand64_step(&rand[i]);
    }
    bm = clock() - start;
    printf("RDRAND: %li\n", bm);

    /* Time BUFSIZE calls to RDSEED and print the ratio to RDRAND. */
    start = clock();
    for (i = 0; i < BUFSIZE; i++) {
        ok = _rdseed64_step(&seed[i]);
    }
    end = clock();
    printf("RDSEED: %li, %.2lf\n", end - start, (double)(end - start) / bm);

    free(rand);
    free(seed);
    return 0;
}
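In case it matters: I believe the intrinsics require the RDRND/RDSEED extensions to be enabled at compile time, e.g. gcc -O2 -mrdrnd -mrdseed (or -march=native on a CPU that supports both).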
System Information
- Intel Core i7-6700 CPU @ 3.40 GHz
- Ubuntu 16.04
- gcc 5.4.0