Is it practical on a small supercomputer?

I study WEP, and as part of that I play with the RC4 algorithm. I am trying to decide whether it is feasible to build a reverse lookup table (not the full one — I have nowhere to store that, and I'm not going to write it). To that end, I decided to check how many colliding outputs there are in the first 10 keystream bytes. That will help me judge how well a reverse table would work.

Of course, 64-bit RC4 has 2^64 possible keys, so this would mean roughly 2^128 comparisons. On top of that, the first 10 bytes have to be generated for each key, which takes about 266 cycles (256 for the RC4 key schedule, 10 for the output bytes themselves).
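For reference, a minimal sketch of that per-key cost in Java (the class and method names are mine, not from the linked pastie; the 256 swap steps of the key schedule plus the 10 output steps are where the cycle estimate comes from):

```java
// Minimal sketch: generate the first 10 RC4 keystream bytes for one key.
public class Rc4First10 {
    static byte[] first10(byte[] key) {
        int[] s = new int[256];
        for (int i = 0; i < 256; i++) s[i] = i;
        int j = 0;
        for (int i = 0; i < 256; i++) {              // KSA: the 256 swap steps
            j = (j + s[i] + (key[i % key.length] & 0xFF)) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
        }
        byte[] out = new byte[10];
        int i = 0;
        j = 0;
        for (int n = 0; n < 10; n++) {               // PRGA: 10 output bytes
            i = (i + 1) & 0xFF;
            j = (j + s[i]) & 0xFF;
            int t = s[i]; s[i] = s[j]; s[j] = t;
            out[n] = (byte) s[(s[i] + s[j]) & 0xFF];
        }
        return out;
    }

    public static void main(String[] args) {
        StringBuilder hex = new StringBuilder();
        for (byte b : first10("Key".getBytes())) hex.append(String.format("%02X", b));
        System.out.println(hex); // EB9F7781B734CA72A719 (well-known RC4 test vector)
    }
}
```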

So, down to business:

On a supercomputer with 100 cores, could I perform about 2^135 calculations in 20 days?

(20 days is the limit before they kick me off.) I might only get 8 cores, or I might get 400+, but I'm assuming 100.

In case it matters, my program is written in Java. http://pastie.org/2118864

+6
5 answers

An interesting question, and very hard to answer accurately. Scalability is one of those "try it and see" things most of the time.

Note that, due to other factors, you will get less-than-linear scaling on a multi-core system.

Say your program can compare n keys per second on one core. An ideal (i.e. linearly scaling) 100-core system would then compare 100n keys per second. Checking all the keys would take (2^135 / 100n) / 86400 days (the worst case; realistically, expect about half that).

If n is 1000, it would take 5041220250680569829087031221211 days, which is roughly 10^18 times some estimates of the age of the universe.
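That arithmetic is easy to double-check with BigInteger (a quick sketch; the class name is mine):

```java
import java.math.BigInteger;

// Worst-case days to check all 2^135 comparisons at 100 cores × 1000 keys/s.
public class BruteDays {
    public static void main(String[] args) {
        BigInteger total = BigInteger.valueOf(2).pow(135);        // all comparisons
        BigInteger perSecond = BigInteger.valueOf(100L * 1000);   // 100 cores, n = 1000 each
        BigInteger days = total.divide(perSecond).divide(BigInteger.valueOf(86_400));
        System.out.println(days); // 5041220250680569829087031221211
    }
}
```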

So I'm going to say... no :) Cryptographic algorithms are designed to resist exactly this kind of attack. Additionally, Java would be the last language I'd choose for writing such an application :P

+4

In an ideal world, it works out to roughly:

2^135 operations ÷ 20 days ÷ 24 hours/day ÷ 60 min/hour ÷ 60 s/min ÷ 100 cores ≈ 10^32 operations per second per core, if my math isn't off.

You would need cores running at 10^32 Hz, each performing one comparison per cycle — and in reality a comparison takes several cycles. That's... not exactly available at the moment, to say the least. The best you can get out of a supercomputer today is probably somewhere in the neighborhood of ~10 GHz = 10^10 Hz per core, if you're lucky.
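The same order-of-magnitude estimate as a quick BigInteger sketch (the class name is mine):

```java
import java.math.BigInteger;

// Required per-core rate: 2^135 ops / (20 days in seconds) / 100 cores.
public class RequiredClockRate {
    public static void main(String[] args) {
        BigInteger ops = BigInteger.valueOf(2).pow(135);
        long seconds = 20L * 24 * 60 * 60;                        // 20 days
        BigInteger perCorePerSec = ops.divide(BigInteger.valueOf(seconds * 100));
        System.out.println(perCorePerSec);                        // ~2.5 × 10^32 ops/s per core
    }
}
```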

(And all this ignores Amdahl’s law ...)

+4

The numbers below are a bit of a fiction, and deliberately so: they are overly optimistic to keep the math easy. Suppose that:

  • a single core can handle 4 billion (2^32) operations per second (an extremely crude, optimistic figure),

  • there are 86,400 seconds per day (rounded up to 2^17),

  • you have 20 days (rounded up to 2^5),

  • and 100 cores (rounded up to 2^7).

Then... 2^32 × 2^17 × 2^5 × 2^7 = 2^(32 + 17 + 5 + 7) = 2^61 ... so:

Not a chance. Not even remotely close. The amount of computation left over is so overwhelming that I can't even picture how big it really is. Sorry :-)

(With the figures above, it would take 2^79 days...)
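The power-of-two bookkeeping above can be sanity-checked in a few lines (names are mine):

```java
import java.math.BigInteger;

// Verify the rounded powers-of-two estimate from the answer above.
public class PowersOfTwo {
    public static void main(String[] args) {
        BigInteger two = BigInteger.valueOf(2);
        // 2^32 ops/s per core × 2^17 s/day × 2^5 days × 2^7 cores
        BigInteger done = two.pow(32 + 17 + 5 + 7);       // = 2^61 ops in 20 days
        // without the 20-day factor: work done per day across all cores
        BigInteger perDay = two.pow(32 + 17 + 7);         // = 2^56 ops/day
        BigInteger days = two.pow(135).divide(perDay);    // days needed
        System.out.println(done.equals(two.pow(61)));     // true
        System.out.println(days.equals(two.pow(79)));     // true: 2^79 days
    }
}
```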

+3

People do not appreciate just how big a number like this is.

2^135 is approximately 4 × 10^40, or, written out in full, 43556142965880123323311949751266331066368.

Suppose you had a computer capable of 1 exaflop, much faster than anything we have today. If it could do one of these comparisons in EVERY single floating-point operation, it could do 10^18 of them per second. That still leaves you with about 4 × 10^22 seconds of work. There are roughly 31,536,000 seconds in a year, so your little project would still take more than 10^15 years.
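A quick sketch of that exaflop arithmetic (the class name is mine):

```java
import java.math.BigInteger;

// Years of work at 10^18 comparisons per second.
public class ExaflopYears {
    public static void main(String[] args) {
        BigInteger ops = BigInteger.valueOf(2).pow(135);
        BigInteger exaflops = BigInteger.TEN.pow(18);       // 10^18 ops per second
        BigInteger seconds = ops.divide(exaflops);          // ~4 × 10^22 s
        BigInteger years = seconds.divide(BigInteger.valueOf(31_536_000));
        System.out.println(years);                          // ~1.4 × 10^15 years
    }
}
```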

Well, depending on who you ask, the universe is somewhere between 6,000 years and 13 billion or so years old.

Java or not, the answer is no. (Unless you're Skynet?)

+3

There are at least 2^240 atoms in the universe, so you wouldn't even need half of them to do this computation, even at one calculation per atom per day. Then again, didn't Bill Gates once say: "Who will ever need more than half the atoms in the universe?"

+1

Source: https://habr.com/ru/post/891332/

