When running a Spark job on an AWS cluster, I believe I have correctly modified my code to distribute both the data and the algorithm it uses (a stripped-down sketch of the job follows the output below). But the console output looks like this:
[Stage 3:> (0 + 2) / 1000]
[Stage 3:> (1 + 2) / 1000]
[Stage 3:> (2 + 2) / 1000]
[Stage 3:> (3 + 2) / 1000]
[Stage 3:> (4 + 2) / 1000]
[Stage 3:> (5 + 2) / 1000]
[Stage 3:> (6 + 2) / 1000]
[Stage 3:> (7 + 2) / 1000]
[Stage 3:> (8 + 2) / 1000]
[Stage 3:> (9 + 2) / 1000]
[Stage 3:> (10 + 2) / 1000]
[Stage 3:> (11 + 2) / 1000]
[Stage 3:> (12 + 2) / 1000]
[Stage 3:> (13 + 2) / 1000]
[Stage 3:> (14 + 2) / 1000]
[Stage 3:> (15 + 2) / 1000]
[Stage 3:> (16 + 2) / 1000]
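For context, here is a stripped-down sketch of the kind of job I am running; the S3 paths and the expensiveAlgorithm function are placeholders, not my actual code:

import org.apache.spark.sql.SparkSession

object DistributedJob {
  // Placeholder for the real per-record algorithm; it runs inside map(),
  // so Spark ships it to the executors along with the partitioned data.
  def expensiveAlgorithm(line: String): String = line.toUpperCase

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DistributedJob").getOrCreate()
    val sc = spark.sparkContext

    // Split the input into 1000 partitions -> the stage reports "/ 1000" tasks.
    val data = sc.textFile("s3://my-bucket/input/").repartition(1000)

    // Apply the algorithm to every record, in parallel across the cluster.
    val results = data.map(expensiveAlgorithm)

    results.saveAsTextFile("s3://my-bucket/output/")
    spark.stop()
  }
}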
Am I correctly interpreting (0 + 2) / 1000 as only one dual-core processor working through the 1000 tasks, two at a time? With 5 nodes (10 processors in total), why don't I see (0 + 10) / 1000?
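My assumption was that the number of concurrently running tasks is executors times cores per executor, so explicit settings like the ones below should show 10 running tasks at once; the values 5 and 2 are only illustrative, not necessarily what my cluster actually uses:

import org.apache.spark.sql.SparkSession

// Hypothetical explicit executor settings; the expectation is that
// concurrent tasks = spark.executor.instances * spark.executor.cores (10 here).
val spark = SparkSession.builder()
  .appName("DistributedJob")
  .config("spark.executor.instances", "5")  // one executor per worker node
  .config("spark.executor.cores", "2")      // two task slots per executor
  .getOrCreate()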