I have been experimenting with parallel streams and comparing their behaviour against what the API documentation and other supporting material I have read claim.
I create two parallel streams and call distinct() on both, one where the stream is ordered and one where it is unordered. I then print the results with forEachOrdered() (to make sure I see the resulting encounter order of the stream after distinct() has run), and I can clearly see that the unordered version does not preserve the original order; with a large dataset this should obviously improve parallel performance.
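Roughly, the comparison looks like this (a minimal sketch; the class name, method names, and dataset are illustrative, not my exact test code):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class DistinctDemo {
    // distinct() on an ordered parallel stream: encounter order is preserved,
    // so the result is the first occurrence of each value, in order.
    static List<Integer> distinctOrdered() {
        return IntStream.range(0, 1_000_000)
                .map(i -> i % 100)   // lots of duplicates; first occurrences appear as 0..99
                .boxed()
                .parallel()
                .distinct()
                .collect(Collectors.toList());
    }

    // distinct() on an unordered parallel stream: any order is allowed, which
    // frees the implementation from the bookkeeping needed to keep encounter order.
    static List<Integer> distinctUnordered() {
        return IntStream.range(0, 1_000_000)
                .map(i -> i % 100)
                .boxed()
                .parallel()
                .unordered()
                .distinct()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println("ordered   (first 10): " + distinctOrdered().subList(0, 10));
        System.out.println("unordered (first 10): " + distinctUnordered().subList(0, 10));
    }
}
```

The ordered variant always prints 0..99 in order; the unordered variant typically comes back shuffled, since the spec no longer constrains the order.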
There are API notes indicating that the operations limit() and skip() should also be more efficient in parallel when the stream is unordered, because instead of having to take the first n elements the implementation can take any n elements. I tried to test this in the same way as described above, but the result for the ordered and unordered parallel streams is always the same. In other words, when I print the result after limit(), even for an unordered (parallel) stream, it still always selects the first n elements.
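This is the kind of test I mean (again a sketch with illustrative names; n = 10 here, but I varied it):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class LimitDemo {
    // limit() on an ordered parallel stream: the spec guarantees
    // exactly the first 10 elements, in encounter order.
    static List<Integer> limitOrdered() {
        return IntStream.range(0, 1_000_000)
                .boxed()
                .parallel()
                .limit(10)
                .collect(Collectors.toList());
    }

    // limit() on an unordered parallel stream: the spec permits ANY 10
    // elements, yet in my runs the first 10 come back anyway.
    static List<Integer> limitUnordered() {
        return IntStream.range(0, 1_000_000)
                .boxed()
                .parallel()
                .unordered()
                .limit(10)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println("ordered:   " + limitOrdered());
        System.out.println("unordered: " + limitUnordered());
    }
}
```

Both print [0, 1, 2, ..., 9] for me, even though the unordered version is allowed to return any ten elements.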
Can anyone explain this? I tried varying the size of my input dataset and the value of n, and it made no difference. I would have thought it would grab any n elements and optimise parallel performance. Has anyone actually seen this happen in practice, and could you provide a solution that demonstrates this behaviour consistently?