In the first case, you read one file at a time, block by block. This is about as fast as disk I/O gets, provided the file is not heavily fragmented. When you finish the first file, the disk/OS moves to the beginning of the second file and continues with another efficient, linear read.
In the second case, you constantly switch between the first and second files, forcing the disk head to seek back and forth. This extra seek time (roughly 10 ms per seek) is the root of your problem.
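To make the two access patterns concrete, here is a minimal sketch (the function names and the 1 MiB block size are my own choices, not from the question): both functions read the same files and produce identical bytes, but the first reads each file linearly while the second alternates one block per file, which is what triggers the seeks on a spinning disk.

```python
CHUNK = 1 << 20  # read in 1 MiB blocks (an assumed, illustrative size)

def read_sequential(paths):
    """Read each file to the end before starting the next: one linear pass per file."""
    data = {p: bytearray() for p in paths}
    for p in paths:
        with open(p, "rb") as f:
            while chunk := f.read(CHUNK):
                data[p] += chunk
    return data

def read_interleaved(paths):
    """Alternate one block per file: on a spinning disk this forces a head
    seek between files on nearly every read."""
    data = {p: bytearray() for p in paths}
    files = {p: open(p, "rb") for p in paths}
    try:
        while files:
            for p in list(files):
                chunk = files[p].read(CHUNK)
                if chunk:
                    data[p] += chunk
                else:
                    files.pop(p).close()
    finally:
        for f in files.values():
            f.close()
    return data
```

The results are byte-for-byte identical; only the order of disk accesses differs, which is why timing them on a mechanical disk shows the gap described above.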
Also note that disk access is essentially serial, and your task is I/O-bound, so there is no way to speed it up by splitting it across multiple threads when you are reading from a single physical disk. Your approach is only justified if:
- each thread, in addition to reading from a file, also performs CPU-intensive or blocking processing that is an order of magnitude slower than the I/O;
- the files are located on different physical disks (a different partition is not enough) or on certain RAID configurations;
- you are reading from an SSD, where there is no seek penalty.
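The first case above can be sketched like this (a hypothetical example, not from the question): each thread reads a file and then hashes it, and since CPython's `hashlib` releases the GIL while hashing large buffers, one thread's hashing can genuinely overlap another thread's disk wait.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def read_and_hash(path):
    # The read is I/O-bound; the SHA-256 hashing is the per-file CPU work
    # that can overlap with another thread's disk wait.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(1 << 20):
            h.update(chunk)
    return path, h.hexdigest()

def hash_files(paths, workers=4):
    """Map each path to its SHA-256 hex digest using a small thread pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(read_and_hash, paths))
```

If the per-file processing were trivial (as in plain copying or counting bytes), the threads would just contend for the disk and you would be back to the seek-bound case.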