Technique for infinitely long pipes

There are two really easy ways to let one program send a data stream to another:

  • A Unix pipe, a TCP socket, or something like that. This requires constant attention from the consumer program, or the producer blocks; even raising the buffers from their typically tiny default sizes, it remains a real problem.
  • Regular files: the producer opens with O_APPEND, and the consumer simply reads whatever new data is available, at its own pace (sketched in code below). This needs no synchronization at all (as long as disk space is available), but Unix files can only be truncated at the end, not at the beginning, so the file keeps growing and fills the disk until both programs exit.

Is there an easy way to get the best of both: data stored on disk until it is read, and then freed? Obviously the programs could communicate through a database server or the like and avoid the problem that way, but I'm looking for something that integrates well with an ordinary Unix pipeline.
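To make the second option concrete, here is a minimal Python sketch of it; the file name stream.log and the one-second poll interval are illustrative, not part of the question. The producer appends, the consumer remembers its own offset between polls, and no locking is needed, but nothing ever shrinks the file:

    import os, sys, time

    LOG = "stream.log"   # hypothetical shared file

    # Producer: O_APPEND makes every write land at the current end of
    # the file, even if several writers append concurrently.
    def produce(record):
        fd = os.open(LOG, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        try:
            os.write(fd, record)
        finally:
            os.close(fd)

    # Consumer: read whatever appeared since the last poll, at leisure.
    def consume():
        offset = 0
        while True:
            if not os.path.exists(LOG):
                time.sleep(1)          # producer hasn't started yet
                continue
            with open(LOG, "rb") as f:
                f.seek(offset)
                data = f.read()
                offset = f.tell()
            if data:
                sys.stdout.buffer.write(data)   # stand-in for real processing
            time.sleep(1)              # poll at the consumer's own pace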

+3
4 answers

A relatively simple hand-rolled solution.

You could have the producer create files and keep writing until it reaches a certain size or number of records, whatever suits your application. The producer then closes the file and starts a new one, following an agreed naming algorithm.

The consumer reads the files in the same order, deleting each one once it has been fully processed, so only the unread backlog occupies disk space.
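A sketch of what that producer could look like, in Python; the spool directory, the outfile.NNNNNN naming (zero-padded so lexical order matches numeric order), and the 1 MiB threshold are all illustrative choices, not part of the answer:

    import os

    SPOOL = "spool"          # hypothetical spool directory shared with the consumer
    MAX_BYTES = 1 << 20      # rotate after ~1 MiB; tune to the application

    def producer(records):
        """Write records to spool/outfile.000000, outfile.000001, ... in order."""
        os.makedirs(SPOOL, exist_ok=True)
        records = iter(records)
        seq, done = 0, False
        while not done:
            tmp = os.path.join(SPOOL, "outfile.%06d.tmp" % seq)
            with open(tmp, "wb") as out:
                written = 0
                while written < MAX_BYTES:
                    rec = next(records, None)
                    if rec is None:      # records exhausted: publish and stop
                        done = True
                        break
                    out.write(rec)
                    written += len(rec)
            # Atomic rename publishes the finished chunk, so the consumer
            # never sees a half-written file.
            os.rename(tmp, tmp[:-len(".tmp")])
            seq += 1

Writing to a .tmp name and renaming at the end is one simple way to implement the "agreed naming algorithm" while guaranteeing the consumer only ever sees complete files.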

+3

A variant of the same idea: the producer writes to a sequence of numbered files, outfile.1, outfile.2, and so on, starting a new one whenever the current file reaches an agreed size.

The consumer then reads the files in numeric order and deletes each one once it has finished with it, so disk space is reclaimed as the stream is consumed.
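As a sketch of that consumer side (Python again, pairing with the hypothetical producer above; the spool directory and zero-padded outfile.* naming are assumptions carried over from that sketch):

    import glob, os, time

    SPOOL = "spool"          # must match the producer's spool directory

    def consumer(handle):
        """Feed each finished chunk to handle(), oldest first, then delete it."""
        while True:
            chunks = sorted(p for p in glob.glob(os.path.join(SPOOL, "outfile.*"))
                            if not p.endswith(".tmp"))   # skip in-progress files
            if not chunks:
                time.sleep(0.5)          # nothing published yet; poll again
                continue
            for path in chunks:
                with open(path, "rb") as f:
                    handle(f.read())
                os.unlink(path)          # this is what frees the disk space

Note that deleting after processing means a crash between handle() and unlink() would replay one chunk on restart; exactly-once delivery needs extra bookkeeping.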

+1

Take a look at socat. It can connect practically any two endpoints: tcp, fifo, pipe, stdio, and many more.

0

I don't know of an existing tool, but it isn't too difficult to write a small utility that takes a directory as an argument (or uses $TMPDIR) and uses select/poll to multiplex between reading from stdin, paging data out to a series of temporary files, and writing to stdout.
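A condensed Python sketch of such a utility, assuming stdin and stdout are pipes as in a normal pipeline. To stay short it spills to a single unlinked temporary file (tempfile's default directory already honors $TMPDIR), so space is only returned when it exits; a fuller version would rotate a series of files as in the first answer:

    import os, selectors, tempfile

    BUF = 65536

    def main():
        os.set_blocking(0, False)            # stdin
        os.set_blocking(1, False)            # stdout
        spill = tempfile.TemporaryFile()     # unlinked backlog file
        rd = wr = 0                          # drain / fill offsets into spill
        eof = False
        sel = selectors.DefaultSelector()
        sel.register(0, selectors.EVENT_READ)
        out_armed = False
        while not (eof and rd == wr):
            # Only wait on stdout while there is backlog to flush; otherwise
            # select() would spin, since stdout is almost always writable.
            if rd < wr and not out_armed:
                sel.register(1, selectors.EVENT_WRITE); out_armed = True
            elif rd == wr and out_armed:
                sel.unregister(1); out_armed = False
            for key, _ in sel.select():
                if key.fd == 0:
                    data = os.read(0, BUF)
                    if not data:             # producer closed its end
                        eof = True
                        sel.unregister(0)
                    else:
                        spill.seek(wr); spill.write(data); wr += len(data)
                else:
                    spill.seek(rd)
                    try:
                        rd += os.write(1, spill.read(BUF))
                    except BlockingIOError:
                        pass                 # stdout not ready after all

    if __name__ == "__main__":
        main()

Run as producer | buffer.py | consumer: the producer never blocks on a slow consumer, because the backlog lands in the temporary file instead of a kernel pipe buffer.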

0

Source: https://habr.com/ru/post/1756863/

