Transform stream that prepends a string to each line

I create a child process as follows:

    const n = cp.spawn('bash');
    n.stdout.pipe(process.stdout);
    n.stderr.pipe(process.stderr);

I am looking for a transform stream that prepends something like "[child process]" to the beginning of each line from the child, so I can tell whether output comes from the child or from the parent process.

So it will look like this:

    const getTransformPrepender = function (pre: string): Transform { return ... };

    n.stdout.pipe(getTransformPrepender('[child]')).pipe(process.stdout);
    n.stderr.pipe(getTransformPrepender('[child]')).pipe(process.stderr);

Does anyone know whether an existing transform package like this exists, or how to write one?

Here is what I have:

    import * as stream from 'stream';

    export default function (pre: string) {
      let saved = '';
      return new stream.Transform({
        transform(chunk, encoding, cb) {
          cb(null, String(pre) + String(chunk));
        },
        flush(cb) {
          this.push(saved);
          cb();
        }
      });
    }

but I am afraid it will not handle the edge cases, where a single chunk may not contain a whole line (for very long lines).

It seems like the answer to this question is here: https://strongloop.com/strongblog/practical-examples-of-the-new-node-js-streams-api/

but with this addition: https://twitter.com/the1mills/status/886340747275812865

2 answers

There are only three cases that need to be handled correctly:

  • A chunk containing exactly one whole line
  • A chunk containing multiple lines
  • A chunk containing only part of a line

Here is the algorithm that handles all three situations:

  • Receive a chunk of data
  • Scan the chunk for newlines
  • Whenever a newline is found, take everything before it (including the newline) and push it as one line, with whatever changes you need
  • Repeat until the entire chunk is consumed (no data left) or no more newlines are found (some data remains; save it for later)

And here is a real implementation, with comments describing why each step is needed.

Please note that for performance reasons, I do not convert buffers to classic JS strings.

    const { Transform } = require('stream')

    const prefix = Buffer.from('[worker]: ')

    const prepender = new Transform({
      transform(chunk, encoding, done) {
        this._rest = this._rest && this._rest.length
          ? Buffer.concat([this._rest, chunk])
          : chunk

        let index

        // As long as we keep finding newlines, keep making slices of the buffer
        // and push them to the readable side of the transform stream
        while ((index = this._rest.indexOf('\n')) !== -1) {
          // The `end` parameter is non-inclusive, so increase it to include the newline we found
          const line = this._rest.slice(0, ++index)
          // `start` is inclusive, but we are already one char ahead of the newline -> all good
          this._rest = this._rest.slice(index)
          // We have a single line here! Prepend the string we want
          this.push(Buffer.concat([prefix, line]))
        }

        return void done()
      },

      // Called before the end of the input so we can handle any remaining
      // data that we have saved
      flush(done) {
        // If we have any remaining data in the cache, send it out
        if (this._rest && this._rest.length) {
          return void done(null, Buffer.concat([prefix, this._rest]))
        }
        done()
      },
    })

    process.stdin.pipe(prepender).pipe(process.stdout)

This library is a work in progress:

https://github.com/ORESoftware/prepend-transform

but it is designed to solve the problem at hand:

    import pt from 'prepend-transform';
    import * as cp from 'child_process';

    const n = cp.spawn('bash');
    n.stdout.pipe(pt('[child stdout]')).pipe(process.stdout);
    n.stderr.pipe(pt('[child stderr]')).pipe(process.stderr);

Source: https://habr.com/ru/post/1269046/
