Memory problems in knex.js when using streams

I am trying to export an entire sqlite3 database table to CSV using knex.js. Since the table can contain up to 300,000 rows, I use streams to avoid memory problems. But when I look at the memory usage of my application, it climbs above 800 MB, or I get an "out of memory" error.

How can I handle a large query result using knex.js with an sqlite3 database?

Below is the sample code:

    knex.select().from(table).stream(function (stream) {
      var stringifier = stringify(opts);
      var fileStream = fs.createWriteStream(file);
      var i = 0;

      stringifier.on('readable', function () {
        var row;
        while (row = stringifier.read()) {
          fileStream.write(row);
          console.log("row " + i++); // debug
        }
      });

      fileStream.once('open', function (fd) {
        stream.pipe(stringifier);
      });
    });

EDIT

It seems that the knex.js streams for the sqlite3 dialect are "fake" streams. Below is the source code of the stream function for sqlite3 in knex:

    Runner_SQLite3.prototype._stream = Promise.method(function(sql, stream, options) {
      /*jshint unused: false*/
      var runner = this;
      return new Promise(function(resolver, rejecter) {
        stream.on('error', rejecter);
        stream.on('end', resolver);
        return runner.query(sql).map(function(row) {
          stream.write(row);
        }).catch(function(err) {
          stream.emit('error', err);
        }).then(function() {
          stream.end();
        });
      });
    });

We can see that it waits for the whole query to complete and buffers the full result array in memory, and only then writes the rows to the stream one by one, so the stream provides no memory benefit.
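If you only need the rows one at a time, one workaround is to bypass knex for the read and use the underlying sqlite3 driver's each() method, which invokes a callback per row as the statement is stepped instead of buffering the whole result set. Below is a minimal sketch, assuming a local database file and csv-stringify as in the question; the database path, table name, and output file are placeholder names:

    var sqlite3 = require('sqlite3');
    var stringify = require('csv-stringify');
    var fs = require('fs');

    // Placeholder names: adjust the database path, table, and output file.
    var db = new sqlite3.Database('./mydb.sqlite');
    var stringifier = stringify({ header: true });
    stringifier.pipe(fs.createWriteStream('export.csv'));

    // db.each() delivers one row at a time, so the full result set
    // is never materialized in memory.
    db.each('SELECT * FROM mytable', function (err, row) {
      if (err) throw err;
      stringifier.write(row);
    }, function (err, rowCount) {
      // Completion callback: all rows have been delivered.
      if (err) throw err;
      stringifier.end();
      console.log('exported ' + rowCount + ' rows');
    });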

VERSION:

  • Knex.js 0.7.5
  • Node.js 0.12

Thanks for your help.

1 answer

I don't think there is a solution within knex.js streams for sqlite3. Instead, I step through the data with a limit and an offset, writing each page of rows to the write stream, so only one page is held in memory at a time. An example implementation for those who want it:

    exportTable: function(table, writeStream) {
      var totalRows;
      var rowLimit = _config.ROW_LIMIT;

      return DatabaseManager.countAll(table).then(function(count) {
        totalRows = count[0]['count(*)'];
        // One reduce iteration per page of rowLimit rows.
        var iterations = new Array(Math.ceil(totalRows / rowLimit));

        return Promise.reduce(iterations, function(total, item, index) {
          // Fetch one page and write its rows to the stream before
          // fetching the next, so at most rowLimit rows are in memory
          // at a time. Note: without an orderBy, the row order across
          // pages is not guaranteed to be stable.
          return _knex.select().from(table)
            .limit(rowLimit)
            .offset(index * rowLimit)
            .map(function(row) {
              writeStream.write(row);
            });
        }, 0);
      }).catch(function(err) {
        console.log(err);
        return Promise.reject(err);
      });
    }
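A possible usage sketch, assuming the exportTable function above is exposed on DatabaseManager and the rows are piped through csv-stringify as in the question (the table and file names are placeholders):

    var fs = require('fs');
    var stringify = require('csv-stringify');

    // Pipe the CSV text into the output file; exportTable writes
    // row objects into the stringifier.
    var stringifier = stringify({ header: true });
    stringifier.pipe(fs.createWriteStream('export.csv'));

    DatabaseManager.exportTable('mytable', stringifier)
      .then(function () {
        stringifier.end(); // flush any buffered CSV output
      })
      .catch(function (err) {
        console.error('export failed', err);
      });

Since each iteration fetches at most ROW_LIMIT rows, memory use stays bounded regardless of the table size.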
