I am writing a client-server application structured as follows: client (C#) ↔ server (Twisted; an FTP proxy plus some additional functionality) ↔ FTP server
The server has two classes: my own protocol class, derived from the LineReceiver protocol, and FTPClient from twisted.protocols.ftp.
But when the client sends or receives large files (10 GB - 20 GB), the server hits a MemoryError. I do not use any buffers in my own code. As far as I can tell, this happens because after a call to transport.write(data) the data is appended to the reactor's internal write buffer (correct me if I am wrong).
What should I use to avoid this problem? Or should I change my approach altogether?
I found out that for large streams of data I am supposed to use the IConsumer and IProducer interfaces. But in the end they will still call the transport.write method, so the effect will be the same. Or am I wrong?
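As far as I understand, the producer/consumer machinery is wired roughly like this (a minimal sketch, not my real code; SlowFeeder and its chunk source are made-up names):

from zope.interface import implementer
from twisted.internet.interfaces import IPushProducer


@implementer(IPushProducer)
class SlowFeeder(object):
    # Hypothetical push producer: it writes chunks to a consumer
    # (e.g. a TCP transport) and gets paused/resumed by that consumer.
    def __init__(self, consumer):
        self._consumer = consumer
        self._paused = False

    def pauseProducing(self):
        # Called by the consumer when its internal write buffer is full.
        self._paused = True

    def resumeProducing(self):
        # Called when the consumer's buffer has drained; here one would
        # continue feeding data with self._consumer.write(chunk).
        self._paused = False

    def stopProducing(self):
        self._paused = True


# Wiring: the transport is the IConsumer; True means "streaming (push) producer".
# someProtocol.transport.registerProducer(SlowFeeder(someProtocol.transport), True)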
UPD:
Here is the logic of the file upload/download (from the FTP server, through the Twisted server, to a client on Windows):
The client sends some headers to the Twisted server and then starts sending the file. The Twisted server parses the headers, then (if necessary) calls setRawMode(), opens an FTP connection, receives/sends bytes from/to the client, and afterwards closes the connections. Here is the part of the code that downloads files:
The FTPManager class:
from twisted.internet.protocol import Protocol, connectionDone


def _ftpCWDSuccees(self, protocol, fileName):
    self._ftpClientAsync.retrieveFile(fileName, FileReceiver(protocol))


class FileReceiver(Protocol):
    def __init__(self, proto):
        self.__proto = proto

    def dataReceived(self, data):
        self.__proto.transport.write(data)

    def connectionLost(self, why=connectionDone):
        self.__proto.connectionLost(why)
The main class of the proxy server:
class SSDMProtocol(LineReceiver): ...
After the SSDMProtocol object (call it obSSDMProtocol) has parsed the headers, it calls a method that opens an FTP connection (FTPClient from twisted.protocols.ftp), stores that client in the FTPManager's _ftpClientAsync field, and then calls _ftpCWDSuccees(self, protocol, fileName) with protocol = obSSDMProtocol. When the file bytes arrive from the FTP server, they are passed to dataReceived(self, data) of the FileReceiver object.
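Simplified, the flow looks roughly like this (a stripped-down sketch, not my real code: it assumes, just for illustration, that headers are "key: value" lines terminated by a blank line, and openFTPConnection is a placeholder name):

from twisted.protocols.basic import LineReceiver


class SSDMProtocol(LineReceiver):
    def __init__(self, ftpManager):
        self.ftpManager = ftpManager  # placeholder for however it is wired up
        self.headers = {}

    def lineReceived(self, line):
        if line:
            # each non-empty line is treated as a "key: value" header (simplified)
            key, _, value = line.partition(b":")
            self.headers[key.strip()] = value.strip()
        else:
            # headers finished: switch to raw mode for the file body and open
            # the FTP connection; on success the FTPManager calls
            # _ftpCWDSuccees(protocol=self, fileName=...) and the file bytes
            # arrive in FileReceiver.dataReceived()
            self.setRawMode()
            self.ftpManager.openFTPConnection(self)  # placeholder method name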
And when self.__proto.transport.write(data) is called, the data is added to the internal buffer faster than it can be sent out to the client, so the server runs out of memory. Maybe I can stop reading from the FTP connection when the buffer reaches a certain size and resume reading after the buffer has been flushed to the client? Or something like that?
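Something along these lines is what I have in mind (a rough sketch modelled on how twisted.protocols.portforward registers one transport as a producer on the other; I am not sure it is the right approach):

from twisted.internet.protocol import Protocol, connectionDone


class FileReceiver(Protocol):
    def __init__(self, proto):
        self.__proto = proto

    def connectionMade(self):
        # self.transport is the FTP data connection; registering it as a push
        # producer on the client-side transport lets that transport call
        # pauseProducing()/resumeProducing() on it as its own write buffer
        # fills up and drains.
        self.__proto.transport.registerProducer(self.transport, True)

    def dataReceived(self, data):
        self.__proto.transport.write(data)

    def connectionLost(self, why=connectionDone):
        self.__proto.transport.unregisterProducer()
        self.__proto.connectionLost(why)

If I understand it correctly, the client-side transport would then stop reading from the FTP data socket while its buffer is backed up and resume once it has been flushed. Would that be the right way to do it?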