From the client's point of view, this is easy. You can use httplib's low-level interface - putrequest, putheader, endheaders, and send - to send whatever you want to the server, in pieces of any size. But you also need to indicate where your file ends.
If you know the total file size in advance, you can simply include a Content-Length header, and the server will stop reading the request body after that many bytes. The code might look like this:
```python
import httplib
import os.path

total_size = os.path.getsize('/path/to/file')
infile = open('/path/to/file', 'rb')  # binary mode, so chunks are raw bytes
conn = httplib.HTTPConnection('example.org')
conn.connect()
conn.putrequest('POST', '/upload/')
conn.putheader('Content-Type', 'application/octet-stream')
conn.putheader('Content-Length', str(total_size))
conn.endheaders()
while True:
    chunk = infile.read(1024)
    if not chunk:
        break
    conn.send(chunk)
resp = conn.getresponse()
```
If you do not know the total size in advance, the theoretical answer is chunked transfer encoding. The problem is that although it is widely used for responses, it seems to be less popular (albeit just as well defined) for requests. Stock HTTP servers may not be able to handle it out of the box. But if the server is also under your control, you can try to parse the chunks from the request body manually and reassemble them into the original file.
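As a sketch of what that manual handling would involve: in chunked encoding, each chunk is its size in hex, a CRLF, the data, and another CRLF, with a zero-size chunk terminating the body. A minimal encoder/decoder pair (not from the original answer; it ignores trailers and chunk extensions, and assumes the whole body is already in memory):

```python
def encode_chunked(chunks):
    # Format an iterable of byte strings as an HTTP/1.1 chunked body:
    # "<hex size>\r\n<data>\r\n" per chunk, terminated by "0\r\n\r\n".
    out = []
    for chunk in chunks:
        if chunk:  # a zero-length chunk would terminate the stream early
            out.append(('%x' % len(chunk)).encode('ascii'))
            out.append(b'\r\n' + chunk + b'\r\n')
    out.append(b'0\r\n\r\n')
    return b''.join(out)

def decode_chunked(body):
    # Reassemble the original payload from a chunked request body.
    parts = []
    pos = 0
    while True:
        crlf = body.index(b'\r\n', pos)
        size = int(body[pos:crlf], 16)
        if size == 0:
            break
        start = crlf + 2
        parts.append(body[start:start + size])
        pos = start + size + 2  # skip the data and its trailing CRLF
    return b''.join(parts)
```

A real server would of course decode incrementally from the socket rather than buffering the whole body first.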
Another option is to send each piece as a separate request (with its own Content-Length) over the same connection. But this also requires custom logic on the server, and in addition you need to maintain state between requests.
Update, posted on 2012-12-27: there is an nginx module that converts chunked requests into regular ones. It may be useful if you do not need true streaming (starting to process the request before the client has finished sending it).