Let's start with a simple example. The HTTP data stream arrives in the following format:
MESSAGE_LENGTH, 2 bytes MESSAGE_BODY, REPEAT...
I am currently using urllib2 to extract and process streaming data, as shown below:
length = response.read(2) while True: data = response.read(length) DO DATA PROCESSING
It works, but since all messages are 50-100 bytes in size, the above method limits the size of the buffer each time it reads, which can hurt performance.
Can separate streams be used to retrieve and process data?
, , . , , , , . -, , , .
, "", "", , , .: -)
, , , , Python , . , , , , , , .
, , URL-, , , , URL- , "". , URL-, -, URL- "outqueue", , .
, NFS, ( ), , ( ).
.
, , .
httplib Python 2.2.3 , , ( select() httplib).
, , ( , httplib- chunked http-, read()).
statemachine, , , Queue.Queue, .
, (zlib.ADLER32) . 40 / HTTP/chunked.
Source: https://habr.com/ru/post/1720652/More articles:Accessing the SimpleXML Object Attribute - objectDoes Visual Studio 2010 support jQuery? - jqueryDataBinding with DataRow - problems - c #How can I use multiple schemas to connect to a database in Eclipse DTP? - eclipseCreate and create a clean "ActionScript project" using the Flex SDK for Linux - flexHow to check sql query syntax? - javaDifference between QuickPraph from CodePlex and CodeProject - comparisonBlack border around symbols when drawing. Image in transparent bitmap - c ++UUID namespace and name. How to get it? - namespacesindex directory in .htaccess - conditionalAll Articles