Is it possible to use socket objects with Python multiprocessing? socket.close() doesn't seem to work

I am writing a server that uses a multiprocessing.Process for each client. socket.accept() is called in the parent process, and the connection object is passed as an argument to the Process.

The problem is that when socket.close() is called, the socket does not actually close. The client's recv() should return immediately after close() is called on the server. This is the case when using threading.Thread or when handling requests in the main thread, but with multiprocessing the client's recv() seems to hang forever.

Some sources indicate that socket objects should be shared between processes as file descriptors using multiprocessing.Pipe and multiprocessing.reduction, but that does not seem relevant here.
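For reference, the descriptor-passing approach those sources describe can be sketched in modern Python (3.9+, Unix) with socket.send_fds()/recv_fds(), which is essentially what multiprocessing.reduction does internally. This is a minimal single-process illustration, not the fix for the question:

```python
import socket

# Channel over which the descriptor will travel, and a "connection"
# whose descriptor we hand over (socketpair stands in for a real
# accepted connection).
parent, child = socket.socketpair()
a, b = socket.socketpair()

# Send b's file descriptor across the channel alongside a short message.
socket.send_fds(parent, [b"fd"], [b.fileno()])

# Receive it on the other side and rewrap it as a socket object.
msg, fds, flags, addr = socket.recv_fds(child, 10, 1)
received = socket.socket(fileno=fds[0])

a.sendall(b"hello")
data = received.recv(5)   # the rewrapped descriptor is a live endpoint
print(data)               # b'hello'
```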

EDIT: I am using Python 2.7.4 on a 64-bit version of Linux.

The following is an example implementation demonstrating this problem.

server.py

    import socket
    from multiprocessing import Process
    #from threading import Thread as Process

    s = socket.socket()
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('', 5001))
    s.listen(5)

    def process(s):
        print "accepted"
        s.close()
        print "closed"

    while True:
        print "accepting"
        c, _ = s.accept()
        p = Process(target=process, args=(c,))
        p.start()
        print "started process"

client.py

    import socket

    s = socket.socket()
    s.connect(('', 5001))
    print "connected"
    buf = s.recv(1024)
    print "buf: '" + buf + "'"
    s.close()
1 answer

The problem is that the socket is not closed in the parent process. The parent's copy of it therefore remains open, which causes the symptom you are observing.
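The mechanism at work is that a connection only delivers EOF once every descriptor referring to its endpoint is closed, and fork() duplicates descriptors into the child. The following sketch uses os.dup() to stand in for the duplicate that fork()/multiprocessing creates, assuming a Unix system with socketpair():

```python
import os
import socket

a, b = socket.socketpair()
dup_fd = os.dup(b.fileno())   # second descriptor for b's endpoint,
                              # like the copy a forked child would hold

b.close()                     # original closed, but dup_fd keeps it open
a.settimeout(0.5)
try:
    a.recv(1024)              # no EOF yet: the duplicate is still open
    got_eof_early = True
except socket.timeout:
    got_eof_early = False

os.close(dup_fd)              # last descriptor gone
data = a.recv(1024)           # now recv() returns b'' (EOF) immediately
print(got_eof_early, data)    # False b''
```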

Immediately after forking the child process to handle the connection, the parent process must close its copy of the connected socket, for example:

    while True:
        print "accepting"
        c, _ = s.accept()
        p = Process(target=process, args=(c,))
        p.start()
        print "started process"
        c.close()

Source: https://habr.com/ru/post/1488078/

