I am writing a server that uses a multiprocessing.Process for each client. socket.accept() is called in the parent process and the connection object is passed to the process as an argument.
The problem is that calling socket.close() does not seem to close the connection. The client's recv() should return immediately after close() is called on the server side. This is what happens when using threading.Thread or when handling requests in the main thread, but with multiprocessing the client's recv() seems to hang forever.
Some sources suggest that socket objects should be shared as file descriptors using multiprocessing.Pipe and multiprocessing.reduction, but that does not seem to make a difference here.
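For reference, the descriptor-passing approach those sources describe looks roughly like the sketch below (written against the undocumented multiprocessing.reduction module of Python 2.7 on Linux; the handle_client function and the Pipe plumbing are only illustrative, and I have not verified that it changes the behaviour):

import os
import socket
from multiprocessing import Process, Pipe
from multiprocessing.reduction import reduce_handle, rebuild_handle

def handle_client(channel):
    # Receive the pickled handle and turn it back into a socket object.
    fd = rebuild_handle(channel.recv())
    conn = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM)
    os.close(fd)  # fromfd() dups the descriptor, so drop the original
    conn.send("hello\n")
    conn.close()

s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', 5001))
s.listen(5)

while True:
    c, _ = s.accept()
    parent_end, child_end = Pipe()
    Process(target=handle_client, args=(child_end,)).start()
    parent_end.send(reduce_handle(c.fileno()))  # ship the descriptor to the child
    c.close()                                   # parent drops its own copy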
EDIT: I am using Python 2.7.4 on a 64-bit version of Linux.
The following is an example implementation demonstrating this problem.
server.py
import socket
from multiprocessing import Process
#from threading import Thread as Process

s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', 5001))
s.listen(5)

def process(s):
    # 's' here is the accepted connection passed in from the parent
    print "accepted"
    s.close()
    print "closed"

while True:
    print "accepting"
    c, _ = s.accept()
    p = Process(target=process, args=(c,))
    p.start()
    print "started process"
client.py
import socket

s = socket.socket()
s.connect(('', 5001))
print "connected"
buf = s.recv(1024)
print "buf: '" + buf + "'"
s.close()