I'm using the Server-Sent Events API in a Django application to stream live updates from my backend to the browser. The backend is Redis pub/sub. My Django view looks like this:
```
from django import http
from django.conf import settings

import events  # project module that defines the Listener class shown below


def event_stream(request):
    """
    Stream worker events out to the browser.
    """
    listener = events.Listener(
        settings.EVENTS_PUBSUB_URL,
        channels=[settings.EVENTS_PUBSUB_CHANNEL],
        buffer_key=settings.EVENTS_BUFFER_KEY,
        last_event_id=request.META.get('HTTP_LAST_EVENT_ID'),
    )
    return http.HttpResponse(listener, mimetype='text/event-stream')
```
And the events.Listener class, which the response iterates over, looks like this:
```
import redis


class Listener(object):
    def __init__(self, rcon_or_url, channels, buffer_key=None,
                 last_event_id=None):
        if isinstance(rcon_or_url, redis.StrictRedis):
            self.rcon = rcon_or_url
        elif isinstance(rcon_or_url, basestring):
            self.rcon = redis.StrictRedis(**utils.parse_redis_url(rcon_or_url))
        self.channels = channels
        self.buffer_key = buffer_key
        self.last_event_id = last_event_id
        self.pubsub = self.rcon.pubsub()
        self.pubsub.subscribe(channels)

    def __iter__(self):
```
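The `__iter__` body got cut off above. For context, here is a minimal sketch of roughly what that generator does, assuming the published payloads are already SSE-ready strings (the buffer_key / last_event_id replay is omitted, and the details are illustrative rather than my exact code):

```
    def __iter__(self):
        try:
            # Block on pubsub and push each published payload to the
            # browser in SSE wire format ("data: ...\n\n").
            for msg in self.pubsub.listen():
                if msg['type'] != 'message':
                    continue
                yield 'data: %s\n\n' % msg['data']
        finally:
            # This is the cleanup that never appears to run once the
            # browser goes away.
            self.pubsub.close()
            self.rcon.connection_pool.disconnect()
```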
With this setup I can successfully stream events out to the browser. However, the disconnect calls in the listener's finally block never seem to get called; I assume the listeners are still camped out waiting for messages from pubsub. As clients disconnect and reconnect, I can see the number of connections to my Redis instance climbing and never going down. Once it reaches around 1000, Redis starts struggling and consumes all the available CPU.
I would like to be able to detect when the client is no longer listening and close the Redis connection(s) at that point.
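To make that concrete, this is the kind of cleanup hook I'm after. PEP 3333 says the WSGI server should call close() on the response iterable when the request ends, including on an early client disconnect, so something like the sketch below ought to be enough; whether Django's HttpResponse actually propagates close() to the wrapped iterator in my setup is exactly what I can't pin down:

```
class Listener(object):
    # __init__ and __iter__ as above...

    def close(self):
        # If the WSGI layer calls this when the client drops the SSE
        # stream, the pubsub connection can be torn down here instead
        # of leaking.
        self.pubsub.close()
        self.rcon.connection_pool.disconnect()
```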
Things I've thought of:
- A connection pool. But as the redis-py README states, you cannot pass PubSub or Pipeline objects between threads.
- A middleware to handle the connections, or maybe just the disconnects. This won't work because a middleware's process_response() method gets called too early (before the HTTP headers have even been sent to the client). I need something that gets called when the client disconnects while I'm in the middle of streaming content to them.
- The request_finished and got_request_exception signals. The first, like a middleware's process_response(), seems to fire too soon. The second doesn't get called when a client disconnects mid-stream (see the sketch after this list).
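For reference, this is roughly how I hooked up the first signal (the receiver name is just for illustration); in my setup it fires as soon as the view returns, i.e. before the client has actually stopped listening:

```
from django.core.signals import request_finished
from django.dispatch import receiver


@receiver(request_finished)
def cleanup_event_stream(sender, **kwargs):
    # Fires when the view has returned its response, which with a
    # streaming iterator is long before the browser drops the SSE
    # connection, so there's nothing useful to clean up here yet.
    pass
```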
Final wrinkle: in production I'm using gevent so I can get away with keeping a lot of connections open at once. However, this connection-leak problem shows up whether I'm using plain old manage.py runserver, a gevent-monkeypatched runserver, or Gunicorn's gevent workers.