Python SQLAlchemy - "MySQL server has gone away"

Let's look at the following snippet -

    from sqlalchemy import create_engine, event, exc
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.pool import Pool


    @event.listens_for(Pool, "checkout")
    def check_connection(dbapi_con, con_record, con_proxy):
        cursor = dbapi_con.cursor()
        try:
            cursor.execute("SELECT 1")  # could also be dbapi_con.ping(),
                                        # not sure what is better
        except exc.OperationalError, ex:
            if ex.args[0] in (2006,   # MySQL server has gone away
                              2013,   # Lost connection to MySQL server during query
                              2055):  # Lost connection to MySQL server at '%s', system error: %d
                # caught by pool, which will retry with a new connection
                raise exc.DisconnectionError()
            else:
                raise

    engine = create_engine('mysql://user:puss123@10.0.51.5/dbname',
                           pool_recycle=3600, pool_size=10,
                           listeners=[check_connection])
    session_factory = sessionmaker(bind=engine, autoflush=True, autocommit=False)
    db_session = session_factory()

    # ... some code that may take several hours to run ...

    db_session.execute('SELECT * FROM ' + P_TABLE + " WHERE id = '%s'" % id)

I thought that registering the check_connection function on the checkout event would solve this, but it didn't. So the question is: how can I make SQLAlchemy handle connection drops, so that every time I call execute() it checks whether the connection is still alive and, if it is not, re-establishes it?

---- ---- UPDATE

SQLAlchemy Version - 0.7.4

---- ---- UPDATE

    from sqlalchemy import create_engine, event
    from sqlalchemy.exc import DisconnectionError
    from sqlalchemy.orm import sessionmaker


    def checkout_listener(dbapi_con, con_record, con_proxy):
        try:
            try:
                dbapi_con.ping(False)
            except TypeError:
                dbapi_con.ping()
        except dbapi_con.OperationalError as exc:
            if exc.args[0] in (2006, 2013, 2014, 2045, 2055):
                raise DisconnectionError()
            else:
                raise

    engine = create_engine(CONNECTION_URI, pool_recycle=3600, pool_size=10)
    event.listen(engine, 'checkout', checkout_listener)
    session_factory = sessionmaker(bind=engine, autoflush=True, autocommit=False)
    db_session = session_factory()

The session_factory is passed to each newly created thread:

    import threading
    import Queue

    from sqlalchemy.orm import scoped_session


    class IncidentProcessor(threading.Thread):

        def __init__(self, queue, session_factory):
            if not isinstance(queue, Queue.Queue):
                raise TypeError, "first argument should be of %s" % type(Queue.Queue)
            self.queue = queue
            self.db_session = scoped_session(session_factory)
            threading.Thread.__init__(self)

        def run(self):
            self.db_session().execute('SELECT * FROM ...')
            # ... some code that takes a lot of time ...
            self.db_session().execute('SELECT * FROM ...')

Now, when the second execute() runs after that long period of time, I get the "MySQL server has gone away" error.
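
For reference (this is not part of the original question), the server-side idle timeout that triggers this error can be inspected directly; a minimal diagnostic sketch using the db_session defined above:

    # Sketch only: show MySQL's idle-connection timeout. Connections that stay
    # idle longer than wait_timeout are closed by the server, which later shows
    # up on the client as "MySQL server has gone away".
    result = db_session.execute("SHOW VARIABLES LIKE 'wait_timeout'")
    print(result.fetchall())   # e.g. [('wait_timeout', '28800')] (seconds)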

---- ---- 3 ANSWERS

There was a discussion about this, and this document describes the problem quite nicely, so I used its recommended approach for handling such errors: http://discorporate.us/jek/talks/SQLAlchemy-EuroPython2010.pdf

It looks something like this:

    from sqlalchemy import create_engine, event
    from sqlalchemy.exc import DisconnectionError


    def checkout_listener(dbapi_con, con_record, con_proxy):
        try:
            try:
                dbapi_con.ping(False)
            except TypeError:
                dbapi_con.ping()
        except dbapi_con.OperationalError as exc:
            if exc.args[0] in (2006, 2013, 2014, 2045, 2055):
                raise DisconnectionError()
            else:
                raise

    db_engine = create_engine(DATABASE_CONNECTION_INFO,
                              pool_size=100,
                              pool_recycle=3600)
    event.listen(db_engine, 'checkout', checkout_listener)
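
A minimal usage sketch (the session and query below are illustrative, not part of the answer): once the listener is attached, every connection handed out by the pool is ping-checked first, and a dead connection is replaced before the query runs.

    from sqlalchemy.orm import sessionmaker

    # Sketch only: db_engine is the engine defined above with the checkout
    # listener attached.
    session_factory = sessionmaker(bind=db_engine, autoflush=True,
                                   autocommit=False)
    session = session_factory()
    print(session.execute('SELECT 1').fetchall())
    session.close()  # returns the connection to the pool; the next checkout
                     # runs checkout_listener again
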
---- ----

Try the pool_recycle argument to create_engine.

From the documentation:

Connection timeouts

MySQL features an automatic connection close behavior for connections that have been idle for eight hours or more. To circumvent this, use the pool_recycle option, which controls the maximum age of any connection:

    engine = create_engine('mysql+mysqldb://...', pool_recycle=3600)
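
The two approaches can also be combined; a sketch with placeholder values (the URI and pool sizes are assumptions, not from the answer), reusing checkout_listener from the first answer so that a connection that dies before the recycle window expires is still caught at checkout:

    from sqlalchemy import create_engine, event

    # Sketch only: recycle connections after an hour *and* ping each one as it
    # is checked out. The URI below is a placeholder.
    engine = create_engine('mysql+mysqldb://user:password@host/dbname',
                           pool_recycle=3600, pool_size=10)
    event.listen(engine, 'checkout', checkout_listener)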

---- ----

You can try something like this:

    from sqlalchemy.exc import SQLAlchemyError

    while True:
        try:
            db_session.execute('SELECT * FROM ' + PONY_TABLE
                               + " WHERE id = '%s'" % incident_id)
            break
        except SQLAlchemyError:
            db_session.rollback()

If the connection has gone away, the execute() call raises an exception, the session is rolled back, and the loop retries the statement, which will most likely succeed on a fresh connection.
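
If you do go this route, it may be safer to cap the number of retries so a persistent failure cannot loop forever; a sketch (the retry count, table name and bound parameter are illustrative, not from the answer):

    from sqlalchemy.exc import SQLAlchemyError

    # Sketch only: retry the statement a bounded number of times, rolling the
    # failed transaction back before each new attempt. A bound parameter is
    # used here instead of string interpolation.
    for attempt in range(3):
        try:
            rows = db_session.execute(
                'SELECT * FROM incidents WHERE id = :id', {'id': incident_id})
            break
        except SQLAlchemyError:
            db_session.rollback()
    else:
        raise RuntimeError('query failed after 3 attempts')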
