Gearman + SQLAlchemy - keep losing the MySQL connection

I have a Python script that sets up several Gearman workers. They call some methods on my SQLAlchemy models, which are also used by the Pylons application.

Everything works fine for an hour or two, then the MySQL connection is lost and all queries fail. I cannot understand why the connection is lost (I get the same results on three different servers) when I am setting such a low value for pool_recycle. Also, why isn't a new connection created?

Any ideas of things to research?

import gearman
import json
import ConfigParser
import sys

from sqlalchemy import create_engine


class JSONDataEncoder(gearman.DataEncoder):
    @classmethod
    def encode(cls, encodable_object):
        return json.dumps(encodable_object)

    @classmethod
    def decode(cls, decodable_string):
        return json.loads(decodable_string)

# get the ini path and load the gearman server ips:ports
try:
    ini_file = sys.argv[1]
    lib_path = sys.argv[2]
except Exception:
    raise Exception("ini file path or anypy lib path not set")

# get the config
config = ConfigParser.ConfigParser()
config.read(ini_file)
sqlachemy_url = config.get('app:main', 'sqlalchemy.url')
gearman_servers = config.get('app:main', 'gearman.mysql_servers').split(",")

# add anypy include path
sys.path.append(lib_path)
from mypylonsapp.model.user import User, init_model
from mypylonsapp.model.gearman import task_rates

# sqlalchemy setup, recycle connection every hour
engine = create_engine(sqlachemy_url, pool_recycle=3600)
init_model(engine)

# Gearman Worker Setup
gm_worker = gearman.GearmanWorker(gearman_servers)
gm_worker.data_encoder = JSONDataEncoder()

# register the workers
gm_worker.register_task('login', User.login_gearman_worker)
gm_worker.register_task('rates', task_rates)

# work
gm_worker.work()
1 answer

I've seen this across the board for Ruby, PHP and Python, regardless of the database library used. I could not find how to fix it the "correct" way, which is to use mysql_ping, but there is a SQLAlchemy solution, described better here: http://groups.google.com/group/sqlalchemy/browse_thread/thread/9412808e695168ea/c31f5c967c135be0

As someone in that thread points out, setting the recycle option to True is equivalent to setting it to 1. A better solution is to find your MySQL connection timeout value and set the recycle threshold to 80% of it.
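For instance, if the server reported a 28800-second idle timeout (MySQL's default wait_timeout), the engine setup from the question could recycle at roughly 80% of that. A minimal sketch; the 28800 figure is purely illustrative:

from sqlalchemy import create_engine

# Illustrative value only: suppose the live server reported a
# 28800-second idle timeout.
mysql_timeout = 28800

# Recycle pooled connections at ~80% of the server-side timeout so
# SQLAlchemy replaces them before MySQL drops them.
engine = create_engine(sqlachemy_url, pool_recycle=int(mysql_timeout * 0.8))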

You can read this value off a live server by looking at this variable: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_connect_timeout
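As a quick way to check, you can ask the live server directly before picking the recycle value. A sketch reusing the URL from the question's script and the old engine.execute API that matches the 0.5-era docs linked below; note that wait_timeout (rather than connect_timeout) is the server variable that usually governs when idle connections are closed:

from sqlalchemy import create_engine

# Probe the live server for its idle-connection timeout (sketch only).
probe = create_engine(sqlachemy_url)
name, value = probe.execute("SHOW VARIABLES LIKE 'wait_timeout'").fetchone()
print name, value   # e.g. wait_timeout 28800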

Edit: it took me a little while to find the official documentation on using pool_recycle: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html?highlight=pool_recycle

