I have a database table mapped to a SQLAlchemy ORM model (I have the scoped_session variable). I want several instances of my program (not just threads, but also processes on multiple servers) to work on the same table without ever touching the same data. To that end I coded a manual "row locking" mechanism to ensure that each row is processed by only one instance. I take a full lock on the table, then mark the rows I claim with a per-row lock flag:
    def instance():
        s = scoped_session(sessionmaker(bind=engine))
        engine.execute("LOCK TABLES my_data WRITE")
        rows = s.query(Row_model).filter(Row_model.condition == 1).filter(Row_model.is_locked == 0).limit(10)
        for row in rows:
            row.is_locked = 1
            row.lock_time = datetime.now()
        s.commit()
        engine.execute("UNLOCK TABLES")
        for row in rows:
            manipulate_data(row)
            row.is_locked = 0
            s.commit()

    for i in range(10):
        t = threading.Thread(target=instance)
        t.start()
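For reference, the autoflush behavior mentioned in the error below can be suppressed with the session's `no_autoflush` context manager, so the pending `is_locked` changes are only written at the explicit `commit()`. A minimal, self-contained sketch (SQLite in-memory engine and a stand-in `RowModel` with hypothetical columns mirroring my table, not my real schema):

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class RowModel(Base):
    """Stand-in for Row_model with the columns used in the question."""
    __tablename__ = "my_data"
    id = Column(Integer, primary_key=True)
    condition = Column(Integer, default=1)
    is_locked = Column(Integer, default=0)
    lock_time = Column(DateTime)

engine = create_engine("sqlite://")  # in-memory DB, just for the sketch
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

s = Session()
s.add_all([RowModel(condition=1, is_locked=0) for _ in range(3)])
s.commit()

# Inside no_autoflush, queries do not implicitly flush pending attribute
# changes; everything is written in one batch at the commit() below.
with s.no_autoflush:
    rows = (s.query(RowModel)
             .filter(RowModel.condition == 1, RowModel.is_locked == 0)
             .limit(10)
             .all())
    for row in rows:
        row.is_locked = 1
        row.lock_time = datetime.now()
s.commit()
```

This only controls *when* the UPDATEs are emitted; it does not by itself serialize the instances.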
The problem is that when several instances are started, some threads crash, each producing this error:
    sqlalchemy.exc.DatabaseError: (raised as a result of an autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (DatabaseError) 1205 (HY000): Lock wait timeout exceeded; try restarting transaction 'UPDATE my_data SET is_locked=1 ...'
Where is the catch? What is preventing my table from being successfully UNLOCKed?
Thanks.