What you propose has serious performance issues. (Imagine someone doing a bulk insert of 1000 rows: it would spawn 1000 Python processes and quickly bring your server to its knees.) The following is much simpler and avoids that problem (WARNING: unverified pseudocode):
    CREATE TABLE log_table (
        update_time DATETIME,
        valore      VARCHAR(255)
    );

    DELIMITER $$
    CREATE TRIGGER notifica_cambiamenti AFTER UPDATE ON valore
    FOR EACH ROW
    BEGIN
        INSERT INTO log_table (update_time, valore) VALUES (NOW(), NEW.valore);
    END$$
    DELIMITER ;
Now you can run an asynchronous job (or a cron job) that periodically processes the log table and deletes the rows it has already seen: DELETE FROM log_table WHERE update_time < $LAST_ROW_SEEN. A minimal sketch of such a polling worker follows.
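For illustration, here is a rough sketch of that polling worker in Python. It assumes the log_table schema above and the mysql-connector-python driver; the connection parameters, the poll interval, and the process() function are placeholders you would replace:

    import time
    import mysql.connector  # assumes the mysql-connector-python package

    def process(update_time, valore):
        # Placeholder: do whatever per-change work you need here.
        print("changed:", update_time, valore)

    conn = mysql.connector.connect(user="user", password="pass", database="mydb")

    while True:
        cur = conn.cursor()
        cur.execute("SELECT update_time, valore FROM log_table ORDER BY update_time")
        last_seen = None
        for update_time, valore in cur.fetchall():
            process(update_time, valore)
            last_seen = update_time
        if last_seen is not None:
            # Delete only what we have already processed. Note that DATETIME has
            # one-second resolution, so rows inserted after the SELECT but within
            # the same second could be lost; an AUTO_INCREMENT id column would
            # make this bookkeeping safer than timestamps.
            cur.execute("DELETE FROM log_table WHERE update_time <= %s", (last_seen,))
            conn.commit()
        cur.close()
        time.sleep(5)  # poll interval; tune to your latency needs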
Extended note: this will work fine until you need to process many jobs per second, or need lower latency than polling the database 100 times per second can give you. At that point, don't use the SQL database as a queue. Use a real queue such as AMQP, Redis/Resque, Celery, etc. The client inserts the row into SQL, then pushes a job onto the job queue, and you can have many workers processing jobs in parallel.
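With Redis, for example, the producer/worker split might look like this (a sketch only, assuming the redis-py package; the queue name "change_jobs" and the payload format are arbitrary choices, not anything prescribed by the original question):

    import json
    import redis  # assumes the redis-py package

    r = redis.Redis()

    # Producer: after the SQL INSERT succeeds, enqueue a job.
    def enqueue_change(valore):
        r.lpush("change_jobs", json.dumps({"valore": valore}))

    # Worker: block until a job arrives, then process it.
    # Start as many of these processes as you need; they consume in parallel.
    def work():
        while True:
            _key, payload = r.brpop("change_jobs")
            job = json.loads(payload)
            print("processing:", job["valore"])

Because brpop blocks until a job is available, the workers impose no polling load on the database and react with near-zero latency.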