CREATE TRIGGER with sys_exec and Python

My Python script at /usr/esercizi/ is:

    #!/usr/bin/python
    import datetime

    # Write the current date/time to a test file.
    now = datetime.datetime.now()
    aa = now.strftime("%Y-%d-%m %H:%M | %S")

    out_file = open("/usr/esercizi/test.txt", "w")
    out_file.write("La data di oggi \n\n")
    out_file.write(aa)
    out_file.close()

It was written just for testing purposes; I would like it to be called from this trigger:

    mysql> CREATE TRIGGER `notifica_cambiamenti` AFTER UPDATE ON `valore`
        -> FOR EACH ROW BEGIN
        ->
        ->   SET @exec_var = sys_exec(CONCAT('python /usr/esercizi/tre.py ', NEW.valore));
        -> END;
        -> $$
    Query OK, 0 rows affected (0.06 sec)

The table has only two columns, id and valore. Every time valore changes, tre.py should run.

I also ran:

    chown mysql:mysql tre.py
    chmod 777 tre.py

The "Query OK" response seems to indicate there are no syntax errors, but nothing ever appears in test.txt.

What am I doing wrong?

2 answers

Your problem can be solved as follows; this is what I did for the same situation.

    CREATE TABLE log_table (
        update_time DATETIME,
        valore VARCHAR(255)   -- the length is an example; adjust to your data
    );

I just created the table above; the trigger will save the updated values into it.

Then I defined the trigger as follows:

    DELIMITER ;;
    CREATE TRIGGER `notifica_cambiamenti` AFTER UPDATE ON `valore`
    FOR EACH ROW BEGIN
        INSERT INTO log_table SET update_time = NOW(), valore = NEW.valore;
    END;;
    DELIMITER ;
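To check that the trigger actually fires, you can update a row and read log_table back. Below is a minimal sketch in Python, assuming the mysql-connector-python driver; the connection credentials and the row id are placeholders for illustration:

    # Rough check that the trigger writes to log_table
    # (assumes mysql-connector-python; credentials below are placeholders).
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="root", password="secret", database="test"
    )
    cur = conn.cursor()

    # Fire the AFTER UPDATE trigger by changing `valore` for an existing id.
    cur.execute("UPDATE valore SET valore = %s WHERE id = %s", ("nuovo valore", 1))
    conn.commit()

    # The trigger should have added a matching row to log_table.
    cur.execute("SELECT update_time, valore FROM log_table ORDER BY update_time DESC LIMIT 1")
    print(cur.fetchone())

    cur.close()
    conn.close()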

What you propose has serious performance issues. (Imagine someone doing a bulk insert of 1000 rows: it would spawn 1000 Python processes and quickly bring your server to its knees.) The following is much simpler and avoids that problem (WARNING: untested pseudocode):

    CREATE TABLE log_table (
        update_time DATETIME,
        valore VARCHAR(255)   -- length is an example
    );

    DELIMITER $$
    CREATE TRIGGER `notifica_cambiamenti` AFTER UPDATE ON `valore`
    FOR EACH ROW BEGIN
        INSERT INTO log_table (update_time, valore) VALUES (NOW(), NEW.valore);
    END;
    $$
    DELIMITER ;

Now you can run an asynchronous job (or a cron job) that periodically processes the log table and deletes the rows it has already seen: DELETE FROM log_table WHERE update_time < $LAST_ROW_SEEN.
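As a rough illustration of that worker (untested, assuming the mysql-connector-python driver, placeholder credentials, and the log_table layout above), it could call the original tre.py for each logged value and then clean up:

    # Poll log_table, run tre.py for each logged value, then delete the
    # processed rows. Connection credentials are placeholders.
    import subprocess
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="root", password="secret", database="test"
    )
    cur = conn.cursor()

    # Read everything logged so far, oldest first.
    cur.execute("SELECT update_time, valore FROM log_table ORDER BY update_time")
    rows = cur.fetchall()

    last_row_seen = None
    for update_time, valore in rows:
        # Do the real work outside the database, e.g. call the original script.
        subprocess.run(["python", "/usr/esercizi/tre.py", str(valore)])
        last_row_seen = update_time

    # Delete everything processed, up to and including the newest timestamp seen.
    if last_row_seen is not None:
        cur.execute("DELETE FROM log_table WHERE update_time <= %s", (last_row_seen,))
        conn.commit()

    cur.close()
    conn.close()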

Extended note: this will work fine until you need to process many jobs per second, or to reduce latency without polling the database 100 times per second. At that point you should not use the SQL database for queuing. Use a real queue such as AMQP, Redis/Resque, Celery, etc. The client inserts the row into SQL and then pushes the job onto the job queue; you can have many workers processing jobs in parallel.
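As a minimal illustration of the queue variant (assuming a local Redis server and the redis-py package; the queue name "jobs" is invented for the example):

    # Assumes a Redis server on localhost and the redis-py package.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Producer side: after writing the row to SQL, also push a job.
    r.rpush("jobs", "42")  # e.g. the new value of `valore`

    # Worker side (run as many of these in parallel as you like):
    # blpop blocks until a job is available and returns (queue, payload).
    _queue, payload = r.blpop("jobs")
    print("processing", payload.decode())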


Source: https://habr.com/ru/post/1481204/

