MySQL has its own EVENT functionality to avoid complex cron interactions when much of what you are scheduling is SQL related, and less file related. See the manual page here. The following is meant as a brief overview of the important steps, the things to consider, and the testing, too.
show variables where variable_name='event_scheduler';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| event_scheduler | OFF   |
+-----------------+-------+
Oops, the event scheduler is not turned on. Nothing will run.
SET GLOBAL event_scheduler = ON; -- turn it on and confirm below
show variables where variable_name='event_scheduler';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| event_scheduler | ON    |
+-----------------+-------+
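Note that SET GLOBAL does not survive a server restart. To keep the scheduler on permanently, the variable can also be set in the server option file (assuming you have access to my.cnf / my.ini):

```ini
; in my.cnf (Linux) or my.ini (Windows), under the [mysqld] section
[mysqld]
event_scheduler = ON
```

The same effect is available at server startup via the --event-scheduler=ON command-line option.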
Testing schema
create table theMessages
(   id int auto_increment primary key,
    userId int not null,
    message varchar(255) not null,
    updateDt datetime not null,
    key(updateDt)
    -- FK not shown
);
-- it is currently 2015-09-10 13:12:00
-- truncate table theMessages;
insert theMessages(userId,message,updateDt) values (1,'I need to go now, no followup questions','2015-08-24 11:10:09');
insert theMessages(userId,message,updateDt) values (7,'You always say that ... just hiding','2015-08-29');
insert theMessages(userId,message,updateDt) values (1,'7 day test1','2015-09-03 12:00:00');
insert theMessages(userId,message,updateDt) values (1,'7 day test2','2015-09-03 14:00:00');
Create 2 events, the 1st running daily, the 2nd running every 10 minutes
Ignore what they actually do (they play against one another). The point here is to show the two time-difference approaches and the schedules.
DELIMITER $$
CREATE EVENT `delete7DayOldMessages`
  ON SCHEDULE EVERY 1 DAY STARTS '2015-09-01 00:00:00'
  ON COMPLETION PRESERVE
DO BEGIN
  delete from theMessages where datediff(now(),updateDt)>6;
  -- ...
END$$
DELIMITER ;
DELIMITER $$
CREATE EVENT `Every_10_Minutes_Cleanup`
  ON SCHEDULE EVERY 10 MINUTE STARTS '2015-09-01 00:00:00'
  ON COMPLETION PRESERVE
DO BEGIN
  delete from theMessages where TIMESTAMPDIFF(HOUR, updateDt, now())>168;
END$$
DELIMITER ;
Show event statuses (different approaches)
show events from so_gibberish; -- list all events by schema name (db name)
show events; -- <--------- from workbench / sqlyog
show events\G -- <--------- I like this one from the mysql> prompt
*************************** 1. row ***************************
                  Db: so_gibberish
                Name: delete7DayOldMessages
             Definer: root@localhost
           Time zone: SYSTEM
                Type: RECURRING
          Execute at: NULL
      Interval value: 1
      Interval field: DAY
              Starts: 2015-09-01 00:00:00
                Ends: NULL
              Status: ENABLED
          Originator: 1
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: utf8_general_ci
*************************** 2. row ***************************
                  Db: so_gibberish
                Name: Every_10_Minutes_Cleanup
             Definer: root@localhost
           Time zone: SYSTEM
                Type: RECURRING
          Execute at: NULL
      Interval value: 10
      Interval field: MINUTE
              Starts: 2015-09-01 00:00:00
                Ends: NULL
              Status: ENABLED
          Originator: 1
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: utf8_general_ci
2 rows in set (0.06 sec)
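The same information is also available from information_schema, which is handy when you want to filter or join against it:

```sql
-- query event metadata directly; event_schema filters to one database
SELECT event_name, status, interval_value, interval_field, last_executed
FROM information_schema.EVENTS
WHERE event_schema = 'so_gibberish';
```

The LAST_EXECUTED column is a quick way to confirm an event is actually firing.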
Random things to consider
drop event someEventName; -- <----- a good thing to know about
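If you only want to pause an event rather than drop it, ALTER EVENT can toggle its status:

```sql
ALTER EVENT delete7DayOldMessages DISABLE; -- stop it from firing, keep the definition
ALTER EVENT delete7DayOldMessages ENABLE;  -- resume the schedule
```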
A column alias cannot be referenced in the WHERE clause of the same query, so the expression is repeated:
select id,DATEDIFF(now(),updateDt) from theMessages where datediff(now(),updateDt)>6;
Or specify 168 hours, the number of hours in 1 week:
select id,TIMESTAMPDIFF(HOUR, updateDt, now()) as `difference` FROM theMessages;
+----+------------+
| id | difference |
+----+------------+
|  1 |        410 |
|  2 |        301 |
|  3 |        169 |
|  4 |        167 |
+----+------------+
The manual page linked above shows quite a bit of flexibility in the choice of interval, shown below:
interval:
quantity {YEAR | QUARTER | MONTH | DAY | HOUR | MINUTE | WEEK | SECOND | YEAR_MONTH | DAY_HOUR | DAY_MINUTE | DAY_SECOND | HOUR_MINUTE | HOUR_SECOND | MINUTE_SECOND}
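The composite units (such as DAY_HOUR or HOUR_MINUTE) take a quoted quantity string. A sketch of an every-90-minutes schedule (the event name here is made up, not part of the schema above):

```sql
-- fires every 1 hour 30 minutes; '1:30' is interpreted as HOUR:MINUTE
CREATE EVENT hypotheticalEvent
  ON SCHEDULE EVERY '1:30' HOUR_MINUTE
  STARTS '2015-09-01 00:00:00'
DO
  DELETE FROM theMessages WHERE datediff(now(), updateDt) > 6;
```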
Concurrency
Build in any concurrency measures necessary so that multiple events (or multiple overlapping runs of the same event) do not mangle your data.
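One simple guard, sketched here with MySQL's named-lock functions GET_LOCK / RELEASE_LOCK (the lock string and event name are invented for illustration), ensures that a slow run and the next scheduled run never delete concurrently:

```sql
DELIMITER $$
CREATE EVENT `Every_10_Minutes_Cleanup_Guarded`
  ON SCHEDULE EVERY 10 MINUTE STARTS '2015-09-01 00:00:00'
  ON COMPLETION PRESERVE
DO BEGIN
  -- try to take an exclusive named lock; timeout 0 means give up
  -- immediately if a previous run still holds it
  IF GET_LOCK('theMessages_cleanup', 0) = 1 THEN
    DELETE FROM theMessages WHERE TIMESTAMPDIFF(HOUR, updateDt, now()) > 168;
    DO RELEASE_LOCK('theMessages_cleanup');
  END IF;
END$$
DELIMITER ;
```

If the lock is held, the run is simply skipped; the next 10-minute tick will pick the work up.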
Set and forget
Remember this moment, because you will forget that these events keep firing. So write solid code that just works, even when you forget about them. Most likely you will.
Your special requirements
You need to determine which rows to delete first from the tables, so that key (FK) constraints are respected. Just stack the statements in the correct order inside the body of the CREATE EVENT statement, which can be massive.
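As a sketch of that ordering (the child table theMessageAttachments and its messageId column are invented for illustration, since the FK is not shown in the schema above), dependent rows are deleted before their parents inside the same event body:

```sql
DELIMITER $$
CREATE EVENT `purgeOldData`
  ON SCHEDULE EVERY 1 DAY STARTS '2015-09-01 00:00:00'
  ON COMPLETION PRESERVE
DO BEGIN
  -- children first, so the FK from attachments to messages is never violated
  DELETE a
  FROM theMessageAttachments a
  JOIN theMessages m ON m.id = a.messageId
  WHERE datediff(now(), m.updateDt) > 6;
  -- then the parents
  DELETE FROM theMessages WHERE datediff(now(), updateDt) > 6;
END$$
DELIMITER ;
```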