I am using SQLite 3.7.2 on Windows. My database stores log data that is generated 24/7. The schema is essentially:
CREATE TABLE log_message(id INTEGER PRIMARY KEY AUTOINCREMENT, process_id INTEGER, text TEXT);
CREATE TABLE process(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
The log_message.process_id field references process.id, associating each log message with the process it came from.
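For context, a typical read joins the two tables on that column (an illustrative query only, not part of my logging code):

SELECT p.name, m.text
FROM log_message AS m
JOIN process AS p ON p.id = m.process_id
ORDER BY m.id;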
Now, sooner or later, the database will grow too large, and I would like to delete the oldest records (those with the lowest log_message.id values) until the database drops back below a given size (for example, 1 GB). For this I currently run
PRAGMA page_count;
PRAGMA page_size;
after every few log messages to determine the database size. If it exceeds my limit, I delete a chunk of the oldest log messages (currently 100 at a time) as follows:
BEGIN TRANSACTION;
DELETE FROM log_message WHERE id IN (SELECT id FROM log_message ORDER BY id LIMIT 100);
DELETE FROM process WHERE id IN (SELECT id FROM process EXCEPT SELECT process_id FROM log_message);
COMMIT;
VACUUM;
The final DELETE removes all orphaned entries from the process table, i.e. processes that no longer have any log messages. I repeat this procedure until the file size is acceptable again.
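For reference, the file size check itself is just the product of the two PRAGMA results above. I believe newer SQLite versions (3.16.0 and later, so not my 3.7.2) could even express it as a single query via the pragma table-valued functions; on 3.7.2 the multiplication has to happen in application code:

-- Requires SQLite 3.16.0+ (pragma table-valued functions).
SELECT page_count * page_size AS db_size_bytes
FROM pragma_page_count(), pragma_page_size();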
This scheme has at least two problems:
- Deleting 100 log messages at a time is rather arbitrary; I arrived at that number through a few experiments. I would prefer to know in advance how many entries I need to delete (a rough estimation sketch follows this list).
- The VACUUM calls can take quite a while (the SQLite homepage says VACUUM can take up to half a second per MB on Linux, and I don't expect it to be faster on Windows).
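A rough way to estimate that count in advance might look like the sketch below. It assumes messages are roughly uniform in size and ignores per-page overhead and the process table's contribution; :db_size_bytes and :target_bytes are placeholders to be bound from the application (the current size from the two PRAGMAs, and the 1 GB target, respectively):

-- Approximate bytes per message by dividing the current file size by the
-- current message count, then divide the excess bytes by that average.
SELECT CAST((:db_size_bytes - :target_bytes)
            / (CAST(:db_size_bytes AS REAL) / COUNT(*)) AS INTEGER)
       AS rows_to_delete
FROM log_message;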
Does anyone have any other suggestions on how to do this?