I have a database table that is getting too large (several hundred million rows) and needs to be optimized, but before I start splitting it up I thought I would ask for suggestions.
Here is the usage (a rough SQL sketch follows the list):
- The table has about 10 columns, each roughly 20 bytes wide.
- INSERTs run at a rate of hundreds per second.
- SELECTs run a few times per hour, filtered on column 'a' (WHERE a = 'xxxx').
- DELETEs run roughly once a day, filtered on a DATE column (rows older than one year are removed).
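For concreteness, here is a minimal sketch of the kind of table and statements I am describing; the table name, column names, and values are made up for illustration and are not my real schema.

    -- Hypothetical schema and workload, purely for illustration.
    CREATE TABLE events (
        a        varchar(20),
        created  date
        -- ... plus roughly 8 more columns of about 20 bytes each
    );

    -- Hundreds of times per second:
    INSERT INTO events (a, created) VALUES ('xxxx', CURRENT_DATE);

    -- A few times per hour:
    SELECT * FROM events WHERE a = 'xxxx';

    -- About once a day:
    DELETE FROM events WHERE created < CURRENT_DATE - INTERVAL '1 year';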
The key requirements are to keep the INSERTs and SELECTs fast, and to retain one year of historical data without locking the entire table when old rows are deleted.
My assumption is that I need two indexes: one on column 'a' and one on the date column. Or is there a way to optimize for both at once?
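To make that concrete, this is what I have in mind (table and index names are hypothetical):

    -- Two separate single-column indexes, one per query pattern.
    CREATE INDEX idx_events_a    ON events (a);
    CREATE INDEX idx_events_date ON events (created);

My concern is that every extra index also has to be maintained on each of those hundreds of INSERTs per second.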
Will there necessarily be a trade-off between SELECT speed and DELETE speed?
Is partitioning the only solution? What are good strategies for splitting a table like this?
I am using a PostgreSQL 8.4 database.
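Since 8.4 has no declarative partitioning, the approach I have been reading about is inheritance-based partitioning by month: a trigger routes INSERTs into a child table per month, SELECTs can benefit from constraint exclusion, and a year-old month is removed by dropping its child table instead of running a large DELETE. A rough sketch of my understanding, with all names and dates made up and quite possibly missing details:

    -- One child table per month; the CHECK constraint lets the planner
    -- skip irrelevant children when constraint_exclusion is enabled.
    CREATE TABLE events_2011_01 (
        CHECK (created >= DATE '2011-01-01' AND created < DATE '2011-02-01')
    ) INHERITS (events);

    CREATE INDEX idx_events_2011_01_a ON events_2011_01 (a);

    -- Trigger that routes new rows into the right child table
    -- (hard-coded to a single month here just to show the shape).
    CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS trigger AS $$
    BEGIN
        IF NEW.created >= DATE '2011-01-01' AND NEW.created < DATE '2011-02-01' THEN
            INSERT INTO events_2011_01 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for date %', NEW.created;
        END IF;
        RETURN NULL;  -- do not also insert into the parent table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_insert
        BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();

    -- Removing data older than a year then becomes a cheap metadata change:
    DROP TABLE events_2010_01;

The appeal, as I understand it, is that dropping a child table avoids the long DELETE and the resulting bloat and vacuum work on the main table.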