Can I run VACUUM in PostgreSQL every 1-2 minutes?

I am looking at various databases supporting MVCC for an upcoming project, and PostgreSQL appeared on my radar.

My program requirements include a sequence like this:

  • Read some information from the current version of the database, change 80-90% of the data, and write it back in one or more transactions (imagine something like updating the grid in Conway's Game of Life, where both the old and the new state of the grid are required).

  • Wait 1-2 minutes after committing. During this time, clients can issue queries against the new data.

  • Repeat.

The database will be limited to something like 2-4 GB.

~90% of the changes are updates to existing objects, ~5% are new objects, and ~5% are deletions.

So my question is: can I reasonably run the plain VACUUM command as step 1.5 every 1-2 minutes, and will PostgreSQL keep up with the potentially 2-3+ GB of changes made each cycle?
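The cycle described above might be sketched as follows; the table name `grid` and the form of the bulk update are assumptions for illustration, not part of the question:

```sql
-- Step 1: rewrite most of the data in one transaction.
BEGIN;
UPDATE grid SET state = NOT state;  -- stand-in for the real 80-90% update
COMMIT;

-- Step 1.5: reclaim the dead row versions left behind by the bulk
-- update, and refresh planner statistics at the same time.
VACUUM ANALYZE grid;

-- Step 2: wait 1-2 minutes while clients query the new state, then repeat.
```

A plain VACUUM (without FULL) only marks dead tuples reusable; it does not shrink the file, so the table should stabilize at roughly twice its live size under this pattern.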

1 answer

I believe Postgres should do well in this scenario. The workload is unusual enough that a manual VACUUM between the huge updates seems like a reasonable option.

Consider whether, instead of huge updates, you can generate a new set of tables, ANALYZE them (essential!), and then, using the power of transactional DDL, drop the old tables and rename the new ones into their place. That should ease your worries about VACUUM.
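A minimal sketch of that swap, assuming a table named `grid` (the names and the INSERT are illustrative):

```sql
-- Build the next state in a fresh table instead of updating in place.
CREATE TABLE grid_new (LIKE grid INCLUDING ALL);
INSERT INTO grid_new SELECT * FROM grid;  -- stand-in for the real computation

-- Essential: give the planner statistics for the new table.
ANALYZE grid_new;

-- DDL is transactional in PostgreSQL, so the swap is atomic:
-- readers see either the old table or the new one, never neither.
BEGIN;
DROP TABLE grid;
ALTER TABLE grid_new RENAME TO grid;
COMMIT;
```

Dropping a table frees its disk space immediately, so no VACUUM of the bulk changes is needed at all; the trade-off is that views, foreign keys, and grants referencing the old table must be recreated or avoided.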

With such a workload you will have to do some serious tuning. In particular, look at shared_buffers, the checkpoint-related parameters, and the vacuum-related parameters. Also, remember to benchmark with realistic workloads.
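As a rough illustration, a postgresql.conf fragment for this kind of bursty rewrite workload might look like the following; the values are illustrative starting points for a 2-4 GB dataset, not recommendations, and must be validated by benchmarking:

```
shared_buffers = 1GB               # large enough to cache a big share of the data
checkpoint_timeout = 15min         # spread checkpoint I/O across cycles
max_wal_size = 4GB                 # avoid checkpoints forced mid-update by WAL volume
checkpoint_completion_target = 0.9 # smooth checkpoint writes over the interval
maintenance_work_mem = 256MB       # speeds up the manual VACUUM runs
```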


Source: https://habr.com/ru/post/1401257/
