I believe Postgres should cope fine in this scenario. The scenario is unusual enough that a manual VACUUM between the huge updates seems like a reasonable option.
Consider whether, instead of running huge updates, you can generate a new set of tables, ANALYZE them (essential!), and then, using Postgres's transactional DDL, drop the old tables and rename the new ones into their place (see the sketch below). That should ease your worries about VACUUM, since you never accumulate dead rows in the live table.
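A minimal sketch of that swap, assuming a hypothetical table `readings` with columns `id`, `value`, `recorded_at` (none of these names come from the question; substitute your own schema and transformation):

    BEGIN;

    -- Build the replacement table (in practice you may prefer to add
    -- indexes after the bulk load rather than copying them up front).
    CREATE TABLE readings_new (LIKE readings INCLUDING ALL);

    -- Populate it with the regenerated data instead of updating rows in place.
    -- "value * 1.1" stands in for whatever recomputation the update would do.
    INSERT INTO readings_new
    SELECT id, value * 1.1, recorded_at
    FROM   readings;

    -- ANALYZE (unlike VACUUM) is allowed inside a transaction block,
    -- so the planner has fresh statistics the moment the swap commits.
    ANALYZE readings_new;

    -- Transactional DDL: drop the old table and rename the new one;
    -- readers see either the old table or the new one, never a half-done state.
    DROP TABLE readings;
    ALTER TABLE readings_new RENAME TO readings;

    COMMIT;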
In either case, you will need to do some serious tuning. In particular, look at shared_buffers, the checkpoint-related parameters, and the vacuum-related parameters. Also, remember to benchmark with realistic workloads.
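By way of illustration only, here is one way to adjust those settings with ALTER SYSTEM on a reasonably recent PostgreSQL version; the values are placeholders, not recommendations, and the right numbers depend entirely on your RAM, disks, and workload, so benchmark before adopting any of them:

    -- Illustrative starting points; tune and benchmark for your own hardware.
    ALTER SYSTEM SET shared_buffers = '8GB';                 -- requires a server restart
    ALTER SYSTEM SET checkpoint_timeout = '15min';           -- spread checkpoints out in time
    ALTER SYSTEM SET max_wal_size = '4GB';                   -- avoid checkpoints forced by WAL volume
    ALTER SYSTEM SET checkpoint_completion_target = 0.9;     -- smooth out checkpoint I/O
    ALTER SYSTEM SET maintenance_work_mem = '1GB';           -- faster VACUUM and index builds
    ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;  -- vacuum big tables sooner

    SELECT pg_reload_conf();                                 -- apply the reloadable settings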