As in this question, I set up a backup system for my database based on `dumpdata`. The setup amounts to a cron script that invokes `dumpdata` and moves the backup to a remote server, the intention being that restoring the database is a simple `loaddata`. However, I'm not sure this plays well with migrations. `loaddata` now has an `--ignorenonexistent` switch to deal with removed models/fields, but it can't handle the cases where columns were added with one-off default values or where data was populated by `RunPython` code.
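For concreteness, the cron side might look like the entry below. This is only a sketch: the project path `/srv/myproject`, the output file, and the host `backuphost` are made-up placeholders, not anything from the question.

```shell
# Illustrative crontab entry: nightly dumpdata at 03:00, shipped off-host.
# Note: % must be escaped as \% inside a crontab line.
0 3 * * * cd /srv/myproject && python manage.py dumpdata > /tmp/backup.json && scp /tmp/backup.json backuphost:/backups/backup-$(date +\%F).json
```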
As I see it, there are two problems:

- Tagging each `dumpdata` output file with the current migration version of each app
- Fitting the fixture back into the migration path on restore
I'm at a loss as to how to solve the first problem without introducing a ton of overhead. Would it be enough to save one extra file per backup containing a `{app_name: migration_number}` mapping?
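Such a manifest could be derived from Django's `django_migrations` table, e.g. via `MigrationRecorder(connection).applied_migrations()`, which yields `(app_label, migration_name)` pairs. A minimal sketch of reducing those pairs to the latest migration per app and writing them alongside the dump (the function names and manifest format here are my own invention, not a Django API):

```python
import json


def migration_state(applied):
    """Reduce (app_label, migration_name) pairs -- e.g. the rows of
    django_migrations -- to the latest applied migration per app."""
    state = {}
    for app, name in applied:
        # Migration names sort lexicographically by their zero-padded
        # numeric prefix (0001_initial < 0002_...), so a plain string
        # comparison picks the newest one.
        if app not in state or name > state[app]:
            state[app] = name
    return state


def write_manifest(applied, path):
    """Write the {app_name: migration_name} mapping next to the dump."""
    with open(path, "w") as f:
        json.dump(migration_state(applied), f, indent=2, sort_keys=True)
```

The manifest is tiny compared to the fixture itself, so the overhead is one extra JSON file per backup.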
The second problem I think is easier once the first is solved, since the restore process is roughly:
- Create a new database
- Run migrations forward to the appropriate point for each app
- Call `loaddata` with the given fixture file
- Run the rest of the migrations
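The steps above could be driven by the manifest saved with each backup. A hedged sketch, assuming a `{app_name: migration_name}` JSON manifest as described earlier (the function and its output are illustrative, not a documented Django interface):

```python
def restore_commands(manifest, fixture_path):
    """Given the {app_name: migration_name} mapping recorded at dump
    time, return the ordered manage.py invocations that rebuild the
    database: migrate each app only as far as the manifest records,
    load the fixture, then apply everything newer."""
    commands = []
    for app, migration in sorted(manifest.items()):
        # Bring this app (from an empty database) to the exact
        # migration it was at when the fixture was dumped.
        commands.append(["python", "manage.py", "migrate", app, migration])
    # Load the fixture against the schema it was created under.
    commands.append(["python", "manage.py", "loaddata", fixture_path])
    # Finally, run all remaining migrations to bring the schema current.
    commands.append(["python", "manage.py", "migrate"])
    return commands
```

Each command could be run with `subprocess.check_call` against a freshly created database.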
There is some code in this question (relating to the same error message) that I think could be adapted for this purpose.
Since these are fairly regular, large snapshots of the database, I don't want to store them as data migrations cluttering up the migrations directory.