How to upgrade two databases with different schemas

Our company has a really old legacy system with a very poor database design (no foreign keys, columns holding serialized PHP arrays, etc. :(). We decided to rewrite the system from scratch with a new database schema.

We want to rewrite the system in parts, dividing the old monolithic application into several smaller ones.

The problem is that we want to have live data in both databases, the old schema and the new one, at the same time. I would like to ask whether any of you know best practices for doing this.

What we are considering:

  • asynchronous data synchronization via a message queue
  • creating a REST API on the new system and having the legacy application call it instead of making direct DB calls
  • some form of table replication

Thank you very much

3 answers

I had to deal with a similar problem in the past. There was a system that was no longer supported, but people kept using it because it had some features (security holes) that allowed them to do certain things. However, they also needed new features.

I picked out the tables the new system was involved with and created several triggers to cross-update them, so that when a record was created in the old system, a trigger created a copy in the new system and propagated the change. If you design this carefully, you end up with two systems working simultaneously in real time.
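A minimal sketch of what such a cross-update trigger might look like in MySQL, assuming both schemas live on the same server; the database, table and column names (`legacy.customers`, `newapp.customers`) are made up for illustration:

```sql
-- Hypothetical cross-update trigger: mirror new rows from the legacy
-- table into the new-schema table. All names are placeholders.
DELIMITER //

CREATE TRIGGER legacy_customers_ai
AFTER INSERT ON legacy.customers
FOR EACH ROW
BEGIN
    INSERT INTO newapp.customers (legacy_id, name, email, created_at)
    VALUES (NEW.id, NEW.name, NEW.email, NOW());
END//

DELIMITER ;
```

You would need matching AFTER UPDATE and AFTER DELETE triggers, and if you also mirror writes in the other direction, some guard against the two sets of triggers ping-ponging the same change.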

The disadvantage is that while both systems are running everything is slower, because you have to maintain the integrity of both databases on every operation.


I would start by adding a database tier that accepts API calls from the business tier and then writes to both the old schema and the new one. This adds complexity up front, but it helps ensure that the data stays in sync.
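The answer describes an API layer; as a rough database-level sketch of the same dual-write idea, a stored procedure could encapsulate writing to both schemas in one transaction. All names here (`legacy`, `newapp`, `create_customer`) are invented for illustration:

```sql
-- Hypothetical dual-write routine the business tier would call instead
-- of issuing its own INSERT statements.
DELIMITER //

CREATE PROCEDURE create_customer(IN p_name VARCHAR(255), IN p_email VARCHAR(255))
BEGIN
    START TRANSACTION;
    INSERT INTO legacy.customers (name, email) VALUES (p_name, p_email);
    INSERT INTO newapp.customers (legacy_id, name, email)
    VALUES (LAST_INSERT_ID(), p_name, p_email);
    COMMIT;
END//

DELIMITER ;
```

The same idea can of course live in application code behind a REST endpoint; the point is that there is exactly one place that knows it must write twice.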

This requires changing the legacy system to invoke the API instead of issuing SQL statements directly. If it was not built with that forethought initially, you may not be able to take my approach. But it is something you will have to do eventually.

Triggers may or may not work. In older versions of MySQL there can be only one trigger of a given type (e.g. AFTER INSERT) per table. This forces you to combine unrelated things into one trigger.
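To illustrate the limitation, here is a hypothetical single AFTER INSERT trigger forced to carry two unrelated jobs because the table cannot have a second one (names are placeholders):

```sql
DELIMITER //

CREATE TRIGGER legacy_orders_ai
AFTER INSERT ON legacy.orders
FOR EACH ROW
BEGIN
    -- Job 1: mirror the row into the new schema
    INSERT INTO newapp.orders (legacy_id, customer_id, total)
    VALUES (NEW.id, NEW.customer_id, NEW.total);

    -- Job 2: unrelated auditing that already lived in this trigger
    INSERT INTO legacy.audit_log (table_name, row_id, action)
    VALUES ('orders', NEW.id, 'insert');
END//

DELIMITER ;
```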

Replication can absorb some kinds of changes (engine, datatypes, etc.), but it does not help with splitting one table into two. Be careful with how triggers interact with replication (between master and slave): in general the trigger (or stored procedure) should run on the master, letting its effect be replicated to the slave. But you might instead think about firing a trigger on the slave, or having different triggers on the two servers.

Another thought is to do the transformation in stages. With careful planning of schema changes, and of what to handle with triggers versus code changes versus the database layer, you can do partial conversions one piece at a time, possibly avoiding one big, risky cut-over that updates everything at once (fingers crossed). A simple example: (1) change the code to handle either the new or the old schema dynamically, (2) change the schema, (3) clean up the code (remove the old-schema handling).
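As a hedged sketch of what step (2) could look like for one small change, suppose a free-text `status` column is being replaced by a lookup table in the new schema (all names invented):

```sql
-- Stage the schema change while the old column is still in use.
ALTER TABLE legacy.orders ADD COLUMN status_id INT NULL;

-- Backfill the new column from the old free-text one.
UPDATE legacy.orders o
JOIN newapp.order_statuses s ON s.code = o.status
SET o.status_id = s.id;

-- Later, once the code no longer reads the old column (step 3):
-- ALTER TABLE legacy.orders DROP COLUMN status;
```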


Performing a database migration can be a tedious task, given the complexity of the data and the structure of the tables, which in your case have no constraints or proper design. But considering that your legacy application is doing its job, the amount of corrupted data should be minimal.

For this problem I would suggest a DB migration task that converts all the old legacy data into the new form, and then developing the new application on top of it. Benefits:

1) No need to maintain two different applications.

2) No need to change code in the legacy application, which can get messy.

3) The DB migration gives you a chance to fix any corrupted data (if necessary).

A DB migration may not be practical in every scenario, but if you can do it with less effort than setting up database synchronization or a new API for the legacy application, I would suggest going for it.
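A minimal sketch of such a one-shot migration step, with hypothetical table names; columns holding serialized PHP arrays would still need a small application-level script, since SQL alone cannot unserialize them:

```sql
-- Copy plain columns straight into the new schema.
INSERT INTO newapp.customers (legacy_id, name, email, created_at)
SELECT id, name, email, created_at
FROM legacy.customers;

-- Fix obviously broken data as part of the same migration,
-- e.g. turn empty-string emails into NULLs.
UPDATE newapp.customers
SET email = NULL
WHERE email = '';
```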

