Whenever you move data between databases, you have to be very careful how you transfer it. Some data formats preserve type information, while others do not.
Then there are data formats that simply cannot represent your schema. For example, CSV does a great job when your data is one row per record, but how would you represent a nested array in CSV? It really is not possible. JSON handles that well, but JSON has its own problems.
The simplest example is JSON and DateTime. JSON has no specification for storing DateTime values: they can end up as ISO 8601 strings, UNIX epoch timestamps, or really anything a developer can think of. What about longs, doubles, and ints? JSON does not distinguish between them; they are all just numbers, which can lead to a loss of precision if they are not deserialized carefully.
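A quick sketch of both problems in Python (the field names and the two chosen conventions are just illustrative assumptions):

```python
import json
from datetime import datetime, timezone

dt = datetime(2024, 1, 15, 12, 30, 0, tzinfo=timezone.utc)

# json.dumps(dt) raises TypeError -- datetime is not JSON serializable --
# so every producer has to invent its own convention:
iso = json.dumps({"created": dt.isoformat()})    # ISO 8601 string
epoch = json.dumps({"created": dt.timestamp()})  # UNIX epoch number

# Both are valid JSON, but a consumer cannot tell from the document alone
# which convention was used, or even that the field is a date at all.
print(iso)    # {"created": "2024-01-15T12:30:00+00:00"}
print(epoch)  # {"created": 1705321800.0}

# Numeric types are just as lossy: many parsers (e.g. JavaScript's
# JSON.parse) read every number as an IEEE 754 double, which has only
# 53 bits of integer precision -- large longs silently collide.
big = 2**53
print(float(big) == float(big + 1))  # True: two distinct integers collide
```

This is why a schema-aware transfer has to decide, up front, how each field is represented on the wire and how it is parsed back.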
This makes it very important that you choose the right migration tool. Honestly, you are usually better off writing your own solution: load drivers for both databases, read a record from one, translate it, and write it to the other. This is the best way to be absolutely sure that errors are handled properly for your environment, that types are serialized consistently, and that the code correctly transfers the schema from the source to the destination (if necessary).
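The read-translate-write loop described above can be sketched like this. Two in-memory SQLite databases stand in for the real source and destination drivers, and the table layout and `convert_row` helper are hypothetical, but the shape of the loop is the point:

```python
import sqlite3
from datetime import datetime

# Stand-ins for the two database drivers you would actually load.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")

src.execute("CREATE TABLE users (id INTEGER, created TEXT)")
src.execute("INSERT INTO users VALUES (1, '2024-01-15T12:30:00+00:00')")

# Transfer the schema first, making the type decisions explicit;
# here we choose to store the timestamp as a UNIX epoch integer.
dst.execute("CREATE TABLE users (id INTEGER, created_epoch INTEGER)")

def convert_row(row):
    """Explicit per-column type conversion -- the decision a generic
    intermediate format would otherwise make for you, badly."""
    user_id, created = row
    epoch = int(datetime.fromisoformat(created).timestamp())
    return (user_id, epoch)

# Read a record from one database, translate it, write it to the other.
for row in src.execute("SELECT id, created FROM users"):
    dst.execute("INSERT INTO users VALUES (?, ?)", convert_row(row))
dst.commit()

print(dst.execute("SELECT * FROM users").fetchall())  # [(1, 1705321800)]
```

Because every conversion is spelled out in `convert_row`, there is no intermediate format guessing at your types, and error handling can be added exactly where your environment needs it.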
What does all this mean for you? It means a lot of work. Someone may already have built something general enough for your use case, but I have found in the past that it is usually best to do it yourself.