For those who have broken monolithic applications up into microservices, how do you deal with the problem of database joins? The typical applications I've worked on lean heavily on database integration for both performance and simplicity.
If you have two tables that are logically distinct (bounded contexts, if you like), but you frequently do aggregate processing on large volumes of that data, then in a monolith you'd most likely eschew object orientation and instead use your database's standard JOIN facility to process the data in the database before returning the aggregated view back to your application tier, as in the sketch below.
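For concreteness, here is roughly what I mean, a minimal runnable sketch using Python's sqlite3 with hypothetical `customers`/`orders` tables (the schema and data are made up for illustration):

```python
import sqlite3

# Hypothetical schema: two logically distinct tables that a monolith
# would happily join inside the database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         total REAL);
    INSERT INTO customers VALUES (1, 'EU'), (2, 'US');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 25.0), (3, 2, 7.5);
""")

# The database does the heavy lifting: join + aggregate in one pass,
# returning only the small aggregated view to the application tier.
rows = conn.execute("""
    SELECT c.region, SUM(o.total) AS revenue
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
""").fetchall()
print(rows)  # [('EU', 35.0), ('US', 7.5)]
```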
How do you justify splitting such data out into microservices, where presumably you will be required to "join" the data via an API rather than in the database?
I've read Sam Newman's Building Microservices book, and in the chapter on splitting the Monolith he gives the example of "Breaking Foreign Key Relationships", where he acknowledges that doing a join across an API is going to be slower, but he goes on to say that if your application is fast enough anyway, does it matter that it is slower than before?
That seems a bit of a cop-out to me. What are people's experiences of this? What techniques have you used to make API joins perform acceptably?
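To show the kind of API-level "join" I'm asking about, here is a minimal sketch in Python (the service URLs and JSON shapes are hypothetical, and the concurrent fetch is just one mitigation I've seen suggested, not an endorsed pattern):

```python
from concurrent.futures import ThreadPoolExecutor
import requests  # assumed available; the endpoints below are hypothetical

ORDERS_URL = "http://orders-service/orders"
CUSTOMERS_URL = "http://customers-service/customers"

def fetch_json(url):
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    return resp.json()

# Fetch both datasets concurrently rather than serially, one common
# mitigation for the extra network hops an API-level join introduces.
with ThreadPoolExecutor() as pool:
    orders_future = pool.submit(fetch_json, ORDERS_URL)
    customers_future = pool.submit(fetch_json, CUSTOMERS_URL)
    orders = orders_future.result()
    customers = customers_future.result()

# Hash join in application code: index customers by id, then probe,
# doing in the app tier what a single SQL JOIN did in the monolith.
customers_by_id = {c["id"]: c for c in customers}
revenue_by_region = {}
for order in orders:
    region = customers_by_id[order["customer_id"]]["region"]
    revenue_by_region[region] = revenue_by_region.get(region, 0.0) + order["total"]
print(revenue_by_region)
```

Even with concurrency, this pulls whole datasets over the network just to aggregate them, which is exactly the overhead I'm worried about.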
Tags: database, integration, microservices
Martin Bayly Apr 21 '15 at 2:34