Microservices and database joins

For people who have broken monolithic applications into microservices, how do you deal with the problem of splitting up the database? The typical applications I have worked on do a lot of database integration for performance and simplicity.

If you have two tables that are logically distinct (bounded contexts, if you will), but you often perform aggregate processing on large volumes of that data, then in a monolith you are more than likely to eschew object orientation and instead use your database's standard JOIN feature to process the data in the database before returning the aggregated view back to your application tier.

How do you justify splitting such data out into microservices, where presumably you will be required to "join" the data through an API rather than in the database?
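To make the contrast concrete, here is a rough sketch of what such an application-level "join" looks like: each service is queried over its API and the rows are merged in memory instead of in a SQL JOIN. This is only an illustration - the service calls are stubbed out, and the endpoint names and fields are invented.

```python
from typing import Dict, List

def fetch_customers() -> List[dict]:
    # Stub for a call such as GET /customers on the customer service.
    return [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]

def fetch_orders() -> List[dict]:
    # Stub for a call such as GET /orders on the order service.
    return [{"customer_id": 1, "total_cents": 2500},
            {"customer_id": 1, "total_cents": 400}]

def customer_spend() -> Dict[str, int]:
    customers = {c["id"]: c["name"] for c in fetch_customers()}
    spend: Dict[str, int] = {}
    for order in fetch_orders():
        # The "join" now happens in application memory, one service call each.
        name = customers.get(order["customer_id"], "unknown")
        spend[name] = spend.get(name, 0) + order["total_cents"]
    return spend

print(customer_spend())  # {'Ada': 2900}
```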

I have read Sam Newman's microservices book, and in the chapter on splitting the monolith he gives an example of "breaking foreign key relationships", where he acknowledges that doing a join across an API will be slower - but he goes on to say that if your application is fast enough anyway, does it matter that it is slower than before?

This seems a little glib. What are people's experiences? What techniques have you used to make API joins perform acceptably?

+84
database integration microservices
Apr 21 '15 at 2:34
5 answers
+18
Apr 21 '15 at 2:51

It is perfectly fine for services to hold read-only replicated copies of certain reference data from other services.

Given that, when trying to refactor a monolithic database into microservices (as opposed to rewriting it), I would:

  • create a database schema for the service
  • create versioned* views** in that schema to expose its data to other services
  • have other services join against these read-only views

This allows you to modify the data/table structure independently without breaking other applications.

Instead of views, I would also consider using triggers to replicate data from one schema to another.

This would be gradual progress in the right direction, establishing the seams of your components, and a move to REST can be done later.

* views can be versioned: if a breaking change is required, create a v2 of the same view and remove the old version when it is no longer needed.
** or table functions, or sprocs.
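A minimal sketch of the steps above, using an in-memory SQLite database as a stand-in for the real RDBMS and an attached database as the service's schema. The table, view, and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS orders_svc")  # the order service's schema

# Private table owned by the order service; its shape can change freely.
conn.execute("""
    CREATE TABLE orders_svc.orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total_cents INTEGER NOT NULL
    )""")

# Versioned, read-only view that the service publishes to other schemas.
# A breaking change would be published as orders_v2, with v1 retired later.
conn.execute("""
    CREATE VIEW orders_svc.orders_v1 AS
    SELECT id, customer_id, total_cents
    FROM orders_svc.orders""")

# A consumer (here, a table still living in the legacy monolith schema)
# joins against the view, never against the service's private table.
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders_svc.orders VALUES (10, 1, 2500)")

rows = conn.execute("""
    SELECT c.name, SUM(o.total_cents) AS spent_cents
    FROM customers AS c
    JOIN orders_svc.orders_v1 AS o ON o.customer_id = c.id
    GROUP BY c.name""").fetchall()
print(rows)  # [('Ada', 2500)]
```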

+6
Jun 17 '15 at 11:18

CQRS - Command Query Responsibility Segregation - is the answer to this, according to Chris Richardson. Let each microservice update its own data model and generate events that update a materialized view holding the pre-joined data from the other microservices. This materialized view can be any NoSQL database, Redis, or Elasticsearch that is optimized for queries. This technique leads to eventual consistency, which is certainly not bad, and avoids real-time application-side joins. Hope this answers it.
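A minimal, in-memory sketch of that read-model idea: a plain dict stands in for Redis or Elasticsearch, and the event names and fields are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CustomerOrdersView:
    # The "materialized view": customer_id -> {"name": ..., "order_total_cents": ...}
    rows: Dict[int, dict] = field(default_factory=dict)

    def on_customer_created(self, customer_id: int, name: str) -> None:
        # Handles an event published by the customer service.
        row = self.rows.setdefault(customer_id, {"name": None, "order_total_cents": 0})
        row["name"] = name

    def on_order_placed(self, customer_id: int, total_cents: int) -> None:
        # Handles an event published by the order service.
        row = self.rows.setdefault(customer_id, {"name": None, "order_total_cents": 0})
        row["order_total_cents"] += total_cents

view = CustomerOrdersView()
view.on_customer_created(1, "Ada")
view.on_order_placed(1, 2500)  # the "join" already happened at write time
print(view.rows[1])            # {'name': 'Ada', 'order_total_cents': 2500}
```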

+3
Feb 12 '18 at 13:40

In microservices you create different read models. For example, if you have two different bounded contexts and someone wants to search across both sets of data, then something needs to listen to events from both bounded contexts and build a view specific to that application.

In this case more storage is needed, but no joins are required and none are performed.

+1
Feb 09 '18 at 10:40

I would separate the solutions by area of use, say operational versus reporting.

For microservices that serve data to individual forms that need data from other microservices (the operational case), I think API joins are best. You will not be dealing with large volumes of data, so you can integrate the data inside the service.

The other case is when you need to run large queries over large amounts of data to perform aggregations and so on (the reporting case). For this need I would think about maintaining a shared database - similar to your original schema - and updating it with events from your microservice databases. On this shared database you can continue to use your stored procedures, which saves you effort and keeps the database optimizations.
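A rough sketch of that reporting case, assuming events from the microservices are applied to a shared reporting database that keeps something close to the original schema. SQLite is used as a stand-in here and the event shape is invented; the point is that heavy aggregation stays in SQL.

```python
import sqlite3

reporting = sqlite3.connect(":memory:")
reporting.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total_cents INT)"
)

def on_order_placed(event: dict) -> None:
    # Applied whenever the order service publishes an OrderPlaced event.
    reporting.execute(
        "INSERT OR REPLACE INTO orders VALUES (:id, :customer_id, :total_cents)",
        event,
    )

on_order_placed({"id": 10, "customer_id": 1, "total_cents": 2500})
on_order_placed({"id": 11, "customer_id": 1, "total_cents": 400})

# Reporting queries (or existing stored procedures) keep running against SQL.
print(reporting.execute(
    "SELECT customer_id, SUM(total_cents) FROM orders GROUP BY customer_id"
).fetchall())  # [(1, 2900)]
```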

+1
Apr 03 '18 at 7:45


