SQL Server Bidirectional Transactional Replication: is this a good use case?

We have a scaling problem with SQL Server. It comes down to two things: 1) poorly designed data structures, and 2) heavy business and processing logic, all of it implemented in T-SQL. This was confirmed by a Microsoft SQL Server consultant from Redmond whom we hired to analyze our server. We are literally working around problems by repeatedly increasing the command timeout, which is comical and not a good long-term solution. We have since put together the following multi-phase strategy:

Step 1: Throw hardware/software at the problem to stop the bleeding.

This covers a few different things, such as a caching server, but what I'd like to ask about here relates specifically to implementing bidirectional transactional replication on the new SQL Server. We have two use cases (a minimal setup sketch follows this list):

  • We are thinking of moving the long-running (and table/row-locking) SELECTs onto this new SQL Server “processing box”, pushing their results up to the caching tier, and having the UI read from the cache. These SELECTs generate reports and also return results online.

  • Much of the business logic is in SQL. We have several LONG-running SELECT, INSERT, UPDATE, and DELETE statements that carry out the processing logic. The end result is really just a flood of INSERTs, UPDATEs, and DELETEs once processing completes (many cursors). The idea would be to load-balance this work between the two servers.
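For concreteness, here is a hedged sketch of the key ingredient of a bidirectional transactional replication setup: two publications, one per server, with each server subscribing to the other and loopback detection enabled so the Distribution Agent does not echo a transaction back to its originator. All names (AppDb, AppDb_Pub_A, ServerB) are placeholders, and the publication and article creation is assumed to have been done already:

    -- On ServerA, after publication AppDb_Pub_A has been created over the
    -- shared tables, subscribe ServerB with loopback detection so ServerB's
    -- own changes are not replicated back to it.
    EXEC sp_addsubscription
         @publication        = N'AppDb_Pub_A',
         @subscriber         = N'ServerB',
         @destination_db     = N'AppDb',
         @subscription_type  = N'push',
         @loopback_detection = N'true';

    -- The mirror-image call runs on ServerB for its own publication
    -- (AppDb_Pub_B) with @subscriber = N'ServerA'.

Note that loopback detection only prevents echo loops; it does nothing to resolve conflicting writes to the same rows.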

I have a few questions:

  • Are these good use cases for bidirectional transactional replication?

  • I need to be sure this solution will “just work” so that we don't have to worry about conflicts. Where can conflicts arise in this design? I have read several articles about staggering the identity increment on each server to prevent INSERT conflicts, which makes sense (see the identity sketch after this list), but how does that handle UPDATEs/DELETEs, or other places where conflicts can occur?

  • What other problems might we run into, and what do we need to watch out for?

  • Is there a better solution to this problem?
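For reference on the identity point above, a minimal sketch of the seed/increment staggering those articles describe, using a hypothetical Orders table. This only prevents INSERT key collisions and does nothing for UPDATE/DELETE conflicts:

    -- Server A: generates odd keys (1, 3, 5, ...)
    CREATE TABLE dbo.Orders (
        OrderID int IDENTITY(1, 2) NOT FOR REPLICATION PRIMARY KEY,
        Amount  money NOT NULL
    );

    -- Server B: generates even keys (2, 4, 6, ...)
    CREATE TABLE dbo.Orders (
        OrderID int IDENTITY(2, 2) NOT FOR REPLICATION PRIMARY KEY,
        Amount  money NOT NULL
    );

    -- NOT FOR REPLICATION keeps the replication agent from consuming
    -- identity values when it applies the other server's inserts.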

Step 2: Rewrite the logic in .NET, where it belongs, and optimize the SQL stored procedures to perform only set-based operations, as they should.

This will obviously take some time, so we wanted to see whether there are interim steps we can take to stop the pain our users are experiencing.
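As an illustration of what Step 2 means in practice, here is a hedged before/after sketch against hypothetical Customers and Orders tables; the cursor version processes one row per iteration, while the set-based version does the same work in a single statement:

    -- Before: row-by-row processing with a cursor (the pattern to remove)
    DECLARE @CustomerID int, @Total money;
    DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT CustomerID FROM dbo.Customers;
    OPEN cur;
    FETCH NEXT FROM cur INTO @CustomerID;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SELECT @Total = SUM(Amount)
        FROM   dbo.Orders WHERE CustomerID = @CustomerID;
        UPDATE dbo.Customers
        SET    LifetimeValue = @Total WHERE CustomerID = @CustomerID;
        FETCH NEXT FROM cur INTO @CustomerID;
    END
    CLOSE cur;
    DEALLOCATE cur;

    -- After: one set-based statement, one pass over the data
    UPDATE c
    SET    c.LifetimeValue = o.Total
    FROM   dbo.Customers AS c
    JOIN  (SELECT CustomerID, SUM(Amount) AS Total
           FROM   dbo.Orders
           GROUP BY CustomerID) AS o
      ON   o.CustomerID = c.CustomerID;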

Thanks.

1 answer

IMHO, bidirectional replication is very far from “it will just work”. Preventing update conflicts requires sophisticated planning to ensure that all this “processing” is carefully partitioned so that the two servers never work on overlapping data. Master-master replication is one of the most complex scale-out solutions there is.

Keep this in mind: you are imagining a solution that delivers a cheap 2x scale-out without changing any code. Such a solution would be enormously valuable; you would expect it to be deployed everywhere. Yet it is nowhere to be seen.

I recommend looking up the many blogs and articles describing the mistakes and caveats of the (much more popular) MySQL master-master deployments (for example, “If You Must Deploy Multi-Master Replication, Read This First”) and judging for yourself whether the problem is worth it.

I don't have all the details that you do, but I would focus on the application. If you just want to throw money at the problem in the short term, make sure cheap scale-up is exhausted before considering scale-out (SSD/Fusion-io drives, more RAM). Also look into snapshot isolation / read committed snapshot first if locking is the main concern.
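Expanding on that last suggestion, a minimal sketch of enabling row versioning, with AppDb as a placeholder database name (the row version store lives in tempdb, so size it accordingly):

    -- Readers stop blocking writers (and vice versa) under the default
    -- READ COMMITTED level; the switch needs exclusive access to the database.
    ALTER DATABASE AppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    -- Optional: allow sessions to opt in to full snapshot isolation...
    ALTER DATABASE AppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    -- ...then, per session:
    -- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;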

