Optimal solution for a large number of queries against a single database table

We have a system in which customers are allocated a product on a first-come, first-served basis.

Our product table has an incremental primary key, starting from zero, which we use to track how many products have been allocated: a user reserves a product and receives allocation 1, the next user receives 2, and so on.

The problem is that potentially hundreds of thousands of users could hit the system within any given hour, and all of them will hit this table.

Since we need to ensure that each customer is allocated only one product, and we need to keep track of how many products have been allocated, we use row locking for each customer who hits the system, so that their row is written to the table before the next customer is served; in other words, we enforce the first-come, first-served rule.
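Roughly, the allocation step described above looks like this (a minimal sketch with assumed table and column names; the real implementation may differ):

-- Minimal sketch of the per-customer allocation (assumed schema).
-- The transaction holds its row lock until commit, so concurrent
-- customers are served strictly one after another.
BEGIN TRANSACTION;

INSERT INTO dbo.Product (CustomerID)
VALUES (@CustomerID);

-- The IDENTITY value is this customer's allocation number.
SELECT SCOPE_IDENTITY() AS AllocationNumber;

COMMIT TRANSACTION;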

We are concerned about the bottleneck: the processing time of each query hitting SQL Server 2008 Enterprise Edition, plus the row lock.

We cannot use multiple servers, because we need to guarantee the integrity of the primary key, so anything that requires replication will not work.

Does anyone know of good solutions that are particularly effective at processing a large number of queries against a single database table?

A bit more info:
This table essentially has only two fields: ID and CustomerID. The promotion gives away a million products free of charge, hence the expected high demand and why using an incremental primary key makes sense to us: once the key reaches one million, no more customers can register. In addition, the products are all different, so handing out the right key matters; for example, the first 100 customers registered receive a higher-value product than the next 100, and so on.
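In simplified form, the table looks something like this (illustrative DDL, not our exact definition):

CREATE TABLE dbo.Product
(
    ID         INT IDENTITY(0,1) NOT NULL PRIMARY KEY,  -- incremental key, seeded from zero as described
    CustomerID INT NOT NULL                              -- the customer who claimed this allocation
);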

1 answer

Firstly, to take the key-generation problem out of the picture, I would generate all the keys in advance. That is only 1M rows, so you don't have to manage a key-generation process at run time. It also means you don't have to worry about accidentally creating too many rows, because once the table is populated you will only ever do UPDATEs, never INSERTs.
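A sketch of that pre-generation step, reusing the dbo.Giveaway name from the UPDATE below (the table definition and the row source are assumptions):

-- Pre-generate all 1,000,000 keys up front.
CREATE TABLE dbo.Giveaway
(
    ID         INT NOT NULL PRIMARY KEY,  -- pre-generated key, 1..1,000,000
    CustomerID INT NULL                   -- NULL until a customer claims it
);

;WITH Numbers AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b       -- any row-number generator works here
)
INSERT INTO dbo.Giveaway (ID, CustomerID)
SELECT N, NULL
FROM Numbers
WHERE N <= 1000000;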

One important question here: are all 1M items identical or not? If they are, then it does not matter which keys are handed out in which order (or whether they have an order at all); as clients send requests, you simply "try" to UPDATE the table like this:

UPDATE TOP (1) dbo.Giveaway        -- you can use OUTPUT to return the key value here
SET CustomerID = @CurrentCustomerID
WHERE CustomerID IS NULL

IF @@ROWCOUNT = 0   -- no free items left
    PRINT 'Bad luck'
ELSE
    PRINT 'Winner'
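As hinted in the comment above, OUTPUT can return the claimed key atomically in the same statement; a sketch of that variant (same assumed table):

DECLARE @Won TABLE (ID INT);

UPDATE TOP (1) dbo.Giveaway
SET CustomerID = @CurrentCustomerID
OUTPUT inserted.ID INTO @Won           -- capture the key that was just claimed
WHERE CustomerID IS NULL;

IF EXISTS (SELECT 1 FROM @Won)
    SELECT ID AS WonItemID FROM @Won;  -- winner: return the claimed key
ELSE
    PRINT 'Bad luck';                  -- no free items left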

If, on the other hand, the 1M items differ from each other, you need a different solution, e.g. item 1 is X, items 2-10 are Y, items 11-50 are Z, and so on. In that case it matters that customers are assigned keys in the order in which their requests arrive, so you should probably look at a queuing system, possibly using Service Broker. Each client adds a request to the queue, a stored procedure processes the requests one at a time, assigns each one the next free key, and then returns the details of what the customer won.
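A sketch of that queued approach, using a plain request table instead of Service Broker to keep the example short (all object names are illustrative assumptions):

-- Requests arrive here; RequestID records arrival order.
CREATE TABLE dbo.GiveawayRequest
(
    RequestID  INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerID INT NOT NULL,
    WonItemID  INT NULL                 -- filled in once the request is processed
);
GO

CREATE PROCEDURE dbo.ProcessNextRequest
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Take the oldest unprocessed request (first come, first served).
    DECLARE @RequestID INT, @CustomerID INT;
    SELECT TOP (1) @RequestID = RequestID, @CustomerID = CustomerID
    FROM dbo.GiveawayRequest WITH (UPDLOCK, READPAST)
    WHERE WonItemID IS NULL
    ORDER BY RequestID;

    IF @RequestID IS NOT NULL
    BEGIN
        DECLARE @Won TABLE (ID INT);

        -- Hand out the lowest free key, so earlier requests receive the
        -- higher-value items. (The "no items left" case is not handled here.)
        ;WITH NextItem AS
        (
            SELECT TOP (1) ID, CustomerID
            FROM dbo.Giveaway
            WHERE CustomerID IS NULL
            ORDER BY ID
        )
        UPDATE NextItem
        SET CustomerID = @CustomerID
        OUTPUT inserted.ID INTO @Won;

        UPDATE dbo.GiveawayRequest
        SET WonItemID = (SELECT ID FROM @Won)
        WHERE RequestID = @RequestID;
    END

    COMMIT TRANSACTION;
END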


Source: https://habr.com/ru/post/911827/

