DBMS data balancing

Our internal system is built on SQL Server 2008 with a 40-table 6NF schema. Most tables have foreign keys to 3 other tables, a few to as many as 7. The system will ultimately support 100 employees working with 10 thousand clients and store 100,000,000 transaction records; access at prime time should peak at around 1,000 rows per second.

Is there any reason to believe that this depth of relational interconnection could overload a system built on modern hardware with sufficient RAM? I am trying to assess whether we need to adjust our design or the direction/goals of the project before we reach the final stage of development (in a couple of months).

+3
1 answer

In SQL Server terms, you are describing a small database: 100 million rows, even at a few hundred bytes each, come to only tens of gigabytes. With the right design, SQL Server can handle terabytes of data.

That does not guarantee that your current design will perform well. There are many ways to write poorly performing T-SQL, and many ways to get the database design wrong.

If I were you, I would load the tables with twice as much test data as you expect in production and then start testing your code. Stress testing is also a good idea. It is much easier to fix database performance problems before they go into production. Far, far easier!
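A minimal sketch of what that bulk load could look like in T-SQL, assuming a hypothetical dbo.TestTransactions table (your real 6NF schema will obviously differ):

```sql
-- Sketch only: bulk-load roughly 2x the expected production volume into a
-- placeholder transactions table. All object and column names here
-- (dbo.TestTransactions, ClientId, TxnDate, Amount) are made up for this
-- example; substitute your real schema.
SET NOCOUNT ON;

IF OBJECT_ID('dbo.TestTransactions', 'U') IS NULL
    CREATE TABLE dbo.TestTransactions (
        TxnId    bigint IDENTITY(1, 1) PRIMARY KEY,
        ClientId int            NOT NULL,
        TxnDate  datetime       NOT NULL,
        Amount   decimal(10, 2) NOT NULL
    );

-- Each batch inserts 1,000,000 synthetic rows, using a cross join of a
-- system catalog view as a cheap row source (works on SQL Server 2008).
;WITH Numbers AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
INSERT INTO dbo.TestTransactions (ClientId, TxnDate, Amount)
SELECT (n % 10000) + 1,                                        -- spread across ~10k clients
       DATEADD(SECOND, CAST(n % 31536000 AS int), '20100101'), -- timestamps within one year
       CAST(n % 100000 AS decimal(10, 2)) / 100                -- synthetic amounts
FROM Numbers;
GO 200  -- in SSMS/sqlcmd: repeat the batch 200 times for ~200 million rows
```

Running the inserts in batches like this keeps the transaction log manageable, and you can adjust the modulo expressions so that the distribution of clients and dates roughly mimics what you expect in production.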

+3

Source: https://habr.com/ru/post/1716151/

