How to denormalize a highly normalized database system?

I want to introduce database denormalization into a highly normalized system.

I have a large set of databases that have evolved over a decade and are under increasing load, so I'm looking to improve performance and possibly reduce the complexity of some queries.

It is not uncommon for a stored procedure to need 10 joins to perform a given task. I was told that anything over 6 stinks.

Should I keep the table structure as it is and provide some materialized views or denormalized cache tables?
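For example, one direction I'm considering is a denormalized summary table that a scheduled job rebuilds. The table and column names below are made up just to illustrate the idea:

    -- Hypothetical cache table that flattens what is currently a many-join query
    CREATE TABLE dbo.OrderSummaryCache (
        OrderId      int           NOT NULL PRIMARY KEY,
        CustomerName nvarchar(200) NOT NULL,
        LineCount    int           NOT NULL,
        TotalValue   money         NOT NULL,
        RefreshedAt  datetime      NOT NULL
    );

    -- Rebuilt periodically, e.g. from a SQL Agent job
    CREATE PROCEDURE dbo.RefreshOrderSummaryCache
    AS
    BEGIN
        SET NOCOUNT ON;
        TRUNCATE TABLE dbo.OrderSummaryCache;
        INSERT INTO dbo.OrderSummaryCache
            (OrderId, CustomerName, LineCount, TotalValue, RefreshedAt)
        SELECT o.OrderId, c.Name, COUNT(*), SUM(ol.Quantity * ol.UnitPrice), GETDATE()
        FROM dbo.Orders AS o
        JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
        JOIN dbo.OrderLines AS ol ON ol.OrderId = o.OrderId
        GROUP BY o.OrderId, c.Name;
    END;

Is something like that a reasonable direction, or should I be redesigning the underlying schema instead?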

Any best-practice tips or a push in the right direction would help.

Thanks.

+3
3 answers

You don't say what the problem is. Is it performance? If so, on which tables?

Is it really the joins that are causing the problem? Or is it the stored procedures? You don't know (or at least you don't say).

Best practice: find out where your bottlenecks are first, before trying to solve a problem you have not yet diagnosed.
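For example, on SQL Server (an assumption here; adjust for your engine) the plan cache will tell you which statements are actually expensive before you touch the schema:

    -- Top statements by total CPU: a rough first pass at finding hot spots
    SELECT TOP (20)
        qs.execution_count,
        qs.total_worker_time / qs.execution_count AS avg_cpu_us,
        qs.total_logical_reads / qs.execution_count AS avg_reads,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;

If the hot statements turn out not to be the 10-join queries, denormalizing won't help you.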


Supplemental: in my experience, the joins themselves are rarely the culprit; procedural, row-by-row code inside the stored procedures usually is. A task that could be a single set-based statement, such as

    update t set c = c + 1 where id = n;

often shows up instead as a cursor-based read-modify-write along the lines of

    declare cursor for "select c from t where id = n" for update;
    select c into @c;
    set @c = @c + 1;
    update t set c = @c where current of cursor;

which does the same work with extra round trips and longer-held locks. Profile the procedures before you blame the schema.

+11



If you do decide to denormalize, I would not just add new "cache tables"; that is really a patch over the problem. I would go further and formulate a complete denormalization plan with a new, optimized schema.
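To make that concrete: denormalizing the schema itself usually means adding redundant columns to the base tables and keeping them consistent, rather than bolting on side tables. A rough sketch, with invented names:

    -- Store the customer's name redundantly on Orders to remove one join
    ALTER TABLE dbo.Orders ADD CustomerName nvarchar(200) NULL;

    UPDATE o
    SET o.CustomerName = c.Name
    FROM dbo.Orders AS o
    JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId;

    -- Keep the copy in sync when a customer is renamed
    CREATE TRIGGER dbo.trg_Customers_SyncName
    ON dbo.Customers
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        IF UPDATE(Name)
            UPDATE o
            SET o.CustomerName = i.Name
            FROM dbo.Orders AS o
            JOIN inserted AS i ON i.CustomerId = o.CustomerId;
    END;

Every redundant column like this needs a rule for who maintains it, which is why it belongs in a schema-wide plan rather than in ad hoc patches.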

0

I agree with Elijah. Make sure you benchmark everything you are going to change, before and after.
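For example, on SQL Server you can capture before-and-after numbers for any statement you are about to change (the procedure name and parameter below are placeholders):

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- run the old version, note logical reads and CPU time, then repeat with the new one
    EXEC dbo.SomeBigProcedure @CustomerId = 42;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;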

Also, depending on your installation, indexed views may be an option.
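For instance, on SQL Server an expensive join-plus-aggregate can be materialized as an indexed view; the engine then maintains the stored rows as the base tables change. Table names here are invented:

    -- Schema binding and COUNT_BIG(*) are required for indexed views
    CREATE VIEW dbo.v_CustomerOrderTotals
    WITH SCHEMABINDING
    AS
    SELECT o.CustomerId,
           COUNT_BIG(*) AS OrderCount,
           SUM(ISNULL(ol.Quantity * ol.UnitPrice, 0)) AS TotalValue
    FROM dbo.Orders AS o
    JOIN dbo.OrderLines AS ol ON ol.OrderId = o.OrderId
    GROUP BY o.CustomerId;
    GO

    -- The unique clustered index is what actually materializes the view
    CREATE UNIQUE CLUSTERED INDEX IX_v_CustomerOrderTotals
        ON dbo.v_CustomerOrderTotals (CustomerId);

That gives you much of what a hand-maintained cache table would, without writing the sync code yourself.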

0

Source: https://habr.com/ru/post/1707188/

