In general, the more complex your data model, the greater the performance impact you will see. However, if your database is not very large, your hardware is not too limited, and your queries are not overly complex, you probably won't mind the cost of enforcing relationships in your database. This is obviously a subjective statement, but "acceptable performance" is a subjective concept that varies from project to project.
The heart of your colleague’s argument is correct, and here are a few reasons why:
- Each time you write a new record containing a foreign or primary key, the database must verify that none of the key constraints are violated. Key columns are also indexed, so those indexes must be updated as records are added.
- Each time you delete a record that contains a foreign key or is referenced by one, constraints are checked, and the deletion may cascade to related tables. Indexes must also be updated when records are deleted.
- CRUD operations of all kinds slow down as more tables are joined in a query: the more tables, the more records to combine and the slower the execution.
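To make the first two points concrete, here is a minimal sketch using Python's built-in sqlite3 module (the tables and names are illustrative, not taken from the answer). It shows the database doing exactly the work described above: verifying a foreign key on insert and cascading a delete to the child table.

```python
import sqlite3

# Autocommit mode so the PRAGMA takes effect immediately
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default

conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE books (
        id        INTEGER PRIMARY KEY,
        title     TEXT,
        author_id INTEGER REFERENCES authors(id) ON DELETE CASCADE
    )
""")

conn.execute("INSERT INTO authors VALUES (1, 'Alice')")
conn.execute("INSERT INTO books VALUES (10, 'First Book', 1)")  # parent exists: accepted

# The write is checked against the FK constraint: a missing parent is rejected
rejected = False
try:
    conn.execute("INSERT INTO books VALUES (11, 'Orphan', 99)")
except sqlite3.IntegrityError:
    rejected = True

# Deleting the parent cascades to the child row
conn.execute("DELETE FROM authors WHERE id = 1")
books_left = conn.execute("SELECT COUNT(*) FROM books").fetchone()[0]
print(rejected, books_left)  # True 0
```

This per-write checking and index maintenance is the overhead your colleague is talking about; the rest of the answer is about why it rarely matters.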
That said, here is why those arguments rarely matter in practice:
- Indexing dramatically reduces query execution time, especially when done well. It is important to index tables to match the structure of the queries that will run against them.
- Unless your database hardware is truly bare-bones, the operations needed to enforce data integrity and relationship constraints will almost certainly run faster on the back end than on the front end. This is especially true when the alternative is performing constraint checks in the client application rather than on the server, which adds round trips.
- Client-side data integrity checks are far more error prone than database constraints. Yes, if your code is perfect it will work just as well, but RDBMS software is designed for exactly this kind of thing, and it is trivial to enable there.
- Client-side data integrity checks can lead to data synchronization problems. Think of two people in different places trying to modify the same record at the same time. That said, weaker concurrency guarantees may be sufficient if your main concern is raw write speed.
It all depends on your RDBMS and project requirements, but these are good rules of thumb. In general, I would say that unless your database is so large that enforced relationships become prohibitively slow, or your model is so simple that relationships are meaningless (in which case, why are you using an RDBMS?), it is better to enforce data integrity and relationship constraints.