Does choosing the right data type in a database affect performance?

If so, why? Is tinyint, for example, faster to search than int?

And what are the practical performance differences?

+4
7 answers

Depending on the data types involved, yes, it really does matter.

int vs. tinyint will not make a noticeable difference in speed, but it does affect the size of the data. Assuming tinyint is 1 byte and int is 4, you save 3 bytes per row. After a while, that adds up.
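A quick back-of-the-envelope sketch of that saving (sizes assumed as in this answer: 1-byte tinyint, 4-byte int; the row count is an arbitrary illustration):

```python
# Per-row saving of TINYINT (1 byte) over INT (4 bytes), summed over many rows.
TINYINT_SIZE = 1  # bytes (assumed, typical for MySQL TINYINT)
INT_SIZE = 4      # bytes (assumed, typical 32-bit INT)

rows = 10_000_000
saved_bytes = (INT_SIZE - TINYINT_SIZE) * rows
print(f"{saved_bytes / 1024**2:.1f} MiB saved over {rows:,} rows")
# 3 bytes/row * 10M rows ≈ 28.6 MiB
```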

Now, if it were int versus varchar, there would be a real drop, since operations such as sorting are much faster on integer values than on string values.
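Beyond raw speed, storing numbers in string columns also changes the sort semantics; a small Python illustration of the comparison behavior (not database code):

```python
# String sorting is lexicographic, so "10" comes before "9".
nums = [2, 10, 9, 33]
strs = [str(n) for n in nums]

print(sorted(nums))  # [2, 9, 10, 33]
print(sorted(strs))  # ['10', '2', '33', '9']
```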

If the types are comparable and you are not pushing the database very hard, go with the one that is simpler and more robust.

+6

Theoretically, yes, tinyint is faster than int. But good database design and proper indexing have a much more significant impact on performance, so I always use int to keep the design simple.

+3

I would venture that in this case there is no practical performance difference. Storage is the more significant factor, but even then it hardly matters. The difference is maybe 2 bytes? After 500,000 rows, you have barely used an extra megabyte. I hope you are not strapped for megabytes if you are working with that much data.
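The arithmetic behind this answer, spelled out (the 2-byte difference is taken as stated):

```python
# 2 extra bytes per row over 500,000 rows: roughly one megabyte.
diff_bytes = 2
rows = 500_000
total_mib = diff_bytes * rows / 1024**2
print(f"{total_mib:.2f} MiB")  # ≈ 0.95 MiB
```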

+3

Choosing the right data type can improve performance. In many cases the practical difference may not be very large, but a poor choice can definitely hurt. Imagine a char(1000) field used instead of a varchar field when you only ever store a string of a few characters. That is a somewhat extreme example, but you would definitely be much better off with varchar. You will probably never notice a performance difference between int and tinyint. Your overall database design (normalized tables, good indexes, etc.) will have a much greater effect.
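A rough sketch of the wasted space in that char(1000) example, assuming char pads every value to its declared length while varchar stores roughly the actual length plus a one-byte length prefix (exact overheads vary by engine):

```python
# Compare storage for a few short strings under the two assumptions above.
values = ["ok", "hello", "a few chars"]

char_cost = 1000 * len(values)                  # CHAR(1000): fully padded
varchar_cost = sum(len(v) + 1 for v in values)  # VARCHAR: length + 1-byte prefix

print(char_cost, varchar_cost)  # 3000 vs. 21
```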

+2

Of course, choosing the right data types always helps speed things up.

Take a look at this article; it should help you: http://www.peachpit.com/articles/article.aspx?p=30885&seqNum=7

+1

The performance impact depends on the size of your model and how it is used. Although storage space is almost a non-issue these days, you may still need to think about performance:

Database engines tend to store data in fixed-size pages. By default, SQL Server uses 8 KB pages, Oracle 2 KB, and MySQL 16 KB. Not very big for any of these systems. Whenever you operate on a piece of data (a field in a row), the entire page it lives on is fetched from the database and placed in memory. When your data is smaller (tinyint vs. int), you can fit more individual rows and data elements per page, so the likelihood that you need additional pages is reduced and overall performance improves. So yes, using the smallest adequate representation of your data will certainly affect performance, because it lets the database move data around efficiently.
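A minimal sketch of the rows-per-page effect, assuming an 8 KB page and ignoring page headers and per-row overhead (real engines add both, so the actual counts are lower):

```python
PAGE_SIZE = 8192  # assumed 8 KB page, as in SQL Server

def rows_per_page(row_bytes: int) -> int:
    # How many rows fit on one page, ignoring header and row overhead.
    return PAGE_SIZE // row_bytes

# Hypothetical row: shrinking three int columns to tinyint saves 9 bytes.
print(rows_per_page(100))  # 81 rows per page
print(rows_per_page(91))   # 90 rows per page -> fewer pages to read per scan
```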

+1

One way it affects performance is by sparing you from converting data to the correct type before you can work with it. This happens when someone uses, say, varchar instead of the datetime data type, and the values then have to be converted whenever you do date math. It can also affect performance by keeping records smaller (so don't define everything at its maximum size), which affects how pages are stored and retrieved in the database.
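A small Python illustration of the date-in-varchar problem: strings that are not strictly zero-padded compare lexicographically, while real date values compare and subtract directly (standard-library datetime only; no database involved):

```python
from datetime import date

# Lexicographic comparison of date strings gives the wrong answer
# when the format is not zero-padded.
print("2021-9-01" > "2021-10-01")  # True, but September is not after October

# Real date values compare correctly and support date math directly.
print(date(2021, 9, 1) < date(2021, 10, 1))         # True
print((date(2021, 10, 1) - date(2021, 9, 1)).days)  # 30
```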

Of course, using the right data type also helps data integrity: you cannot save a date that does not exist into a datetime field, but you can into a varchar field. If you use float instead of int, your values are not limited to integers, and so on. Speaking of float, it is usually a bad choice if you intend to do math on the values, because you get rounding errors; it is not an exact type.
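The float rounding issue is easy to demonstrate; an exact decimal type (Python's Decimal here, or a DECIMAL/NUMERIC column in SQL) avoids it:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# An exact decimal type has no such rounding error.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```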

+1

Source: https://habr.com/ru/post/1306707/
