How significant is the performance penalty of using Int64 / bigint instead of Int32 / int in a C# 4 / T-SQL 2008 application under 32-bit Windows XP?

For my science project, I am developing an application (in C# 4 and T-SQL) designed to handle potentially very large numbers of very simple records and perform simple operations on them (a scientific modeling engine rather than a linear time-series cruncher). I would like to use 64-bit integers as primary keys for greater capacity.

For integration I am going to use Entity Framework with POCO collections on the application side, and stored procedures with table-valued parameters on the T-SQL side. A sketch of such an entity is shown below.
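To make the intent concrete, here is a minimal sketch of what such a POCO entity might look like; the class and property names are hypothetical, and the only 64-bit-specific choice is declaring the key as long (Int64), which maps to BIGINT in SQL Server:

```csharp
// Hypothetical POCO entity for EF 4. The only change needed for 64-bit
// keys is declaring the key property as long (Int64) instead of int;
// on the SQL Server side the column would be BIGINT.
public class ModelRecord
{
    public long Id { get; set; }       // primary key, maps to BIGINT
    public long ParentId { get; set; } // foreign keys must be widened too
    public double Value { get; set; }
}
```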

The database will be hosted on SQL Server 2008 and accessed from several application instances at the same time for distributed processing.

Both SQL Server and the application instances will run on 32-bit Windows XP, sometimes on hardware with no 64-bit support at all.

What penalties will I face for using 64-bit integer types as primary keys?

1 answer

As long as you only read and write these numbers (i.e. no arithmetic, just database queries), the performance hit will be minor. It is like passing 2 ints as a parameter instead of 1.
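As an illustration of that read/write case, here is a sketch of passing a 64-bit key to a stored procedure through ADO.NET; the connection string and procedure name are hypothetical. On the wire the key is simply an 8-byte value instead of a 4-byte one, which is exactly the "2 ints instead of 1" cost:

```csharp
using System.Data;
using System.Data.SqlClient;

static DataTable LoadRecord(string connectionString, long id)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("dbo.GetRecordById", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        // A BIGINT parameter: the only difference from an INT key
        // is the wider SqlDbType and the long-typed value.
        command.Parameters.Add("@Id", SqlDbType.BigInt).Value = id;

        var table = new DataTable();
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            table.Load(reader);
        }
        return table;
    }
}
```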

Arithmetic is where it gets worse. Adding two 64-bit integers on a 32-bit CPU takes roughly 3 instructions instead of 1 for ints. Multiplication and division are a LOT slower still: multiplying two 64-bit numbers on a 32-bit machine has to be assembled from several 32-bit multiplies and adds, so expect roughly 3 times the cost or more.
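If you want to see this effect on your own 32-bit target, a rough micro-benchmark sketch like the one below can compare int and long arithmetic in the same loop; the iteration count and loop body are illustrative assumptions, not a rigorous benchmark, and results will vary by CPU and JIT:

```csharp
using System;
using System.Diagnostics;

static class Int64Penalty
{
    static void Main()
    {
        const int iterations = 100000000;

        // 32-bit arithmetic baseline.
        int i32 = 1;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) { i32 = i32 * 3 + 1; }
        sw.Stop();
        Console.WriteLine("int:  {0} ms (result {1})", sw.ElapsedMilliseconds, i32);

        // Same work with 64-bit operands; on a 32-bit process each
        // multiply/add is composed from multiple 32-bit operations.
        long i64 = 1;
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) { i64 = i64 * 3 + 1; }
        sw.Stop();
        Console.WriteLine("long: {0} ms (result {1})", sw.ElapsedMilliseconds, i64);
    }
}
```

Printing the accumulated results keeps the JIT from eliminating the loops as dead code.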

But how much arithmetic are you really going to do on your primary keys? Probably none.


Source: https://habr.com/ru/post/1745778/

