Efficiency of passing size_t as an argument

Since size_t can be 32-bit or 64-bit depending on the system, would it be better to always pass size_t to a function by reference or const reference, so that the argument is always 4 bytes? (If it is 8 bytes, a copy would otherwise have to be made.) The many open-source codebases I have looked at do not do this; however, when their compiler supports 64-bit integers, those 64-bit integers are always passed by reference. Why don't they do the same for size_t? I wonder what your opinion is.

+3
5 answers

It is customary to pass all primitive types by value, because copying them usually takes only a single assembly instruction. Thus, passing size_t by value is preferable to passing size_t by reference.

+13

In most implementations, size_t, object pointers, and object references are exactly the same size.

Think of it this way: size_t can hold the size of any object, and a char* can address any byte of any object, so it follows that size_t and char* should be close in size. Thus, your idea makes no sense in most implementations.

+6

No. Whether the platform is 32-bit or 64-bit, size_t fits in a single machine register and is passed in one.

Passing it by reference would gain nothing.

+3

size_t is a primitive type that fits in a CPU register, so copying it costs at most one instruction; taking its address, by contrast, forces the value into memory.

Pass size_t by value (a function that needs a size should simply take a size_t). A 64-bit ABI passes 64-bit integers in 64-bit registers, so a reference saves nothing.

+3

The problem with passing by reference is that the compiler must store the value in memory and pass the address of that stored value as the reference. On a 64-bit architecture, calling conventions allow much more information to be passed in registers (six integer registers) without storing any values in memory, so passing small values by reference forbids that optimization.

There is more to this question; you can start with:

http://cpp-next.com/archive/2009/08/want-speed-pass-by-value/

http://en.wikipedia.org/wiki/X86_calling_conventions#x86-64_Calling_Conventions

+1

Source: https://habr.com/ru/post/1789931/
