Why is the universal Int, which switches between 32 and 64 bits, better than explicitly using Int32 and Int64?

The new Apple Swift documentation says:

Int

In most cases, you do not need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform's native word size:

On a 32-bit platform, Int is the same size as Int32. On a 64-bit platform, Int is the same size as Int64. Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
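You can check this claim directly; a quick sketch (using the current MemoryLayout API, which reports sizes in bytes):

// Int's width follows the platform's native word size.
print(MemoryLayout<Int>.size)     // 8 on a 64-bit platform, 4 on 32-bit
print(Int.min, Int.max)           // range depends on the platform
print(Int32.min, Int32.max)       // always -2147483648 ... 2147483647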

I understand that when calling APIs that are defined in terms of Int, I have to use Int.

But in my own code, I have always been strict about using exactly sized types in C, via the stdint.h header. I thought I would carry that habit over to reduce ambiguity. The people at Apple are pretty smart, though, so I wonder whether I'm missing something, since that is not what they recommend.
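For comparison, here is roughly how that stdint.h habit maps onto Swift (my own mapping, for illustration):

let a: Int8  = 127                          // int8_t
let b: Int16 = 32_767                       // int16_t
let c: Int32 = 2_147_483_647                // int32_t
let d: Int64 = 9_223_372_036_854_775_807    // int64_t
let e: Int   = 42                           // word-sized, like intptr_t / NSInteger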

3 answers


The advantage of using the generic type is portability. The same piece of code will compile and execute regardless of the platform's word size. In some cases, it can also be faster.
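A small sketch of that portability (my example): this compiles and runs unchanged on 32- and 64-bit platforms, and the counter is always the native, fastest width:

func sum(upTo n: Int) -> Int {
    var total = 0                 // total is Int: native width everywhere
    for i in 1...n {
        total += i
    }
    return total
}
print(sum(upTo: 10))              // 55 on any platform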

The advantage of using specific types is precision. There is no room for ambiguity, and the exact capabilities of the type are known in advance.
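A sketch of where that precision matters, with a made-up binary header layout (you would still read and write the fields individually, since Swift does not guarantee struct memory layout):

struct FileHeader {
    var magic: UInt32        // exactly 4 bytes in the format, on every platform
    var version: UInt16      // exactly 2 bytes
    var recordCount: Int64   // exactly 8 bytes
}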

There is no hard and fast answer. Whichever side of the question you commit to on principle, you will sooner or later make an exception.


Use Int by default. The standard library and system APIs traffic in the word-size Int, so sticking with it spares you conversions at every call.
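A minimal illustration: the standard library hands you Int everywhere, so an Int32 variable forces a conversion at every boundary:

let items = ["a", "b", "c"]
let n: Int = items.count             // count is already an Int
let m: Int32 = Int32(items.count)    // conversion needed here...
let last = items[Int(m) - 1]         // ...and again on the way back
print(last)                          // "c"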

Reach for the explicitly sized types only when you genuinely need a fixed width (for example, for a binary file format or a wire protocol).

A common exception is Int64 for values persisted through CoreData or CloudKit, where the stored width should not depend on the device; Apple itself uses fixed widths there.

Also note that Swift is not C or C++, where int is a built-in primitive: in Swift, the integer types are ordinary structs.

For example, here is how Int16 is declared in Swift:

struct Int16 : SignedInteger {
    var value: Builtin.Int16              // underlying compiler builtin storage
    init()                                // zero-initializing constructor
    init(_ v: Builtin.Int16)
    init(_ value: Int16)
    static func convertFromIntegerLiteral(value: Int16) -> Int16  // literal support
    typealias ArrayBoundType = Int16
    func getArrayBoundValue() -> Int16
    static var max: Int16 { get }         // 32767
    static var min: Int16 { get }         // -32768
}
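Those members work like any other struct's; a quick usage sketch:

let x = Int16(1_000)            // init(_:)
let y: Int16 = 42               // integer-literal conversion
print(Int16.min, Int16.max)     // -32768 32767
print(x + y)                    // 1042, still an Int16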

Here is another way to look at it. Suppose you hard-coded a specific size (16-bit, 32-bit, or 64-bit) regardless of the platform. Would that be a good idea? Consider an example:

Say you declare a variable int64 ret that only ever holds 0 or a small code. On a 64-bit machine that costs nothing, but on a 32-bit machine every operation on it must be emulated with pairs of 32-bit instructions, and on a 16-bit machine it is worse still.

If you chose the naked Int instead, the compiler would make your variable 32 bits wide on 32-bit devices, 64 bits wide on 64-bit machines, 16 bits wide on 16-bit machines, and so on. It always lines up with the rest of the system (the libraries and frameworks, whatever you work with). So it is not only a better fit for the system you are on, it also ports better to other platforms.
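In Swift terms, the trade-off might look like this (a sketch; the function names are made up):

// Forcing 64 bits for a value that never leaves a tiny range buys
// nothing, and on a 32-bit CPU its arithmetic must be emulated:
func statusCodeFixed() -> Int64 {
    return 0
}

// With Int, the same value is one native register wide everywhere:
func statusCode() -> Int {
    return 0
}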

But my example is only valid for return-style variables, i.e. variables that take a very limited range of values...


Source: https://habr.com/ru/post/1543635/

