How are primitive data types implemented in C#?

How do primitive types like float (System.Single), int (System.Int32), and the rest work? I have never understood how primitive structs are created, and I wonder whether I can make my own numeric type.

+49
c# primitive-types
Nov 20 '17 at 8:09
2 answers

Assuming we're talking about a C# compiler that targets the Common Language Infrastructure (CLI), as almost all do, it essentially uses the primitive types exposed by the CLI.

There are three levels of support:

  • Genuine primitives, which have their own representations and dedicated instructions in IL
  • Numeric types that the C# compiler has special knowledge of, but which are not CLI primitives - basically, System.Decimal. This type is also part of the Common Type System (CTS), which means that if you create a const decimal in C#, you can still consume it from VB, for example. But there is no direct IL support for it.
  • Other numeric types, such as BigInteger - you can write your own.

The middle ground of the second bullet allows C# to have decimal literals and decimal constants, neither of which is possible for the third bullet. For example, BigInteger has no language support, so you cannot write:

 // This isn't valid
 BigInteger bigInteger = 123456789012345678901234567890;

Instead, you have to parse a string representation. Similarly, you cannot have a const BigInteger.
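To make the three levels concrete, here is a minimal sketch (assuming a project that references System.Numerics): int is a genuine primitive, decimal gets literal and const support from the compiler despite not being a CLI primitive, and BigInteger has to be built at run time, for example by parsing a string.

 using System;
 using System.Numerics;

 class PrimitiveLevels
 {
     // Level 1: a genuine CLI primitive - literals, constants, dedicated IL instructions.
     const int MaxRetries = 3;

     // Level 2: decimal is not a CLI primitive, but the C# compiler knows about it,
     // so literals (the m suffix) and const declarations still work.
     const decimal UnitPrice = 19.99m;

     static void Main()
     {
         // Level 3: BigInteger is an ordinary struct; there is no literal syntax
         // and no const support, so large values are constructed at run time.
         BigInteger big = BigInteger.Parse("123456789012345678901234567890");

         Console.WriteLine(MaxRetries * UnitPrice);
         Console.WriteLine(big + 1);
     }
 }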

(Theoretically, it would be possible to have a type with support in C# but not in the CTS; I am not aware of any such types.)

+67
Nov 20 '17 at 8:12

Primitive types such as int and float are supported directly by the processor, and the .NET platform and its languages have built-in support for them as well. As a result, you cannot create your own language-level primitives.
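What you can do is define your own numeric type as an ordinary struct with operator overloads. It will not be a primitive (no literal syntax, no IL instructions), but it can be used with normal arithmetic syntax. A minimal sketch, where the Fraction type and its members are hypothetical and purely illustrative:

 using System;

 // A user-defined numeric type: a simple fraction.
 // It is a normal struct, not a primitive - no literals, no IL support.
 public readonly struct Fraction
 {
     public int Numerator { get; }
     public int Denominator { get; }

     public Fraction(int numerator, int denominator)
     {
         if (denominator == 0) throw new ArgumentException("Denominator cannot be zero.");
         Numerator = numerator;
         Denominator = denominator;
     }

     // Operator overloads make the type usable with ordinary arithmetic syntax.
     public static Fraction operator +(Fraction a, Fraction b) =>
         new Fraction(a.Numerator * b.Denominator + b.Numerator * a.Denominator,
                      a.Denominator * b.Denominator);

     public static implicit operator Fraction(int value) => new Fraction(value, 1);

     public override string ToString() => $"{Numerator}/{Denominator}";
 }

 class Demo
 {
     static void Main()
     {
         Fraction half = new Fraction(1, 2);
         Fraction sum = half + 1;   // uses the implicit conversion and operator +
         Console.WriteLine(sum);    // prints 3/2
     }
 }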

+7
Nov 21 '17 at 8:47


