Why are unsigned variables not used more often?

It seems that unsigned integers would be useful for method parameters and class members that should never be negative, but I don't see many people writing code this way. I tried it myself and found the need to cast to and from int a little annoying ...

Anyway, what do you think of this?

Duplicate

Why is Array Length an Int and not a UInt?

+5
c#
Jan 29 '09 at 1:51
12 answers

Using the standard signed types avoids casting to and from the unsigned versions. Within your own code you can probably keep everything consistent, but many other inputs and third-party libraries will not, so the constant casting will simply infuriate people!
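
A small sketch of that friction (variable names are illustrative, but `List<T>` really does speak int throughout):

    using System;
    using System.Collections.Generic;

    class CastFriction
    {
        static void Main()
        {
            uint desiredCapacity = 5;

            // Most BCL APIs take and return int, so an unsigned
            // variable forces a cast at every boundary, in and out.
            var items = new List<string>((int)desiredCapacity);
            items.Add("hello");

            uint count = (uint)items.Count; // and another cast back
            Console.WriteLine(count);
        }
    }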

+4
Jan 29 '09 at 1:55

The idea that unsigned will save you from problems with methods/members that should never deal with negative values is somewhat mistaken:

  • now you have to check for huge values ("wraparound") in the error case,
    whereas with signed you could simply have checked for <= 0
  • use just one signed int somewhere in your methods and you are back at the "signed" square one :)

Use unsigned when dealing with bits. But don't use bitfields these days anyway, unless you have so many of them that they fill several megabytes, or at least the small memory of an embedded device.
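
A minimal C# sketch of the first point (names are illustrative):

    using System;

    class WraparoundCheck
    {
        static void Main()
        {
            uint processed = 3;
            uint expected = 5;

            // With int this would be -2 and trivially caught by `< 0`.
            // With uint it silently wraps around to 4294967294.
            uint remaining = unchecked(processed - expected);
            Console.WriteLine(remaining); // 4294967294

            // The sanity check now has to hunt for implausibly large
            // values instead of doing a simple sign test.
            if (remaining > expected)
                Console.WriteLine("wraparound detected");
        }
    }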

+6
Jan 29 '09 at 2:00

I can't remember exactly how C# performs its implicit conversions, but in C++ widening conversions are done implicitly, and unsigned is considered "wider" than signed. This leads to unexpected problems:

    {
        int s = 5;
        unsigned int u = 25;
        // s > u is false
    }
    {
        int s = -1;
        unsigned int u = 25;
        // s > u is TRUE! Error, error!
    }

In the example above, s is converted to unsigned, so its value becomes something like 4294967295. This has bitten me before; I often have methods return -1 to mean "no match" or something like that, and with the implicit conversion the comparison simply doesn't do what I expect.
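
For what it's worth, C# handles this particular case more gracefully: when an int is compared with a uint, both operands are promoted to long first, so the comparison behaves as expected. A small sketch:

    using System;

    class SignedUnsignedCompare
    {
        static void Main()
        {
            int s = -1;
            uint u = 25;

            // Both operands are promoted to long before comparing,
            // so this prints False, unlike the C++ example above.
            Console.WriteLine(s > u);
        }
    }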

Over time, programmers have almost universally learned to use signed variables except in exceptional cases. Compilers these days also generate warnings for signed/unsigned comparisons, which is very helpful.

+3
Jan 29 '09 at 2:01

One reason is that public methods or properties exposing unsigned types are not CLS compliant.

You will almost always see this attribute applied to .NET assemblies, since the various project wizards enable it by default:

[assembly: CLSCompliant(true)]

So basically, if your assembly includes the attribute above and you try to use unsigned types in your public interface with the outside world, the compiler will complain (a warning by default, an error if warnings are treated as errors).
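
A minimal sketch of what that looks like (`Widget` is an illustrative name; the warning number is the C# compiler's):

    using System;

    [assembly: CLSCompliant(true)]

    public class Widget
    {
        // warning CS3001: Argument type 'uint' is not CLS-compliant
        public void Resize(uint newSize) { }
    }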

+3
Jan 29 '09 at 2:04

No real need. Declaring something as unsigned just to say the numbers must be positive is a half-hearted attempt at validation.

In fact, it would be better to have a single number class that could represent all numbers.

You need some other technique to validate numbers anyway, because the constraint is usually not just "positive numbers", it is a range of values. It is generally best to use the least restricted representation for numbers, and then if you want to change the rules for valid values, you change JUST the validation rules, NOT the types.
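
A minimal sketch of that approach, with an illustrative range rule that can change without touching any signatures:

    using System;

    public class Order
    {
        private int quantity;

        public int Quantity
        {
            get { return quantity; }
            set
            {
                // The real constraint is a range, not merely
                // "non-negative"; it lives here, not in the type.
                if (value < 1 || value > 1000)
                    throw new ArgumentOutOfRangeException(
                        "value", "Quantity must be between 1 and 1000.");
                quantity = value;
            }
        }
    }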

+2
Jan 29 '09 at 2:07

For simplicity. Modern software involves enough casts and conversions as it is. There is an advantage to sticking to as few data types as possible, to reduce complexity and ambiguity about the right interfaces.

+2
Jan 29 '09 at 2:10

Unsigned data types are carried over from the old days, when memory was at a premium. So nowadays we don't really need them for that purpose. Combine this with the casting and they are a bit cumbersome.

+1
Jan 29 '09 at 1:57

It is impractical to use an unsigned integer because if you assign a negative value to it, all hell will break loose. However, if you insist on doing it "right", try using Spec#: declare the variable as an int (where you would otherwise use uint) and attach an invariant to it saying that it can never be negative.
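
The same idea can also be expressed with .NET Code Contracts (System.Diagnostics.Contracts), a related mechanism rather than Spec# itself; a rough sketch with illustrative names:

    using System.Diagnostics.Contracts;

    public class Counter
    {
        private int count; // int, not uint, as suggested above

        [ContractInvariantMethod]
        private void ObjectInvariant()
        {
            // Checked by the contracts tooling wherever count changes.
            Contract.Invariant(count >= 0);
        }

        public void Decrement()
        {
            Contract.Requires(count > 0);
            count--;
        }
    }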

+1
Jan 29 '09 at 2:18

You are right that it would probably be better to use uint for things that should never be negative. In practice, though, there are several reasons not to:

  • int is the "standard" or "default" type; it has a lot of inertia.
  • You have to use annoying/ugly explicit casts everywhere to get back to int, which you will probably need to do a lot because of the first point anyway.
  • When you return the value as an int, what happens if the uint value overflows it?
  • Very often people use an int for things that are never negative anyway, and then use -1 or -99 or something like that for error codes or uninitialized values. That is a bit lazy, perhaps, but it is also something uint is less flexible about (see the sketch below).
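
Illustrating the last point: the -1 sentinel convention only works with signed types, and the BCL itself relies on it:

    using System;

    class SentinelDemo
    {
        static void Main()
        {
            int[] values = { 10, 20, 30 };

            // Array.IndexOf returns -1 for "not found" -- a convention
            // a uint return type could not express.
            int index = Array.IndexOf(values, 99);
            if (index == -1)
                Console.WriteLine("not found");
        }
    }
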
0
Jan 29 '09 at 2:00

The biggest reason is that people are usually too lazy, or in too much of a hurry, to think about where unsigned types fit. Something like a size_t can never be negative, so unsigned is correct there.

Casting from signed to unsigned can be fraught with danger, though, because of the peculiarities of how sign bits are handled by the underlying architecture.
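
A small C# sketch of the danger: the cast reinterprets the two's-complement bit pattern rather than preserving the numeric value:

    using System;

    class SignBitDemo
    {
        static void Main()
        {
            int negative = -1;

            // The bit pattern (all bits set) is carried over
            // unchanged, so the numeric value changes completely.
            uint reinterpreted = unchecked((uint)negative);
            Console.WriteLine(reinterpreted); // 4294967295
        }
    }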

0
Jan 29 '09 at 2:02

You don't need to cast it manually, I don't think.

In any case, it's because for most applications it doesn't matter - the range of either type is large enough for most purposes.
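
The ranges in question, for reference:

    using System;

    class RangeDemo
    {
        static void Main()
        {
            Console.WriteLine(int.MaxValue);  // 2147483647
            Console.WriteLine(uint.MaxValue); // 4294967295
        }
    }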

-1
Jan 29 '09 at 1:54

It's because they are not CLS compliant, which means your code might not work as you expect in other implementations of the .NET Framework, or even in other .NET languages (if they don't support them).

In addition, unsigned types don't interoperate well with other systems if you try to pass them across a boundary - say, a web service to be consumed from Java, or a call into the Win32 API.

See this SO post for reasons as well.

-one
Jan 29 '09 at 2:20


