Why does this integer overflow occur?

I wrapped a DLL method that has an integer out parameter in a web service. In testing, I found that where I expected -1, I got 65535 instead. I realized that the DLL uses 16-bit integers, while I had specified a standard 32-bit .NET integer when declaring the external method in my code. This was quickly fixed by specifying a 16-bit integer, and all is well.

My question is: why did this happen? I could understand overflow occurring if I tried to set a 16-bit integer from a 32-bit one, but I'm not sure why it happens the other way around. Clearly my understanding of this kind of conversion between types is a bit lacking, so any guidance would be greatly appreciated.

+3
2 answers

The 16-bit integer -1 has all 16 bits set. If you place those 16 bits into the lower half of a 32-bit integer, leaving the upper 16 bits zero, the value is 65535. For an explanation of how negative integers are represented internally (two's complement), see this article.

+6

This happened because of how the types were interpreted.

You are not actually pushing 16-bit integers onto the call stack — the values are still 32 bits wide. So the DLL returned exactly:

0x0000FFFF

If you interpret the low 16 bits of that pattern as a signed 16-bit integer (Int16), the value is -1; interpreted as a 32-bit integer, it is 65535.

+2

Source: https://habr.com/ru/post/1703678/

