I wrapped a DLL method that has an integer out parameter in a web service. In testing, I found that where I was expecting -1, I got 65535 instead. I realized that the DLL uses 16-bit integers, while I had specified a standard 32-bit .NET integer when declaring the external method in my code. This was quickly fixed by specifying a 16-bit integer (a sketch of the before-and-after declaration is below), and all is well.
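The declaration looked roughly like this; the DLL name, entry point, and method names below are placeholders for illustration, not the real ones:

```csharp
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Original declaration: a 32-bit out parameter, even though the native
    // function only writes a 16-bit integer. ("legacy.dll" and "GetStatus"
    // are placeholders, not the actual names.)
    [DllImport("legacy.dll", EntryPoint = "GetStatus")]
    public static extern void GetStatusWrong(out int result);

    // Fixed declaration: a 16-bit out parameter that matches the native
    // type, which returns -1 as expected.
    [DllImport("legacy.dll", EntryPoint = "GetStatus")]
    public static extern void GetStatusFixed(out short result);
}
```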
My question is, why did this happen? I could understand overflow occurring if I tried to put a 32-bit value into a 16-bit integer, but I'm not sure why it happens the other way around, when a 16-bit value ends up in a 32-bit integer. It is clear that my understanding of this kind of conversion between types is a bit lacking, so any guidance would be greatly appreciated.
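For reference, here is a minimal managed-only sketch of what I suspect is happening (my own guess; the variable names are purely illustrative): the native side stores -1 as the 16-bit pattern 0xFFFF, and if that pattern lands in the low half of a 32-bit slot without sign extension, the upper bytes stay zero and the value reads as 65535.

```csharp
using System;

class SignExtensionDemo
{
    static void Main()
    {
        // The native side stores -1 in a 16-bit integer: bit pattern 0xFFFF.
        short sixteenBitValue = -1;

        // Dropping those 16 bits into a 32-bit slot whose upper two bytes
        // remain zero yields 0x0000FFFF, which a signed int reads as 65535.
        int withoutSignExtension = (ushort)sixteenBitValue;
        Console.WriteLine(withoutSignExtension);   // prints 65535

        // A normal widening conversion sign-extends instead, preserving -1.
        int withSignExtension = sixteenBitValue;
        Console.WriteLine(withSignExtension);      // prints -1
    }
}
```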