From time to time, especially when making 64-bit builds of some code base, I notice that there are many cases where integer overflows are possible. The most common case is that I am doing something like this:
    // Creates a QPixmap out of some block of data; this function comes from library A
    QPixmap createFromData( const char *data, unsigned int len );

    const std::vector<char> buf = createScreenShot();
    return createFromData( &buf[0], buf.size() ); // <-- warning here in 64-bit builds
The thing is that std::vector::size() nicely returns a size_t (which is 8 bytes in 64-bit builds), but the function takes an unsigned int (which is still only 4 bytes in 64-bit builds). So the compiler rightfully warns.
Where possible, I try to fix the signatures so that the correct types are used in the first place. However, I often run into this problem when combining functions from different libraries which I cannot modify. Unfortunately, I frequently resort to reasoning along the lines of "Okay, nobody will ever take a screenshot that produces more than 4 GB of data, so why bother" and just change the code to:
    return createFromData( &buf[0], static_cast<unsigned int>( buf.size() ) );
This shuts the compiler up, but it feels really evil. So I have been considering some kind of runtime assertion that at least yields a nice error in debug builds, as in:
    assert( buf.size() < std::numeric_limits<unsigned int>::max() );
That is already a bit nicer, but I wonder: how do you deal with this kind of problem, i.e. integer overflows which are "almost" impossible (in practice)? I guess that means they do not happen for you, they do not happen in QA - but they blow up in the face of the customer.
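For what it is worth, the reusable helper I am toying with would look roughly like this. It is just a sketch; checked_cast is my own hypothetical name, not something from Qt or any of the libraries involved:

    // Sketch of a checked narrowing cast: asserts in debug builds if the value
    // does not fit into the target type. checked_cast is a made-up name.
    #include <cassert>
    #include <limits>
    #include <type_traits>

    template <typename To, typename From>
    To checked_cast( From value )
    {
        // Keep the sketch simple: only unsigned-to-unsigned narrowing,
        // so the comparison below cannot trigger signed/unsigned warnings.
        static_assert( std::is_unsigned<To>::value && std::is_unsigned<From>::value,
                       "checked_cast sketch only handles unsigned types" );
        assert( value <= static_cast<From>( std::numeric_limits<To>::max() ) );
        return static_cast<To>( value );
    }

    // Usage at the call site from the example above:
    // return createFromData( &buf[0], checked_cast<unsigned int>( buf.size() ) );

That only covers the unsigned case from my example, and it still silently truncates in release builds, which is part of why I am asking how others handle this.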