Why is an overload function that takes an int value preferable to an unsigned char?

Consider this program:

#include <iostream>
using namespace std;

void f(unsigned char c) { cout << c << endl; }
void f(int c)           { cout << c << endl; }

int main() { f('a'); }

This prints 97, implying that the overload of f() taking an int was selected. I find that strange; wouldn't unsigned char intuitively be a better match for char?

1 answer

wouldn't unsigned char intuitively be a better match for char?

Intuitively, perhaps, but not according to the Standard. Per [conv.prom]p1:

A prvalue of an integer type other than bool, char16_t, char32_t, or wchar_t, whose integer conversion rank is less than the rank of int, can be converted to a prvalue of type int if int can represent all the values of the source type; [...]

Now, all three character types have the same rank, and that rank is always less than the rank of int. This follows from combining [conv.rank]p1.6 and [conv.rank]p1.2:

  • The rank of a signed integer type shall be greater than the rank of any signed integer type with a smaller size.

  • [...]

  • The rank of char shall equal the rank of signed char and unsigned char.

In short, every character type has a lower rank than int, and all of their values can be represented in an int, so the unsigned char overload is no better: calling it would require a conversion from char to unsigned char, whereas the int overload only requires a promotion, and overload resolution prefers a promotion over a conversion.

If you change your overload to take a char instead, it is an exact match, and the "correct" overload (in your eyes) will of course be chosen.


Source: https://habr.com/ru/post/1271016/
