Why does WinAPI use int (32 bits) for type BOOL?

    typedef int BOOL;    // from <windef.h>

Isn't that a waste of memory since int is 32 bits?

Just in case I was wrong, I tried passing a normal bool* to a function that required a BOOL*, and it did not work until I used the typedef int.
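
To show what I mean, a minimal sketch (SomeApi here is a made-up function that takes a BOOL*, not a real API):

    #include <windows.h>
    #include <stdbool.h>

    void SomeApi(BOOL *out);      /* hypothetical prototype, for illustration only */

    void caller(void)
    {
        BOOL b1;
        SomeApi(&b1);             /* fine: the types match */

        bool b2;
        (void)b2;
        /* SomeApi(&b2); */       /* does not compile: bool* is not convertible to BOOL* (i.e. int*) */
    }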

+6
source
5 answers

Whoa, slow down a bit there. First of all, I am sure programmers have been using 4-byte ints for boolean variables since the beginning of x86 programming. (Back then there was no such type as bool.) And I would venture a guess that the same typedef was already in the Windows 3.1 <Windows.h> .

Secondly, you need to understand a bit more about the architecture. On a 32-bit machine, all processor registers are 4 bytes (32 bits) wide. So for most memory accesses it is more efficient to store and retrieve a 4-byte value than a 1-byte one.

If you have four 1-byte boolean variables packed into one 4-byte chunk of memory, three of them are not DWORD-aligned (a DWORD is 4 bytes). That means the CPU/memory controller actually has to do more work to get at the value.

And before you start bashing MS for this "wasteful" typedef, consider this: under the hood, most compilers (most likely) still implement the bool data type as a 4-byte int for the same reasons I just mentioned. Try it in gcc and look at the map file. I bet I'm right.
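
If you want to check what your own toolchain actually does, a quick sizeof probe (assuming a Windows compiler so that <windows.h> is available) prints both sizes:

    #include <stdio.h>
    #include <stdbool.h>
    #include <windows.h>

    int main(void)
    {
        printf("sizeof(BOOL) = %u\n", (unsigned)sizeof(BOOL));  /* 4: it is a typedef for int */
        printf("sizeof(bool) = %u\n", (unsigned)sizeof(bool));  /* whatever your compiler chose */
        return 0;
    }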

+19
source

First, the type used in a system API should be as language-independent as possible, because that API will be used from many different programming languages. For that reason, any "conceptual" types that might either not exist in some languages or be implemented differently in others are out of the question. For example, bool falls into that category. On top of that, it is very useful to keep the number of interface types in a system API to a minimum: anything that can be represented by an int should be represented by an int .

Secondly, your claim about a "waste of memory" makes no practical sense. To actually waste memory, you would have to build an aggregate data type containing an extremely large number of BOOL elements. No such data types exist in the Windows API. And if you created such a wasteful data type in your own program, that would be your mistake, not the API's. Meanwhile, the Windows API in no way forces you to store your booleans as BOOL . You can use bytes or even bits for that purpose. In other words, BOOL is a purely interface type. An object of type BOOL normally does not occupy any long-term storage if you use it correctly.
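
To sketch that last point (the helper names here are made up, not part of any API): you can keep your own booleans packed one bit apiece and widen them to BOOL only at the interface:

    #include <windows.h>

    static unsigned char flags[128];               /* 1024 booleans stored in 128 bytes */

    static void set_flag(unsigned i, BOOL value)   /* hypothetical helper */
    {
        if (value) flags[i / 8] |= (unsigned char)(1u << (i % 8));
        else       flags[i / 8] &= (unsigned char)~(1u << (i % 8));
    }

    static BOOL get_flag(unsigned i)               /* hypothetical helper */
    {
        return (flags[i / 8] >> (i % 8)) & 1;      /* widened to a 4-byte BOOL only here */
    }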

+13
source

The processor is 32-bit and has a special flag that is set whenever it operates on a zero integer, which makes testing 32-bit booleans really, really fast.

Testing a 1-bit or a single-byte boolean value would be many times slower.

If you are tight on memory space, then you might worry about 4-byte BOOL variables.

Most programmers, however, care more about performance, so the default is the faster 32-bit BOOL.

You can probably tell your compiler to optimize for memory usage instead, if this bothers you.
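
To make "fast to test" concrete: branching on a BOOL is a single compare against zero, which on x86 typically compiles down to a test instruction plus a conditional jump. A minimal sketch:

    #include <stdio.h>
    #include <windows.h>

    static void report(BOOL b)
    {
        if (b)               /* one test against zero; any non-zero value counts as TRUE */
            puts("TRUE");
        else
            puts("FALSE");
    }

    int main(void)
    {
        report(42);          /* prints TRUE: non-zero */
        report(FALSE);       /* prints FALSE */
        return 0;
    }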

+3
source

Historically, BOOL was used as an "anything non-zero means TRUE" type. For example, a dialog procedure returned a BOOL , which could carry a lot of information. The signature below is from Microsoft's documentation:

    BOOL CALLBACK DlgProc(HWND hwndDlg, UINT message, WPARAM wParam, LPARAM lParam);

That combination of signature and result value conflated several problems, which is why the modern API instead has

    INT_PTR CALLBACK DialogProc(
        _In_ HWND   hwndDlg,
        _In_ UINT   uMsg,
        _In_ WPARAM wParam,
        _In_ LPARAM lParam
    );

This newfangled declaration has to remain compatible with the old one, which means that INT_PTR and BOOL must be the same size. And so, in 32-bit programming, BOOL is 32 bits.
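
For concreteness, a minimal sketch of a modern dialog procedure (the name MyDialogProc is made up): for most messages the INT_PTR return value only says whether the message was handled, while a message-specific result, when there is one, is passed separately via SetWindowLongPtr with DWLP_MSGRESULT:

    #include <windows.h>

    INT_PTR CALLBACK MyDialogProc(HWND hwndDlg, UINT uMsg, WPARAM wParam, LPARAM lParam)
    {
        UNREFERENCED_PARAMETER(lParam);
        switch (uMsg) {
        case WM_INITDIALOG:
            return TRUE;                     /* "handled": let the system set the focus */
        case WM_COMMAND:
            if (LOWORD(wParam) == IDCANCEL) {
                EndDialog(hwndDlg, IDCANCEL);
                return TRUE;                 /* "handled" */
            }
            return FALSE;
        default:
            return FALSE;                    /* "not handled" */
        }
    }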

In general, since a BOOL can be any value, not just 0 and 1, it is a very bad idea to compare a BOOL against TRUE . And although comparing it against FALSE works, that is generally bad practice too, because it easily gives people the impression that comparing against TRUE would be fine. Besides, it is a completely needless operation.

By the way, the Windows API has more boolean types. In particular, there is VARIANT_BOOL , which is 16 bits, and where logical TRUE is represented by the all-ones bit pattern, i.e. -1 as a signed value…

That is one more reason why it is impractical to compare directly against a boolean FALSE or TRUE.
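
A small sketch of how such comparisons misfire (VARIANT_BOOL and VARIANT_TRUE come in via wtypes.h, which <windows.h> pulls in unless WIN32_LEAN_AND_MEAN is defined):

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        VARIANT_BOOL vb = VARIANT_TRUE;         /* all-ones bit pattern, i.e. -1 */
        printf("%d\n", vb == TRUE);             /* 0: -1 != 1, the comparison misfires */
        printf("%d\n", vb != VARIANT_FALSE);    /* 1: testing against zero is safe */
        return 0;
    }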

+2
source

Most of the answers here seem misinformed. Using 4 bytes for a boolean is no faster than using 1 byte: the x86 architecture can read 1 byte just as fast as 4, and 1 byte uses less memory. One of the biggest performance threats is memory usage: use too much memory and you get more cache misses and a slower program. None of this really matters if you are dealing with only a few (hundreds!) of boolean values, but if you have a lot of them, using less memory is the key to better performance. For a massive array I would even recommend 1 bit per boolean instead of 1 byte, since the extra logic to mask out that bit is insignificant when it saves 87% of the memory. You often see this practice with flag bit fields.
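
As a sketch of the flag-bits practice just mentioned: several booleans share one 32-bit word, and individual flags are set, cleared, and tested with bit masks (the STATE_* names are invented for the example):

    #include <stdio.h>

    enum {
        STATE_VISIBLE = 1u << 0,
        STATE_ENABLED = 1u << 1,
        STATE_FOCUSED = 1u << 2
    };

    int main(void)
    {
        unsigned int state = STATE_VISIBLE | STATE_ENABLED;
        state |= STATE_FOCUSED;                            /* set a flag   */
        state &= ~STATE_VISIBLE;                           /* clear a flag */
        printf("focused: %d\n", (state & STATE_FOCUSED) != 0);
        return 0;
    }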

The answer to the actual question is surely just "legacy reasons", that is, "don't touch what isn't broken". Changing a line of code like this for the sake of a minor optimization could create hundreds of other problems that nobody wants to deal with.

0
source

Source: https://habr.com/ru/post/918613/

