Standard C is a bit odd when it comes to defining a byte, but it does give you a few guarantees:
- A byte is always the size of a `char`.
- `sizeof(char)` always evaluates to 1.
- A byte is at least 8 bits wide (`CHAR_BIT >= 8`).
This definition does not map cleanly onto older platforms where a byte was 6 or 7 bits, but it does mean that `BYTE *` and `char *` are guaranteed to be equivalent.
Multiple zero bytes are required at the end of a Unicode (UTF-16) string because there are valid Unicode characters whose encoding contains a zero (null) byte, so a single zero byte cannot serve as the terminator.
As for making the code easier to read, that is entirely a matter of style. It appears to be written in the style of old C Windows code, which I never liked myself. There are probably many ways to make it more readable to you, but there is no single right answer.