Should a 16-bit microprocessor use a short data type instead of an int?

I read that using short instead of int can actually create inefficiencies, because the compiler has to promote operands to int anyway (C's integer promotion rules). Is this true for 16-bit microprocessors?

Another question: if I have an array of 1s and 0s, is it more efficient to use uint8_t or unsigned char on this 16-bit microprocessor? Or is there still a problem with the values being promoted back to int?

Please help me sort out this tricky problem. Thanks!

+4
4 answers

On Blackfin there is probably no simple answer as to whether 32-bit or 16-bit types will give better performance, because it supports 16-, 32- and 64-bit instructions and has two 16-bit MAC (multiply-accumulate) units. It will depend on the operations, but I suggest you trust your compiler's optimizer to make such decisions; it knows more about the timing and scheduling of processor instructions than you probably care to.

That said, on your compiler int and short may well be the same size anyway. Consult the documentation, test with sizeof, or look at the limits.h header for the numeric ranges that determine the widths of the various types.

If you really want to fix the size of a data type, use the stdint.h types, such as int16_t .

stdint.h also defines the fastest minimum-width integer types, such as int_fast16_t . These guarantee a minimum width but will use a larger type if it is faster on your target. This is perhaps the most portable way to solve your problem, but it relies on the implementer having made the right decisions about which underlying types are appropriate. On most architectures it makes little or no difference, but on RISC and DSP architectures it may. It may also be that no single size is the fastest under all circumstances, and that is probably especially true on Blackfin.

In some cases (when large amounts of data are transferred from external memory), the fastest size is likely to match the width of the data bus.

+2
  • Is this really a problem? On most 16-bit systems I've heard of, int and short end up the same size (16 bits), so in practice there should be no difference.

  • If uint8_t exists on the system, it will be synonymous with unsigned char . unsigned char is the smallest unsigned type available on the system. If it is more than 8 bits wide, there will be no uint8_t ; if it is less than 8 bits, that violates the standard. There can be no difference in efficiency, since one must be defined in terms of the other.

Finally, do you really need to worry about these microscopic differences? If you do, look at the assembly or (more likely) profile and see which one is actually faster.

+5

On a 16-bit or larger processor, if you don't care how much memory it will take, use 'int' instead of 'short' or 'signed char'. Unless you need the storage savings or the wrapping behavior, use 'unsigned int' instead of 'unsigned short' or 'unsigned char'. On an 8-bit processor, the smaller types may genuinely be faster.

By the way, on some processors 'unsigned short' is much slower than 'unsigned int', because the C standard requires unsigned operations to wrap. If an unsigned short variable 'foo' is held in a register, the code a typical ARM compiler generates for 'foo += 1;' will be one instruction to perform the increment and two instructions to truncate the value to 65535 [an optimizing compiler that noticed 'foo' can never reach 65536 could omit those instructions, but I don't know whether any real compilers do]. A signed 'short' need not be slower than a 'signed int', since no truncation is required by the standard; I'm not sure whether compilers actually skip the truncation for signed types.

+2

In projects that depend on exact byte sizes, I've seen it done like this:

 typedef unsigned char  uint8;
 typedef unsigned short uint16;
 typedef unsigned long  uint32;

and so on for the other data types.

Any conversion issues will then surface at compile time; they depend on the processor and compiler.

Of course, adding two 32-bit numbers on a 16-bit processor entails some overhead. Things also get interesting when you look at memory loads, depending on the width of the memory word and whether you can load from any address or bytes must be loaded from a particular alignment boundary.

In short: YMMV, and optimize after profiling.

0

Source: https://habr.com/ru/post/1339721/
