Are bitwise operations still practical?

Wikipedia, the only true source of knowledge, states:

On most older microprocessors, bitwise operations are slightly faster than addition and subtraction operations, and usually significantly faster than multiplication and division operations. On modern architectures, this is not the case: bitwise operations are generally the same speed as addition (though still faster than multiplication).

Is there a practical reason to study bitwise operations, or are they now just a matter of theory and curiosity?

+6
source
9 answers

Bitwise operations are worth studying because they have many applications; replacing arithmetic operations is not their main use. Cryptography, computer graphics, hash functions, compression algorithms, and network protocols are just a few examples of where bitwise operations are extremely useful.

The lines you quoted from the Wikipedia article are only trying to give a rough idea of the speed of bitwise operations. Unfortunately, the article does not provide good example applications.

+12
source

Bitwise operations are still useful. For example, they can be used to create “flags” within a single variable, reducing the number of variables you would otherwise need to track different conditions. As for using them to speed up arithmetic, it is better to leave that to the compiler’s optimizer (unless you are some kind of guru).
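A minimal sketch of the flags idea in C; the flag names here are invented for illustration:

```c
/* Three hypothetical conditions packed into one unsigned int,
   one bit each, instead of three separate boolean variables. */
enum {
    FLAG_VISIBLE  = 1u << 0,
    FLAG_DIRTY    = 1u << 1,
    FLAG_SELECTED = 1u << 2
};

unsigned set_flag(unsigned flags, unsigned f)   { return flags | f;  }
unsigned clear_flag(unsigned flags, unsigned f) { return flags & ~f; }
int      has_flag(unsigned flags, unsigned f)   { return (flags & f) != 0; }
```

Setting uses OR, clearing uses AND with the complement, and testing uses AND; none of the operations disturbs the other bits.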

+11
source

They are useful for understanding how binary “works”; otherwise, no. In fact, I would say that even if bitwise hacks are faster on a given architecture, it is the compiler’s job to exploit that fact, not yours. Write what you mean.

+4
source

Of course, to me the answer is yes. The fact that the add instruction now runs as fast as or or and changes nothing: or is not add, and you use it when you need an or, not as a substitute for arithmetic. Improvements in the speed of instructions such as add, div and so on just mean that you can now use them while worrying less about the performance impact, but it is as true now as it was in the past that you would not replace an add with a handful of bitwise operations anyway!

+1
source

The only time it makes sense to use them is when you are actually treating your numbers as bit vectors, for example if you are modeling a piece of hardware and your variables represent its registers.

If you want to do arithmetic, use arithmetic operators.

+1
source

It depends on your problem. If you are controlling hardware, you need ways to set, clear, and test single bits within an integer.
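A sketch of what that looks like for a hypothetical memory-mapped control register; the bit layout (CTRL_ENABLE, CTRL_RESET) is invented for illustration:

```c
#include <stdint.h>

#define CTRL_ENABLE (1u << 0)   /* hypothetical enable bit */
#define CTRL_RESET  (1u << 3)   /* hypothetical reset bit  */

/* Set bit 0 without disturbing the other bits in the register. */
void enable_device(volatile uint32_t *reg) {
    *reg |= CTRL_ENABLE;
}

/* Pulse the reset bit: assert it, then deassert it. */
void strobe_reset(volatile uint32_t *reg) {
    *reg |= CTRL_RESET;
    *reg &= ~CTRL_RESET;
}
```

The `volatile` qualifier matters here: it tells the compiler that each read and write of the register may have side effects and must not be optimized away.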

Buy a PCI OGD1 card (an open graphics card) and talk to it using libpci. http://en.wikipedia.org/wiki/Open_Graphics_Project

+1
source

It is true that in most cases, when you multiply an integer by a constant that is a power of two, the compiler optimizes it into a bit shift. However, when the amount to shift by is itself a variable, the compiler cannot deduce this unless you explicitly use the shift operator.
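To illustrate the variable case: `x * 8` and `x << 3` compile to the same code, but when the exponent is a runtime value, only the shift form expresses “multiply by a power of two” directly:

```c
#include <stdint.h>

/* Computes x * 2^n via a shift. The caller must ensure n < 32,
   since larger shift counts are undefined behavior in C. */
uint32_t mul_pow2(uint32_t x, unsigned n) {
    return x << n;
}
```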

+1
source

Funny that nobody has thought it necessary to mention the ctype[] array in C/C++ — a concept also implemented in Java. It is extremely useful in language processing, especially when handling different alphabets or parsing sentences.

ctype[] is an array of 256 short integers, and in each integer there are bits representing different character types. For example, ctype['A'] through ctype['Z'] have bits set indicating that they are uppercase letters of the alphabet; ctype['0'] through ctype['9'] have bits indicating that they are numeric. To check whether a character x is alphanumeric, you can write something like if (ctype[x] & (UC | LC | NUM)), which is somewhat faster and much more elegant than writing if ('A' <= x && x <= 'Z' || ...).
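A minimal sketch of such a table; the bit names UC, LC, NUM follow the answer’s notation and are not the real internals of `<ctype.h>`:

```c
/* Classification bits, one per character category. */
enum { UC = 1, LC = 2, NUM = 4 };

static unsigned short ctype_tab[256];

/* Populate the table once, setting the category bit for each range. */
void init_ctype(void) {
    for (int c = 'A'; c <= 'Z'; c++) ctype_tab[c] |= UC;
    for (int c = 'a'; c <= 'z'; c++) ctype_tab[c] |= LC;
    for (int c = '0'; c <= '9'; c++) ctype_tab[c] |= NUM;
}

/* One table lookup plus one AND replaces a chain of range comparisons. */
int is_alnum(unsigned char c) {
    return (ctype_tab[c] & (UC | LC | NUM)) != 0;
}
```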

Once you start thinking in bits, you find many places to use them. For example, I once had two text buffers. I wrote from one to the other, replacing all occurrences of FINDstring with REPLACEstring as I went. Then, for the next find-replace pair, I simply switched the buffer indices, so that I was always reading from buffer[in] and writing to buffer[out]. 'in' started as 0, 'out' as 1. After each copy was complete, I simply executed 'in ^= 1; out ^= 1;'. And after processing all the replacements, I just wrote buffer[out] to disk, without needing to know what 'out' was at that point.
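The toggle described above can be reduced to a tiny sketch:

```c
/* XOR with 1 flips a value between 0 and 1, so after each pass the
   roles of buffer[0] and buffer[1] swap without any comparison. */
void swap_roles(int *in, int *out) {
    *in  ^= 1;
    *out ^= 1;
}
```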

If you think this is too low level, consider that certain mental errors, such as déjà vu and its twin jamais vu, are caused by bit errors in the brain!

+1
source

Working with IPv4 addresses frequently requires bit operations: to discover whether a given address is within the routable network or must be forwarded to a gateway, or whether a host belongs to a network allowed or denied by firewall rules. Bit operations are also required to find the broadcast address of a network.
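A sketch of the classic mask arithmetic on host-order 32-bit addresses (the function names are mine, not from any particular networking library):

```c
#include <stdint.h>

/* Two addresses are on the same subnet when their network parts,
   obtained by ANDing with the netmask, are equal. */
int same_subnet(uint32_t a, uint32_t b, uint32_t mask) {
    return (a & mask) == (b & mask);
}

/* The broadcast address keeps the network part and sets all host
   bits to one. */
uint32_t broadcast_addr(uint32_t addr, uint32_t mask) {
    return (addr & mask) | ~mask;
}
```

For example, 192.168.1.5 and 192.168.1.99 share the /24 network 192.168.1.0, and the broadcast address of that network is 192.168.1.255.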

IPv6 addresses require the same basic bit-level operations, but since they are so long, I am not sure how they are implemented. I would bet money that they are still implemented using bitwise operators on chunks of data sized to fit the architecture.

0
source

Source: https://habr.com/ru/post/891891/

