How do binary numbers relate to my everyday programming?

I'm trying hard to develop a deeper understanding of programming in general. I understand the textbook definition of "binary," but what I don't understand is how it actually relates to my day-to-day programming.

The concept of "binary numbers" versus ... well ... "regular" numbers is completely lost on me, despite my best attempts to research and understand it.

I'm a self-taught programmer who started out making silly little adventure games in early DOS BASIC and C, and who now does most (well, all) of my work in PHP, JavaScript, Rails, and other "web" languages. I find that so much of this logic is abstracted away in these higher-level languages that I ultimately feel I'm missing many of the tools I need to keep growing and writing better code.

If someone could point me toward a good, solid, practical learning resource, or explain it here, it would be greatly appreciated.

I'm not so much looking for a "definition" (I've read the Wikipedia page several times now) as for some direction on how to incorporate this knowledge of binary numbers into my day-to-day programming, if at all. I mostly write in PHP, so pointers relevant to that language would be especially useful.

Edit: As several answers have pointed out, binary is a representation of a number, not a different kind of number at all. So, to restate my question: what are the benefits (if any) of working with the binary representation of numbers rather than just ... numbers?

+6
8 answers

Binary trees (one of your tags), especially binary search trees, are practical in some everyday programming scenarios (e.g., sorting and searching).

Binary numbers are essential for understanding the foundations of computing, but they are needed less often when working in higher-level languages.

Binary numbers are useful for understanding limits, such as the largest unsigned number of a given width (for example, 2^32 - 1 for 32 bits), or the largest and smallest signed numbers under two's complement (the system usually used). For example, why is the smallest signed 32-bit number -2^31 while the largest is only 2^31 - 1? Even stranger at first glance, -(-2^31) (negating the smallest number) gives back the number itself. (Hint: try it with 2-bit numbers, since the analysis is the same.)
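
You can poke at these limits directly in PHP; a minimal sketch, assuming a typical 64-bit build (where the bounds are 2^63 - 1 and -2^63 rather than the 32-bit values above):

    <?php
    // Two's-complement limits on a 64-bit build.
    var_dump(PHP_INT_MAX);   // int(9223372036854775807), i.e. 2^63 - 1
    var_dump(PHP_INT_MIN);   // int(-9223372036854775808), i.e. -2^63
    // Negating the smallest integer does not fit back into an int,
    // so PHP silently promotes the result to a float:
    var_dump(-PHP_INT_MIN);  // float(9.2233720368547758E+18)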

Another is basic information theory. How many bits do I need to represent 10,000 possibilities? (log2 10,000, rounded up.) The same idea applies to cryptography, but you're probably not there yet.
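
The arithmetic is a one-liner in PHP, sketched here with the standard log() function:

    <?php
    // Bits needed to distinguish 10,000 possibilities: ceil(log2(10000)).
    $bits = (int) ceil(log(10000, 2));
    echo $bits; // 14, since 2^13 = 8192 < 10000 <= 16384 = 2^14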

Don't expect to use binary every day, but a basic understanding of it is worth developing for these and other reasons.

If you explore pack() and the bitwise operators, you may find other use cases. In particular, many programmers don't know when they could use XOR (which you can understand by looking at the truth table for two binary digits).
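
For instance, here is that truth table in PHP, plus the classic XOR trick of toggling one flag bit without disturbing the others (the flag values are illustrative):

    <?php
    // XOR is 1 exactly when its two input bits differ.
    foreach ([[0, 0], [0, 1], [1, 0], [1, 1]] as [$a, $b]) {
        printf("%d XOR %d = %d\n", $a, $b, $a ^ $b);
    }
    $flags = 0b0101;
    $flags ^= 0b0100;     // toggles bit 2: 0b0101 -> 0b0001
    echo decbin($flags);  // "1"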

+11

Here is a short story to help you understand; I'll get to your question at the end.

Binary feels a little strange because we're used to the base-10 number system. That habit comes from people having 10 fingers: when they ran out, they had to use a stick, a sock, or something else to stand in for ten fingers. This wasn't true of all cultures, though; some hunter-gatherer populations (for example, some Australian Aboriginal groups) used a base-5 system (one hand), since producing large numbers wasn't necessary.

In any case, base 2 matters in computing because a circuit can have two states, low voltage and high voltage; think of it as a switch (on and off). Put 8 of these switches together and you have 1 byte (8 bits). The best way to think of a bit is 1 = on and 0 = off, which is exactly how it is written in binary. So you might have something like 10011100, where each 1 is high voltage and each 0 is low voltage. Early computers used physical switches that an operator could flip on and off to enter a program.
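
A quick sketch of that bit-pattern view in PHP, using the built-in conversion functions:

    <?php
    // The switch pattern from above, read as a number and back again.
    echo bindec('10011100'); // 156
    echo decbin(156);        // "10011100"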

These days you rarely have to work with binary directly in modern programming. The only exceptions I can think of are bitwise arithmetic, which can be a very fast and efficient way to solve certain problems, and perhaps some forms of low-level hacking. All I can suggest is to learn the basics of it, but don't worry about using it in everyday programming.

+4

There are two senses in which "binary" (as opposed to regular) numbers are used.

Judging by your wording, probably not the one you mean:

  • Binary as compact storage: an integer kept in, say, 4 bytes, a double in 8; SQL's INT or DOUBLE. The "regular" alternative stores the number as text, one byte per digit: SQL's VARCHAR.

But in your case:

  • Representation in another number base: 101 in binary = 1*4 + 0*2 + 1*1 = 5.

This lends itself to compact yes/no state encoding:

Given 1 | x = 1 and 0 | x = x (OR, a binary "+"), and 0 & x = 0 and 1 & x = x (AND, a binary "*"):

    $sex_male = 0; $sex_female = 1;
    $employee_no = 0*2; $employee_yes = 1*2;
    $has_no_email = 0*4; $has_email = 1*4;

    $code = $sex_female | $employee_no | $has_email;

    if (($code & $sex_female) != 0) print "female";
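
A slightly more idiomatic sketch of the same encoding uses bit shifts for the flag values (the constant names here are made up for illustration):

    <?php
    // One bit per yes/no fact.
    const FLAG_FEMALE    = 1 << 0; // 1
    const FLAG_EMPLOYEE  = 1 << 1; // 2
    const FLAG_HAS_EMAIL = 1 << 2; // 4

    $code = FLAG_FEMALE | FLAG_HAS_EMAIL;      // set two flags
    if ($code & FLAG_FEMALE) echo "female\n";  // test a flag
    $code &= ~FLAG_HAS_EMAIL;                  // clear a flag
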
+3

For me, one of the biggest payoffs of understanding the binary representation of numbers is the difference between floating-point values and our "ordinary" (base-10, decimal) notion of fractions, decimals, and real numbers.

The vast majority of fractions cannot be represented exactly in binary. Something like 0.4 seems like it shouldn't be hard to represent; it has only one place after the decimal point, it's the same as two fifths or 40%, so how hard can it be? But most software environments use binary floating point and cannot represent that number exactly! Even if the computer displays 0.4, the actual value it works with is not exactly 0.4. So you get all kinds of unintuitive behavior when rounding and arithmetic are involved.
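
You can see this in PHP by asking for more digits than the default display shows; a minimal demonstration:

    <?php
    // 0.4 has no exact binary floating-point representation.
    printf("%.20f\n", 0.4);        // 0.40000000000000002220
    var_dump(0.1 + 0.2 == 0.3);    // bool(false)
    printf("%.20f\n", 0.1 + 0.2);  // 0.30000000000000004441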

Note that this "problem" is not unique to binary. For example, in our own base-10 decimal notation, how do we represent one third? We can't do it exactly. 0.333 is not quite one third; 0.333333333333 isn't either. We can get closer, and the more digits we're allowed, the closer we get, but we can never be exact, because that would require an infinite number of digits. This is fundamentally what's happening when binary floating point does something we don't expect: the computer doesn't have an infinite number of binary digits (bits) to represent our number, so it can't store it exactly; it gives us the closest value it can.

+2

experience rather than a solid answer:

in fact, you don't really need binary day to day, since it's pretty well abstracted away in programming at this point (depending on what you're programming). binary comes up more in systems design and networking.

some of the things my classmates deal with in their majors:

  • processor instruction sets and operations (opcodes)
  • networking and data transfer
  • hacking (especially memory manipulation; more hexadecimal than binary, but still related)
  • memory allocation (in assembly we use hex, but sometimes binary)

you need to know how those "regular numbers" are represented and understood by the machine; hence all those "conversion lessons" between decimal, hexadecimal, binary, and octal. machines only read binary.
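
for example, here is one value written in each of those bases, using PHP's built-in conversion helpers:

    <?php
    // one value, four bases
    $n = 2024;
    echo decbin($n);                 // "11111101000"
    echo decoct($n);                 // "3750"
    echo dechex($n);                 // "7e8"
    echo base_convert('7e8', 16, 2); // "11111101000"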

+1

Python makes it easy to explore bitwise operations interactively from the command line. Personally, I've used bit operations to pick apart an obscure compression algorithm used in packet radio communications.
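
PHP has a comparable interactive shell (php -a); a minimal sketch of that kind of exploration there:

    php > echo decbin(0b1010 & 0b0110); // AND keeps bits set in both
    10
    php > echo decbin(0b1010 | 0b0110); // OR keeps bits set in either
    1110
    php > echo decbin(0b1010 ^ 0b0110); // XOR keeps bits that differ
    1100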

+1

Interesting question. Even though you're a "humble web guy," it's great that you're curious about how binary affects you. To help, I'd suggest picking up a low-level language and playing with it: something like C and/or assembly. As for PHP, try studying the PHP source code and its implementation. Here is a quality resource on binary/hexadecimal: http://maven.smith.edu/~thiebaut/ArtOfAssembly/artofasm.html. Good luck and happy learning :)

+1

As a web guy, you undoubtedly understand the importance of Unicode. Unicode characters are shown as hexadecimal code points when you view character sets your system doesn't support. Hexadecimal also shows up in RGB color values and memory addresses. Hexadecimal, among other things, is a shorthand for writing out long binary strings.
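
A small sketch of that shorthand at work, unpacking a CSS-style hex color into its three byte-sized channels with shifts and masks:

    <?php
    // Each pair of hex digits is one byte, i.e. eight binary digits.
    $rgb = 0xFF7F00;           // "#FF7F00", an orange
    $r = ($rgb >> 16) & 0xFF;  // 255
    $g = ($rgb >> 8) & 0xFF;   // 127
    $b = $rgb & 0xFF;          // 0
    printf("#%06X = rgb(%d, %d, %d)\n", $rgb, $r, $g, $b);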

Finally, binary numbers are the basis of truth values: 1 is true, and 0 is false.

Pick up a book on digital fundamentals and try your hand at Boolean logic. You'll never look at if a and not b or c the same way again!

0
