Converting a buffer to an integer: I could not understand this line of code.

I am looking for help understanding a line of code in the npm module hash-index.

The purpose of this module is to expose a function that returns the SHA-1 hash of its input, reduced modulo the second argument.

The specific function in this module that I don't understand is the one that takes a buffer as an input and returns an integer:

var toNumber = function (buf) { return buf.readUInt16BE(0) * 0xffffffff + buf.readUInt32BE(2) } 

I cannot understand why these specific buffer offsets were chosen, or what the purpose of multiplying by 0xffffffff is.

This module is really interesting to me, and I would be very grateful for any help in understanding how it converts buffers to integers!

2 answers

It builds an integer from the first six bytes of the buffer.

First, it reads the first two bytes of the buffer (a UINT16) in big-endian order, and multiplies the result by 0xFFFFFFFF.

Then it reads the next four bytes (a UINT32) starting at offset 2 and adds them to the multiplied value; the result is a number built from the first 6 bytes of the buffer.

Example: consider a buffer beginning [BB AA CC CC DD EE ...]. The UINT16 at offset 0 is 0xBBAA and the UINT32 at offset 2 is 0xCCCCDDEE, so:

 0xbbaa * 0xffffffff = 0xbba9ffff4456
 0xbba9ffff4456 + 0xccccddee = 0xbbaacccc2244 
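You can check this arithmetic directly in Node. Below is the toNumber function copied from the question, applied to the sample buffer from the example (the buffer contents are just illustrative values):

```javascript
// The function from hash-index, as quoted in the question.
var toNumber = function (buf) {
  return buf.readUInt16BE(0) * 0xffffffff + buf.readUInt32BE(2)
}

// Sample 6-byte buffer: readUInt16BE(0) sees 0xBBAA,
// readUInt32BE(2) sees 0xCCCCDDEE.
var buf = Buffer.from([0xbb, 0xaa, 0xcc, 0xcc, 0xdd, 0xee])

console.log(toNumber(buf).toString(16)) // → bbaacccc2244
```

The result fits well below Number.MAX_SAFE_INTEGER (2^53 − 1), so the arithmetic is exact even though JavaScript numbers are doubles.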

As for the offsets, they were chosen like this:

The first read covers bytes 0 through 1 (read as a UINT16).

The second read covers bytes 2 through 5 (read as a UINT32).

So, to summarize, it builds a number from the first 6 bytes of the buffer using big-endian byte order and returns it to the caller.

Hope that answers your question.

Wikipedia: Endianness (big-endian)

EDIT

As someone noted in the comments, I was wrong to describe multiplying by 0xFFFFFFFF as a left shift of 32 bits: a true 32-bit shift would multiply by 0x100000000 (2^32), whereas 0xFFFFFFFF is 2^32 − 1. It is just an ordinary multiplication; I assume the module's author chose this particular constant deliberately so the result matches what the rest of the code expects.
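A quick sketch makes the difference concrete. Multiplying by 0x100000000 is an exact 32-bit left shift, while multiplying by 0xFFFFFFFF falls short of the shift by exactly the value being multiplied (JavaScript's `<<` operator only works on 32-bit values, so plain multiplication has to stand in for the shift here):

```javascript
var hi = 0xbbaa // the UINT16 read from offset 0 in the example above

// A true 32-bit left shift is multiplication by 2^32 = 0x100000000.
var shifted = hi * 0x100000000

// hash-index multiplies by 0xffffffff = 2^32 - 1 instead,
// so the result is smaller by exactly `hi`.
var multiplied = hi * 0xffffffff

console.log(shifted - multiplied) // → 48042 (the value of hi)
```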

EDIT 2

Looking at the function in the original context, I came to this conclusion:

This function is part of the module's hashing flow and works this way:

The main function accepts a string input and a maximum number for the hash output, then feeds the input into the SHA-1 hash function.

SHA-1 hashing returns a buffer; the module takes this buffer and applies toNumber to it, as can be seen in the following code fragment:

 return toNumber(crypto.createHash('sha1').update(input).digest()) % max 

It also uses modulo to make sure the returned hash index does not exceed the maximum allowed value.


Multiplying by 2 is equivalent to shifting the bits left by 1, so multiplying by 2^32 would be the equivalent of shifting the bits left 32 times. (Note, though, that the constant in the question is 0xFFFFFFFF, which is 2^32 − 1, so it is one less than an exact power of two.)
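A tiny demonstration of that equivalence, using small values so JavaScript's 32-bit `<<` operator applies (the numbers are arbitrary):

```javascript
var x = 5

// Multiplying by 2^n is the same as shifting left n bits
// (for results that fit in 32 bits, where << is defined).
console.log(x * 2 === x << 1)  // → true
console.log(x * 16 === x << 4) // → true
```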

Here's a similar question already answered: Bitwise logic in C


Source: https://habr.com/ru/post/1238564/
