It returns a number built from the bytes at the start of the buffer.
First, it reads the first two bytes of the buffer as a big-endian UINT16 (unsigned 16-bit integer) and multiplies the result by 0xFFFFFFFF.
Then it reads the next four bytes as a big-endian UINT32 (unsigned 32-bit integer) and adds them to the multiplied number - the result is a number built from the first 6 bytes of the buffer.
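Based on that description, the function presumably looks roughly like this in Node.js (the name toNumber comes from the code fragment quoted further down; the exact body is my reconstruction, not a quote):

// Sketch only, reconstructed from the description above
function toNumber (buf) {
  // bytes 0-1 as a big-endian UINT16, times 0xFFFFFFFF, plus bytes 2-5 as a big-endian UINT32
  return buf.readUInt16BE(0) * 0xffffffff + buf.readUInt32BE(2)
}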
Example: consider the buffer [00 BB AA CC CC DD ...]
0x00bb * 0xffffffff = 0xbaffffff45
0xbaffffff45 + 0xaaccccdd = 0xbbaacccc22
As for the offsets, they were chosen like this:
The first read covers byte 0 to byte 1 (the size of a UINT16).
The second read covers byte 2 to byte 5 (the size of a UINT32).
So, to summarize, it builds a number from the first 6 bytes of the buffer using big-endian byte order and returns it to the calling function.
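You can check the example above in Node.js yourself (the buffer values are the ones from the example; readUInt16BE and readUInt32BE are the standard Buffer methods):

const buf = Buffer.from([0x00, 0xbb, 0xaa, 0xcc, 0xcc, 0xdd])
const n = buf.readUInt16BE(0) * 0xffffffff + buf.readUInt32BE(2)
console.log(n.toString(16)) // 'bbaacccc22'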
Hope that answers your question.
Wikipedia entry: Big Endian (Endianness)
EDIT
As someone noted in the comments, I was completely wrong in saying that multiplying by 0xFFFFFFFF is a left shift by 32 bits; it is just an ordinary multiplication. I assume this is an internal convention of the library for building the number from the buffer, and it produces the values its callers expect.
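To make the correction concrete: a left shift by 32 bits corresponds to multiplying by 0x100000000 (2^32), not by 0xFFFFFFFF, so the two give slightly different results (the values below are just my own illustration):

const n = 0xbb
console.log((n * 0x100000000).toString(16)) // 'bb00000000' - what a real 32-bit shift would give
console.log((n * 0xffffffff).toString(16))  // 'baffffff45' - what the function actually computes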
EDIT 2
Looking at the function in the original context, I came to this conclusion:
This function is part of the hashing flow and works this way:
The main function accepts a string input and a maximum value for the hash output; it feeds the string input into the SHA-1 hash function.
The SHA-1 hash returns a buffer; the function then computes the hash index from that buffer, as can be seen in the following code fragment:
return toNumber(crypto.createHash('sha1').update(input).digest()) % max
It also applies the modulo operator so that the returned hash index never reaches the given maximum (the result always falls in the range 0 to max - 1).
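Putting it all together, this is how I read the whole module; only the return line with createHash is quoted from the original, the rest is my sketch of the surrounding code:

const crypto = require('crypto')

// builds a number from the first 6 bytes of a buffer (see the explanation above)
function toNumber (buf) {
  return buf.readUInt16BE(0) * 0xffffffff + buf.readUInt32BE(2)
}

// hashes the input with SHA-1 and maps the result into the range 0 to max - 1
module.exports = function (input, max) {
  return toNumber(crypto.createHash('sha1').update(input).digest()) % max
}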