Funnily enough, such an interesting question has no equally interesting answer (sorry for the repetition).
If you treat this as a theoretical question, then this link is what you need (there is even a super-fast hash function already written for you and ready to use):
http://www.kfki.hu/~kadlec/sw/netfilter/ct3/
The practical question may be different. If your hash table is of a reasonable size, you still have to handle collisions (e.g. with linked lists). So ask yourself which use case will actually apply in the end. If your code runs in some isolated ecosystem and the IP address is a.b.c.d, then c and d will be the most volatile octets, and d will not be zero (unless you are hashing network addresses), so a hash table of 64K buckets with c.d as the hash may well be satisfactory. A sketch of this idea is below.
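Here is a minimal sketch of that idea in C, assuming the IPv4 address is held as a 32-bit integer in host byte order (so the low 16 bits are exactly the c and d octets); the names `hash_ipv4` and `HASH_BUCKETS` are just illustrative:

```c
#include <stdint.h>

#define HASH_BUCKETS 65536  /* 64K buckets, one per possible c.d combination */

/* Hash an IPv4 address a.b.c.d (host byte order) by its two low
 * octets c and d, which are assumed to be the most volatile ones
 * inside an isolated network. Collisions still need a per-bucket
 * linked list (or similar), exactly as described above. */
static inline uint16_t hash_ipv4(uint32_t addr)
{
    return (uint16_t)(addr & 0xFFFF);  /* keep only c.d as the bucket index */
}
```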
Another use case is tracking TCP connections, where the client uses an ephemeral port randomly assigned by the kernel (isn't that perfect for hashing?). The problem is the limited range, something like 32768-61000, which makes the least significant byte of the port far more random than the most significant one. So you can XOR the port's most significant byte with the most volatile octet of the IP address (c, which may happen to be zero) and use the result, together with the port's low byte, as the index into your 64K table.
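A minimal sketch of that mixing step, under the same host-byte-order assumption as above; `conn_hash` and the exact octet choice are illustrative, not taken from the linked code:

```c
#include <stdint.h>

/* Build a 16-bit bucket index from (peer IP, ephemeral port).
 * The port's high byte is biased (range ~0x80-0xEE), so XOR it with
 * the volatile c octet of the address; the port's low byte is already
 * fairly random and is used as-is. */
static inline uint16_t conn_hash(uint32_t addr, uint16_t port)
{
    uint8_t c  = (uint8_t)((addr >> 8) & 0xFF);  /* third octet of a.b.c.d   */
    uint8_t hi = (uint8_t)((port >> 8) ^ c);     /* de-bias port's high byte */
    uint8_t lo = (uint8_t)(port & 0xFF);         /* already fairly random    */
    return (uint16_t)((hi << 8) | lo);           /* index into the 64K table */
}
```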