There is no way to access a random item (or the first or the last one) of a given hash object.
If you need to iterate over hash objects, you have several options:
The first option is to complement the hash with another data structure that you can slice (for example, a list or a zset). If you only add items to the hash (and iterate to delete them), a list is enough. If you can add/remove/update items (and still need to iterate to delete them), then a zset is required (with a timestamp as score, for instance). Both lists and zsets can be sliced (lrange, zrange, zrangebyscore), so it is easy to iterate over them chunk by chunk and keep both data structures synchronized.
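As an illustration of this first option, here is a minimal redis-py sketch; the key names (myobject, myobject:index), the chunk size and the use of a timestamp as score are assumptions for the example, not something prescribed by Redis:

import time
import redis

r = redis.Redis()

HASH_KEY = "myobject"          # the hash itself
INDEX_KEY = "myobject:index"   # hypothetical companion zset, field names scored by timestamp

def put(field, value):
    # keep the hash and the index in sync
    pipe = r.pipeline()
    pipe.hset(HASH_KEY, field, value)
    pipe.zadd(INDEX_KEY, {field: time.time()})
    pipe.execute()

def iterate(chunk_size=100):
    # slice the index with zrange, then fetch the matching hash fields
    start = 0
    while True:
        fields = r.zrange(INDEX_KEY, start, start + chunk_size - 1)
        if not fields:
            break
        for field, value in zip(fields, r.hmget(HASH_KEY, fields)):
            yield field, value
        start += chunk_size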
The second option is to complement the hash with another data structure that supports pop-like operations, such as a list or a set (lpop, rpop, spop). Instead of iterating over the hash object, you pop items from the secondary structure and maintain the hash object accordingly. Again, both data structures need to be kept synchronized.
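A sketch of this second option, again with redis-py and hypothetical key names (myobject, myobject:queue); here a list holds the field names to be consumed:

import redis

r = redis.Redis()

HASH_KEY = "myobject"          # the hash itself
QUEUE_KEY = "myobject:queue"   # hypothetical companion list of field names

def put(field, value):
    # keep both structures in sync when adding an item
    pipe = r.pipeline()
    pipe.hset(HASH_KEY, field, value)
    pipe.rpush(QUEUE_KEY, field)
    pipe.execute()

def drain(process):
    # pop field names instead of iterating over the hash
    while True:
        field = r.lpop(QUEUE_KEY)
        if field is None:
            break
        value = r.hget(HASH_KEY, field)
        process(field, value)
        r.hdel(HASH_KEY, field)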
The third option is to break the hash object into many pieces. This is actually memory efficient, because your keys are stored only once and Redis can apply the ziplist memory optimization.
So, instead of saving your hash as:
myobject -> { key1:xxxx, key2:yyyyy, key3:zzzz }
you can save:
myobject:<hashcode1> -> { key1:xxxx, key3:zzzz }
myobject:<hashcode2> -> { key2:yyyy }
...
To calculate the additional hash code, you can apply any hash function to your keys that offers a good distribution. In the example above, we assume that key1 and key3 hash to hashcode1, and key2 hashes to hashcode2.
Here you can find more information about such data structures:
Redis 10x more memory usage than data
The output cardinality of the hash function should be calculated so that the number of items per hash object is limited to a given value. For example, if we choose to keep 100 items per hash object and need to store 1M items, we need a cardinality of 10K. To limit the cardinality, simply applying a modulo operation to a generic hash function is enough.
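For illustration, a small redis-py sketch of this bucketing scheme; the choice of crc32 as the hash function and the myobject:<n> key naming are assumptions, any hash function with a good distribution would do:

import zlib
import redis

r = redis.Redis()

NUM_BUCKETS = 10000   # cardinality: 1M items / 100 items per hash object

def bucket_key(key):
    # generic hash function with a good distribution, reduced by modulo
    return "myobject:%d" % (zlib.crc32(key.encode()) % NUM_BUCKETS)

def put(key, value):
    r.hset(bucket_key(key), key, value)

def get(key):
    return r.hget(bucket_key(key), key)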
The advantage is that the data will be compact in memory (thanks to the ziplist encoding), and you can easily destroy the hash objects by pipelining hgetall + del over all of them:
hgetall myobject:0
... at most 100 items will be returned, process them ...
del myobject:0
hgetall myobject:1
... at most 100 items will be returned, process them ...
del myobject:1
...
So you can iterate chunk by chunk, with a chunk size determined by the output cardinality of the hash function.
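Put together in redis-py, the destructive iteration could look like the sketch below (NUM_BUCKETS and the process callback are assumptions of the example; each bucket is fetched and deleted in a single pipelined round trip):

import redis

r = redis.Redis()

NUM_BUCKETS = 10000   # must match the cardinality used when writing

def destroy_iterate(process):
    # hgetall + del pipelined per bucket: one round trip per chunk
    for i in range(NUM_BUCKETS):
        key = "myobject:%d" % i
        pipe = r.pipeline()
        pipe.hgetall(key)
        pipe.delete(key)
        items, _ = pipe.execute()
        for field, value in items.items():
            process(field, value)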