How are non-blocking data structures possible?

I am having trouble understanding how any data structure can be "non-blocking."

Let's say you make a "non-blocking" hash table. At some point, your hash table becomes too full, so you need to migrate to a larger table.

This means that you need to allocate memory, which is a global resource. Thus, it seems that you would need to take some sort of lock to prevent corruption of the heap ... regardless of any issues within your own data structure!

But then every other thread would have to block while you allocate your memory ...

What am I missing here?
(How) can you allocate memory without blocking another thread that does the same?

+6
4 answers

Two examples of non-blocking designs are optimistic design and transactional memory.

The idea behind these is that in most cases a lock is redundant: most of the time, two operations can execute concurrently without interfering with each other. Occasionally, two operations do collide and the data would become inconsistent; in that case you roll back to the previous state and try again.

These designs may still contain locks, but the time the data is locked is much shorter and is limited to the critical moment when the operation actually takes effect.
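
A minimal sketch of the optimistic idea, assuming C++11 atomics (the counter and the function name are only illustrative, not part of this answer): each thread reads the current state, computes its result, and commits only if nobody changed the state in the meantime; otherwise it retries.

#include <atomic>

// Optimistically increment a shared counter without taking a lock.
// The common case succeeds on the first pass; a conflicting update
// by another thread just forces one more trip around the loop.
void increment(std::atomic<int>& value) {
    int observed = value.load();
    // compare_exchange_weak commits observed + 1 only if 'value' still
    // equals 'observed'; on failure it reloads 'observed' for the retry.
    while (!value.compare_exchange_weak(observed, observed + 1)) {
        // another thread updated 'value' first; try again with the fresh snapshot
    }
}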

+3

Just for some definitions, further information, and to clarify the terms non-blocking, lock-free, and wait-free, I recommend reading the following article (I will not copy the relevant passages here, as they are too long):

Definitions of Non-blocking, Lock-free and Wait-free

+2

Most strategies share one basic pattern: they use a compare-and-swap (CAS) operation in a loop until it succeeds.

For example, consider a stack implemented as a linked list. I chose a linked-list implementation because it is easy to make concurrent with CAS, but there are other ways to do it. I will use C-like pseudocode.

Push(T item) {
    Node node = new Node();   // allocate node memory
    Node initial;
    do {
        initial = head;       // snapshot the current head
        node.Value = item;
        node.Next = initial;  // link the new node in front of the snapshot
        // retry if another thread changed head since the snapshot
    } while (CompareAndSwap(head, node, initial) != initial);
}

Pop() {
    Node node;
    Node initial;
    do {
        initial = head;       // snapshot the current head
        // (a real implementation must also handle initial == null, i.e. an empty stack)
        node = initial.Next;  // the node that will become the new head
        // retry if another thread changed head since the snapshot
    } while (CompareAndSwap(head, node, initial) != initial);
    T value = initial.Value;
    delete initial;           // deallocate node memory (a real implementation needs
                              // safe reclamation here, since another thread may
                              // still be reading this node)
    return value;
}

In the above code, CompareAndSwap is a non-blocking atomic operation that replaces the value at a memory address with a new value and returns the old value. If the old value does not match the expected value, you go around the loop and try again.
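
For comparison, here is a minimal sketch of how the push operation might look with real C++11 atomics rather than pseudocode (the class and member names are my own illustration, not from this answer). std::atomic::compare_exchange_weak takes the expected value by reference and overwrites it with the current value on failure, so the explicit reload of the head is folded into the loop condition:

#include <atomic>

template <typename T>
struct Node {
    T value;
    Node* next;
};

template <typename T>
class LockFreeStack {
    std::atomic<Node<T>*> head{nullptr};
public:
    void push(const T& item) {
        Node<T>* node = new Node<T>{item, head.load()};
        // If head still equals node->next, swing it to node; otherwise
        // compare_exchange_weak stores the current head into node->next
        // and we simply try again.
        while (!head.compare_exchange_weak(node->next, node)) {
        }
    }
};

Note that the new node's next pointer doubles as the "expected" slot, which is why no separate initial variable is needed here.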

+2

All that non-blocking means is that you will never wait forever, not that you will never wait. As long as your heap is also implemented with a non-blocking algorithm, you can build other non-blocking algorithms on top of it.
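
To make that concrete, here is a toy sketch of an allocator that hands out memory without any lock, assuming a fixed pre-reserved arena (the class name, the arena size, and the lack of any free() are simplifications of mine; real non-blocking allocators are far more involved):

#include <atomic>
#include <cstddef>

// A toy non-blocking "heap": carve allocations out of a fixed arena
// with a single atomic fetch_add. No thread ever waits for another;
// the worst case is simply that the arena runs out.
class ArenaAllocator {
    static constexpr std::size_t kSize = 1 << 20;    // 1 MiB arena
    alignas(16) unsigned char buffer[kSize];
    std::atomic<std::size_t> offset{0};

public:
    void* allocate(std::size_t bytes) {
        bytes = (bytes + 15) & ~std::size_t{15};     // keep results 16-byte aligned
        std::size_t old = offset.fetch_add(bytes);   // claim a slice atomically
        if (old + bytes > kSize)
            return nullptr;                          // arena exhausted
        return buffer + old;
    }
};

Since each allocation is a single fetch_add, this toy allocator is in fact wait-free, not merely lock-free.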

0

Source: https://habr.com/ru/post/916979/

