How does a serial dispatch queue guarantee resource protection?

    // my_serial_queue is a serial dispatch queue
    dispatch_async(my_serial_queue, ^{
        // access a shared resource such as a bank account balance
        [self changeBankAccountBalance];
    });

If I dispatch 100 tasks, each of which accesses and changes the bank account balance, I understand that a serial queue will execute the tasks one after another, but are the tasks still performed serially when I use dispatch_async?

What if task number 23, which I dispatch asynchronously to the serial queue, takes a long time? Will task 24 start only when task 23 has completed, or can task 24 begin before task 23 finishes? If it can, couldn't task 24 see an incorrect bank account balance when it starts its work, thereby compromising data integrity?

Thanks!!

2 answers

man dispatch_queue_create says: "All memory writes performed by a block dispatched to a serial queue are guaranteed to be visible to subsequent blocks dispatched to the same queue." Thus, serial queues are a good way to serialize access to mutable state and avoid race conditions.

Are these tasks also performed serially when using dispatch_async?

Yes. The queue dictates the execution policy, not the way you enqueue the block.

In other words, if the queue is serial, enqueuing blocks asynchronously or synchronously doesn't change that. The only difference is: does the caller wait for the block to finish before continuing with the rest of the program? dispatch_async = no, dispatch_sync = yes.
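
For example, here's a minimal sketch (the queue label and log messages are illustrative) of both calls targeting the same serial queue; the blocks run in order either way, and only the caller's waiting behavior differs:

    dispatch_queue_t q = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

    dispatch_async(q, ^{
        NSLog(@"first");   // caller does not wait for this block
    });

    dispatch_sync(q, ^{
        NSLog(@"second");  // runs only after "first" finishes; caller waits here
    });

    NSLog(@"third");       // always prints after "second"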

What if task number 23, which I dispatch asynchronously to the serial queue, takes a very long time to complete?

Nothing changes. A serial queue always waits for the previously dispatched block (#23) to complete before dequeuing and executing the next block (#24). If stalling the queue is a concern, you should implement timeout logic inside your block code.
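
To tie this back to the question's example: assuming a hypothetical bankAccountBalance property, the following sketch submits 100 updates asynchronously, yet they still run one at a time in submission order, so each block sees the balance left by its predecessor:

    for (int i = 0; i < 100; i++) {
        dispatch_async(my_serial_queue, ^{
            // Each read-modify-write runs in isolation; task 24 cannot start
            // until task 23 has returned, so no block ever sees a half-updated balance.
            self.bankAccountBalance += 10.0;
        });
    }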


Yes, a dedicated serial queue is a great way to synchronize access to a resource shared among multiple threads. And yes, with a serial queue each task will wait for the previous one to complete.

Two observations:

  • Although this sounds like a very inefficient process, the waiting is implicit in any synchronization technique (whether queue-based or lock-based) whose goal is to minimize concurrent updates to the shared resource.

    But in many scenarios a serial queue can deliver noticeably better performance than other common techniques, such as a simple mutex, NSLock, or the @synchronized directive. For a discussion of alternative synchronization methods, see the Synchronization section of the Threading Programming Guide. For a discussion of using queues instead of locks, see "Eliminating Lock-Based Code" in the "Migrating Away from Threads" chapter of the Concurrency Programming Guide.

  • A variation on the serial queue pattern is the reader/writer pattern, in which you create a concurrent GCD queue:

     queue = dispatch_queue_create("identifier", DISPATCH_QUEUE_CONCURRENT); 

    You then read with dispatch_sync but write with dispatch_barrier_async (see the sketch after this list). The net effect is that concurrent reads are permitted, but writes are guaranteed never to run concurrently with anything else.

    If your resource genuinely permits concurrent reads, the reader/writer pattern can offer a further performance gain over a plain serial queue.
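
Here is a minimal sketch of that reader/writer pattern (the Account class, its ivars, and the queue label are illustrative, not part of the question's code):

    #import <Foundation/Foundation.h>

    @interface Account : NSObject
    - (double)balance;
    - (void)credit:(double)amount;
    @end

    @implementation Account {
        dispatch_queue_t _queue;   // concurrent queue guarding _balance
        double _balance;
    }

    - (instancetype)init {
        if ((self = [super init])) {
            _queue = dispatch_queue_create("com.example.account", DISPATCH_QUEUE_CONCURRENT);
        }
        return self;
    }

    // Reads use dispatch_sync, so any number of them may run concurrently.
    - (double)balance {
        __block double result;
        dispatch_sync(_queue, ^{
            result = self->_balance;
        });
        return result;
    }

    // Writes use a barrier: the block waits for in-flight reads to drain,
    // runs alone, and only then do subsequent reads proceed.
    - (void)credit:(double)amount {
        dispatch_barrier_async(_queue, ^{
            self->_balance += amount;
        });
    }

    @end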

So, in short: while it seems inefficient for task #24 to wait for task #23, that waiting is inherent in any synchronization technique that aims to minimize concurrent updates to a shared resource. And GCD serial queues are a surprisingly efficient mechanism, often better than many simple locking mechanisms. In some cases the reader/writer pattern can improve performance even further.


My original answer, below, was a response to the question as first posted, which was misleadingly titled "How does a serial dispatch queue guarantee concurrency?" In retrospect, that was just an accidental use of the wrong term.


That was an interesting choice of words: "How does a serial dispatch queue guarantee concurrency?"

There are three types of queues: serial, concurrent, and the main queue. A serial queue will, as the name implies, not start the next dispatched block until the previous one has finished. (Using your example, this means that if task 23 takes a long time, task 24 will not start until task 23 completes.) Sometimes this is critical (for example, if task 24 depends on the results of task 23, or if tasks 23 and 24 both try to access the same shared resource).

If you want these dispatched tasks to run concurrently with one another, you use a concurrent queue (either one of the global concurrent queues obtained via dispatch_get_global_queue, or a concurrent queue of your own created with dispatch_queue_create and the DISPATCH_QUEUE_CONCURRENT option). On a concurrent queue, many of your dispatched tasks can run at the same time. Using concurrent queues requires some care (notably, synchronizing access to shared resources), but when done properly it can yield significant performance benefits.
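
For instance, a sketch of dispatching to one of the global concurrent queues (the loop body is a placeholder):

    dispatch_queue_t global = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    for (int i = 0; i < 10; i++) {
        dispatch_async(global, ^{
            // These blocks may run simultaneously on different threads, so they
            // must not touch shared mutable state without synchronization.
            NSLog(@"concurrent task %d", i);
        });
    }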

As a compromise between these two approaches, you can use an operation queue, which can be concurrent but in which you can also limit how many operations run at any given moment by setting maxConcurrentOperationCount. A typical scenario is running background network requests where you don't want more than, say, five in flight at once.
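
A sketch of that approach (the urls array and the request logic are placeholders):

    NSOperationQueue *networkQueue = [[NSOperationQueue alloc] init];
    networkQueue.maxConcurrentOperationCount = 5;   // at most five operations run at once

    for (NSURL *url in urls) {
        [networkQueue addOperationWithBlock:^{
            // issue the network request for `url` here
        }];
    }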

For more information, see the Concurrency Programming Guide.

