C# work queue that will let me flush pending work

I have a class that implements the Begin/End invocation pattern, and I originally used ThreadPool.QueueUserWorkItem() for the threaded work. The work itself is not heavy, but processing takes a little time, so it does not just finish instantly.

Now I have a side effect: when a consumer of my class calls Begin multiple times (with a callback) to kick off a lot of processing, ThreadPool.QueueUserWorkItem creates a ton of threads to handle it. That in itself is not bad, but there are cases where the caller wants to abandon that processing and start something new, yet is forced to wait for the earlier requests to complete.

Since ThreadPool.QueueUserWorkItem() does not let me cancel the queued work, I am trying to find a better way to queue the work, possibly with an explicit FlushQueue() method on my class so the caller can abandon the work sitting in my queue.

Does anyone have suggestions for a threading pattern that fits my needs?

Edit: I am currently targeting .NET 2.0. At the moment I think a producer/consumer queue could work. Does anyone have thoughts on how to flush the queue?

Edit 2, clarifying the problem: because my class uses the Begin/End pattern, every time the caller invokes Begin with a callback I queue a brand-new work item to the thread pool. The Begin call itself does very little processing and is not what I want to cancel; it is the not-yet-started jobs sitting in the queue that I want to stop.

The fact that ThreadPool defaults to 250 threads per processor means that if you ask it to queue a large number of items via QueueUserWorkItem(), you end up with a huge number of parallel threads that you cannot stop.

The caller can peg the CPU at 100% not just with the work itself, but also with the creation of the work, because of the way I was queuing up the threads.

I was thinking that with the Producer/Consumer pattern I could queue this work instead, which would let me limit how many threads I create and avoid the CPU spike from spawning them all in parallel. And I could let the calling class clear all pending jobs from the queue when it abandons its requests.

I'm currently trying to implement this, but I figured this was a good place to ask in case someone already has code for it, can point out why I will not be able to flush the queue this way, or can tell me that "flush" is not really the right term for what I mean. A rough sketch of the queue I have in mind is below.
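To make the idea concrete, here is a minimal sketch of the kind of producer/consumer queue described above. It assumes .NET 2.0 (plain Monitor, a Queue<T>, and a single dedicated consumer thread), and the names WorkQueue, FlushQueue and Shutdown are placeholders, not anything from an existing library:

using System;
using System.Collections.Generic;
using System.Threading;

// Sketch: a single consumer thread drains a queue of work items.
// FlushQueue() discards anything that has not started yet.
public class WorkQueue
{
    private readonly Queue<WaitCallback> pending = new Queue<WaitCallback>();
    private readonly object sync = new object();
    private readonly Thread consumer;
    private bool shuttingDown;

    public WorkQueue()
    {
        consumer = new Thread(ConsumeLoop);
        consumer.IsBackground = true;
        consumer.Start();
    }

    public void Enqueue(WaitCallback work)
    {
        lock (sync)
        {
            pending.Enqueue(work);
            Monitor.Pulse(sync);        // wake the consumer
        }
    }

    // Discard all queued-but-not-started work items.
    public void FlushQueue()
    {
        lock (sync)
        {
            pending.Clear();
        }
    }

    public void Shutdown()
    {
        lock (sync)
        {
            shuttingDown = true;
            Monitor.Pulse(sync);
        }
    }

    private void ConsumeLoop()
    {
        while (true)
        {
            WaitCallback work;
            lock (sync)
            {
                while (pending.Count == 0 && !shuttingDown)
                {
                    Monitor.Wait(sync);
                }
                if (shuttingDown) return;
                work = pending.Dequeue();
            }
            work(null);                 // run outside the lock
        }
    }
}

The intent is that Enqueue replaces the direct QueueUserWorkItem call, the single consumer limits how much runs in parallel, and FlushQueue lets the caller abandon everything that has not started yet.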

+4
4 answers

EDIT: My answer does not apply since the OP is using 2.0. Leaving it up and switching to CW for anyone reading this question who is using 4.0.

If you are using C# 4.0, or you can take a dependency on one of the earlier pre-release versions of the parallel framework, you can use its built-in cancellation support. It is not as simple as aborting a thread, but the framework approach is much more robust (aborting a thread is very tempting, but also very dangerous).

Reed wrote a great article about this that you should take a look at.
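For illustration, a minimal .NET 4 sketch of that built-in cancellation support (this assumes the Task Parallel Library, so it does not apply to the 2.0 target in the question; the loop is just a stand-in for real work):

using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationDemo
{
    static void Main()
    {
        CancellationTokenSource cts = new CancellationTokenSource();

        Task work = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 1000; i++)
            {
                // Cooperative cancellation: the work checks the token and bails out.
                cts.Token.ThrowIfCancellationRequested();
                Thread.Sleep(10);   // stand-in for a chunk of real processing
            }
        }, cts.Token);

        cts.Cancel();               // caller decides to abandon the work

        try { work.Wait(); }
        catch (AggregateException) { /* contains the TaskCanceledException */ }

        Console.WriteLine("Canceled: " + work.IsCanceled);
    }
}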

+1

A technique I have used in the past, though certainly not best practice, is to dedicate an instance of a class to each thread and give that class an abort flag. Then give the class a ThrowIfAborting method that the thread calls periodically (if the thread runs in a loop, just call it on each iteration). If the flag is set, ThrowIfAborting throws an exception, which bubbles up to the thread's main method. Just make sure you clean up your resources when aborting.
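A minimal sketch of that idea (class and member names here are only illustrative, and the loop stands in for real work):

using System;
using System.Threading;

public class AbortableWorker
{
    private volatile bool abortRequested;

    // Called by the owner when it wants this worker to stop.
    public void Abort()
    {
        abortRequested = true;
    }

    private void ThrowIfAborting()
    {
        if (abortRequested)
        {
            throw new OperationCanceledException("Work aborted by caller.");
        }
    }

    // The thread's main method: check the flag on every iteration.
    public void DoWork()
    {
        try
        {
            for (int i = 0; i < 10000; i++)
            {
                ThrowIfAborting();
                Thread.Sleep(1);    // stand-in for one unit of real work
            }
        }
        catch (OperationCanceledException)
        {
            // Clean up resources here, then let the thread end.
        }
    }
}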

+1

I figured I would address your specific problem with a wrapper class around one or more BackgroundWorker instances.

Unfortunately, I cannot post my entire class, but here is the basic concept, trimmed down, along with how to use it.

Usage: you simply create an instance and call RunOrReplace(...) whenever you want to cancel the old worker and start a new one. If the old worker is busy, it is asked to cancel, and a different worker is used to serve your new request immediately.

public class BackgroundWorkerReplaceable : IDisposable
{
    BackgroundWorker activeWorker = null;
    object activeWorkerSyncRoot = new object();
    List<BackgroundWorker> workerPool = new List<BackgroundWorker>();
    DoWorkEventHandler doWork;
    RunWorkerCompletedEventHandler runWorkerCompleted;

    public bool IsBusy
    {
        get { return activeWorker != null ? activeWorker.IsBusy : false; }
    }

    public BackgroundWorkerReplaceable(DoWorkEventHandler doWork, RunWorkerCompletedEventHandler runWorkerCompleted)
    {
        this.doWork = doWork;
        this.runWorkerCompleted = runWorkerCompleted;
        ResetActiveWorker();
    }

    public void RunOrReplace(Object param, ...) // Overloads could include ProgressChangedEventHandler and other stuff
    {
        try
        {
            lock (activeWorkerSyncRoot)
            {
                if (activeWorker.IsBusy)
                {
                    ResetActiveWorker();
                }

                // This works because if IsBusy was false above, there is no way for it
                // to become true without another thread obtaining the lock.
                if (!activeWorker.IsBusy)
                {
                    // Optionally handle ProgressChangedEventHandler and other features (under the lock!)
                    // Work on this new param
                    activeWorker.RunWorkerAsync(param);
                }
                else
                {
                    // This should never happen since we create new workers when there are none available!
                    throw new LogicException(...); // assert or similar
                }
            }
        }
        catch (...) // InvalidOperationException and Exception
        {
            // In my experience it's safe to just show the user an error and ignore these,
            // but that's going to depend on what you use this for and where you want the
            // exception handling to be.
        }
    }

    public void Cancel()
    {
        ResetActiveWorker();
    }

    public void Dispose()
    {
        // You should implement a proper Dispose/Finalizer pattern.
        if (activeWorker != null)
        {
            activeWorker.CancelAsync();
        }
        foreach (BackgroundWorker worker in workerPool)
        {
            worker.CancelAsync();
            worker.Dispose();
            // Perhaps use a for loop instead so you can set worker to null?
            // This might help the GC, but it's probably not needed.
        }
    }

    void ResetActiveWorker()
    {
        lock (activeWorkerSyncRoot)
        {
            if (activeWorker == null)
            {
                activeWorker = GetAvailableWorker();
            }
            else if (activeWorker.IsBusy)
            {
                // Current worker is busy - issue a cancel and set another active worker.
                activeWorker.CancelAsync(); // WorkerSupportsCancellation must be set to true
                // Optionally handle ProgressEventHandler -=
                activeWorker = GetAvailableWorker(); // Ensure that the activeWorker is available
            }
            // else - do nothing, activeWorker is already ready for work!
        }
    }

    BackgroundWorker GetAvailableWorker()
    {
        // Loop through workerPool and return a worker if IsBusy is false.
        // If the loop exits without returning...
        if (activeWorker != null)
        {
            workerPool.Add(activeWorker); // Save the old worker for possible future use
        }
        return GenerateNewWorker();
    }

    BackgroundWorker GenerateNewWorker()
    {
        BackgroundWorker worker = new BackgroundWorker();
        worker.WorkerSupportsCancellation = true;
        //worker.WorkerReportsProgress
        worker.DoWork += doWork;
        worker.RunWorkerCompleted += runWorkerCompleted;
        // Other stuff
        return worker;
    }
} // class

Pros / cons

This has very low latency when starting your new work, since new threads do not have to wait for the old ones to finish.

The trade-off is theoretically unbounded growth of BackgroundWorker objects that never get GC'd. In practice, though, the code above tries to recycle old workers, so you should not normally end up with a large pool of idle threads. If you are worried about this, given how you plan to use the class, you could either implement a timer that runs a CleanUpExcessWorkers(...) method, or have ResetActiveWorker() do that cleanup (at the cost of a slightly longer RunOrReplace(...) delay).

The main cost of using this is exactly what makes it beneficial: it does not wait for the previous thread to exit. For example, if DoWork makes a database call and you run RunOrReplace(...) 10 times in quick succession, the database calls cannot be cancelled instantly when the worker is cancelled, so you will have 10 concurrent requests, which will slow them all down. In my experience this works fine with Oracle, causing only slight delays, but I have no experience with other databases (to speed up the cleanup, I had the cancelled Oracle worker also cancel its command). Proper use of the EventArgs, described below, largely addresses this.

Another minor cost is that any code driving this BackgroundWorker has to be compatible with the concept: it must be able to safely recover from being cancelled. DoWorkEventArgs and RunWorkerCompletedEventArgs have Cancel/Cancelled properties that you should use. For example, if you make database calls in the DoWork method (the main use case for this class), you need to check those properties periodically and do the appropriate cleanup.
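For illustration, a DoWork/RunWorkerCompleted pair written to cooperate with this wrapper might look roughly like this (a sketch only; the loop body stands in for the real work, e.g. reading a database result in chunks):

using System.ComponentModel;

public class CancellationAwareHandlers
{
    public void DoWorkHandler(object sender, DoWorkEventArgs e)
    {
        BackgroundWorker worker = (BackgroundWorker)sender;
        for (int i = 0; i < 100; i++)
        {
            if (worker.CancellationPending)
            {
                e.Cancel = true;   // marks the run as cancelled for RunWorkerCompleted
                return;            // clean up and bail out early
            }
            // ... one chunk of the real work goes here ...
        }
    }

    public void RunWorkerCompletedHandler(object sender, RunWorkerCompletedEventArgs e)
    {
        if (e.Cancelled)
        {
            return;                // the run was abandoned; ignore any partial result
        }
        // use e.Result here
    }
}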

0

You could extend the Begin/End pattern into a Begin/Cancel/End pattern. The Cancel method sets a cancel flag that the worker thread polls periodically. When the worker thread detects the cancellation request, it can stop its work, clean up resources as needed, and report through the End arguments that the operation was cancelled.
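A rough sketch of that Begin/Cancel/End idea, assuming .NET 2.0 and a simple polled flag (the class and method names are only illustrative, and the full IAsyncResult plumbing is omitted):

using System;
using System.Threading;

public class CancellableOperation
{
    private volatile bool cancelRequested;
    private bool wasCancelled;
    private readonly ManualResetEvent done = new ManualResetEvent(false);

    public void Begin(WaitCallback completedCallback)
    {
        cancelRequested = false;
        done.Reset();
        ThreadPool.QueueUserWorkItem(delegate
        {
            for (int i = 0; i < 1000; i++)
            {
                if (cancelRequested)
                {
                    wasCancelled = true;   // reported back through End
                    break;
                }
                Thread.Sleep(1);           // stand-in for one unit of work
            }
            done.Set();
            if (completedCallback != null) completedCallback(this);
        });
    }

    public void Cancel()
    {
        cancelRequested = true;            // the worker polls this flag
    }

    // Returns true if the operation ran to completion, false if it was cancelled.
    public bool End()
    {
        done.WaitOne();
        return !wasCancelled;
    }
}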

0

Source: https://habr.com/ru/post/1304511/

