Asynchronous processing in ASP.NET Core and thread pools

I'm new to ASP.NET Core and C# in general, coming from the Java world, and I'm a little confused about how the async/await keywords work. This article explains that marking a method as async means that

"this method contains a control flow that expects asynchronous operations and therefore will be rewritten by the compiler to continue the transfer style to ensure that asynchronous operations can resume this method in the right place." The whole point of asynchronous methods is that you stay in the current thread as much as possible

So I thought of it as a very cool way to pass control back and forth between the async method and its caller, on the same thread, implemented at the bytecode/virtual-machine level. I mean, I expected the code below

using System;
using System.Threading;
using System.Threading.Tasks;

public class Program
{
    public static void Main(string[] args)
    {
        int delay = 10;
        var task = doSthAsync(delay);
        System.Console.WriteLine("main 1: " + Thread.CurrentThread.ManagedThreadId);
        // some synchronous processing longer than delay:
        Thread.Sleep(2 * delay);
        System.Console.WriteLine("main 2: " + Thread.CurrentThread.ManagedThreadId);
        task.Wait();
    }

    public static async Task<String> doSthAsync(int delay)
    {
        System.Console.WriteLine("async 1: " + Thread.CurrentThread.ManagedThreadId);
        await Task.Delay(delay);
        System.Console.WriteLine("async 2: " + Thread.CurrentThread.ManagedThreadId);
        return "done";
    }
}

to print

  async 1: 1
  main 1: 1
  main 2: 1
  async 2: 1

passing control from doSthAsync to Main at the await keyword, and then back to doSthAsync via task.Wait() (all of this happening on the same thread).

However, the above code actually prints

  async 1: 1
  main 1: 1
  async 2: 4
  main 2: 1

which means that this is just a way to hand the continuation of the async method off to a separate thread when the current thread is busy, which is almost exactly the opposite of what the article I mentioned says:

The "async" modifier in a method does not mean that "this method is automatically scheduled to start the workflow asynchronously"

So, since the "continuation" of asynchronous methods, apparently, or at least can be scheduled to run in another thread, I have some questions that seem fundamental to C # applications scalability:

Does each async method run on a freshly spawned thread, or does the C# runtime maintain a dedicated thread pool for this purpose, and in the latter case, how can I control the size of that pool? UPDATE: thanks @Servy, I now know that in the case of console applications it will be a thread pool thread, but I still don't know how to control the number of threads in that pool.

What about ASP.NET Core: how can I control the size of the thread pool that Kestrel uses to execute async continuations and other (network) I/O? Is it the same thread pool it uses to accept HTTP requests?
UPDATE: thanks to this and this article by Stephen Cleary, I now understand that by default it runs "one at a time": at any given moment there is at most one thread running within the SynchronizationContext of a given HTTP request, but that thread can be swapped for another one whenever an awaited async operation completes and the task resumes. I also found out that you can move the continuation of any async method off a given SynchronizationContext and onto a thread pool thread using ConfigureAwait(false). However, I still don't know how to control the number of thread pool threads, or whether it is the same pool that Kestrel uses to accept HTTP requests.
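As an illustration of that last point, here is a minimal sketch (the helper method and the delay are mine, not taken from the articles) of how ConfigureAwait(false) lets the continuation run on a thread pool thread instead of the captured SynchronizationContext:

using System;
using System.Threading;
using System.Threading.Tasks;

public static class ConfigureAwaitDemo
{
    public static async Task DoWorkAsync()
    {
        Console.WriteLine("before await: " + Thread.CurrentThread.ManagedThreadId);

        // ConfigureAwait(false): do not capture the current SynchronizationContext;
        // the code after the await may resume on any thread pool thread.
        await Task.Delay(10).ConfigureAwait(false);

        Console.WriteLine("after await:  " + Thread.CurrentThread.ManagedThreadId);
        // From here on, nothing that requires the original context
        // (e.g. HttpContext in classic ASP.NET, or UI controls) should be touched.
    }
}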

As I understand it, if all I/O operations are performed asynchronously, then the size of these pools (whether it is really a single thread pool or two separate ones) has nothing to do with the number of open outgoing connections to external resources such as a DB, does it? (Threads executing async code will keep accepting new HTTP requests, and as a result of processing them outgoing connections will be opened (to a DB, or to any web server to fetch some resource from the internet) as fast as possible, without anything blocking.)
This, in turn, means that if I do not want to kill my database, I have to set a reasonable limit on my connection pool and make sure that waiting for an available connection is also asynchronous, right? (See the sketch right after these questions.)
Assuming I am right, I would still like to limit the number of thread pool threads for the case where some long-running I/O operation is mistakenly performed synchronously, to prevent a huge number of threads from being spawned because many of them end up blocked on that synchronous operation (i.e. I would rather cap the throughput of my application than have it spawn an insane number of threads because of such a mistake).
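For the connection-limiting part, this is roughly what I have in mind: a minimal sketch using SemaphoreSlim.WaitAsync so that waiting for a free "slot" is itself asynchronous (the limit of 20 and the query delegate are arbitrary placeholders):

using System;
using System.Threading;
using System.Threading.Tasks;

public static class DbGate
{
    // Placeholder limit: at most 20 concurrent queries/connections.
    private static readonly SemaphoreSlim _slots = new SemaphoreSlim(20);

    public static async Task<T> RunQueryAsync<T>(Func<Task<T>> query)
    {
        // WaitAsync does not block a thread pool thread while waiting for a slot.
        await _slots.WaitAsync();
        try
        {
            return await query();
        }
        finally
        {
            _slots.Release();
        }
    }
}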

I also have a "just out of curiosity" question: why isn't async/await actually implemented so that async tasks run on a single thread? I don't know how it would be implemented on Windows, but on Unix it could be done with the select/poll system calls or with signals, so I assume there is a way to achieve it on Windows too, and it would make the language really cool indeed. UPDATE: if I understand correctly, this is almost how it works when my SynchronizationContext is not null: the code runs "one thread at a time", i.e. at any given moment there is exactly one thread running within a given SynchronizationContext. However, the way it is implemented means that synchronously waiting for the async task would deadlock, right? (The runtime would schedule the continuation that marks the task as completed to run on the same thread that is blocked waiting for that task to complete.)
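To make that last point concrete, here is a sketch of the classic "block on async code" deadlock described in Stephen Cleary's article linked below, written as a tiny WinForms app (the form, button, and method names are mine):

using System;
using System.Threading.Tasks;
using System.Windows.Forms;

class DeadlockForm : Form
{
    public DeadlockForm()
    {
        var button = new Button { Text = "Deadlock" };
        // The click handler runs on the UI thread, whose SynchronizationContext
        // is the only place the continuation of DelayAsync is allowed to resume.
        button.Click += (s, e) => DelayAsync().Wait(); // blocks the UI thread...
        Controls.Add(button);
    }

    static async Task DelayAsync()
    {
        await Task.Delay(100); // ...but this continuation needs the UI thread: deadlock
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new DeadlockForm());
    }
}

Clicking the button blocks the UI thread inside Wait(), while the continuation after Task.Delay is posted back to that same thread's SynchronizationContext, so neither side can make progress.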

Thanks!

2 answers

By default, when there is no SynchronizationContext, or when ConfigureAwait(false) is called on the awaited task, its continuation runs on the CLR-managed thread pool, which can be accessed via the static methods of the ThreadPool class.
In .NET Core, the main ThreadPool methods for controlling its size (SetMaxThreads and SetMinThreads) are not implemented yet:
https://github.com/dotnet/cli/issues/889
https://github.com/dotnet/corefx/issues/5920
However, you can set its size statically in the project.json file, as described here:
https://github.com/dotnet/cli/blob/rel/1.0.0/Documentation/specs/runtime-configuration-file.md#runtimeoptions-section-runtimeconfigjson
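For reference, on runtimes where these methods are implemented (the full .NET Framework, and later .NET Core releases), controlling the pool size programmatically looks like the sketch below; the numbers are arbitrary placeholders:

using System;
using System.Threading;

public static class PoolConfig
{
    public static void Configure()
    {
        int workerMin, ioMin, workerMax, ioMax;
        ThreadPool.GetMinThreads(out workerMin, out ioMin);
        ThreadPool.GetMaxThreads(out workerMax, out ioMax);
        Console.WriteLine("min: " + workerMin + "/" + ioMin + ", max: " + workerMax + "/" + ioMax);

        // Raise the minimum so the pool does not ramp up slowly under bursty load,
        // and cap the maximum to bound how many threads an accidentally
        // synchronous workload can consume.
        ThreadPool.SetMinThreads(50, 50);
        ThreadPool.SetMaxThreads(200, 200);
    }
}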

Kestrel has its own pool for processing requests asynchronously with libuv: it is controlled by the ThreadCount property of the KestrelServerOptions class.
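A sketch of what setting it looked like in a Program.cs of that era (ASP.NET Core 1.x with libuv-based Kestrel); the thread count of 4 is arbitrary, and the Startup class is assumed to exist in the project:

using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel(options =>
            {
                // Number of libuv event-loop threads Kestrel uses to process requests.
                options.ThreadCount = 4;
            })
            .UseStartup<Startup>() // assumes a Startup class exists in the project
            .Build();

        host.Run();
    }
}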

Related articles:
http://blog.stephencleary.com/2012/02/async-and-await.html
http://blog.stephencleary.com/2012/07/dont-block-on-async-code.html
https://blogs.msdn.microsoft.com/pfxteam/2012/01/20/await-synchronizationcontext-and-console-apps/
https://msdn.microsoft.com/en-us/magazine/gg598924.aspx

Thanks to @Servy and @Damien_The_Unbeliever for the tips that helped me find this.


await uses the value of SynchronizationContext.Current to schedule continuations. In a console application there is no current synchronization context by default, so it will simply use thread pool threads to schedule continuations. In message-loop applications, such as WinForms, WPF, or Windows Phone applications, the message loop sets the current synchronization context to one that posts all work to the message loop, thereby running it on the UI thread.
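A minimal console sketch of that first case; with no SynchronizationContext installed, the continuation after the await simply lands on a thread pool thread:

using System;
using System.Threading;
using System.Threading.Tasks;

class ContextDemo
{
    static void Main()
    {
        Console.WriteLine("context: " + (SynchronizationContext.Current == null ? "none" : "set"));
        RunAsync().Wait(); // safe to block here: no captured context, so no deadlock
    }

    static async Task RunAsync()
    {
        Console.WriteLine("before await: " + Thread.CurrentThread.ManagedThreadId);
        await Task.Delay(10);
        // No SynchronizationContext was captured, so this line runs on a thread pool thread.
        Console.WriteLine("after await:  " + Thread.CurrentThread.ManagedThreadId);
    }
}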

ASP.NET applications also have a current synchronization context, but it is not tied to a specific thread. Rather, when it is time to run the next work item for the synchronization context, it grabs a thread pool thread, sets it up with the request data for the correct request, and then runs that work item. This means that when using the synchronization context in an ASP.NET application you know that at most one operation runs at a time, but it is not necessarily a single thread that handles the whole response.
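For example, in classic (pre-Core) ASP.NET MVC an async action can resume on a different thread pool thread after an await while the request's context is flowed to it; a sketch (the controller and action names are mine):

using System.Threading;
using System.Threading.Tasks;
using System.Web.Mvc;

public class DemoController : Controller
{
    public async Task<ActionResult> Index()
    {
        int before = Thread.CurrentThread.ManagedThreadId;

        // The continuation is posted to the ASP.NET synchronization context, which grabs
        // some thread pool thread and re-attaches this request's data to it.
        await Task.Delay(10);

        int after = Thread.CurrentThread.ManagedThreadId;
        return Content("before=" + before + ", after=" + after +
                       ", httpContextAvailable=" + (HttpContext != null));
    }
}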

