I'm new to ASP.NET Core and to C# in general, coming from the Java world, and I'm a little confused about how the async/await keywords work. This article explains that if a method is marked async, it means that
"this method contains control flow that involves awaiting asynchronous operations and will therefore be rewritten by the compiler into continuation passing style to ensure that the asynchronous operations can resume this method at the right spot." The whole point of asynchronous methods is that you stay on the current thread as much as possible.
So I thought this was a very cool way of transferring control between the asynchronous method and its caller on the same thread, implemented at the bytecode/virtual-machine level. I mean, I expected the code below
public static void Main(string[] args)
{
    int delay = 10;
    var task = doSthAsync(delay);
    System.Console.WriteLine("main 1: " + Thread.CurrentThread.ManagedThreadId);
    Thread.Sleep(2 * delay); // give the awaited delay time to elapse
    System.Console.WriteLine("main 2: " + Thread.CurrentThread.ManagedThreadId);
    task.Wait();
}

static async Task doSthAsync(int delay)
{
    System.Console.WriteLine("async 1: " + Thread.CurrentThread.ManagedThreadId);
    await Task.Delay(delay);
    System.Console.WriteLine("async 2: " + Thread.CurrentThread.ManagedThreadId);
}
to print
async 1: 1
main 1: 1
main 2: 1
async 2: 1
passing control from doSthAsync to Main at the await keyword, and then back to doSthAsync via task.Wait() (all of it happening on the same thread).
However, the above code actually prints
async 1: 1
main 1: 1
async 2: 4
main 2: 1
which means that this is just a way to delegate the async method to a separate thread if the current thread is busy, which is almost exactly the opposite of what the mentioned article states:
The "async" modifier on the method does not mean "this method is automatically scheduled to run on a worker thread asynchronously."
So, since the "continuation" of an async method apparently is, or at least can be, scheduled to run on another thread, I have some questions that seem fundamental to the scalability of C# applications:
Can each asynchronous method run on a freshly spawned thread, or does the C# runtime maintain a dedicated thread pool for this purpose, and in the latter case, how can I control the size of that pool? UPDATE: thanks @Servy, now I know that in the case of command-line applications it will be the thread pool, but I still don't know how to manage the number of threads in that pool.
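For what it's worth, the limits of the shared pool that schedules these continuations in a console app can at least be inspected and adjusted through the ThreadPool API; a minimal sketch (the specific limit values here are arbitrary, not recommendations):

```csharp
using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        // Inspect the default limits of the shared thread pool that
        // runs async continuations in console applications.
        ThreadPool.GetMinThreads(out int minWorker, out int minIo);
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
        Console.WriteLine($"min: {minWorker}/{minIo}, max: {maxWorker}/{maxIo}");

        // Try to adjust them; both calls return false if the values
        // are rejected (e.g. a max below the current min).
        bool ok = ThreadPool.SetMinThreads(4, 4)
               && ThreadPool.SetMaxThreads(64, 64);
        Console.WriteLine("limits applied: " + ok);
    }
}
```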
What about ASP.NET Core: how can I control the size of the thread pool that Kestrel uses to perform async Entity Framework operations or other (network) I/O? Is it the same thread pool it uses to receive HTTP requests?
UPDATE: thanks to this and this article by Stephen Cleary, I now understand that by default it will run "one at a time": at any given moment there will be exactly one thread inside the SynchronizationContext of a given HTTP request, but that thread may be swapped for another whenever an awaited async operation completes and the task resumes. I also found out that you can send an async method's continuation "away from the current SynchronizationContext" to the thread pool using ConfigureAwait(false). However, I still don't know how to control the number of threads in that thread pool, or whether it is the same pool that Kestrel uses to receive HTTP requests.
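To make the ConfigureAwait(false) behaviour concrete, here is a small self-contained sketch (the method name is made up for illustration; Task.Delay stands in for any real I/O):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Demo
{
    // ConfigureAwait(false) tells the awaiter NOT to capture the current
    // SynchronizationContext, so the continuation after the await may
    // resume on a thread-pool thread instead of the context's thread.
    public static async Task<int> AfterAwaitThreadIdAsync()
    {
        await Task.Delay(10).ConfigureAwait(false);
        // In a console app there is no SynchronizationContext anyway,
        // so this resumes on a thread-pool thread either way.
        return Thread.CurrentThread.ManagedThreadId;
    }
}
```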
As I understand it, if all I/O operations are performed asynchronously, then the size of these pools (whether it is in fact a single thread pool or two separate ones) has nothing to do with the number of open outgoing connections to external resources such as a DB, does it? (Threads executing asynchronous code will keep accepting new HTTP requests and, as a result of processing them, will open outgoing connections (to the DB, or to any web server to fetch some resource from the Internet) as fast as they can, without any blocking.)
This, in turn, means that if I don't want to kill my database, I must set a reasonable limit on my connection pool and make sure that waiting for an available connection is also asynchronous, right?
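A minimal sketch of what I mean by "waiting asynchronously for a connection" (this is not a real driver's pool, just a hypothetical gate built on SemaphoreSlim, whose WaitAsync makes callers queue without blocking a thread):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Caps the number of concurrent "connections" and makes callers
// wait asynchronously for a free slot via SemaphoreSlim.WaitAsync.
class ConnectionGate
{
    private readonly SemaphoreSlim _slots;

    public ConnectionGate(int maxConnections) =>
        _slots = new SemaphoreSlim(maxConnections, maxConnections);

    public async Task<T> RunAsync<T>(Func<Task<T>> dbWork)
    {
        await _slots.WaitAsync();   // asynchronous wait: no thread is blocked
        try { return await dbWork(); }
        finally { _slots.Release(); }
    }
}
```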
Assuming I'm right, I would still like to limit the number of thread-pool threads in case some long-running I/O operation is mistakenly performed synchronously, to prevent an insane number of threads from being spawned while many of them sit blocked in that synchronous operation (i.e. I'd rather cap my application's throughput than let it spawn a huge number of threads because of such a mistake).
I also have a "just out of curiosity" question: why isn't async/await actually implemented to perform async tasks on a single thread? I don't know how it could be done on Windows, but on Unix it could be implemented with the select/poll system calls or with signals, so I assume there is a way to achieve it on Windows too, and it would make the language really cool indeed. UPDATE: if I understand correctly, this is almost how things work when my SynchronizationContext is not null: the code runs "one thread at a time", i.e. at any given moment there is exactly one thread inside a given SynchronizationContext. However, the way it is implemented means that synchronously waiting for an async task can deadlock, right? (the runtime would schedule the continuation that marks the task as completed onto the very thread that is blocked waiting for that task to complete)
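A sketch of that sync-over-async trap, as I understand it: under a single-threaded SynchronizationContext (a UI app, or classic ASP.NET) this deadlocks, while in a console app, where SynchronizationContext.Current is null, it completes because the continuation runs on a thread-pool thread:

```csharp
using System.Threading.Tasks;

class DeadlockSketch
{
    public static string GetData()
    {
        // .Result blocks the calling thread until the task finishes.
        // Under a single-threaded SynchronizationContext the awaited
        // continuation needs that same thread to resume, so both sides
        // wait forever. In a console app (no context) the continuation
        // runs on a thread-pool thread and this call returns normally.
        return GetDataAsync().Result;
    }

    static async Task<string> GetDataAsync()
    {
        await Task.Delay(10);
        return "done";
    }
}
```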
Thanks!